
Multipart Request with JSON Data (also called Mixed Multipart):

In a RESTful service based on Spring 4.0.2.RELEASE, an HTTP request with a first part containing XML- or JSON-formatted data and a second part containing a file can be handled with @RequestPart. Below is a sample implementation.

The REST service in the controller mixes @RequestPart and MultipartFile to serve such a multipart + JSON request.

@RequestMapping(value = "/executesampleservice", method = RequestMethod.POST,
    consumes = {"multipart/form-data"})
@ResponseBody
public boolean executeSampleService(
        @RequestPart("properties") @Valid ConnectionProperties properties,
        @RequestPart("file") @NotNull MultipartFile file) {
    return projectService.executeSampleService(properties, file);
}

Create a FormData object.

Append the file to the FormData object using one of the steps below:

  • If the file was selected using an input element of type "file", append it directly: formData.append("file", document.forms[formName].file.files[0]);
  • Otherwise, append a File or Blob with an explicit filename: formData.append("file", myFile, "myfile.txt"); or formData.append("file", myBlob, "myfile.txt");

Create a Blob with the stringified JSON data and append it to the FormData object. This causes the Content-Type of the second part in the multipart request to be "application/json" instead of the file type.

Send the request to the server.

Request details: set Content-Type to undefined. This causes the browser to set the Content-Type to multipart/form-data and to fill in the boundary correctly. Manually setting Content-Type to multipart/form-data will fail to fill in the boundary parameter of the request.

formData = new FormData();

formData.append("file", document.forms[formName].file.files[0]);
formData.append('properties', new Blob([JSON.stringify({
                "name": "root",
                "password": "root"
            })], {
                type: "application/json"
            }));

The request options (note the undefined Content-Type):

{
    method: "POST",
    headers: {
        "Content-Type": undefined
    },
    data: formData
}

The browser then sends headers like:

Accept: application/json, text/plain, */*
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryEBoJzS3HQ4PgE1QB

The resulting request body:

------WebKitFormBoundaryEBoJzS3HQ4PgE1QB
Content-Disposition: form-data; name="file"; filename="myfile.txt"
Content-Type: text/plain

(file contents)

------WebKitFormBoundaryEBoJzS3HQ4PgE1QB
Content-Disposition: form-data; name="properties"; filename="blob"
Content-Type: application/json

{"name":"root","password":"root"}

------WebKitFormBoundaryEBoJzS3HQ4PgE1QB--
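For reference, the wire format above can be reproduced with a short, stdlib-only Python sketch. The helper name build_multipart and the sample contents are made up for illustration; in real code you would use a proper HTTP client.

```python
import json
import uuid

def build_multipart(fields):
    """Build a multipart/form-data body from {name: (filename, content, content_type)}."""
    boundary = uuid.uuid4().hex
    lines = []
    for name, (filename, content, ctype) in fields.items():
        lines.append("--" + boundary)
        disposition = 'Content-Disposition: form-data; name="%s"' % name
        if filename:
            disposition += '; filename="%s"' % filename
        lines.append(disposition)
        lines.append("Content-Type: " + ctype)
        lines.append("")  # blank line separates part headers from part content
        lines.append(content)
    lines.append("--" + boundary + "--")
    return "\r\n".join(lines), "multipart/form-data; boundary=" + boundary

body, content_type = build_multipart({
    "file": ("myfile.txt", "hello", "text/plain"),
    "properties": ("blob", json.dumps({"name": "root", "password": "root"}),
                   "application/json"),
})
```

Each part carries its own Content-Type, which is exactly why appending the JSON as a Blob with type "application/json" matters: that is what lets Spring bind the "properties" part to a POJO.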

Hi, what is ConnectionProperties, just a POJO?

Hi Liu, yes, ConnectionProperties is just a POJO class. Our aim is to take a POJO object and a multipart file in the REST request.

With jQuery $.ajax(), also set processData: false and contentType: false so jQuery neither serializes the FormData nor overrides the Content-Type.

@SunilKumar, how can I make the file upload optional with FormData? If no image is selected, I get "Required request part 'file' is not present".


Spring MVC Multipart Request with JSON - Stack Overflow

json spring spring-mvc

Incredibly, no other answer has mentioned the fastest way to do pagination in all SQL Server versions. Offsets can be terribly slow for large page numbers as is benchmarked here. There is an entirely different, much faster way to perform pagination in SQL. This is often called the "seek method" or "keyset pagination" as described in this blog post here.

SELECT TOP 10 first_name, last_name, score, COUNT(*) OVER()
FROM players
WHERE (score < @previousScore)
   OR (score = @previousScore AND player_id < @previousPlayerId)
ORDER BY score DESC, player_id DESC

The @previousScore and @previousPlayerId values are the respective values of the last record from the previous page. This allows you to fetch the "next" page. If the ORDER BY direction is ASC, simply use > instead.

With the above method, you cannot immediately jump to page 4 without having first fetched the previous 40 records. But often, you do not want to jump that far anyway. Instead, you get a much faster query that might be able to fetch data in constant time, depending on your indexing. Plus, your pages remain "stable", no matter if the underlying data changes (e.g. on page 1, while you're on page 4).
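The WHERE-clause logic can be sketched in a few lines of Python over in-memory rows. This is a hypothetical illustration of the predicate, not the SQL itself; the name next_page and the sample scores are made up.

```python
# In-memory sketch of keyset ("seek") pagination.
# Rows are ordered by (score DESC, player_id DESC), matching the ORDER BY above.
def next_page(rows, page_size, prev_score=None, prev_id=None):
    ordered = sorted(rows, key=lambda r: (-r["score"], -r["player_id"]))
    if prev_score is not None:
        # Same condition as the WHERE clause: keep rows strictly "after"
        # the last row of the previous page in the sort order.
        ordered = [r for r in ordered
                   if r["score"] < prev_score
                   or (r["score"] == prev_score and r["player_id"] < prev_id)]
    return ordered[:page_size]

players = [{"player_id": i, "score": s}
           for i, s in enumerate([70, 90, 90, 50, 80], start=1)]
page1 = next_page(players, 2)
last = page1[-1]
page2 = next_page(players, 2, last["score"], last["player_id"])
```

Note how the tie on score is broken by player_id: without the second predicate, two players with equal scores straddling a page boundary would be skipped or repeated.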


The COUNT(*) OVER() window function will help you count the number of total records "before pagination". If you're using SQL Server 2000, you will have to resort to two queries for the COUNT(*).

@user960567: In terms of performance, keyset paging will always beat offset paging, no matter whether you implement offset paging with the SQL standard OFFSET .. FETCH, or with previous ROW_NUMBER() tricks.

WITH C AS (SELECT TOP (@rowsPerPage * @pageNum) ResultNum = ROW_NUMBER() OVER (ORDER BY id) ...) SELECT * FROM C WHERE ResultNum > ((@pageNum - 1) * @rowsPerPage)

I have three issues with the seek method. [1] A user can't jump to page. [2] it assumes sequential keys i.e. if someone deletes some 3 rows, then I get a page of 7 items instead of 10. RowNumber gives me a consistent 10 items per page. [3] it doesn't work with existing grids that assume pagenumber and pagesize.

@Junto: keyset paging isn't appropriate for all cases. It's definitely not for data grids. But it's perfect for scenarios like infinite scrolling of Facebook feed page. Doesn't matter if new posts are being added at the top, your subsequent feed posts will be correctly added to the bottom while you're scrolling down. Perfect usage example for this... Such thing would be much much harder to implement using offset limit/fetch using numbers only.

I have to agree with Junto. This method completely rules out a client that has a pretty standard pagination ui of "Previous 1 2 3 (4) 5 6 Next" where users can jump ahead. This is not exactly an edge case in my experience...

What is the best way to paginate results in SQL Server - Stack Overflo...

sql sql-server performance pagination

Representing as a Fraction

In most programming languages, floating point numbers are represented a lot like scientific notation: with an exponent and a mantissa (also called the significand). A very simple number, say 9.2, is actually stored as this fraction:

5179139571476070 / 2^49

Where the exponent is -49 and the mantissa is 5179139571476070. The reason it is impossible to represent some decimal numbers this way is that both the exponent and the mantissa must be integers. In other words, all floats must be an integer multiplied by an integer power of 2.

9.2 may be simply 92/10, but 10 cannot be expressed as 2^n if n is limited to integer values.

First, a few functions to see the components that make a 32- and 64-bit float. Gloss over these if you only care about the output (example in Python):

import struct
from itertools import islice

def float_to_bin_parts(number, bits=64):
    if bits == 32:          # single precision
        int_pack      = 'I'
        float_pack    = 'f'
        exponent_bits = 8
        mantissa_bits = 23
        exponent_bias = 127
    elif bits == 64:        # double precision. all python floats are this
        int_pack      = 'Q'
        float_pack    = 'd'
        exponent_bits = 11
        mantissa_bits = 52
        exponent_bias = 1023
    else:
        raise ValueError('bits argument must be 32 or 64')
    bin_iter = iter(bin(struct.unpack(int_pack, struct.pack(float_pack, number))[0])[2:].rjust(bits, '0'))
    return [''.join(islice(bin_iter, x)) for x in (1, exponent_bits, mantissa_bits)]

There's a lot of complexity behind that function, and it'd be quite the tangent to explain, but if you're interested, the important resource for our purposes is the struct module.

Python's float is a 64-bit, double-precision number. In other languages such as C, C++, Java and C#, double-precision has a separate type double, which is often implemented as 64 bits.

When we call that function with our example, 9.2, here's what we get:

>>> float_to_bin_parts(9.2)
['0', '10000000010', '0010011001100110011001100110011001100110011001100110']

You'll see I've split the return value into three components. These components are:

  • Sign
  • Exponent
  • Mantissa (also called Significand, or Fraction)

The sign is stored in the first component as a single bit. It's easy to explain: 0 means the float is a positive number; 1 means it's negative. Because 9.2 is positive, our sign value is 0.

The exponent is stored in the middle component as 11 bits. In our case, 0b10000000010. In decimal, that represents the value 1026. A quirk of this component is that you must subtract a number equal to 2^(# of bits - 1) - 1 to get the true exponent; in our case, that means subtracting 0b1111111111 (decimal number 1023) to get the true exponent, 0b00000000011 (decimal number 3).

The mantissa is stored in the third component as 52 bits. However, there's a quirk to this component as well. To understand this quirk, consider a number in scientific notation, like this:

The mantissa would be the 6.0221413. Recall that the mantissa in scientific notation always begins with a single non-zero digit. The same holds true for binary, except that binary only has two digits: 0 and 1. So the binary mantissa always starts with 1! When a float is stored, the 1 at the front of the binary mantissa is omitted to save space; we have to place it back at the front of our third element to get the true mantissa:

This involves more than just a simple addition, because the bits stored in our third component actually represent the fractional part of the mantissa, to the right of the radix point.

When dealing with decimal numbers, we "move the decimal point" by multiplying or dividing by powers of 10. In binary, we can do the same thing by multiplying or dividing by powers of 2. Since our third element has 52 bits, we divide it by 2^52 to move it 52 places to the right:

In decimal notation, that's the same as dividing 675539944105574 by 4503599627370496 to get 0.1499999999999999. (This is one example of a ratio that can be expressed exactly in binary, but only approximately in decimal; for more detail, see: 675539944105574 / 4503599627370496.)

Now that we've transformed the third component into a fractional number, adding 1 gives the true mantissa.

  • Exponent (middle component): Subtract 2^(# of bits - 1) - 1 to get the true exponent
  • Mantissa (last component): Divide by 2^(# of bits) and add 1 to get the true mantissa
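Those rules can be checked with a few lines of stdlib-only Python, a sketch assuming the IEEE 754 double layout described above:

```python
import struct

# Unpack 9.2 as its raw 64-bit pattern, then apply the three rules above.
bits = struct.unpack('<Q', struct.pack('<d', 9.2))[0]
sign = bits >> 63                          # 1 sign bit
exponent = ((bits >> 52) & 0x7FF) - 1023   # 11 bits, minus the bias 2^10 - 1
fraction = bits & ((1 << 52) - 1)          # the 52 stored mantissa bits
mantissa = 1 + fraction / 2 ** 52          # restore the implicit leading 1
value = (-1) ** sign * mantissa * 2 ** exponent
```

Every intermediate step here is exact (the divisor is a power of two), so value reproduces the stored double bit-for-bit: it compares equal to 9.2.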

Putting all three parts together, we're given this binary number:

Which we can then convert from binary to decimal:

And multiply to reveal the final representation of the number we started with (9.2) after being stored as a floating point value:

Now that we've built the number, it's possible to reconstruct it into a simple fraction:

Shift mantissa to a whole number:

>>> float_to_bin_parts(9.5)
['0', '10000000010', '0011000000000000000000000000000000000000000000000000']

Already you can see the mantissa is only 4 digits followed by a whole lot of zeroes. But let's go through the paces.

Assemble the binary scientific notation:
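The whole-number-over-a-power-of-two form from the start of this answer can be verified directly with Python's fractions module, which converts a float to the exact ratio it stores:

```python
from fractions import Fraction

# Fraction(float) yields the exact ratio the double actually stores.
ratio = Fraction(9.2)
# The mantissa-times-2^exponent form from the top of the answer,
# before reduction to lowest terms:
unreduced = Fraction(5179139571476070, 2 ** 49)
same = (ratio == unreduced)  # the two forms describe the same stored value
```

This also makes the headline point concrete: the stored ratio is not equal to Fraction(92, 10), which is why 9.2 cannot round-trip exactly through a binary float.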

There is also a nice tutorial that shows how to go the other way - given a decimal representation of a number, how do you construct the floating point equivalent. The "long division" approach shows very clearly how you end up with a "remainder" after trying to represent the number. Should be added if you want to be truly "canonical" with your answer.

If you're talking about Python and floating-point, I'd suggest at least including the Python tutorial in your links: docs.python.org/3.4/tutorial/floatingpoint.html That's supposed to be the one-stop go-to resource for floating-point issues for Python programmers. If it's lacking in some way (and it almost surely is), please do open an issue on the Python bug tracker for updates or changes.

This answer should definitely also link to floating-point-gui.de, as it's probably the best introduction for beginners. IMO, it should even go above "What every computer scientist should know..." -- these days, people who can reasonably comprehend Goldberg's paper usually are already well aware of it.

"This is one example of a ratio that can be expressed exactly in binary, but only approximately in decimal". This is not true. All of these 'number over a power of two' ratios are exact in decimal. Any approximation is only to shorten the decimal number -- for convenience.

language agnostic - Why Are Floating Point Numbers Inaccurate? - Stack...

language-agnostic floating-point floating-point-precision
Rectangle 27 145

Representing as a Fraction

In most programming languages, floating point numbers are represented a lot like scientific notation: with an exponent and a mantissa (also called the significand). A very simple number, say 9.2, is actually this fraction:

Where the exponent is -49 and the mantissa is 5179139571476070. The reason it is impossible to represent some decimal numbers this way is that both the exponent and the mantissa must be integers. In other words, all floats must be an integer multiplied by an integer power of 2.

9.2 may be simply 92/10, but 10 cannot be expressed as 2n if n is limited to integer values.

First, a few functions to see the components that make a 32- and 64-bit float. Gloss over these if you only care about the output (example in Python):

def float_to_bin_parts(number, bits=64):
    if bits == 32:          # single precision
        int_pack      = 'I'
        float_pack    = 'f'
        exponent_bits = 8
        mantissa_bits = 23
        exponent_bias = 127
    elif bits == 64:        # double precision. all python floats are this
        int_pack      = 'Q'
        float_pack    = 'd'
        exponent_bits = 11
        mantissa_bits = 52
        exponent_bias = 1023
    else:
        raise ValueError, 'bits argument must be 32 or 64'
    bin_iter = iter(bin(struct.unpack(int_pack, struct.pack(float_pack, number))[0])[2:].rjust(bits, '0'))
    return [''.join(islice(bin_iter, x)) for x in (1, exponent_bits, mantissa_bits)]

There's a lot of complexity behind that function, and it'd be quite the tangent to explain, but if you're interested, the important resource for our purposes is the struct module.

Python's float is a 64-bit, double-precision number. In other languages such as C, C++, Java and C#, double-precision has a separate type double, which is often implemented as 64 bits.

When we call that function with our example, 9.2, here's what we get:

>>> float_to_bin_parts(9.2)
['0', '10000000010', '0010011001100110011001100110011001100110011001100110']

You'll see I've split the return value into three components. These components are:

  • Mantissa (also called Significand, or Fraction)

The sign is stored in the first component as a single bit. It's easy to explain: 0 means the float is a positive number; 1 means it's negative. Because 9.2 is positive, our sign value is 0.

The exponent is stored in the middle component as 11 bits. In our case, 0b10000000010. In decimal, that represents the value 1026. A quirk of this component is that you must subtract a number equal to 2(# of bits) - 1 - 1 to get the true exponent; in our case, that means subtracting 0b1111111111 (decimal number 1023) to get the true exponent, 0b00000000011 (decimal number 3).

The mantissa is stored in the third component as 52 bits. However, there's a quirk to this component as well. To understand this quirk, consider a number in scientific notation, like this:

The mantissa would be the 6.0221413. Recall that the mantissa in scientific notation always begins with a single non-zero digit. The same holds true for binary, except that binary only has two digits: 0 and 1. So the binary mantissa always starts with 1! When a float is stored, the 1 at the front of the binary mantissa is omitted to save space; we have to place it back at the front of our third element to get the true mantissa:

This involves more than just a simple addition, because the bits stored in our third component actually represent the fractional part of the mantissa, to the right of the radix point.

When dealing with decimal numbers, we "move the decimal point" by multiplying or dividing by powers of 10. In binary, we can do the same thing by multiplying or dividing by powers of 2. Since our third element has 52 bits, we divide it by 252 to move it 52 places to the right:

In decimal notation, that's the same as dividing 675539944105574 by 4503599627370496 to get 0.1499999999999999. (This is one example of a ratio that can be expressed exactly in binary, but only approximately in decimal; for more detail, see: 675539944105574 / 4503599627370496.)

Now that we've transformed the third component into a fractional number, adding 1 gives the true mantissa.

  • Exponent (middle component): Subtract 2(# of bits) - 1 - 1 to get the true exponent
  • Mantissa (last component): Divide by 2(# of bits) and add 1 to get the true mantissa

Putting all three parts together, we're given this binary number:

Which we can then convert from binary to decimal:

And multiply to reveal the final representation of the number we started with (9.2) after being stored as a floating point value:

Now that we've built the number, it's possible to reconstruct it into a simple fraction:

Shift mantissa to a whole number:

>>> float_to_bin_parts(9.5)
['0', '10000000010', '0011000000000000000000000000000000000000000000000000']

Already you can see the mantissa is only 4 digits followed by a whole lot of zeroes. But let's go through the paces.

Assemble the binary scientific notation:

There is also a nice tutorial that shows how to go the other way - given a decimal representation of a number, how do you construct the floating point equivalent. The "long division" approach shows very clearly how you end up with a "remainder" after trying to represent the number. Should be added if you want to be truly "canonical" with your answer.

If you're talking about Python and floating-point, I'd suggest at least including the Python tutorial in your links: docs.python.org/3.4/tutorial/floatingpoint.html That's supposed to be the one-stop go-to resource for floating-point issues for Python programmers. If it's lacking in some way (and it almost surely is), please do open an issue on the Python bug tracker for updates or changes.

This answer should definitely also link to floating-point-gui.de, as it's probably the best introduction for beginners. IMO, it should even go above "What every computer scientist should know..." -- these days, people who can reasonably comprehend Goldberg's paper usually are already well aware of it.

"This is one example of a ratio that can be expressed exactly in binary, but only approximately in decimal". This is not true. All of these 'number over a power of two' ratios are exact in decimal. Any approximation is only to shorten the decimal number -- for convenience.

language agnostic - Why Are Floating Point Numbers Inaccurate? - Stack...

language-agnostic floating-point floating-point-precision
Rectangle 27 145

Representing as a Fraction

In most programming languages, floating point numbers are represented a lot like scientific notation: with an exponent and a mantissa (also called the significand). A very simple number, say 9.2, is actually this fraction:

Where the exponent is -49 and the mantissa is 5179139571476070. The reason it is impossible to represent some decimal numbers this way is that both the exponent and the mantissa must be integers. In other words, all floats must be an integer multiplied by an integer power of 2.

9.2 may be simply 92/10, but 10 cannot be expressed as 2n if n is limited to integer values.

First, a few functions to see the components that make a 32- and 64-bit float. Gloss over these if you only care about the output (example in Python):

def float_to_bin_parts(number, bits=64):
    if bits == 32:          # single precision
        int_pack      = 'I'
        float_pack    = 'f'
        exponent_bits = 8
        mantissa_bits = 23
        exponent_bias = 127
    elif bits == 64:        # double precision. all python floats are this
        int_pack      = 'Q'
        float_pack    = 'd'
        exponent_bits = 11
        mantissa_bits = 52
        exponent_bias = 1023
    else:
        raise ValueError, 'bits argument must be 32 or 64'
    bin_iter = iter(bin(struct.unpack(int_pack, struct.pack(float_pack, number))[0])[2:].rjust(bits, '0'))
    return [''.join(islice(bin_iter, x)) for x in (1, exponent_bits, mantissa_bits)]

There's a lot of complexity behind that function, and it'd be quite the tangent to explain, but if you're interested, the important resource for our purposes is the struct module.

Python's float is a 64-bit, double-precision number. In other languages such as C, C++, Java and C#, double-precision has a separate type double, which is often implemented as 64 bits.

When we call that function with our example, 9.2, here's what we get:

>>> float_to_bin_parts(9.2)
['0', '10000000010', '0010011001100110011001100110011001100110011001100110']

You'll see I've split the return value into three components. These components are:

  • Mantissa (also called Significand, or Fraction)

The sign is stored in the first component as a single bit. It's easy to explain: 0 means the float is a positive number; 1 means it's negative. Because 9.2 is positive, our sign value is 0.

The exponent is stored in the middle component as 11 bits. In our case, 0b10000000010. In decimal, that represents the value 1026. A quirk of this component is that you must subtract a number equal to 2(# of bits) - 1 - 1 to get the true exponent; in our case, that means subtracting 0b1111111111 (decimal number 1023) to get the true exponent, 0b00000000011 (decimal number 3).

The mantissa is stored in the third component as 52 bits. However, there's a quirk to this component as well. To understand this quirk, consider a number in scientific notation, like this:

The mantissa would be the 6.0221413. Recall that the mantissa in scientific notation always begins with a single non-zero digit. The same holds true for binary, except that binary only has two digits: 0 and 1. So the binary mantissa always starts with 1! When a float is stored, the 1 at the front of the binary mantissa is omitted to save space; we have to place it back at the front of our third element to get the true mantissa:

This involves more than just a simple addition, because the bits stored in our third component actually represent the fractional part of the mantissa, to the right of the radix point.

When dealing with decimal numbers, we "move the decimal point" by multiplying or dividing by powers of 10. In binary, we can do the same thing by multiplying or dividing by powers of 2. Since our third element has 52 bits, we divide it by 252 to move it 52 places to the right:

In decimal notation, that's the same as dividing 675539944105574 by 4503599627370496 to get 0.1499999999999999. (This is one example of a ratio that can be expressed exactly in binary, but only approximately in decimal; for more detail, see: 675539944105574 / 4503599627370496.)

Now that we've transformed the third component into a fractional number, adding 1 gives the true mantissa.

  • Exponent (middle component): Subtract 2(# of bits) - 1 - 1 to get the true exponent
  • Mantissa (last component): Divide by 2(# of bits) and add 1 to get the true mantissa

Putting all three parts together, we're given this binary number:

Which we can then convert from binary to decimal:

And multiply to reveal the final representation of the number we started with (9.2) after being stored as a floating point value:

Now that we've built the number, it's possible to reconstruct it into a simple fraction:

Shift mantissa to a whole number:

>>> float_to_bin_parts(9.5)
['0', '10000000010', '0011000000000000000000000000000000000000000000000000']

Already you can see the mantissa is only 4 digits followed by a whole lot of zeroes. But let's go through the paces.

Assemble the binary scientific notation:

There is also a nice tutorial that shows how to go the other way - given a decimal representation of a number, how do you construct the floating point equivalent. The "long division" approach shows very clearly how you end up with a "remainder" after trying to represent the number. Should be added if you want to be truly "canonical" with your answer.

If you're talking about Python and floating-point, I'd suggest at least including the Python tutorial in your links: docs.python.org/3.4/tutorial/floatingpoint.html That's supposed to be the one-stop go-to resource for floating-point issues for Python programmers. If it's lacking in some way (and it almost surely is), please do open an issue on the Python bug tracker for updates or changes.

This answer should definitely also link to floating-point-gui.de, as it's probably the best introduction for beginners. IMO, it should even go above "What every computer scientist should know..." -- these days, people who can reasonably comprehend Goldberg's paper usually are already well aware of it.

"This is one example of a ratio that can be expressed exactly in binary, but only approximately in decimal". This is not true. All of these 'number over a power of two' ratios are exact in decimal. Any approximation is only to shorten the decimal number -- for convenience.

language agnostic - Why Are Floating Point Numbers Inaccurate? - Stack...

language-agnostic floating-point floating-point-precision
Rectangle 27 144

Representing as a Fraction

In most programming languages, floating point numbers are represented a lot like scientific notation: with an exponent and a mantissa (also called the significand). A very simple number, say 9.2, is actually this fraction:

Where the exponent is -49 and the mantissa is 5179139571476070. The reason it is impossible to represent some decimal numbers this way is that both the exponent and the mantissa must be integers. In other words, all floats must be an integer multiplied by an integer power of 2.

9.2 may be simply 92/10, but 10 cannot be expressed as 2n if n is limited to integer values.

First, a few functions to see the components that make a 32- and 64-bit float. Gloss over these if you only care about the output (example in Python):

def float_to_bin_parts(number, bits=64):
    if bits == 32:          # single precision
        int_pack      = 'I'
        float_pack    = 'f'
        exponent_bits = 8
        mantissa_bits = 23
        exponent_bias = 127
    elif bits == 64:        # double precision. all python floats are this
        int_pack      = 'Q'
        float_pack    = 'd'
        exponent_bits = 11
        mantissa_bits = 52
        exponent_bias = 1023
    else:
        raise ValueError, 'bits argument must be 32 or 64'
    bin_iter = iter(bin(struct.unpack(int_pack, struct.pack(float_pack, number))[0])[2:].rjust(bits, '0'))
    return [''.join(islice(bin_iter, x)) for x in (1, exponent_bits, mantissa_bits)]

There's a lot of complexity behind that function, and it'd be quite the tangent to explain, but if you're interested, the important resource for our purposes is the struct module.

Python's float is a 64-bit, double-precision number. In other languages such as C, C++, Java and C#, double-precision has a separate type double, which is often implemented as 64 bits.

When we call that function with our example, 9.2, here's what we get:

>>> float_to_bin_parts(9.2)
['0', '10000000010', '0010011001100110011001100110011001100110011001100110']

You'll see I've split the return value into three components. These components are:

  • Mantissa (also called Significand, or Fraction)

The sign is stored in the first component as a single bit. It's easy to explain: 0 means the float is a positive number; 1 means it's negative. Because 9.2 is positive, our sign value is 0.

The exponent is stored in the middle component as 11 bits. In our case, 0b10000000010. In decimal, that represents the value 1026. A quirk of this component is that you must subtract a number equal to 2(# of bits) - 1 - 1 to get the true exponent; in our case, that means subtracting 0b1111111111 (decimal number 1023) to get the true exponent, 0b00000000011 (decimal number 3).

The mantissa is stored in the third component as 52 bits. However, there's a quirk to this component as well. To understand this quirk, consider a number in scientific notation, like this:

The mantissa would be the 6.0221413. Recall that the mantissa in scientific notation always begins with a single non-zero digit. The same holds true for binary, except that binary only has two digits: 0 and 1. So the binary mantissa always starts with 1! When a float is stored, the 1 at the front of the binary mantissa is omitted to save space; we have to place it back at the front of our third element to get the true mantissa:

This involves more than just a simple addition, because the bits stored in our third component actually represent the fractional part of the mantissa, to the right of the radix point.

When dealing with decimal numbers, we "move the decimal point" by multiplying or dividing by powers of 10. In binary, we can do the same thing by multiplying or dividing by powers of 2. Since our third element has 52 bits, we divide it by 2^52 to move it 52 places to the right:

In decimal notation, that's the same as dividing 675539944105574 by 4503599627370496 to get 0.1499999999999999. (This is one example of a ratio that can be expressed exactly in binary, but only approximately in decimal; for more detail, see: 675539944105574 / 4503599627370496.)
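A quick check in Python confirms those numbers, using the 52 mantissa bits shown in the output earlier:

```python
# The third component from the example output: the 52 mantissa bits of 9.2.
mantissa_bits = '0010011001100110011001100110011001100110011001100110'

numerator = int(mantissa_bits, 2)
print(numerator)           # 675539944105574
print(2 ** 52)             # 4503599627370496

fraction = numerator / 2 ** 52
print(fraction)            # 0.1499999999999999

# Adding the implicit leading 1 gives the true mantissa; multiplying by
# 2**3 (the true exponent) reconstructs the stored value of 9.2 exactly.
print((1 + fraction) * 2 ** 3 == 9.2)  # True
```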

Now that we've transformed the third component into a fractional number, adding 1 gives the true mantissa.

  • Sign (first component): 0 means positive, 1 means negative
  • Exponent (middle component): Subtract 2^(# of bits - 1) - 1 to get the true exponent
  • Mantissa (last component): Divide by 2^(# of bits) and add 1 to get the true mantissa

Putting all three parts together, we're given this binary number:

Which we can then convert from binary to decimal:

And multiply to reveal the final representation of the number we started with (9.2) after being stored as a floating point value:
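You can inspect that final stored value directly: converting the float to a decimal.Decimal prints the exact value the bits represent, with no rounding:

```python
from decimal import Decimal

# Decimal(float) captures the exact binary value the float stores,
# so it exposes the "real" 9.2.
print(Decimal(9.2))                # 9.1999999999999992894... (not exactly 9.2)
print(float(Decimal(9.2)) == 9.2)  # True: it round-trips to the same float
```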

Now that we've built the number, it's possible to reconstruct it into a simple fraction:

Shift mantissa to a whole number:

>>> float_to_bin_parts(9.5)
['0', '10000000010', '0011000000000000000000000000000000000000000000000000']

Already you can see the mantissa is only 4 digits followed by a whole lot of zeroes. But let's go through the paces.

Assemble the binary scientific notation:

There is also a nice tutorial that shows how to go the other way - given a decimal representation of a number, how do you construct the floating point equivalent. The "long division" approach shows very clearly how you end up with a "remainder" after trying to represent the number. Should be added if you want to be truly "canonical" with your answer.

If you're talking about Python and floating-point, I'd suggest at least including the Python tutorial in your links: docs.python.org/3.4/tutorial/floatingpoint.html That's supposed to be the one-stop go-to resource for floating-point issues for Python programmers. If it's lacking in some way (and it almost surely is), please do open an issue on the Python bug tracker for updates or changes.

This answer should definitely also link to floating-point-gui.de, as it's probably the best introduction for beginners. IMO, it should even go above "What every computer scientist should know..." -- these days, people who can reasonably comprehend Goldberg's paper usually are already well aware of it.

"This is one example of a ratio that can be expressed exactly in binary, but only approximately in decimal". This is not true. All of these 'number over a power of two' ratios are exact in decimal. Any approximation is only to shorten the decimal number -- for convenience.

language agnostic - Why Are Floating Point Numbers Inaccurate? - Stack...

language-agnostic floating-point floating-point-precision

Can also be called as

@Html.Partial("_PartialView", (ModelClass)View.Data)

This has the downside that it generates a temporary (and potentially large) MvcHtmlString on the fly, rather than just writing to the output directly.

Will this not work with RenderPartial?

I found that I needed to cast my model like this, even though my model wasn't declared as a dynamic. It was probably because my model was a list.

Render partial view with dynamic model in Razor view engine and ASP.NE...

asp.net-mvc razor asp.net-mvc-3

While LINQ-to-SQL will generate an OFFSET clause (possibly emulated using ROW_NUMBER() OVER() as others have mentioned), there is an entirely different, much faster way to perform paging in SQL. This is often called the "seek method" as described in this blog post here.

SELECT TOP 10 first_name, last_name, score
FROM players
WHERE (score < @previousScore)
   OR (score = @previousScore AND player_id < @previousPlayerId)
ORDER BY score DESC, player_id DESC

The @previousScore and @previousPlayerId values are the respective values of the last record from the previous page. This allows you to fetch the "next" page. If the ORDER BY direction is ASC, simply use > instead.

With the above method, you cannot immediately jump to page 4 without having first fetched the previous 40 records. But often, you do not want to jump that far anyway. Instead, you get a much faster query that may be able to fetch data in constant time, depending on your indexing. Plus, your pages remain "stable", regardless of whether the underlying data changes (e.g. rows inserted on page 1 while you're viewing page 4).

Note, the "seek method" is also called keyset paging.

sql - efficient way to implement paging - Stack Overflow

sql sql-server asp.net-mvc linq-to-sql pagination

Right-to-left parsing, also called bottom-up parsing, is actually efficient for the browser.

#menu ul li a { color: #00f; }

The browser first checks for a, then li, then ul, and then #menu.

This is because as the browser is scanning the page it just needs to look at the current element/node and all the previous nodes/elements that it has scanned.

The thing to note is that the browser starts processing the moment it gets a complete tag/node; it doesn't have to wait for the whole page, except when it finds a script, in which case it temporarily pauses, completes execution of the script, and then goes forward.

If it did it the other way round, it would be inefficient: the browser would find the element it was scanning on the first check, but would then be forced to continue looking through the document for all the additional selectors. For this the browser would need the entire HTML, and might have to scan the whole page before it starts CSS painting.

This is contrary to how most libraries parse the DOM. There the DOM is already constructed, so they don't need to scan the entire page; they just find the first element and then go on matching the others inside it.

html - Why do browsers match CSS selectors from right to left? - Stack...

html css browser css-selectors

@ - Instance variable of a class
@@ - Class variable, also called a static variable in some cases

A class variable is a variable that is shared amongst all instances of a class. This means that only one variable value exists for all objects instantiated from this class. If one object instance changes the value of the variable, that new value will essentially change for all other object instances.

Another way of thinking of class variables is as global variables within the context of a single class. Class variables are declared by prefixing the variable name with two @ characters (@@). Class variables must be initialized at creation time.
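A minimal sketch of the difference (the class and names here are invented for illustration):

```ruby
class Counter
  @@count = 0              # class variable: one slot shared by every instance

  def initialize(name)
    @name = name           # instance variable: each object gets its own
    @@count += 1           # every new instance bumps the same shared counter
  end

  def self.count
    @@count
  end
end

Counter.new("a")
Counter.new("b")
puts Counter.count   # 2 -- both instances changed the same @@count
```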

syntax - What does @@variable mean in Ruby? - Stack Overflow

ruby syntax instance-variables class-variables

While viewWillAppear() and viewDidDisappear() are called when the back button is tapped, they are also called at other times. See end of answer for more on that.

Detecting the back button is better done when the VC is removed from its parent (the NavigationController) with the help of willMoveToParentViewController(_:) OR didMoveToParentViewController()

If parent is nil, the view controller is being popped off the navigation stack and dismissed. If parent is not nil, it is being added to the stack and presented.

// Objective-C
-(void)willMoveToParentViewController:(UIViewController *)parent {
     [super willMoveToParentViewController:parent];
    if (!parent){
       // The back button was pressed or interactive gesture used
    }
}


// Swift
override func willMove(toParentViewController parent: UIViewController?) {
    super.willMove(toParentViewController:parent)
    if parent == nil {
        // The back button was pressed or interactive gesture used
    }
}

Swap out willMove for didMove and check self.parent to do work after the view controller is dismissed.

Do note, checking the parent doesn't allow you to "pause" the transition if you need to do some sort of async save. To do that you could implement the following. Only downside here is you lose the fancy iOS styled/animated back button. Also be careful here with the interactive swipe gesture. Use the following to handle this case.

var backButton : UIBarButtonItem!

override func viewDidLoad() {
    super.viewDidLoad()

     // Disable the swipe to make sure you get your chance to save
     self.navigationController?.interactivePopGestureRecognizer?.enabled = false

     // Replace the default back button
    self.navigationItem.setHidesBackButton(true, animated: false)
    self.backButton = UIBarButtonItem(title: "Back", style: UIBarButtonItemStyle.Plain, target: self, action: "goBack")
    self.navigationItem.leftBarButtonItem = backButton
}

// Then handle the button selection
func goBack() {
    // Here we just remove the back button, you could also disabled it or better yet show an activityIndicator
    self.navigationItem.leftBarButtonItem = nil
    someData.saveInBackground { (success, error) -> Void in
        if success {
            self.navigationController?.popViewControllerAnimated(true)
            // Don't forget to re-enable the interactive gesture
            self.navigationController?.interactivePopGestureRecognizer?.enabled = true
        }
        else {
            self.navigationItem.leftBarButtonItem = self.backButton
            // Handle the error
        }
    }
}

If you didn't get the viewWillAppear / viewDidDisappear issue, let's run through an example. Say you have three view controllers:

  • listVC
  • detailVC
  • settingsVC

Let's follow the calls on the detailVC as you go from the listVC to the settingsVC and back to the listVC:

Detail.viewDidAppear      (listVC pushes detailVC)
Detail.viewDidDisappear   (detailVC pushes settingsVC)
Detail.viewDidAppear      (settingsVC pops back to detailVC)
Detail.viewDidDisappear   (detailVC pops back to listVC)

Notice that viewDidDisappear is called multiple times, not only when going back, but also when going forward. For a quick operation that may be desired, but for a more complex operation like a network call to save, it may not.

Just a note, use didMoveToParentViewController: to do work when the view is no longer visible. Helpful for iOS 7 with the interactive gesture.

The parent parameter is nil when you are popping to the parent view controller, and non-nil when the view this method appears in is being shown. You can use that fact to do an action only when the Back button is pressed, and not when arriving at the view. That was, after all, the original question. :)

iphone - Detecting when the 'back' button is pressed on a navbar - Sta...

iphone objective-c ios xcode

A GUID is a "Globally Unique ID". Also called a UUID (Universally Unique ID).

It's basically a 128 bit number that is generated in a way (see RFC 4122 http://www.ietf.org/rfc/rfc4122.txt) that makes it nearly impossible for duplicates to be generated. This way, I can generate GUIDs without some third party organization having to give them to me to ensure they are unique.

One widespread use of GUIDs is as identifiers for COM entities on Windows (classes, typelibs, interfaces, etc.). Using GUIDs, developers could build their COM components without going to Microsoft to get a unique identifier. Even though identifying COM entities is a major use of GUIDs, they are used for many things that need unique identifiers. Some developers will generate GUIDs for database records to provide them an ID that can be used even when they must be unique across many different databases.

Generally, you can think of a GUID as a serial number that can be generated by anyone at anytime and they'll know that the serial number will be unique.
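In Python, for example, the standard uuid module can mint such identifiers locally, with no central authority involved (a sketch; the printed values differ on every run):

```python
import uuid

# Version 4 UUIDs are built from random bits; collisions are
# astronomically unlikely, which is why no registry is needed.
guid = uuid.uuid4()
print(guid)           # e.g. 3f2504e0-4f89-41d3-9a0c-0305e82c3301 (random)

# Version 1 UUIDs mix the clock with (historically) the MAC address --
# the privacy-sensitive generation method mentioned below.
print(uuid.uuid1())
```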

Other ways to get unique identifiers include getting a domain name. To ensure the uniqueness of domain names, you have to get it from some organization (ultimately administered by ICANN).

Because GUIDs can be unwieldy (from a human readable point of view they are a string of hexadecimal numbers, usually grouped like so: aaaaaaaa-bbbb-cccc-dddd-ffffffffffff), some namespaces that need unique names across different organization use another scheme (often based on Internet domain names).

So, the namespace for Java packages by convention starts with the organization's domain name (reversed) followed by names that are determined in some organization-specific way. For example, a Java package might be named:

com.example.jpackage

This means that dealing with name collisions becomes the responsibility of each organization.

XML namespaces are also made unique in a similar way - by convention, someone creating an XML namespace is supposed to make it 'underneath' a registered domain name under their control. For example:

xmlns="http://www.w3.org/1999/xhtml"

Another way that unique IDs have been managed is for Ethernet MAC addresses. A company that makes Ethernet cards has to get a block of addresses assigned to them by the IEEE (I think it's the IEEE). In this case the scheme has worked pretty well, and even if a manufacturer screws up and issues cards with duplicate MAC addresses, things will still work OK as long as those cards are not on the same subnet, since beyond a subnet, only the IP address is used to route packets. Although there are some other uses of MAC addresses that might be affected - one of the algorithms for generating GUIDs uses the MAC address as one parameter. This GUID generation method is not as widely used anymore because it is considered a privacy threat.

One example of a scheme to come up with unique identifiers that didn't work very well was the Microsoft provided ID's for 'VxD' drivers in Windows 9x. Developers of third party VxD drivers were supposed to ask Microsoft for a set of IDs to use for any drivers the third party wrote. This way, Microsoft could ensure there were not duplicate IDs. Unfortunately, many driver writers never bothered, and simply used whatever ID was in the example VxD they used as a starting point. I'm not sure how much trouble this caused - I don't think VxD ID uniqueness was absolutely necessary, but it probably affected some functionality in some APIs.

language agnostic - What exactly is GUID? Why and where I should use i...

language-agnostic guid

HSV (Hue / Saturation / Value), sometimes also called HSB (Hue / Saturation / Brightness), is just a different color representation. (It is not quite the same as HSL, Hue / Saturation / Lightness.)

Using this representation is it easier to adjust the brightness. So convert from RGB to HSV, brighten the 'V', then convert back to RGB.

/* floor() comes from math.h; MAXx / MINx are assumed to be plain max / min macros */
#include <math.h>
#define MAXx(a,b) ((a) > (b) ? (a) : (b))
#define MINx(a,b) ((a) < (b) ? (a) : (b))

void RGBToHSV(unsigned char cr, unsigned char cg, unsigned char cb, double *ph, double *ps, double *pv)
{
    double r, g, b;
    double max, min, delta;

    /* convert RGB to [0,1] */
    r = (double)cr / 255.0f;
    g = (double)cg / 255.0f;
    b = (double)cb / 255.0f;

    max = MAXx(r, (MAXx(g, b)));
    min = MINx(r, (MINx(g, b)));

    pv[0] = max;

    /* calculate saturation */
    if (max != 0.0)
        ps[0] = (max - min) / max;
    else
        ps[0] = 0.0;

    if (ps[0] == 0.0)
    {
        ph[0] = 0.0f;   /* UNDEFINED */
        return;
    }

    /* chromatic case: saturation is not 0, so determine hue */
    delta = max - min;

    if (r == max)
    {
        ph[0] = (g - b) / delta;
    }
    else if (g == max)
    {
        ph[0] = 2.0 + (b - r) / delta;
    }
    else if (b == max)
    {
        ph[0] = 4.0 + (r - g) / delta;
    }
    ph[0] = ph[0] * 60.0;
    if (ph[0] < 0.0)
        ph[0] += 360.0;
}

void HSVToRGB(double h, double s, double v, unsigned char *pr, unsigned char *pg, unsigned char *pb)
{
    int i;
    double f, p, q, t;
    double r, g, b;

    if (s == 0)
    {
        /* achromatic (grey) */
        r = g = b = v;
    }
    else
    {
        h /= 60;            /* sector 0 to 5 */
        i = (int)floor(h);
        f = h - i;          /* fractional part of h */
        p = v * (1 - s);
        q = v * (1 - s * f);
        t = v * (1 - s * (1 - f));
        switch (i)
        {
        case 0:
            r = v; g = t; b = p;
            break;
        case 1:
            r = q; g = v; b = p;
            break;
        case 2:
            r = p; g = v; b = t;
            break;
        case 3:
            r = p; g = q; b = v;
            break;
        case 4:
            r = t; g = p; b = v;
            break;
        default:            /* case 5 */
            r = v; g = p; b = q;
            break;
        }
    }

    r *= 255;
    g *= 255;
    b *= 255;

    pr[0] = (unsigned char)r;
    pg[0] = (unsigned char)g;
    pb[0] = (unsigned char)b;
}

But HSV is sometimes called HSB (brightness), which is still different from HSL, as @romkyns says.

@KPexEA you wrote "So convert from RGB to HSV, brighten the 'V', then convert back to RGB." How do I do the brightening-the-V portion? I need to generate n shades of the same color. The shades should be increasingly brighter.

@Geek Convert the color from RGB to HSV, then throw away the V. Then call HSVtoRGB with the HS values and iterate with increasing values of V to generate the different RGB shades. Or if you want to start with your initial value and just make them a bit brighter, just add a small amount to each V for each iteration instead of starting at 0.

@KPexEA so if the initial V=0.25 and I need 4 lighter shades starting the given color the values of V would be 0.25,0.5,0.75 and 1? Do these numbers always have to be in Arithmetic Progression(AP)?

c# - How do I determine darker or lighter color variant of a given col...

c# colors

  • One is also called CAMP, and is based on the same API.
  • Ponder is a partial rewrite, and should be preferred as it does not require Boost; it uses C++11.

CAMP is an MIT licensed library (formerly LGPL) that adds reflection to the C++ language. It doesn't require a specific preprocessing step in the compilation, but the binding has to be made manually.

The current Tegesoft library uses Boost, but there is also a fork using C++11 that no longer requires Boost.

Ponder

How can I add reflection to a C++ application? - Stack Overflow

c++ reflection templates sfinae

In Python 3.0, 5 / 2 will return 2.5 and 5 // 2 will return 2. The former is floating point division, and the latter is floor division, sometimes also called integer division.

In Python 2.2 or later in the 2.x line, there is no difference for integers unless you perform a from __future__ import division, which causes Python 2.x to adopt the behavior of 3.0

Regardless of the future import, 5.0 // 2 will return 2.0 since that's the floor division result of the operation.
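For example, in Python 3:

```python
print(5 / 2)     # 2.5  -- true (floating point) division
print(5 // 2)    # 2    -- floor division
print(5.0 // 2)  # 2.0  -- still the floor, but a float operand keeps it a float
print(-5 // 2)   # -3   -- floors toward negative infinity, not toward zero
```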

You can do 'from future import division' in python 2.5.

Now it's not correct that "In Python 2.5 or later, there is no difference for integers..." because 3.0 is later than 2.5 :)

python -Qnew
-Qold
-Qwarn
-Qwarnall

math - In Python 2, what is the difference between '/' and '//' when u...

python math syntax operators

  • a newer syntax known as SCSS (Sassy CSS), which uses braces and semicolons like CSS itself, with files using the .scss extension,
  • and an older, original indented syntax, which is the original Sass and is also called Sass.

So they are both part of Sass preprocessor with two different possible syntaxes.

The most important differences between SCSS and the original Sass:

  • Sass uses indentation instead of braces {} and semicolons.
  • The variable sign in Sass is ! instead of $.
  • The assignment sign in Sass is = instead of :.
  • Files using this syntax have the .sass extension.

These are two syntaxes available for Sass.

# Convert Sass to SCSS
$ sass-convert style.sass style.scss

# Convert SCSS to Sass
$ sass-convert style.scss style.sass

css - What's the difference between SCSS and Sass? - Stack Overflow

css sass

The viewWillLayoutSubviews method is also called after the view is resized and positioned by its parent.

Given viewWillLayoutSubviews is called whenever the bounds change on the controller's view, there's no guarantee that it'll be invoked once only. It'll be called whenever rotation occurs for example.

Your configureView method is probably better called from somewhere else, perhaps in viewWillAppear, viewDidAppear or even a custom mutator for BirdDetail item as per Hermann's suggestion.

objective c - Method called in -viewWillLayoutSubviews inexplicably ru...

objective-c xcode cocoa-touch