
That's because in Python 3, range() returns a lazy sequence object instead of a list, and functional-style builtins such as map() and filter() return iterators (reduce() was moved to functools). In Python 2 they all returned lists.

range() now behaves the way xrange() used to, except that it works with values of arbitrary size; xrange() itself no longer exists.

To convert an iterator (or a range object) to a list, you can use the list() function:

>>> list(range(5))
[0, 1, 2, 3, 4]
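Note that these iterators are single-use: once consumed, they are exhausted. A quick sketch:

```python
# map() returns a one-shot iterator in Python 3
squares = map(lambda n: n * n, [1, 2, 3])
first_pass = list(squares)   # consumes the iterator
second_pass = list(squares)  # already exhausted
print(first_pass, second_pass)  # [1, 4, 9] []
```

So call list() once and keep the result if you need to iterate more than once.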

python - Why Is the Output of My Range Function Not a List? - Stack Ov...

python python-3.x

Assuming Python 3 (in Python 2, this difference is a little less well-defined) - a string is a sequence of characters, ie unicode codepoints; these are an abstract concept, and can't be directly stored on disk. A byte string is a sequence of, unsurprisingly, bytes - things that can be stored on disk. The mapping between them is an encoding - there are quite a lot of these (and infinitely many are possible) - and you need to know which applies in the particular case in order to do the conversion, since a different encoding may map the same bytes to a different string:

>>> b'\xcf\x84o\xcf\x81\xce\xbdo\xcf\x82'.decode('utf-16')
'蓏콯캁澽苏'
>>> b'\xcf\x84o\xcf\x81\xce\xbdo\xcf\x82'.decode('utf-8')
'τoρνoς'

Once you know which one to use, you can use the .decode() method of the byte string to get the right character string from it as above. For completeness, the .encode() method of a character string goes the opposite way:

>>> 'τoρνoς'.encode('utf-8')
b'\xcf\x84o\xcf\x81\xce\xbdo\xcf\x82'

To clarify for Python 2 users: there, the str type is the same as the bytes type; this answer is equivalently comparing Python 2's unicode type (which no longer exists under that name in Python 3) to its str type.

To be technically correct, unicode is not the default encoding, rather the utf-8 encoding is the default character encoding to store unicode strings in memory.

@KshitijSaraogi that isn't quite true either; that whole sentence was edited in and is a bit unfortunate. The in-memory representation of Python 3 str objects is not accessible or relevant from the Python side; the data structure is just a sequence of codepoints. Under PEP 393, the exact internal encoding is one of Latin-1, UCS2 or UCS4, and a utf-8 representation may be cached after it is first requested, but even C code is discouraged from relying on these internal details.
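For the curious, the PEP 393 width choice is indirectly observable (CPython-specifically) through sys.getsizeof, since wider codepoints force a wider per-character storage; a small sketch:

```python
import sys

# CPython (PEP 393) stores each str with the narrowest element size that
# fits its widest codepoint: 1, 2, or 4 bytes per character.
latin = 'a' * 100            # all codepoints < 256   -> 1 byte each
bmp = '\u20ac' * 100         # EURO SIGN, < 0x10000   -> 2 bytes each
astral = '\U0001F600' * 100  # emoji, >= 0x10000      -> 4 bytes each

print(sys.getsizeof(latin) < sys.getsizeof(bmp) < sys.getsizeof(astral))  # True
```

This is an implementation detail of CPython, not part of the language; as noted above, code should not rely on it.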

If they can't be directly stored on disk, how are they stored in memory?

python - What is the difference between a string and a byte string? - ...

python string byte

You can either use itertools.izip_longest (Python 2.6+), or you can use map with None. This is a little-known feature of map (but map changed in Python 3.x, so this only works in Python 2.x).

>>> map(None, a, b, c)
[('a1', 'b1', 'c1'), (None, 'b2', 'c2'), (None, 'b3', None)]
>>> list(map(None, a, b, c))
Traceback (most recent call last):
  File "<pyshell#10>", line 1, in <module>
    list(map(None, a, b, c))
TypeError: 'NoneType' object is not callable

Do we not have a non itertools Python 3 solution?

@PascalvKooten it is not required. itertools is a builtin C module anyway.
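In Python 3 the same padding behaviour lives in itertools.zip_longest (renamed from izip_longest); a minimal sketch, assuming a, b, and c are lists of unequal length as in the question:

```python
from itertools import zip_longest

a = ['a1']
b = ['b1', 'b2', 'b3']
c = ['c1', 'c2']

# zip_longest pads shorter iterables with fillvalue (None by default)
rows = list(zip_longest(a, b, c))
print(rows)  # [('a1', 'b1', 'c1'), (None, 'b2', 'c2'), (None, 'b3', None)]
```

Pass fillvalue='' (or anything else) to pad with something other than None.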


list - Python: zip-like function that pads to longest length? - Stack ...

python list zip

You can read about the changes in What's New In Python 3.0. You should read it thoroughly when you move from 2.x to 3.x, since a lot has changed.

This whole answer consists of quotes from the documentation.

  • map() and filter() return iterators. If you really need a list, a quick fix is e.g. list(map(...)), but a better fix is often to use a list comprehension (especially when the original code uses lambda), or rewriting the code so it doesn't need a list at all. Particularly tricky is map() invoked for the side effects of the function; the correct transformation is to use a regular for loop (since creating a list would just be wasteful).
  • Removed reduce(). Use functools.reduce() if you really need it; however, 99 percent of the time an explicit for loop is more readable.
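To make the reduce bullet concrete, here is a minimal sketch of both spellings:

```python
from functools import reduce
import operator

nums = [1, 2, 3, 4, 5]

# functools.reduce version
product = reduce(operator.mul, nums, 1)

# the explicit for loop the docs recommend instead
loop_product = 1
for n in nums:
    loop_product *= n

print(product, loop_product)  # 120 120
```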

@FernandoPelliccioni: Can't be helped. It comes directly from the official documentation.

"99 percent of the time an explicit for loop is more readable."


How to use filter, map, and reduce in Python 3 - Stack Overflow

python python-3.x

There's a fork of multiprocessing called pathos (note: use the version on github) that doesn't need starmap -- the map functions mirror the API for python's map, thus map can take multiple arguments. With pathos, you can also generally do multiprocessing in the interpreter, instead of being stuck in the __main__ block. Pathos is due for a release, after some mild updating -- mostly conversion to python 3.x.

Python 2.7.5 (default, Sep 30 2013, 20:15:49) 
  [GCC 4.2.1 (Apple Inc. build 5566)] on darwin
  Type "help", "copyright", "credits" or "license" for more information.
  >>> def func(a,b):
  ...     print a,b
  ...
  >>>
  >>> from pathos.multiprocessing import ProcessingPool    
  >>> pool = ProcessingPool(nodes=4)
  >>> pool.map(func, [1,2,3], [1,1,1])
  1 1
  2 1
  3 1
  [None, None, None]
  >>>
  >>> # also can pickle stuff like lambdas 
  >>> result = pool.map(lambda x: x**2, range(10))
  >>> result
  [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
  >>>
  >>> # also does asynchronous map
  >>> result = pool.amap(pow, [1,2,3], [4,5,6])
  >>> result.get()
  [1, 32, 729]
  >>>
  >>> # or can return a map iterator
  >>> result = pool.imap(pow, [1,2,3], [4,5,6])
  >>> result
  <processing.pool.IMapIterator object at 0x110c2ffd0>
  >>> list(result)
  [1, 32, 729]
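If you'd rather stay in the standard library, Python 3.3+ added Pool.starmap for the multiple-argument case. This sketch uses the thread-backed multiprocessing.dummy pool (same API as the process pool) so it runs in any interpreter without a __main__ guard; add3 is just a made-up example function:

```python
from multiprocessing.dummy import Pool  # thread-based Pool with the same API

def add3(g, h, i):
    return g + h + i

with Pool(4) as pool:
    # starmap unpacks each argument tuple, like map over multiple iterables
    result = pool.starmap(add3, [(1, 4, 7), (2, 5, 8), (3, 6, 9)])

print(result)  # [12, 15, 18]
```

Swap in multiprocessing.Pool (inside an if __name__ == "__main__": block) for real process-based parallelism.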

Python multiprocessing pool.map for multiple arguments - Stack Overflo...

python multiprocessing

>>> x=['a','a','b','c','c','c']
>>> map(x.count,x)
[2, 2, 1, 3, 3, 3]
>>> dict(zip(x,map(x.count,x)))
{'a': 2, 'c': 3, 'b': 1}
>>>

The result looks pretty, but it has potential O(n^2) runtime behaviour.

But he never said anything about performance... this solves the problem, if it's too inefficient that's a different problem to solve.

That is right -- but did you ever realize what can happen when you run into the O(n^2) trap? I experienced it when a simple algorithm totally killed program performance, because it happened to run on a bigger dataset and the algorithm had that behaviour.

Relax. You're most likely writing O(n^3) and O(2^n) algorithms all the time, and never even notice them being executed. Solve those problems if they happen.
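To sidestep the O(n^2) behaviour discussed above, collections.Counter does the counting in a single pass:

```python
from collections import Counter

x = ['a', 'a', 'b', 'c', 'c', 'c']
counts = dict(Counter(x))  # one pass over x, O(n)
print(counts)  # {'a': 2, 'b': 1, 'c': 3}
```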

How to map one list to another in python? - Stack Overflow

python

There's a fork of multiprocessing called pathos (note: use the version on github) that doesn't need starmap or helpers or all of that other stuff -- the map functions mirror the API for python's map, thus map can take multiple arguments. With pathos, you can also generally do multiprocessing in the interpreter, instead of being stuck in the __main__ block. pathos is due for a release, after some mild updating -- mostly conversion to python 3.x.

Python 2.7.5 (default, Sep 30 2013, 20:15:49) 
  [GCC 4.2.1 (Apple Inc. build 5566)] on darwin
  Type "help", "copyright", "credits" or "license" for more information.
  >>> from pathos.multiprocessing import ProcessingPool    
  >>> pool = ProcessingPool(nodes=4)
  >>>
  >>> def func(g,h,i):
  ...   return g+h+i
  ... 
  >>> pool.map(func, [1,2,3],[4,5,6],[7,8,9])
  [12, 15, 18]
  >>>
  >>> # also can pickle stuff like lambdas 
  >>> result = pool.map(lambda x: x**2, range(10))
  >>> result
  [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
  >>>
  >>> # also does asynchronous map
  >>> result = pool.amap(pow, [1,2,3], [4,5,6])
  >>> result.get()
  [1, 32, 729]
  >>>
  >>> # or can return a map iterator
  >>> result = pool.imap(pow, [1,2,3], [4,5,6])
  >>> result
  <processing.pool.IMapIterator object at 0x110c2ffd0>
  >>> list(result)
  [1, 32, 729]

Multiprocessing with multiple arguments to function in Python 2.7 - St...

python python-2.7 multiprocessing main

dict(zip(['a','a','b','c','c','c'], [2, 2, 1, 3, 3, 3]))

How to map one list to another in python? - Stack Overflow

python

The functionality of map and filter was intentionally changed to return iterators, and reduce was removed from being a built-in and placed in functools.reduce.

So, for filter and map, you can wrap them with list() to see the results like you did before.

The recommendation now is that you replace your usage of map and filter with generator expressions or list comprehensions. Example:

>>> def f(x): return x % 2 != 0 and x % 3 != 0
...
>>> [i for i in range(2, 25) if f(i)]
[5, 7, 11, 13, 17, 19, 23]
>>> def cube(x): return x*x*x
...
>>> [cube(i) for i in range(1, 11)]
[1, 8, 27, 64, 125, 216, 343, 512, 729, 1000]
>>>

They say that for loops are 99 percent of the time easier to read than reduce, but I'd just stick with functools.reduce.

Edit: The 99 percent figure is pulled directly from the What's New In Python 3.0 page, authored by Guido van Rossum.

You do not need to create extra functions in list comprehensions. Just use [i*i*i for i in range(1,11)]

You are absolutely correct. I kept the function in the list comprehension examples to keep it looking similar to the filter/map examples.

i**3 is also equivalent to i*i*i

@Breezer actually i**3 will call i.__pow__(3) and i*i*i will call i.__mul__(i).__mul__(i) (or something like that). With ints it doesn't matter, but with numpy numbers/custom classes it might even produce different results.

How to use filter, map, and reduce in Python 3 - Stack Overflow

python python-3.x

As an addendum to the other answers, this sounds like a fine use-case for a context manager that re-maps the names of these functions to ones which return a list, and introduces reduce into the global namespace.

from contextlib import contextmanager    

@contextmanager
def noiters(*funcs):
    if not funcs: 
        funcs = [map, filter, zip] # etc
    from functools import reduce
    globals()[reduce.__name__] = reduce
    for func in funcs:
        globals()[func.__name__] = lambda *ar, func = func, **kwar: list(func(*ar, **kwar))
    try:
        yield
    finally:
        del globals()[reduce.__name__]
        for func in funcs: globals()[func.__name__] = func

with noiters(map):
    from operator import add
    print(reduce(add, range(1, 20)))
    print(map(int, ['1', '2']))
190
[1, 2]

How to use filter, map, and reduce in Python 3 - Stack Overflow

python python-3.x

I think this is the kind of thing you want - plotting lines of constant latitude on a 3d axis. I've explained what each section does in comments

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import itertools

#read in data from csv organised in columns labelled 'lat','lon','elevation'
data = np.recfromcsv('elevation-sample.csv', delimiter=',')

# create a 3d axis on a figure
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

# Find unique (i.e. constant) latitude points
id_list = np.unique(data['lat'])

# stride is how many lines to miss.  set to 1 to get every line
# higher to miss more
stride = 5

# Extract each line from the dataset and plot it on the axes
for id in id_list[::stride]:
    this_line_data = data[np.where(data['lat'] == id)]
    lat,lon,ele = zip(*this_line_data)
    ax.plot(lon,lat,ele, color='black')

# set the viewpoint so we're looking straight at the longitude (x) axis
ax.view_init(elev=45., azim=90)

ax.set_xlabel('Longitude')
ax.set_ylabel('Latitude')
ax.set_zlabel('Elevation')
ax.set_zlim([0,1500])

plt.show()

The data set I used to test is not mine, but I found it on github here.

This gives output as follows:

Note - you can swap latitude and longitude if I've misinterpreted the axis labels in your sketch.

Great! I remember seeing something like this on a t-shirt - can't find it now though :)

I also saw that and I think it was the topography map of the earth. But I cannot find it too.

Geographical data plot/map with lines in python and matplotlib - Stack...

python matplotlib plot geospatial geo

Are you thinking of a 3D plot similar to this? Possibly you could also do a cascade plot like this? The code for the last type of plot is something like this:

import numpy as np
from matplotlib.pyplot import gca

# Input parameters:
padding = 1    # Relative distance between plots
ax = gca()     # Matplotlib axes to plot in
spectra = np.random.rand(10, 100)     # Series of Y-data
x_data = np.arange(len(spectra[0]))   # X-data

# Figure out distance between plots:
max_value = 0
for spectrum in spectra:
    spectrum_yrange = (np.nanmax(spectrum) -
                       np.nanmin(spectrum))
    if spectrum_yrange > max_value:
        max_value = spectrum_yrange
# Plot the individual lines
for i, spectrum in enumerate(spectra):
    # Normalize the data to max_value
    data = (spectrum - spectrum.min()) / float(max_value)
    # Offset the individual lines
    data += i * padding
    ax.plot(x_data, data)

This one is close to the blog post I was searching for, but not quite. It gets the thing done, but before I accept it as correct I'd like to wait for other answers. Thanks though.

Geographical data plot/map with lines in python and matplotlib - Stack...

python matplotlib plot geospatial geo

In Python 3, the map() builtin returns an iterator rather than a list, behaving somewhat like the Python 2 itertools.imap() function.

If you need a list, you can simply pass that iterator to list(). For example:

>>> x = map(lambda x: x + 1, [1, 2, 3])
>>> x
<map object at 0x7f8571319b90>
>>> list(x)
[2, 3, 4]
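The map object is also lazy: the mapped function doesn't run until something consumes the iterator. A small sketch, using a made-up record() helper to make the calls visible:

```python
calls = []

def record(n):
    calls.append(n)
    return n + 1

m = map(record, [1, 2, 3])
print(calls)    # [] -- nothing has been computed yet
print(list(m))  # [2, 3, 4] -- consuming the iterator runs the function
print(calls)    # [1, 2, 3]
```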

using map in python 3 - Stack Overflow

python

>>> import itertools
>>> l = [[1,2],[3,4]]
>>> list(itertools.chain(*l))
[1, 2, 3, 4]
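itertools.chain.from_iterable does the same without unpacking the outer list (useful when it is itself a lazy iterable), and a nested list comprehension is an itertools-free equivalent:

```python
import itertools

l = [[1, 2], [3, 4]]

# takes the iterable of iterables directly, no * unpacking
flat = list(itertools.chain.from_iterable(l))

# itertools-free equivalent
flat2 = [x for sub in l for x in sub]

print(flat, flat2)  # [1, 2, 3, 4] [1, 2, 3, 4]
```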

map in python, that map multiple objects to each object in list - Stac...

python list-comprehension

''' printf( "... %.3g ... %.1f  ...", arg, arg ... ) for numpy arrays too

Example:
    printf( """ x: %.3g   A: %.1f   s: %s   B: %s """,
                   x,        A,        "str",  B )

If `x` and `A` are numbers, this is like `"format" % (x, A, "str", B)` in python.
If they're numpy arrays, each element is printed in its own format:
    `x`: e.g. [ 1.23 1.23e-6 ... ]  3 digits
    `A`: [ [ 1 digit after the decimal point ... ] ... ]
with the current `np.set_printoptions()`. For example, with
    np.set_printoptions( threshold=100, edgeitems=3, suppress=True )
only the edges of big `x` and `A` are printed.
`B` is printed as `str(B)`, for any `B` -- a number, a list, a numpy object ...

`printf()` tries to handle too few or too many arguments sensibly,
but this is iffy and subject to change.

How it works:
numpy has a function `np.array2string( A, "%.3g" )` (simplifying a bit).
`printf()` splits the format string, and for format / arg pairs
    format: % d e f g
    arg: try `np.asanyarray()`
-->  %s  np.array2string( arg, format )
Other formats and non-ndarray args are left alone, formatted as usual.

Notes:

`printf( ... end= file= )` are passed on to the python `print()` function.

Only formats `% [optional width . precision] d e f g` are implemented,
not `%(varname)format` .

%d truncates floats, e.g. 0.9 and -0.9 to 0; %.0f rounds, 0.9 to 1 .
%g is the same as %.6g, 6 digits.
%% is a single "%" character.

The function `sprintf()` returns a long string. For example,
    title = sprintf( "%s  m %g  n %g  X %.3g",
                    __file__, m, n, X )
    print( title )
    ...
    pl.title( title )

Module globals:
_fmt = "%.3g"  # default for extra args
_squeeze = np.squeeze  # (n,1) (1,n) -> (n,) print in 1 line not n

See also:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.set_printoptions.html
http://docs.python.org/2.7/library/stdtypes.html#string-formatting

'''
# http://stackoverflow.com/questions/2891790/pretty-printing-of-numpy-array


#...............................................................................
from __future__ import division, print_function
import re
import numpy as np

__version__ = "2014-02-03 feb denis"

_splitformat = re.compile( r'''(
    %
    (?<! %% )  # not %%
    -? [ \d . ]*  # optional width.precision
    \w
    )''', re.X )
    # ... %3.0f  ... %g  ... %-10s ...
    # -> ['...' '%3.0f' '...' '%g' '...' '%-10s' '...']
    # odd len, first or last may be ""

_fmt = "%.3g"  # default for extra args
_squeeze = np.squeeze  # (n,1) (1,n) -> (n,) print in 1 line not n

#...............................................................................
def printf( format, *args, **kwargs ):
    print( sprintf( format, *args ), **kwargs )  # end= file=

printf.__doc__ = __doc__


def sprintf( format, *args ):
    """ sprintf( "text %.3g text %4.1f ... %s ... ", numpy arrays or ... )
        %[defg] array -> np.array2string( formatter= )
    """
    args = list(args)
    if not isinstance( format, basestring ):
        args = [format] + args
        format = ""

    tf = _splitformat.split( format )  # [ text %e text %f ... ]
    nfmt = len(tf) // 2
    nargs = len(args)
    if nargs < nfmt:
        args += (nfmt - nargs) * ["?arg?"]
    elif nargs > nfmt:
        tf += (nargs - nfmt) * [_fmt, " "]  # default _fmt

    for j, arg in enumerate( args ):
        fmt = tf[ 2*j + 1 ]
        if arg is None \
        or isinstance( arg, basestring ) \
        or (hasattr( arg, "__iter__" ) and len(arg) == 0):
            tf[ 2*j + 1 ] = "%s"  # %f -> %s, not error
            continue
        args[j], isarray = _tonumpyarray(arg)
        if isarray  and fmt[-1] in "defgEFG":
            tf[ 2*j + 1 ] = "%s"
            fmtfunc = (lambda x: fmt % x)
            formatter = dict( float_kind=fmtfunc, int=fmtfunc )
            args[j] = np.array2string( args[j], formatter=formatter )
    try:
        return "".join(tf) % tuple(args)
    except TypeError:  # shouldn't happen
        print( "error: tf %s  types %s" % (tf, map( type, args )))
        raise


def _tonumpyarray( a ):
    """ a, isarray = _tonumpyarray( a )
        ->  scalar, False
            np.asanyarray(a), float or int
            a, False
    """
    a = getattr( a, "value", a )  # cvxpy
    if np.isscalar(a):
        return a, False
    if hasattr( a, "__iter__" )  and len(a) == 0:
        return a, False
    try:
        # map .value ?
        a = np.asanyarray( a )
    except ValueError:
        return a, False
    if hasattr( a, "dtype" )  and a.dtype.kind in "fi":  # complex ?
        if callable( _squeeze ):
            a = _squeeze( a )  # np.squeeze
        return a, True
    else:
        return a, False


#...............................................................................
if __name__ == "__main__":
    import sys

    n = 5
    seed = 0
        # run this.py n= ...  in sh or ipython
    for arg in sys.argv[1:]:
        exec( arg )
    np.set_printoptions( 1, threshold=4, edgeitems=2, linewidth=80, suppress=True )
    np.random.seed(seed)

    A = np.random.exponential( size=(n,n) ) ** 10
    x = A[0]

    printf( "x: %.3g  \nA: %.1f  \ns: %s  \nB: %s ",
                x,         A,         "str",   A )
    printf( "x %%d: %d", x )
    printf( "x %%.0f: %.0f", x )
    printf( "x %%.1e: %.1e", x )
    printf( "x %%g: %g", x )
    printf( "x %%s uses np printoptions: %s", x )

    printf( "x with default _fmt: ", x )
    printf( "no args" )
    printf( "too few args: %g %g", x )
    printf( x )
    printf( x, x )
    printf( None )
    printf( "[]:", [] )
    printf( "[3]:", [3] )
    printf( np.array( [] ))
    printf( [[]] )  # squeeze

python - Pretty-printing of numpy.array - Stack Overflow

python numpy pretty-print