
You need to read the file in chunks of suitable size:

import hashlib

def md5_for_file(f, block_size=2**20):
    md5 = hashlib.md5()
    while True:
        data = f.read(block_size)
        if not data:  # EOF: read() returns an empty value
            break
        md5.update(data)
    return md5.digest()

NOTE: Make sure you open your file with 'rb' passed to open() - otherwise you will get the wrong result.

import hashlib
import os

def generate_file_md5(rootdir, filename, blocksize=2**20):
    m = hashlib.md5()
    with open(os.path.join(rootdir, filename), "rb") as f:
        while True:
            buf = f.read(blocksize)
            if not buf:
                break
            m.update(buf)
    return m.hexdigest()

The update above was based on the comments provided by Frerich Raabe - I tested this and found it to be correct on my Python 2.7.2 Windows installation.

I cross-checked the results using the 'jacksum' tool.

jacksum -a md5 <filename>

What's important to notice is that the file which is passed to this function must be opened in binary mode, i.e. by passing 'rb' to the open function.

This is a simple addition, but using hexdigest instead of digest will produce a hexadecimal hash that "looks" like most examples of hashes.
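
For instance (using the well-known MD5 of b"hello"):

import hashlib

d = hashlib.md5(b"hello")
d.digest()     # b']A@*\xbcK*v\xb9q\x9d\x91\x10\x17\xc5\x92' (raw bytes)
d.hexdigest()  # '5d41402abc4b2a76b9719d911017c592' (printable hex)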

Erik, no, why would it be? The goal is to feed all bytes to MD5 until the end of the file. Getting a partial block does not mean the remaining bytes should not be fed to the checksum.

Get MD5 hash of big files in Python - Stack Overflow

python md5 hashlib

Break the file into 128-byte chunks and feed them to MD5 consecutively using update().

This exploits the fact that MD5 processes its input in fixed-size internal blocks (64 bytes, so 128-byte chunks are an even multiple). Basically, when MD5 digest()s the file, this is exactly what it is doing.

If you make sure you free the memory on each iteration (i.e. don't read the entire file into memory), this takes no more than roughly 128 bytes of memory at a time.

One example is to read the chunks like so:

with open(fileName, 'rb') as f:
    while True:
        data = f.read(128)
        if not data:  # EOF
            break
        # ...feed the 128-byte chunk to the hash, e.g. md5.update(data)...

Python is garbage-collected, so there's (usually) not really a need to worry about memory. Unless you explicitly keep references to all the strings you read from the file, Python will free and/or reuse memory as it sees fit.

@kjeitikor: If you read the entire file into e.g. a Python string, then Python won't have much of a choice. That's why "worrying" about memory makes total sense in this case, where the choice to read it in chunks must be made by the programmer.

You can just as effectively use a block size of any multiple of 128 (say 8192, 32768, etc.) and that will be much faster than reading 128 bytes at a time.

Thanks jmanning2k for this important note; a test on a 184 MB file takes (0m9.230s, 0m2.547s, 0m2.429s) using block sizes of (128, 8192, 32768). I will use 8192, as the higher value gives no noticeable effect.
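
A rough sketch for reproducing such a comparison yourself (big_file.bin is a placeholder path):

import hashlib
import time

def md5_with_block_size(path, block_size):
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(block_size), b''):
            md5.update(chunk)
    return md5.hexdigest()

for bs in (128, 8192, 32768):
    start = time.time()
    md5_with_block_size('big_file.bin', bs)  # hypothetical test file
    print(bs, time.time() - start)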

Get MD5 hash of big files in Python - Stack Overflow

python md5 hashlib

If you care about a more Pythonic (no 'while True') way of reading the file, check this code:

import hashlib

def checksum_md5(filename):
    md5 = hashlib.md5()
    with open(filename,'rb') as f: 
        for chunk in iter(lambda: f.read(8192), b''): 
            md5.update(chunk)
    return md5.digest()

Note that the iter() func needs an empty byte string for the returned iterator to halt at EOF, since read() returns b'' (not just '').
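
A tiny demonstration of the sentinel in action (example.bin is a placeholder file):

with open('example.bin', 'rb') as f:              # binary mode: read() returns bytes
    for chunk in iter(lambda: f.read(4), b''):    # so the sentinel must be b'', not ''
        print(chunk)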

Better still, use something like 128*md5.block_size instead of 8192.


The b'' syntax was new to me.

@ThorSummoner: Not really, but from my work finding optimum block sizes for flash memory, I'd suggest just picking a number like 32k, or something easily divisible by 4k, 8k, or 16k. For example, if your block size is 8k, reading 32k will be 4 reads at the correct block size. If it's 16k, then 2. But in each case, we're good because we happen to be reading an integer multiple number of blocks.

Get MD5 hash of big files in Python - Stack Overflow

python md5 hashlib

Your text file can have duplicates, which will overwrite existing keys in your dictionary (the Python name for a hash table). You can create a unique set of your keys, and then use a dictionary comprehension to populate the dictionary.

Given a sample_file.txt containing:

a
b
c
c

with open("sample_file.txt") as f:
    keys = set(line.strip() for line in f.readlines())
my_dict = {key: 1 for key in keys if key}

>>> my_dict
{'a': 1, 'b': 1, 'c': 1}

Here is an implementation with 1 million random 10-character alphabetic strings. The timing is relatively trivial at under half a second.

import string
import numpy as np

letter_map = {n: letter for n, letter in enumerate(string.ascii_lowercase, 1)}
long_alpha_list = ["".join([letter_map[number] for number in row]) + "\n" 
                   for row in np.random.random_integers(1, 26, (1000000, 10))]
>>> long_alpha_list[:5]
['mfeeidurfc\n',
 'njbfzpunzi\n',
 'yrazcjnegf\n',
 'wpuxpaqhhs\n',
 'fpncybprrn\n']

>>> len(long_alpha_list)
1000000

# Write list to file.
with open('sample_file.txt', 'wb') as f:
    f.writelines(long_alpha_list)

# Read them back into a dictionary per the method above.
with open("sample_file.txt") as f:
    keys = set(line.strip() for line in f.readlines())

>>> %%timeit -n 10
>>> my_dict = {key: 1 for key in keys if key}

10 loops, best of 3: 379 ms per loop

How create a hash table for large data in python? - Stack Overflow

python hash

import hashlib

def md5sum(filename):
    md5 = hashlib.md5()
    with open(filename, 'rb') as f:
        for chunk in iter(lambda: f.read(128 * md5.block_size), b''):
            md5.update(chunk)
    return md5.hexdigest()
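
Since hashlib.md5().block_size is 64, this reads the file in 8192-byte chunks. A usage sketch (the path is a placeholder):

print(md5sum('/path/to/big_file.iso'))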

Get MD5 hash of big files in Python - Stack Overflow

python md5 hashlib

Using multiple comments/answers in this thread, here is my solution:

import hashlib

def md5_for_file(path, block_size=256*128, hr=False):
    '''
    Block size directly depends on the block size of your filesystem
    to avoid performance issues.
    The default here, 256*128 = 32768 octets, is a multiple of the
    common 4096-octet filesystem block (the NTFS default).
    '''
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(block_size), b''):
            md5.update(chunk)
    if hr:
        return md5.hexdigest()
    return md5.digest()

This has been built by the community - thanks, all, for your advice and ideas.

One suggestion: make your md5 object an optional parameter of the function to allow alternate hashing functions, such as sha256 to easily replace MD5. I'll propose this as an edit, as well.

also: digest() is not human-readable. hexdigest() allows a more understandable, commonly recognizable output, as well as easier exchange of the hash

Other hash formats are out of the scope of the question, but the suggestion is relevant for a more generic function. I added a "human readable" option according to your second suggestion.

Get MD5 hash of big files in Python - Stack Overflow

python md5 hashlib

You're correct - the scrypt functions those two links are playing with are from the scrypt file encryption utility, not the underlying KDF. I've been slowly working on creating a standalone scrypt-based password hash for Python, and ran into this issue myself.

The scrypt file utility does the following: it picks scrypt's n/r/p parameters specific to your system and the "min time" parameter. It then generates a 32-byte salt and calls scrypt(n, r, p, salt, pwd) to create a 64-byte key. The binary string the tool returns is composed of: 1) a header containing the n, r, p values and the salt encoded in binary; 2) an SHA-256 checksum of the header; and 3) an HMAC-SHA256-signed copy of the checksum, using the first 32 bytes of the key. Following that, it uses the remaining 32 bytes of the key to AES-encrypt the input data.

There are a couple of implications of this that I can see:

All in all, what is needed is a nice hash format that can store scrypt, and an implementation that exposes the underlying kdf and parameter-choosing algorithm. I'm currently working on this myself for passlib, but it hasn't seen much attention :(
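
For reference, modern Python (3.6+, built against OpenSSL 1.1+) exposes the underlying KDF directly as hashlib.scrypt. A minimal sketch; the n/r/p values below are illustrative assumptions, not the utility's tuned parameters:

import hashlib
import os

salt = os.urandom(32)   # the scrypt file utility also generates a 32-byte salt
key = hashlib.scrypt(b'my password', salt=salt, n=2**14, r=8, p=1, dklen=64)
print(key.hex())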

Thanks. I timed some decrypts and they seem highly variable and time-consuming even for one-character strings. The other issues I could live with, but not knowing what value to put so that decryption won't return a "decrypting file would take too long" error makes it unusable for me. bcrypt looks a lot more friendly and will probably be fine for me.

How to use scrypt to generate hash for password and salt in Python - S...

python password-encryption scrypt

I wrote a piece of python code that verifies the hashes of downloaded files against what's in a .torrent file. Assuming you want to check a download for corruption you may find this useful.

You need the bencode package to use this. Bencode is the serialization format used in .torrent files. It can marshal lists, dictionaries, strings and numbers somewhat like JSON.

The code takes the hashes contained in the info['pieces'] string:

torrent_file = open(sys.argv[1], "rb")
metainfo = bencode.bdecode(torrent_file.read())
info = metainfo['info']
pieces = StringIO.StringIO(info['pieces'])

That string contains a succession of 20-byte hashes (one for each piece). These hashes are then compared with the hashes of the pieces of the on-disk file(s).

The only complicated part of this code is handling multi-file torrents because a single torrent piece can span more than one file (internally BitTorrent treats multi-file downloads as a single contiguous file). I'm using the generator function pieces_generator() to abstract that away.

You may want to read the BitTorrent spec to understand this in more details.

import sys, os, hashlib, StringIO, bencode

def pieces_generator(info):
    """Yield pieces from download file(s)."""
    piece_length = info['piece length']
    if 'files' in info: # yield pieces from a multi-file torrent
        piece = ""
        for file_info in info['files']:
            path = os.sep.join([info['name']] + file_info['path'])
            print path
            sfile = open(path.decode('UTF-8'), "rb")
            while True:
                piece += sfile.read(piece_length-len(piece))
                if len(piece) != piece_length:
                    sfile.close()
                    break
                yield piece
                piece = ""
        if piece != "":
            yield piece
    else: # yield pieces from a single file torrent
        path = info['name']
        print path
        sfile = open(path.decode('UTF-8'), "rb")
        while True:
            piece = sfile.read(piece_length)
            if not piece:
                sfile.close()
                return
            yield piece

def corruption_failure():
    """Display error message and exit"""
    print("download corrupted")
    exit(1)

def main():
    # Open torrent file
    torrent_file = open(sys.argv[1], "rb")
    metainfo = bencode.bdecode(torrent_file.read())
    info = metainfo['info']
    pieces = StringIO.StringIO(info['pieces'])
    # Iterate through pieces
    for piece in pieces_generator(info):
        # Compare piece hash with expected hash
        piece_hash = hashlib.sha1(piece).digest()
        if (piece_hash != pieces.read(20)):
            corruption_failure()
    # ensure we've read all pieces 
    if pieces.read():
        corruption_failure()

if __name__ == "__main__":
    main()

Don't know if this solved the OP's problem, but it definitely solved mine (once I got past the bencode package's brokenness: stackoverflow.com/questions/2693963/). Thanks!

I always wanted to have such a tool, and was about to dig into the old official python client to find out how to write one. Thanks!!

python - Extract the SHA1 hash from a torrent file - Stack Overflow

python hash extract sha1 bittorrent

To calculate a checksum (md5, sha1, etc.), you must open the file in binary mode, because you'll sum byte values. Using the io module:
import hashlib
import io


def md5sum(src):
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        content = fd.read()
        md5.update(content)
    return md5

If your files are big, you may prefer to read the file by chunks to avoid storing the whole file content in memory:

def md5sum(src, length=io.DEFAULT_BUFFER_SIZE):
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        for chunk in iter(lambda: fd.read(length), b''):
            md5.update(chunk)
    return md5

The trick here is to use the iter() function with a sentinel (the empty byte string, b'').

The iterator created in this case will call object [the lambda function] with no arguments for each call to its next() method; if the value returned is equal to sentinel, StopIteration will be raised, otherwise the value will be returned.

If your files are really big, you may also need to display progress information. You can do that by calling a callback function which prints or logs the amount of calculated bytes:

def md5sum(src, callback, length=io.DEFAULT_BUFFER_SIZE):
    calculated = 0
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        for chunk in iter(lambda: fd.read(length), b''):
            md5.update(chunk)
            calculated += len(chunk)
            callback(calculated)
    return md5
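
A minimal usage sketch for the callback variant (the file name is a placeholder):

def report(nbytes):
    print("%d bytes hashed" % nbytes)

checksum = md5sum("big_file.iso", report)
print(checksum.hexdigest())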

Get MD5 hash of big files in Python - Stack Overflow

python md5 hashlib

A remix of Bastien Semene's code that takes Hawkwing's comment about a generic hashing function into consideration...

def hash_for_file(path, algorithm=hashlib.algorithms[0], block_size=256*128, human_readable=True):
    """
    Block size directly depends on the block size of your filesystem
    to avoid performance issues.
    Filesystem blocks are typically 4096 octets (the NTFS default);
    the default here, 256*128 = 32768, is a multiple of that.

    Linux Ext4 block size
    sudo tune2fs -l /dev/sda5 | grep -i 'block size'
    > Block size:               4096

    Input:
        path: a path
        algorithm: an algorithm in hashlib.algorithms
                   ATM: ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
        block_size: a multiple of 128 corresponding to the block size of your filesystem
        human_readable: switch between digest() or hexdigest() output, default hexdigest()
    Output:
        hash
    """
    if algorithm not in hashlib.algorithms:
        raise NameError('The algorithm "{algorithm}" you specified is '
                        'not a member of "hashlib.algorithms"'.format(algorithm=algorithm))

    hash_algo = hashlib.new(algorithm)  # According to the hashlib documentation,
                                        # using new() will be slower than calling a
                                        # named constructor, e.g. hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(block_size), b''):
            hash_algo.update(chunk)
    if human_readable:
        file_hash = hash_algo.hexdigest()
    else:
        file_hash = hash_algo.digest()
    return file_hash
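
Usage sketch (paths are placeholders; the default algorithm is hashlib.algorithms[0], i.e. 'md5' on Python 2.7):

print(hash_for_file('/tmp/data.bin'))                      # md5, hex digest
print(hash_for_file('/tmp/data.bin', algorithm='sha512'))  # sha512, hex digest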

Get MD5 hash of big files in Python - Stack Overflow

python md5 hashlib

Here is how I've extracted the hash value from a torrent file:

#!/usr/bin/python

import sys, hashlib
import bencode


def main():
    # Open torrent file
    torrent_file = open(sys.argv[1], "rb")
    metainfo = bencode.bdecode(torrent_file.read())
    info = metainfo['info']
    # The info hash is the SHA1 of the bencoded 'info' dictionary
    print hashlib.sha1(bencode.bencode(info)).hexdigest()

if __name__ == "__main__":
    main()

It is the same as running the command:

transmissioncli -i test.torrent 2>/dev/null | grep "^hash:" | awk '{print $2}'

What that gives you is the info hash of the torrent.

+1 since that's exactly what I wanted to do when I visited a question about "extracting the SHA1 hash from a torrent file".

Nice small piece of code. bencode is not in the Debian/Ubuntu dist, so you have to pip install it; or, I find it easier to use the bzrlib.bencode module from python-bzrlib.

python - Extract the SHA1 hash from a torrent file - Stack Overflow

python hash extract sha1 bittorrent

No, the random value is assigned to the uc field of the _Py_HashSecret union, but this is never exposed to Python code. That's because the number of possible values is far greater than what setting PYTHONHASHSEED can produce.

When you don't set PYTHONHASHSEED or set it to random, Python generates a random 24-byte value to use as the seed. If you set PYTHONHASHSEED to an integer then that number is passed through a linear congruential generator to produce the actual seed (see the lcg_urandom() function). The problem is that PYTHONHASHSEED is limited to 4 bytes only. There are 256 ** 20 times more possible seed values than you could set via PYTHONHASHSEED alone.
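
As a rough illustration, a Python sketch of that expansion (the multiplier 214013 and increment 2531011 are, to the best of my knowledge, the constants CPython's lcg_urandom() uses; treat them as an assumption):

def lcg_urandom(x0, size):
    # Expand a 4-byte seed into `size` bytes with a linear congruential generator
    out = bytearray()
    x = x0 & 0xFFFFFFFF
    for _ in range(size):
        x = (x * 214013 + 2531011) & 0xFFFFFFFF   # modulo 2**32
        out.append((x >> 16) & 0xFF)
    return bytes(out)

# PYTHONHASHSEED=N would expand the 4-byte N into the 24-byte secret:
# seed_bytes = lcg_urandom(N, 24)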

You can access the internal hash value in the _Py_HashSecret struct using ctypes:

from ctypes import (
    c_size_t,
    c_ubyte,
    c_uint64,
    pythonapi,
    Structure,
    Union,
)


class FNV(Structure):
    _fields_ = [
        ('prefix', c_size_t),
        ('suffix', c_size_t)
    ]


class SIPHASH(Structure):
    _fields_ = [
        ('k0', c_uint64),
        ('k1', c_uint64),
    ]


class DJBX33A(Structure):
    _fields_ = [
        ('padding', c_ubyte * 16),
        ('suffix', c_size_t),
    ]


class EXPAT(Structure):
    _fields_ = [
        ('padding', c_ubyte * 16),
        ('hashsalt', c_size_t),
    ]


class _Py_HashSecret_t(Union):
    _fields_ = [
        # ensure 24 bytes
        ('uc', c_ubyte * 24),
        # two Py_hash_t for FNV
        ('fnv', FNV),
        # two uint64 for SipHash24
        ('siphash', SIPHASH),
        # a different (!) Py_hash_t for small string optimization
        ('djbx33a', DJBX33A),
        ('expat', EXPAT),
    ]


hashsecret = _Py_HashSecret_t.in_dll(pythonapi, '_Py_HashSecret')
hashseed = bytes(hashsecret.uc)

However, you can't actually do anything with this information. You can't set _Py_HashSecret.uc in a new Python process as doing so would break most dictionary keys set before you could do so from Python code (Python internals rely heavily on dictionaries), and your chances of the hash being equal to one of the 256**4 possible LCG values is vanishingly small.


Looking at the code, I don't think all possible states of uc even correspond to PYTHONHASHSEED values. uc is 24 bytes, while PYTHONHASHSEED is only 4. With no PYTHONHASHSEED, Python initializes uc in such a way that it's unlikely any PYTHONHASHSEED could produce the same result.

@user2357112 very good point. And you can't set the uc value either, as by the time you could do so from a Python program plenty of dictionary keys would have been hashed already. Setting the uc hash seed would invalidate anything that is not an interned string.

Great answer! Special thanks for the extra information on how PYTHONHASHSEED is actually used.

python - extract hash seed in unit testing - Stack Overflow

python python-3.x unit-testing hash

You can't get its MD5 without reading the full content, but you can use the update() function to feed the file's content block by block: m.update(a); m.update(b) is equivalent to m.update(a + b).
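
A quick check of that equivalence:

import hashlib

m1 = hashlib.md5()
m1.update(b"hello ")
m1.update(b"world")

m2 = hashlib.md5(b"hello world")

assert m1.hexdigest() == m2.hexdigest()   # incremental == one-shot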

Get MD5 hash of big files in Python - Stack Overflow

python md5 hashlib


<algorithm>$<iterations>$<salt>$<hash> Those are the components used for storing a User's password, separated by the dollar-sign character. They consist of: the hashing algorithm, the number of algorithm iterations (work factor), the random salt, and the resulting password hash. The algorithm is one of a number of one-way hashing or password storage algorithms Django can use; see below. Iterations describe the number of times the algorithm is run over the hash. Salt is the random seed used, and the hash is the result of the one-way function.

I installed the Bcrypted library in the settings.py file... What else do I need to do to use Bcrypt?

I'm not sure what that first sentence means. You need to put the following in settings.py: the PASSWORD_HASHERS tuple shown further down, with the bcrypt hashers listed first.

use Bcrypt to validate a password a user provides upon login against the hashed version stored in the database.

You can do that manually:

The django.contrib.auth.hashers module provides a set of functions to create and validate hashed passwords. You can use them independently from the User model.

check_password(password, encoded) If you'd like to manually authenticate a user by comparing a plain-text password to the hashed password in the database, use the convenience function check_password(). It takes two arguments: the plain-text password to check, and the full value of a user's password field in the database to check against, and returns True if they match, False otherwise.
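
A minimal sketch of that manual check (user and the candidate password are placeholders):

from django.contrib.auth.hashers import check_password

stored = user.password                    # full encoded hash from the database
if check_password('candidate password', stored):
    print("password matches")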

authenticate()

authenticate(**credentials) To authenticate a given username and password, use authenticate(). It takes credentials in the form of keyword arguments, for the default configuration this is username and password, and it returns a User object if the password is valid for the given username. If the password is invalid, authenticate() returns None. Example:

from django.contrib.auth import authenticate

user = authenticate(username='john', password='password to check')

if user is not None:
    # the password verified for the user
    if user.is_active:
        print("User is valid, active and authenticated")
    else:
        print("The password is valid, but the account has been disabled!")
else:
    # the authentication system was unable to verify the username and password
    print("The username and password were incorrect.")

Those are the defaults: there is no entry in my settings.py for PASSWORD_HASHERS.

PASSWORD_HASHERS = (
    'django.contrib.auth.hashers.BCryptSHA256PasswordHasher',
    'django.contrib.auth.hashers.BCryptPasswordHasher',
    'django.contrib.auth.hashers.PBKDF2PasswordHasher',
    'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
    'django.contrib.auth.hashers.SHA1PasswordHasher',
    'django.contrib.auth.hashers.MD5PasswordHasher',
    'django.contrib.auth.hashers.CryptPasswordHasher',
)

(django186p34)~/django_projects/dj1$ python manage.py shell

Python 3.4.3 (v3.4.3:9b73f1c3e601, Feb 23 2015, 02:52:03) 
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)

>>> from django.conf import settings
>>> print(settings.PASSWORD_HASHERS)
('django.contrib.auth.hashers.BCryptSHA256PasswordHasher',
 'django.contrib.auth.hashers.BCryptPasswordHasher',
 'django.contrib.auth.hashers.PBKDF2PasswordHasher',
 'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
 'django.contrib.auth.hashers.SHA1PasswordHasher',
 'django.contrib.auth.hashers.MD5PasswordHasher', 
 'django.contrib.auth.hashers.CryptPasswordHasher')

Note the bcrypt hashers at the front of the tuple.

>>> from django.contrib.auth.models import User

>>> user = User.objects.get(username='ea87')
>>> user
<User: ea87>

>>> user.password
'pbkdf2_sha256$20000$DS20ZOCWTBFN$AFfzg3iC24Pkj5UtEu3O+J8KOVBQvaLVx43D0Wsr4PY='

>>> user.set_password('666monkeysAndDogs777')
>>> user.password
'bcrypt_sha256$$2b$12$QeWvpi7hQ8cPQBF0LzD4C.89R81AV4PxK0kjVXG73fkLoQxYBundW'

You can see that the password has changed to a bcrypt version.

I upvoted, because a lot of the ideas helped me figure out what confused me. In short, for those confused in the future: the hashes produced by bcrypt (which is what PHP currently uses) are not at all in the form <algorithm>$<iterations>$<salt>$<hash> as stated in the docs, but as you can see at the end of the answer (only not with "_sha256"; at least that's how my PHP hashes were encrypted, then imported into Django! :))

python - How to use Bcrypt to encrypt passwords in Django - Stack Over...

python django encryption bcrypt

IMHO the __call__ method and closures give us a natural way to implement the STRATEGY design pattern in Python. We define a family of algorithms, encapsulate each one, make them interchangeable, and in the end we can execute a common set of steps and, for example, calculate a hash for a file.
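
A minimal sketch of that idea (the class and names are illustrative, not from any particular library):

import hashlib

class ChunkedHasher:
    """Strategy object: hash a file in chunks with a configurable algorithm."""
    def __init__(self, algorithm, chunk_size=8192):
        self.algorithm = algorithm
        self.chunk_size = chunk_size

    def __call__(self, path):
        h = hashlib.new(self.algorithm)
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(self.chunk_size), b''):
                h.update(chunk)
        return h.hexdigest()

# Interchangeable strategies, identical call sites:
md5sum = ChunkedHasher('md5')
sha1sum = ChunkedHasher('sha1')
# print(md5sum('some_file.bin'), sha1sum('some_file.bin'))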

Python __call__ special method practical example - Stack Overflow

python methods call magic-methods

You didn't specify to open the file in binary mode, so f.read() is trying to read the file as a UTF-8-encoded text file, which doesn't seem to be working. But since we take the hash of bytes, not of strings, it doesn't matter what the encoding is, or even whether the file is text at all: just open it, and then read it, as a binary file.

Opening the file in text mode trips over the UTF-8 decode error, but:

>>> with open("test.h5.bz2","rb") as f: print(hashlib.sha1(f.read()).hexdigest())
21bd89480061c80f347e34594e71c6943ca11325

After so many tryouts, it was that b.

python - 'utf-8' codec can't decode byte 0x80 - Stack Overflow

python utf-8 caffe

python -c "import SimpleHTTPServer;SimpleHTTPServer.test()"
Serving HTTP on 0.0.0.0 port 8000 ...
localhost - - [02/Jun/2009 12:48:47] code 404, message File not found
localhost - - [02/Jun/2009 12:48:47] "GET /hello?foo=bar HTTP/1.1" 404 -

The server receives the request without the # appendage: anything after the hash mark is simply an anchor lookup on the client.

You can find the anchor name used within the URL via JavaScript using, as an example:

<script>alert(window.location.hash);</script>

The parse_url() function in PHP can work if you already have the needed URL string including the fragment (http://codepad.org/BDqjtXix):

<?
echo parse_url("http://foo?bar#fizzbuzz",PHP_URL_FRAGMENT);
?>

Output: fizzbuzz

But I don't think PHP receives the fragment information because it's client-only.

http - Can I read the hash portion of the URL on my server-side applic...

http url web language-agnostic uri-fragment

import bcrypt
import hmac
from getpass import getpass

master_secret_key = getpass('tell me the master secret key you are going to use')
password = master_secret_key.encode('utf-8')   # bcrypt operates on bytes

# Calculating a hash
hashed = bcrypt.hashpw(password, bcrypt.gensalt())

# Validating a hash (don't use ==)
if hmac.compare_digest(bcrypt.hashpw(password, hashed), hashed):
    pass  # Login successful

Now that you have the salt and hashed password, you need to store them somewhere on disk. Wherever you do store them, you should set the file permissions to 600 (read/write by owner only). If you plan on not allowing password changes, then 400 is better.

import os
import stat

# Define file params
fname = '/tmp/myfile'
flags = os.O_WRONLY | os.O_CREAT | os.O_EXCL  # Refer to "man 2 open".
mode = stat.S_IRUSR | stat.S_IWUSR  # This is 0o600 in octal and 384 in decimal.

# For security, remove file with potentially elevated mode
try:
    os.remove(fname)
except OSError:
    pass

# Open file descriptor
umask_original = os.umask(0)
try:
    fdesc = os.open(fname, flags, mode)
finally:
    os.umask(umask_original)

# Open file handle and write to file
with os.fdopen(fdesc, 'w') as fout:
    fout.write('something\n')

is it secure for python use pickle file to store the username and pass...

python pickle