
You can treat your text file as a Python module and load it dynamically using imp.load_source (note that the imp module is deprecated in Python 3; importlib is its modern replacement):

import imp
imp.load_source(name, pathname[, file])
# mydata.txt
var1 = 'hi'
var2 = 'how are you?'
var3 = { 1:'elem1', 2:'elem2' }
# ...

# In your script file
def getVarFromFile(filename):
    import imp
    f = open(filename)
    global data
    data = imp.load_source('data', '', f)
    f.close()

# path to "config" file
getVarFromFile('c:/mydata.txt')
print data.var1
print data.var2
print data.var3
...

This is exactly the solution that I was looking for to provide an elegant solution to a gnarly problem. Thanks, Nick.
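One caveat: load_source executes the file, so anything in the "config" can run arbitrary code. If that is a concern, a safer sketch is to parse each name = value line with ast.literal_eval instead (the helper name below is just an illustration, not part of the original answer):

```python
import ast

def get_vars_from_file(filename):
    """Parse simple "name = value" lines without executing the file."""
    data = {}
    with open(filename) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#') or '=' not in line:
                continue  # skip blanks, comments and malformed lines
            name, _, value = line.partition('=')
            # literal_eval accepts strings, numbers, dicts, lists, tuples...
            # but refuses function calls and any other executable code
            data[name.strip()] = ast.literal_eval(value.strip())
    return data
```

With the mydata.txt above, this returns a plain dict instead of a module, e.g. get_vars_from_file('mydata.txt')['var1'] == 'hi'.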

Best way to retrieve variable values from a text file - Python - Json ...

python json variables text-files

An ASCII file is a text file, so you would use Readers for reading it. Java also supports reading from a binary file using InputStreams. If the files being read are huge, you would want to wrap a BufferedReader around a FileReader to improve read performance.

Go through this article on how to use a Reader

I'd also recommend you download and read this wonderful (yet free) book called Thinking In Java

Picking a Reader really depends on what you need the content of the file for. If the file is small(ish) and you need it all, it's faster (benchmarked by us: 1.8-2x) to just use a FileReader and read everything (or at least large enough chunks). If you're processing it line by line then go for the BufferedReader.

Will the line order be preserved when using Files.lines(..).forEach(...)? My understanding is that the order will be arbitrary after this operation.

Files.lines().forEach() does not preserve order of lines but is executed in parallel, @Dash. If the order is important, you can use Files.lines().forEachOrdered(), which should preserve the order (did not verify though).

@Palec this is interesting, but can you quote from the docs where it says that Files.lines(...).forEach(...) is executed in parallel? I thought this was only the case when you explicitly make the stream parallel using Files.lines(...).parallel().forEach(...).

My original formulation is not bulletproof, @KlitosKyriacou. The point is that forEach does not guarantee any order and the reason is easy parallelization. If order is to be preserved, use forEachOrdered.


Reading a plain text file in Java - Stack Overflow

java file-io ascii

If you want to read files on the client using HTML5's FileReader, you must use Firefox, Chrome or IE 10+. If so, the following example reads a text file on the client.

Your example attempts to use fopen, which I have never heard of (on the client).

document.getElementById('file').addEventListener('change', readFile, false);

function readFile(evt) {
    var files = evt.target.files;
    var file = files[0];
    var reader = new FileReader();
    reader.onload = function(event) {
        console.log(event.target.result);
    };
    reader.readAsText(file);
}

For IE<10 support you need to look into using an ActiveX object such as ADO Stream or Scripting.FileSystemObject http://msdn.microsoft.com/en-us/library/2z9ffy99(v=vs.85).aspx but you'll run into a security problem. If you run IE allowing all ActiveX objects (for your website), it should work.


Reading client side text file using Javascript - Stack Overflow

javascript javascript-events

You should specify in your question that you want client-side file reading, as I see a lot of answers referring to server-side reading.

You should have a look at the File API - an HTML5 JavaScript addition that allows JavaScript to read file content via the file input.

I am working on a code example for you - but here is a good site you should read.

Without the File API, you can still use the file input field in a form with target="some iframe", then let the server upload the file and return the text. (FormData allows uploading files via Ajax, but it is not supported in all browsers.)

So the File API is the way to go. Here is how you do it with the File API:

<input type="file"/>
<script>
$(function(){
    $("input").change(function(e){
        console.log(["file changed", e]);
        var myFile = e.target.files[0];
        var reader = new FileReader();
        reader.onload = function(e){
            console.log(["this is the contents of the file", e.target.result]);
        };
        reader.readAsText(myFile);
    });
});
</script>

You can also implement a drag/drop interface (like Google's Gmail has):

$("div").on("dragover", function(e){
    e.dataTransfer = e.originalEvent.dataTransfer;
    e.stopPropagation();
    e.preventDefault();
    e.dataTransfer.dropEffect = 'copy'; // Explicitly show this is a copy.
}).on("drop", function(e){
    e.dataTransfer = e.originalEvent.dataTransfer;
    e.stopPropagation();
    e.preventDefault();
    console.log(["selected files", e.dataTransfer.files]);
});

Reading values from text file using javascript - Stack Overflow

javascript file onclick text-files

The most efficient way of doing what you want to do is to read directly from the text file using textscan:

If the formatting in the text files is the same, you can read from one file at a time, do your processing, then change the name and run again.

You can make the process more automated by dynamically changing the name of the file from which data is read, in a loop around your main program. But how you do this depends on the names of the text files.
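For comparison only (the question itself is about MATLAB's textscan), the same "change the filename in a loop" idea can be sketched in Python; the data_*.txt pattern and function name are made-up examples:

```python
import glob

def process_all(pattern):
    """Read each matching text file in turn, collecting its lines."""
    results = {}
    for path in sorted(glob.glob(pattern)):  # e.g. 'data_*.txt'
        with open(path) as f:
            results[path] = f.read().splitlines()
    return results
```

Sorting the glob results makes the processing order deterministic, which matters if later files depend on earlier ones.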

import - importing variables from one file to another MATLAB - Stack O...

matlab import

If your text file, which is a JSON string, is going to be read by some program, how difficult would it be to strip out either C or C++ style comments before using it?

Answer: It would be a one liner. If you do that then JSON files could be used as configuration files.

Probably the best suggestion so far, though still an issue for keeping files as an interchange format, as they need pre-processing before use.

I agree and have written a JSON parser in Java, available at www.SoftwareMonkey.org, that does exactly that.

Although I think it is not a good idea to extend JSON (without calling it a different exchange format), if you do, make sure to ignore "comments" within strings: { "foo": "/* This is not a comment. */" }

"...would be a one liner" umm, no, actually, JSON is not a regular grammar where a regular expression can simply find matching pairs of /*. You have to parse the file to find if a /* appears inside a string (and ignore it), or if it's escaped (and ignore it), etc. Also, your answer is unhelpful because you simply speculate (incorrectly) rather than providing any solution.

What @kyle-simpson said. Also, he's too modest to direct readers to his own answer about using JSON.minify as an alternative to ad hoc regexps. Do that, not this.
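As the comments above point out, a bare regex won't do it, because /* can appear inside string literals; you need a small state machine that tracks whether you are inside a string. A minimal Python sketch of the idea (illustrative only, not JSON.minify itself):

```python
def strip_json_comments(text):
    """Remove // and /* */ comments, but leave them alone inside strings."""
    out = []
    i, n = 0, len(text)
    in_string = False
    while i < n:
        c = text[i]
        if in_string:
            out.append(c)
            if c == '\\' and i + 1 < n:   # keep escaped chars, e.g. \"
                out.append(text[i + 1])
                i += 1
            elif c == '"':
                in_string = False
        elif c == '"':
            in_string = True
            out.append(c)
        elif text.startswith('//', i):
            i = text.find('\n', i)        # drop to end of line
            if i == -1:
                break
            continue
        elif text.startswith('/*', i):
            end = text.find('*/', i + 2)  # drop to closing */
            i = n if end == -1 else end + 2
            continue
        else:
            out.append(c)
        i += 1
    return ''.join(out)
```

Run before json.loads, this lets a commented file parse while a string like "/* not a comment */" survives untouched.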

Can comments be used in JSON? - Stack Overflow

json comments

Remember, if you have a string which was read as a line from a text file using the fgets() function, the string still ends with the line-ending characters. With CRLF ("\r\n") endings you need substr($string, -3, 1) to get the actual last character rather than part of the CRLF; with LF-only ("\n") endings it would be substr($string, -2, 1). Safer still is to rtrim() the line first and then take substr($string, -1).

I don't think the person who asked the question needed this, but for me, I was having trouble getting that last character from a string from a text file so I'm sure others will come across similar problems.
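The same pitfall exists in other languages: lines read from a file usually keep their trailing newline. A small Python illustration (the helper name is made up):

```python
def last_char(line):
    """Last visible character of a line, ignoring any trailing CR/LF."""
    stripped = line.rstrip('\r\n')
    return stripped[-1] if stripped else ''

# Works the same for Unix (LF) and Windows (CRLF) line endings:
# last_char('hello\n') and last_char('hello\r\n') both give 'o'
```

Stripping the line ending first avoids hard-coding an offset that only works for one platform's endings.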

how to get the last char of a string in PHP? - Stack Overflow

php string

I have been using the OleDb provider. However, it has problems if you are reading rows with values that look numeric but that you want treated as text. You can get around that issue by creating a schema.ini file. Here is the method I used:

// using System.Data;
// using System.Data.OleDb;
// using System.Globalization;
// using System.IO;

static DataTable GetDataTableFromCsv(string path, bool isFirstRowHeader)
{
    string header = isFirstRowHeader ? "Yes" : "No";

    string pathOnly = Path.GetDirectoryName(path);
    string fileName = Path.GetFileName(path);

    string sql = @"SELECT * FROM [" + fileName + "]";

    using(OleDbConnection connection = new OleDbConnection(
              @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + pathOnly + 
              ";Extended Properties=\"Text;HDR=" + header + "\""))
    using(OleDbCommand command = new OleDbCommand(sql, connection))
    using(OleDbDataAdapter adapter = new OleDbDataAdapter(command))
    {
        DataTable dataTable = new DataTable();
        dataTable.Locale = CultureInfo.CurrentCulture;
        adapter.Fill(dataTable);
        return dataTable;
    }
}

Thanks buddy. That helped for me. I had a CSV file in which commas weren't only separators, they were everywhere inside many columns values, so coming up with a regex that would split the line was kinda challenging. The OleDbProvider inferred the schema correctly.

The implementation makes sense but how do we deal with cells containing mixed data types. For example, 40C and etc.?

GKED, if the data you are reading in always has an expected set of columns and types, you can place in the same folder a schema.ini file that tells the OleDb provider information about the columns. Here is a link to a Microsoft article that provides details of how to structure the file. msdn.microsoft.com/en-us/library/

While this answer will work, I would strongly advise against it. You introduce an external dependency which may conflict with other installations of office on the same machine (use Excel on your local dev environment?), dependent on versions installed. There are NuGet packages out there (ExcelDataReader, CsvHelper) that do this in more efficient, more portable ways.

@A.Murray - What exactly do you mean? This uses the built in OleDb provider in System.Data.dll. You don't need to install any additional "drivers". And I'd be shocked in this day and age if any windows installation didn't have the basic Jet driver installed. This is 1990's CSV....

c# - How to read a CSV file into a .NET Datatable - Stack Overflow

c# .net csv datatable

//Find the directory for the SD Card using the API
//*Don't* hardcode "/sdcard"
File sdcard = Environment.getExternalStorageDirectory();

//Get the text file
File file = new File(sdcard,"file.txt");

//Read text from file
StringBuilder text = new StringBuilder();

try {
    BufferedReader br = new BufferedReader(new FileReader(file));
    String line;

    while ((line = br.readLine()) != null) {
        text.append(line);
        text.append('\n');
    }
    br.close();
}
catch (IOException e) {
    //You'll need to add proper error handling here
}

//Find the view by its id
TextView tv = (TextView)findViewById(R.id.text_view);

//Set the text
tv.setText(text.toString());

The following links can also help you:

Your link helped me achieve this.

while ((line = br.readLine()) != null) {
    if (line.length() > 0) {
        // do your stuff
    }
}

close() should be done in a finally block.

@Shruti how to add the file into SD card

java - How can I read a text file in Android? - Stack Overflow

java android exception-handling inputstream

Reading text files backwards is really tricky unless you're using a fixed-size encoding (e.g. ASCII). When you've got variable-size encoding (such as UTF-8) you will keep having to check whether you're in the middle of a character or not when you fetch data.

There's nothing built into the framework, and I suspect you'd have to do separate hard coding for each variable-width encoding.
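For the simpler fixed-width case the idea is easy to see. Here is a rough sketch (in Python, just to show the approach) that assumes a single-byte encoding such as ASCII, so any byte boundary is a character boundary: read chunks from the end of the file, split on newlines, and carry the partial first line over to the next (earlier) chunk.

```python
import os

def lines_reversed(path, chunk_size=4096, encoding='ascii'):
    """Yield lines of a single-byte-encoded file from last to first."""
    with open(path, 'rb') as f:
        f.seek(0, os.SEEK_END)
        pos = f.tell()
        tail = b''  # partial line carried between chunks
        while pos > 0:
            size = min(chunk_size, pos)
            pos -= size
            f.seek(pos)
            chunk = f.read(size) + tail
            pieces = chunk.split(b'\n')
            tail = pieces[0]          # may be incomplete; keep for next round
            for piece in reversed(pieces[1:]):
                yield piece.rstrip(b'\r').decode(encoding)
        yield tail.rstrip(b'\r').decode(encoding)
```

The C# implementation below does the same chunk-from-the-end dance, but additionally has to find character boundaries for variable-width encodings, which is where most of its complexity lives.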

EDIT: This has been somewhat tested - but that's not to say it doesn't still have some subtle bugs around. It uses StreamUtil from MiscUtil, but I've included just the necessary (new) method from there at the bottom. Oh, and it needs refactoring - there's one pretty hefty method, as you'll see:

using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Text;

namespace MiscUtil.IO
{
    /// <summary>
    /// Takes an encoding (defaulting to UTF-8) and a function which produces a seekable stream
    /// (or a filename for convenience) and yields lines from the end of the stream backwards.
    /// Only single byte encodings, and UTF-8 and Unicode, are supported. The stream
    /// returned by the function must be seekable.
    /// </summary>
    public sealed class ReverseLineReader : IEnumerable<string>
    {
        /// <summary>
        /// Buffer size to use by default. Classes with internal access can specify
        /// a different buffer size - this is useful for testing.
        /// </summary>
        private const int DefaultBufferSize = 4096;

        /// <summary>
        /// Means of creating a Stream to read from.
        /// </summary>
        private readonly Func<Stream> streamSource;

        /// <summary>
        /// Encoding to use when converting bytes to text
        /// </summary>
        private readonly Encoding encoding;

        /// <summary>
        /// Size of buffer (in bytes) to read each time we read from the
        /// stream. This must be at least as big as the maximum number of
        /// bytes for a single character.
        /// </summary>
        private readonly int bufferSize;

        /// <summary>
        /// Function which, when given a position within a file and a byte, states whether
        /// or not the byte represents the start of a character.
        /// </summary>
        private Func<long,byte,bool> characterStartDetector;

        /// <summary>
        /// Creates a LineReader from a stream source. The delegate is only
        /// called when the enumerator is fetched. UTF-8 is used to decode
        /// the stream into text.
        /// </summary>
        /// <param name="streamSource">Data source</param>
        public ReverseLineReader(Func<Stream> streamSource)
            : this(streamSource, Encoding.UTF8)
        {
        }

        /// <summary>
        /// Creates a LineReader from a filename. The file is only opened
        /// (or even checked for existence) when the enumerator is fetched.
        /// UTF8 is used to decode the file into text.
        /// </summary>
        /// <param name="filename">File to read from</param>
        public ReverseLineReader(string filename)
            : this(filename, Encoding.UTF8)
        {
        }

        /// <summary>
        /// Creates a LineReader from a filename. The file is only opened
        /// (or even checked for existence) when the enumerator is fetched.
        /// </summary>
        /// <param name="filename">File to read from</param>
        /// <param name="encoding">Encoding to use to decode the file into text</param>
        public ReverseLineReader(string filename, Encoding encoding)
            : this(() => File.OpenRead(filename), encoding)
        {
        }

        /// <summary>
        /// Creates a LineReader from a stream source. The delegate is only
        /// called when the enumerator is fetched.
        /// </summary>
        /// <param name="streamSource">Data source</param>
        /// <param name="encoding">Encoding to use to decode the stream into text</param>
        public ReverseLineReader(Func<Stream> streamSource, Encoding encoding)
            : this(streamSource, encoding, DefaultBufferSize)
        {
        }

        internal ReverseLineReader(Func<Stream> streamSource, Encoding encoding, int bufferSize)
        {
            this.streamSource = streamSource;
            this.encoding = encoding;
            this.bufferSize = bufferSize;
            if (encoding.IsSingleByte)
            {
                // For a single byte encoding, every byte is the start (and end) of a character
                characterStartDetector = (pos, data) => true;
            }
            else if (encoding is UnicodeEncoding)
            {
                // For UTF-16, even-numbered positions are the start of a character.
                // TODO: This assumes no surrogate pairs. More work required
                // to handle that.
                characterStartDetector = (pos, data) => (pos & 1) == 0;
            }
            else if (encoding is UTF8Encoding)
            {
                // For UTF-8, bytes with the top bit clear or the second bit set are the start of a character
                // See http://www.cl.cam.ac.uk/~mgk25/unicode.html
                characterStartDetector = (pos, data) => (data & 0x80) == 0 || (data & 0x40) != 0;
            }
            else
            {
                throw new ArgumentException("Only single byte, UTF-8 and Unicode encodings are permitted");
            }
        }

        /// <summary>
        /// Returns the enumerator reading strings backwards. If this method discovers that
        /// the returned stream is either unreadable or unseekable, a NotSupportedException is thrown.
        /// </summary>
        public IEnumerator<string> GetEnumerator()
        {
            Stream stream = streamSource();
            if (!stream.CanSeek)
            {
                stream.Dispose();
                throw new NotSupportedException("Unable to seek within stream");
            }
            if (!stream.CanRead)
            {
                stream.Dispose();
                throw new NotSupportedException("Unable to read within stream");
            }
            return GetEnumeratorImpl(stream);
        }

        private IEnumerator<string> GetEnumeratorImpl(Stream stream)
        {
            try
            {
                long position = stream.Length;

                if (encoding is UnicodeEncoding && (position & 1) != 0)
                {
                    throw new InvalidDataException("UTF-16 encoding provided, but stream has odd length.");
                }

                // Allow up to two bytes for data from the start of the previous
                // read which didn't quite make it as full characters
                byte[] buffer = new byte[bufferSize + 2];
                char[] charBuffer = new char[encoding.GetMaxCharCount(buffer.Length)];
                int leftOverData = 0;
                String previousEnd = null;
                // TextReader doesn't return an empty string if there's line break at the end
                // of the data. Therefore we don't return an empty string if it's our *first*
                // return.
                bool firstYield = true;

                // A line-feed at the start of the previous buffer means we need to swallow
                // the carriage-return at the end of this buffer - hence this needs declaring
                // way up here!
                bool swallowCarriageReturn = false;

                while (position > 0)
                {
                    int bytesToRead = Math.Min(position > int.MaxValue ? bufferSize : (int)position, bufferSize);

                    position -= bytesToRead;
                    stream.Position = position;
                    StreamUtil.ReadExactly(stream, buffer, bytesToRead);
                    // If we haven't read a full buffer, but we had bytes left
                    // over from before, copy them to the end of the buffer
                    if (leftOverData > 0 && bytesToRead != bufferSize)
                    {
                        // Buffer.BlockCopy doesn't document its behaviour with respect
                        // to overlapping data: we *might* just have read 7 bytes instead of
                        // 8, and have two bytes to copy...
                        Array.Copy(buffer, bufferSize, buffer, bytesToRead, leftOverData);
                    }
                    // We've now *effectively* read this much data.
                    bytesToRead += leftOverData;

                    int firstCharPosition = 0;
                    while (!characterStartDetector(position + firstCharPosition, buffer[firstCharPosition]))
                    {
                        firstCharPosition++;
                        // Bad UTF-8 sequences could trigger this. For UTF-8 we should always
                        // see a valid character start in every 3 bytes, and if this is the start of the file
                        // so we've done a short read, we should have the character start
                        // somewhere in the usable buffer.
                        if (firstCharPosition == 3 || firstCharPosition == bytesToRead)
                        {
                            throw new InvalidDataException("Invalid UTF-8 data");
                        }
                    }
                    leftOverData = firstCharPosition;

                    int charsRead = encoding.GetChars(buffer, firstCharPosition, bytesToRead - firstCharPosition, charBuffer, 0);
                    int endExclusive = charsRead;

                    for (int i = charsRead - 1; i >= 0; i--)
                    {
                        char lookingAt = charBuffer[i];
                        if (swallowCarriageReturn)
                        {
                            swallowCarriageReturn = false;
                            if (lookingAt == '\r')
                            {
                                endExclusive--;
                                continue;
                            }
                        }
                        // Anything non-line-breaking, just keep looking backwards
                        if (lookingAt != '\n' && lookingAt != '\r')
                        {
                            continue;
                        }
                        // End of CRLF? Swallow the preceding CR
                        if (lookingAt == '\n')
                        {
                            swallowCarriageReturn = true;
                        }
                        int start = i + 1;
                        string bufferContents = new string(charBuffer, start, endExclusive - start);
                        endExclusive = i;
                        string stringToYield = previousEnd == null ? bufferContents : bufferContents + previousEnd;
                        if (!firstYield || stringToYield.Length != 0)
                        {
                            yield return stringToYield;
                        }
                        firstYield = false;
                        previousEnd = null;
                    }

                    previousEnd = endExclusive == 0 ? null : (new string(charBuffer, 0, endExclusive) + previousEnd);

                    // If we didn't decode the start of the array, put it at the end for next time
                    if (leftOverData != 0)
                    {
                        Buffer.BlockCopy(buffer, 0, buffer, bufferSize, leftOverData);
                    }
                }
                if (leftOverData != 0)
                {
                    // At the start of the final buffer, we had the end of another character.
                    throw new InvalidDataException("Invalid UTF-8 data at start of stream");
                }
                if (firstYield && string.IsNullOrEmpty(previousEnd))
                {
                    yield break;
                }
                yield return previousEnd ?? "";
            }
            finally
            {
                stream.Dispose();
            }
        }

        IEnumerator IEnumerable.GetEnumerator()
        {
            return GetEnumerator();
        }
    }
}


// StreamUtil.cs:
public static class StreamUtil
{
    public static void ReadExactly(Stream input, byte[] buffer, int bytesToRead)
    {
        int index = 0;
        while (index < bytesToRead)
        {
            int read = input.Read(buffer, index, bytesToRead - index);
            if (read == 0)
            {
                throw new EndOfStreamException
                    (String.Format("End of stream reached with {0} byte{1} left to read.",
                                   bytesToRead - index,
                                   bytesToRead - index == 1 ? "" : "s"));
            }
            index += read;
        }
    }
}

Feedback very welcome. This was fun :)

Righto - I'm planning to start in about an hour. I should be able to support single-byte encodings, Encoding.Unicode, and Encoding.UTF8. Other double-byte encodings won't be supported. I'm expecting testing to be a pain :(

wow! I know it's more than three years old, but this piece of code rocks! Thanks!! (p.s. I just replaced File.OpenRead(filename) with File.Open(filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite) to let the iterator read already-opened files.)

@GrimaceofDespair: More "because then I would have had to design for inheritance, which adds a very significant cost in terms of both design time and future flexibility". Often it's not even clear how inheritance could sensibly be used for a type - better to prohibit it until that clarity has been found, IMO.
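For readers who only want the shape of the algorithm rather than the full C# class, the same end-to-front buffering idea can be sketched in Python. This is a simplified illustration, not the implementation above: the `reverse_lines` name is mine, it assumes UTF-8 only, and it skips the encoding generality the C# version handles. It leans on the fact that the newline byte (0x0A) never occurs inside a multi-byte UTF-8 sequence, so each complete line can be decoded on its own.

```python
import os

def reverse_lines(path, block_size=4096, encoding="utf-8"):
    """Yield a text file's lines from last to first (hypothetical helper).

    Reads fixed-size blocks backwards from the end of the file and splits
    on the newline byte (0x0A); that byte never appears inside a
    multi-byte UTF-8 sequence, so complete lines decode safely even when
    a block boundary falls mid-character.
    """
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        position = f.tell()
        tail = b""  # start of a line whose beginning is in an earlier block
        while position > 0:
            read_size = min(block_size, position)
            position -= read_size
            f.seek(position)
            parts = (f.read(read_size) + tail).split(b"\n")
            tail = parts[0]  # may still be partial; keep for the next block
            for line in reversed(parts[1:]):  # everything else is complete
                yield line.rstrip(b"\r").decode(encoding)
        yield tail.rstrip(b"\r").decode(encoding)
```

Unlike the C# version, this sketch does not special-case a trailing newline or non-seekable streams; it is just the core loop.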

.net - How to read a text file reversely with iterator in C# - Stack O...

c# .net

New Way (using a class)

/*
 * The array is associative, where the keys are headers
 * and the values are the items in that column.
 * 
 * Because the array is by column, this function is probably costly.
 * Consider a different layout for your array and use a better function.
 * 
 * @param $array array The array to convert to csv.
 * @param $file string of the path to write the file.
 * @param $delimeter string a character to act as glue.
 * @param $enclosure string a character to wrap around text that contains the delimeter
 * @param $escape string a character to escape the enclosure character.
 * @return mixed int|boolean result of file_put_contents.
 */

function array_to_csv($array, $file, $delimeter = ',', $enclosure = '"', $escape = '\\'){
    $max_rows = get_max_array_values($array);
    $row_array = array();
    $content = '';
    foreach ($array as $header => $values) {
        $row_array[0][] = $header;
        $count = count($values);
        for ($c = 1; $c <= $count; $c++){
            $value = $values[$c - 1];
            $value = preg_replace('#"#', $escape.'"', $value);
            $put_value = (preg_match("#$delimeter#", $value)) ? $enclosure.$value.$enclosure : $value;
            $row_array[$c][] = $put_value;
        }
        // catch extra rows that need to be blank
        for (; $c <= $max_rows; $c++) {
            $row_array[$c][] = '';
        }
    }
    foreach ($row_array as $cur_row) {
        $content .= implode($delimeter, $cur_row)."\n";
    }
    return file_put_contents($file, $content);
}
/*
 * Get maximum number of values in the entire array.
 */
function get_max_array_values($array){
    $max_rows = 0;
    foreach ($array as $cur_array) {
        $cur_count = count($cur_array);
        $max_rows = ($max_rows < $cur_count) ? $cur_count : $max_rows;
    }
    return $max_rows;
}
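For comparison, the same column-to-row idea (headers from the keys, shorter columns padded out to the longest one) is compact in other languages too. A rough Python sketch, assuming the `columns_to_csv` name is mine and that it returns CSV text rather than writing a file:

```python
import csv
import io
from itertools import zip_longest

def columns_to_csv(columns):
    """Render a {header: [values, ...]} mapping as CSV text.

    Each dict key becomes a column header; zip_longest pads the shorter
    columns with empty cells, playing the role of get_max_array_values().
    """
    buf = io.StringIO()
    writer = csv.writer(buf)  # the writer handles quoting and escaping
    writer.writerow(columns.keys())
    for row in zip_longest(*columns.values(), fillvalue=""):
        writer.writerow(row)
    return buf.getvalue()
```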

I wrote a class for this a while later, which I'll provide for anyone looking for one now:

class CSVService {

    protected $csvSyntax;

    public function __construct()
    {
        return $this;
    }

    public function renderCSV($contents, $filename = 'data.csv')
    {
        header('Content-type: text/csv');
        header('Content-Disposition: attachment; filename="' . $filename . '"');

        echo $contents;
    }

    public function CSVtoArray($filename = '', $delimiter = ',') {
        if (!file_exists($filename) || !is_readable($filename)) {
            return false;
        }

        $headers = null;
        $data = array();
        if (($handle = fopen($filename, 'r')) !== false) {
            while (($row = fgetcsv($handle, 0, $delimiter, '"')) !== false) {
                if (!$headers) {
                    $headers = $row;
                    array_walk($headers, 'trim');
                    $headers = array_unique($headers);
                } else {
                    for ($i = 0, $j = count($headers); $i < $j;  ++$i) {
                        $row[$i] = trim($row[$i]);
                        if (empty($row[$i]) && !isset($data[trim($headers[$i])])) {
                            $data[trim($headers[$i])] = array();
                        } else if (empty($row[$i])) {
                            continue;
                        } else {
                            $data[trim($headers[$i])][] = stripcslashes($row[$i]);
                        }
                    }
                }
            }
            fclose($handle);
        }
        return $data;
    }

    protected function getMaxArrayValues($array)
    {
        return array_reduce($array, function($carry, $item){
            return ($carry > $c = count($item)) ? $carry : $c;
        }, 0);
    }

    private function getCSVHeaders($array)
    {
        return array_reduce(
                array_keys($array),
                function($carry, $item) {
                    return $carry . $this->prepareCSVValue($item) . $this->csvSyntax->delimiter;
                }, '') . "\n";
    }

    private function prepareCSVValue($value, $delimiter = ',', $enclosure = '"', $escape = '\\')
    {
        $valueEscaped = preg_replace('#"#', $escape . '"', $value);
        return (preg_match("#$delimiter#", $valueEscaped)) ?
                $enclosure . $valueEscaped . $enclosure : $valueEscaped;
    }

    private function setUpCSVSyntax($delimiter, $enclosure, $escape)
    {
        $this->csvSyntax = (object) [
            'delimiter' => $delimiter,
            'enclosure' => $enclosure,
            'escape'    => $escape,
        ];
    }

    private function getCSVRows($array)
    {
        $n = $this->getMaxArrayValues($array);
        $even = array_values(
            array_map(function($columnArray) use ($n) {
                for ($i = count($columnArray); $i <= $n; $i++) {
                    $columnArray[] = '';
                }
                return $columnArray;
            }, $array)
        );

        $rowString = '';

        for ($row = 0; $row < $n; $row++) {
            for ($col = 0; $col < count($even); $col++) {
                $value = $even[$col][$row];
                $rowString .=
                        $this->prepareCSVValue($value) .
                        $this->csvSyntax->delimiter;
            }
            $rowString .= "\n";
        }

        return $rowString;
    }

    public function arrayToCSV($array, $delimiter = ',', $enclosure = '"', $escape = '\\', $headers = true) {
        $this->setUpCSVSyntax($delimiter, $enclosure, $escape);

        $headersString = ($headers) ? $this->getCSVHeaders($array) : '';

        $rowsString = $this->getCSVRows($array);


        return $headersString . $rowsString;
    }

}


PHP - Array to CSV by Column - Stack Overflow

php arrays csv fputcsv

Instead of messing with Excel, you should be able to read the text file directly into MATLAB (using the functions FOPEN, FGETL, FSCANF, and FCLOSE):

fid = fopen('file.dat','rt');  %# Open the data file
headerChars = fgetl(fid);      %# Read the first line of characters
data = fscanf(fid,'%f,%f,%f,%f',[4 inf]).';  %'# Read the data into a
                                              %# 65536-by-4 matrix
fclose(fid);  %# Close the data file
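The same read-the-header-line-then-parse-the-numbers pattern carries over directly to other environments. As a point of comparison only (the `read_matrix` helper and its return shape are my own choices, not part of the MATLAB answer), a standard-library Python sketch:

```python
import csv

def read_matrix(path):
    """Read a comma-separated numeric file whose first line is a text
    header, returning (header_fields, rows_of_floats)."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)  # like fgetl: consume line 1 as text
        data = [[float(x) for x in row] for row in reader if row]
    return header, data
```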

How do you create a matrix from a text file in MATLAB? - Stack Overflo...

matlab file-io matrix