
You misunderstand how parameters in PowerShell functions work. The output of Get-Content is an array of strings (one string per line of the file), and when you pass that array as an argument, the entire array is bound to the first parameter. A string isn't magically split so that its substrings go to several parameters, either; how would PowerShell know where to split it?
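You can verify the binding behavior yourself; a minimal sketch (param.txt is a made-up two-line input file):

# Hypothetical two-line input file.
Set-Content param.txt "t1", "qwe"

Function Show-Binding {
  Param([string]$v1, [string]$v2)
  "v1 = '$v1'; v2 = '$v2'"
}

# The whole array binds to the first parameter and is stringified:
Show-Binding (Get-Content param.txt)   # v1 = 't1 qwe'; v2 = ''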

A better way to deal with such input data is to have your function accept input from the pipeline:

Function DoSomething {
  [CmdletBinding()]
  Param(
    [Parameter(
      Mandatory=$false,
      ValueFromPipeline=$true,
      ValueFromPipelineByPropertyName=$true
    )]
    [string]$v1,

    [Parameter(
      Mandatory=$false,
      ValueFromPipeline=$true,
      ValueFromPipelineByPropertyName=$true
    )]
    [string]$v2
  )

  Process {
    # Runs once for each object received from the pipeline.
    Write-Output "hello $v1 | $v2"
  }
}

and define the data as a CSV (column names matching the parameter names):
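For example, content.csv could look like this (values illustrative):

v1,v2
t1,qwe
t2,asd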

so that you can pipe the data into the function:

Import-Csv content.csv | DoSomething

With the function built like this you could also define the data as a hashtable and splat it:

$data = @{
  'v1' = '-v1 "t1"'
  'v2' = '-v2 ""qwe"'
}

DoSomething @data
Further reading: about_Parameters, about_Functions_Advanced_Parameters

Ah OK, that makes sense; thanks for the explanation. In my scenario there are a lot of cmdlets like DoSomething for which the ValueFromPipeline property is not set. So far the input has been handled by passing it directly, as in "DoSomething <parameter(s)>", but we recently discovered that the input may grow beyond 8K, so we need a different approach that changes neither the way the input is brought in nor the way the existing cmdlets are written. I was thinking of something like "DoSomething < Filename", which would have been perfect for me if it worked. I understand that may not be possible, but I just wanted to know if there is any way it can be done, since changing the cmdlets and the inputs may not be feasible for my scenario.
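For reference, PowerShell does not implement the < input-redirection operator. One workaround that leaves the cmdlets untouched could be to build the parameters from the file and splat them (a sketch; it assumes one name=value pair per line, which may not match your format):

# Hypothetical: read 'name=value' lines from the input file into a hashtable.
$data = @{}
Get-Content Filename | ForEach-Object {
    $name, $value = $_ -split '=', 2
    $data[$name] = $value
}
# Splatting avoids the command-line length limit entirely.
DoSomething @data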

How to pass large input to powershell - Stack Overflow


Conceptually simple and memory-efficient, but slow PowerShell solution...

This PowerShell (v2+) solution is slow, but it is conceptually simple and you shouldn't run out of memory, because the input lines are processed one at a time, using #@#@# as the line delimiter.

# Create sample input file.
@'
line 1 starts here
and
ends here#@#@#line 2 is all on one line#@#@#line 3 spans
two lines#@#@#
'@ > file

# Determine the input file and the delimiter.
$inFile = 'file'
$sep = '#@#@#'
# Create the output file.
$outFile = 'out'
$null = New-Item -Type File $outFile

# Note: Get-Content includes the delimiter in each chunk it emits, so the
# delimiter is stripped after joining the physical lines with spaces;
# Add-Content then appends a newline for each output "line".
Get-Content -Delimiter $sep $inFile | % {
  Add-Content -Value ($_.Replace("`r`n", " ").Replace($sep, '')) $outFile
}

C# code inside a PowerShell script that is compiled on demand

Compilation is surprisingly quick (on the order of 0.3 seconds on my late-2012 iMac), and the use of compiled code to process the file results in a significant performance gain. Also note that compilation is only performed once per session, so subsequent invocations do not pay this penalty.
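Since the compiled type persists for the lifetime of the session, you can also skip the Add-Type call entirely on re-runs (a sketch; $csSource stands for the C# here-string shown in the script below):

# Compile only if the helper type is not already loaded in this session.
if (-not ('net.same2u.so.Helper' -as [type])) {
    Add-Type $csSource
}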

Processing a ~ 1 GB file (created by repetition of the contents of the above sample file) with the script printed below yields the following:

Compiling...
Processing file...
Completed:
  Compilation time:      00:00:00.2343647
  File-processing time:  00:00:26.0714467
  Total:                 00:00:26.3278546

Execution times in real-world applications will differ based on many factors, but based on @dbenham's timings mentioned in the comments below, the on-demand compilation solution is about twice as fast as the batch+JScript solution.

Source code of the fast PowerShell solution:

# Determine the input and output files.
$inFile = 'file'
$outFile = 'out'

# Get current time stamp for measuring duration.
$dtStart = [datetimeoffset]::UtcNow

# How many characters to read at a time.
# !! Make sure that this is at least as large as the max. input line length.
$kCHUNK_SIZE = 1000000 

Write-Host 'Compiling...'

# Note: This statement performs on-demand compilation, but only 
#       on *first* invocation in a given session.
$tsCompilation = Measure-Command {

    Add-Type @"
  using System;
  using System.IO;

  namespace net.same2u.so
  {
    public static class Helper
    {

      public static void TransformFile(string inFile, string outFile, string sep)
      {
        char[] bufChars = new char[$kCHUNK_SIZE];
        using (var sw = new StreamWriter(outFile))
        using (var sr = new StreamReader(inFile))
        {
          int pos = 0; bool eof = false;
          string bufStr, rest = String.Empty;
          while (!(eof = sr.EndOfStream) || rest.Length > 0)
          {
            if (eof)
            {
              bufStr = rest;
            }
            else
            {
              int count = sr.Read(bufChars, 0, $kCHUNK_SIZE);
              bufStr = rest.Length > 0 ? rest + new string(bufChars, 0, count) : new string(bufChars, 0, count);
            }
            if (-1 == (pos = bufStr.LastIndexOf(sep))) // should only happen at the very end
            {
              sw.Write(bufStr);
              rest = String.Empty;
            }
            else
            {
              pos += sep.Length; rest = bufStr.Substring(pos);
              sw.Write(bufStr.Substring(0, pos).Replace(Environment.NewLine, " ").Replace(sep, Environment.NewLine));
            }
          }

        }
      }

    }

  } // namespace net.same2u.so

"@
    if (-not $?) { exit 1 }
}

Write-Host 'Processing file...'

# Make sure the .NET framework sees the same current dir. as PS.
[System.IO.Directory]::SetCurrentDirectory($PWD)

$tsFileProcessing = Measure-Command {
  [net.same2u.so.Helper]::TransformFile($inFile, $outFile, '#@#@#')
}

Write-Host @"
Completed:
  Compilation time:      $tsCompilation
  File-processing time:  $tsFileProcessing
  Total:                 $([datetimeoffset]::UtcNow - $dtStart) 
"@

Add some parallel processing to that script and you should be able to cut the time down dramatically, while preserving memory :) -- upvoted for minimal impact on system resources. Still prone to leaks (as most large file operations are), but overall a workable solution in my opinion (just needs some PP)

Thanks, @SamuelJackson. I assume you're kidding about the parallel processing (do tell me if you're not, and what you had in mind). What leaks could there be?

Scroll down a ways on this site to see the parallel processing examples for reading in c# -- I believe PowerShell has some parallel processing capabilities (but I could be wrong). The memory leaks with manipulating large files come naturally with buffering data (which is what is necessary to 'look' for something to replace with something else). It would almost be faster to do byte replacement, so replace '#@#@#' with '\n    ' (a newline plus four spaces, five characters total) as this wouldn't require the initial pointer to move.

Ugh, only marginally better than my original 1 byte at a time batch/JScript solution: 39 min to process a 155 MB file vs 52 min. That still scales to 6.5 hrs for a 1.5 GB file - not very practical. Compare that to my latest batch/JScript solution at 2 min 48 sec for a 1.8 GB file. I imagine you could translate my JScript algorithm into powershell and get a bit more performance.

@dbenham: Thanks for running a benchmark. I knew it wouldn't be fast, but I also didn't expect it to be that slow. PowerShell is the champion of abstraction, which is great, but at the expense of performance, which sometimes makes its use impractical.

Set custom row delimiter using batch/powershell script - Stack Overflo...


Ok, so the easiest way to approach this is to read the file in with Get-Content and then split each line where the commas are not inside quotes. I borrowed the regex from this solution for this.

$filedata = Get-Content C:\temp\test.csv
$asObject = ForEach($singlerow in ($filedata | Select-Object -Skip 1)){
    # Split on commas that have an even number of double quotes between
    # them and the end of the line, i.e. commas not inside a quoted field.
    $singlerow = $singlerow -split ',(?=(?:[^"]*"[^"]*")*[^"]*$)'
    [pscustomobject][ordered]@{
        Server = $singlerow[0]
        Client = $singlerow[1]
        "JobDirectory/Script" = $singlerow[2]
        Policy = $singlerow[3]
    }
}
$asObject | Format-List

This produces output like:

Server              : server1
Client              : test.domain.com
JobDirectory/Script : "vmware:/?filter= VMHostName AnyOf "server2.domain.com", "server3.domain.com""
Policy              : TEST
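To see what that split does in isolation, here is a quick sketch with a made-up row:

'server1,test.domain.com,"a,quoted,field",TEST' -split ',(?=(?:[^"]*"[^"]*")*[^"]*$)'
# server1
# test.domain.com
# "a,quoted,field"
# TEST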
If ($thirdfield | Select-String -Pattern '"",') { $thirdfield = $string -replace ",",";" }

Not sure what context you are calling this. How is $thirdfield defined? Why would you replace that last comma with a semicolon. If you don't want to use my code and have another avenue then update this question or ask another one.

No, no... your code is perfect and works very well! I am jealous.. :) I was rambling. Thanks a lot!

Powershell Parsing CSV file that contains comma - Stack Overflow


#!c:/Python/python.exe -u
# Or point it at your desired Python path;
# on Unix, something like: #!/usr/bin/python

# Function to reformat the data as requested.
def reformat_data(input_file, output_file):

    # Define the comment lines you'll write.
    header_str = "Transform\tPosition\n"
    column_str = "\tFrame\tX pixels\tY pixels\tZ pixels\n"
    closer_str = "End of Keyframe Data\n"

    # Open the file for reading and close after getting lines.
    try:
        infile = open(input_file)
    except IOError:
        print "Invalid input file name..."
        exit()

    lines = infile.readlines()
    infile.close()

    # Open the output for writing. Write data then close.
    try:
        outfile = open(output_file,'w')
    except IOError:
        print "Invalid output file name..."
        exit()

    outfile.write(header_str)
    outfile.write(column_str)

    # Reformat each line to be tab-separated.
    for line in lines:
        line_data = line.split()
        if not (len(line_data) == 4):
            # This skips bad data lines, modify behavior if skipping not desired.
            pass 
        else:
            outfile.write("\t".join(line_data)+"\n")

    outfile.write(closer_str)
    outfile.close()

#####
# This below gets executed if you call
# python <name_of_this_script>.py
# from the Powershell/Cygwin/other terminal.
#####
if __name__ == "__main__":
    reformat_data("/path/to/input.txt", "/path/to/output.txt")

Formatting text data with C++ or any windows scripting language - Stac...


I'm not a Windows user, so take my answer with a grain of salt. According to the Windows PowerShell Cookbook, PowerShell preprocesses the output of git diff, splitting it into lines. The documentation of the Out-File cmdlet suggests that > is the same as | Out-File without parameters. We also find this comment in the PowerShell documentation:

The results of using the Out-File cmdlet may not be what you expect if you are used to traditional output redirection. To understand its behavior, you must be aware of the context in which the Out-File cmdlet operates.

By default, the Out-File cmdlet creates a Unicode file. This is the best default in the long run, but it means that tools that expect ASCII files will not work correctly with the default output format. You can change the default output format to ASCII by using the Encoding parameter:

Out-file formats file contents to look like console output. This causes the output to be truncated just as it is in a console window in most circumstances. [...]

To get output that does not force line wraps to match the screen width, you can use the Width parameter to specify line width.

So, apparently it is not Git which chooses the character encoding, but Out-File. This suggests a) that PowerShell redirection really should only be used for text and b) that

| Out-File -encoding ASCII -Width 2147483647 my.patch

will avoid the encoding problems. However, this still does not solve the problem of Windows vs. Unix line endings. There are cmdlets (see the PowerShell Community Extensions) to do conversion of line endings.

However, all this recoding does not increase my confidence in a patch (which has no encoding itself, but is just a string of bytes). The aforementioned Cookbook contains a script, Invoke-BinaryProcess, which can be used to redirect the output of a command unmodified.

To sidestep this whole issue, an alternative would be to use git format-patch instead of git diff. format-patch writes directly to a file (and not to stdout), so its output is not recoded. However, it can only create patches from commits, not arbitrary diffs.

format-patch takes a commit range (e.g. master^10..master^5) or a single commit (e.g. X, meaning X..HEAD) and creates patch files of the form NNNN-SUBJECT.patch, where NNNN is an increasing 4-digit number and subject is the (mangled) subject of the patch. An output directory can be specified with -o.
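For example, to turn the last three commits into patch files in a patches directory (range and directory illustrative):

git format-patch -o patches HEAD~3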

I found that an authoritative answer is given in the Windows PowerShell Cookbook.

Invoke-BinaryProcess git -RedirectOutput diff | Out-File -encoding OEM my.patch

Thank you very much for this nicely detailed answer. "However, it can only create patches from commits, not arbitrary diffs." Yes, unfortunately this means a problem when trying to create Drupal patches as suggestions for being committed. As you suggested, I tried Out-File pipe with defining the encoding as ASCII: i.imgur.com/2Nx9Z.png; here's the one with the default character encoding (using > for output redirection): i.imgur.com/QdyAN.png, and here's the one with your solution: i.imgur.com/7Fpz0.png, and yeah, the encoding is ANSI now. :)

See also: Invoke-BinaryProcess, and the Git settings core.autocrlf and core.eol, which control line-ending conversion.

powershell - Git Shell in Windows: patch's default character encoding ...


Other answers are great; I just want to add a different approach usable in PowerShell: install the GNUWin32 utils and use grep to view the lines / redirect the output to a file. http://gnuwin32.sourceforge.net/

grep "step[49]" logIn.log > logOut.log

This variant appends to logOut.log instead of overwriting it, in case you overwrite the logIn file and want to keep the data:

grep "step[49]" logIn.log >> logOut.log

Loop through files in a directory using PowerShell - Stack Overflow


Here is a PowerShell script that is able to output XML into a file without inserting line breaks after the 2,034th character:

# Retrieving XML Data Through an XML Reader
# Create SqlConnection object and define connection string
$con = New-Object System.Data.SqlClient.SqlConnection
$con.ConnectionString = "Server=your.servername.local\yourInstanceName;Database=yourDBname;Integrated Security=True"
$con.open()

# Create SqlCommand object, define command text, and set the connection
$cmd = New-Object System.Data.SqlClient.SqlCommand
$cmd.CommandText = "EXEC your query"
$cmd.Connection = $con

# Create XmlReader
$xr = $cmd.ExecuteXmlReader()

# Create XmlDataDocument object for dataset
$xd = New-Object System.Xml.XmlDataDocument

# Load the XML data document
$xd.Load($xr)

# Close the DataReader and the connection
$xr.Close()
$con.Close()

$xd.InnerXml > .\yourFileName.xml
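Alternatively, you can let the XmlDocument write the file itself and bypass PowerShell's console-style output formatting altogether (a sketch; note that Save resolves relative paths against the .NET current directory, not necessarily PowerShell's, hence the $PWD prefix):

# Write the XML directly; avoids any line wrapping from Out-File/redirection.
$xd.Save("$PWD\yourFileName.xml")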

SQL Server Bcp Export XML Format - Stack Overflow


Yeah. Really. That's the whole thing.

I loved the idea of using PowerShell, but the pure XML solution doesn't work because it only outputs targets defined in that project file, and not imports. Of course, the C# code everyone keeps referring to is dead simple, and with .Net 4.5 it's two lines (the first of which you should consider just adding to your profile):

Add-Type -As Microsoft.Build
New-Object Microsoft.Build.Evaluation.Project $Project | Select -Expand Targets

Since the output is very verbose, you might want to restrict what you're looking at:

New-Object Microsoft.Build.Evaluation.Project $Project | 
    Select -Expand Targets |
    Format-Table Name, DependsOnTargets -Wrap

When you load builds like that, they stick in the GlobalProjectCollection as long as you leave that PowerShell window open and you can't re-open them until you unload them. To unload them:

[Microsoft.Build.Evaluation.ProjectCollection]::GlobalProjectCollection.UnloadAllProjects()

Considering that, it might be worth wrapping that in a function that can accept partial and relative paths or even piped project files as input:

Add-Type -As Microsoft.Build
Update-TypeData -DefaultDisplayPropertySet Name, DependsOnTargets -TypeName Microsoft.Build.Execution.ProjectTargetInstance

function Get-Target {
    param(
        # Path to project file (supports pipeline input and wildcards)
        [Parameter(ValueFromPipelineByPropertyName=$true, ValueFromPipeline=$true, Position=1)]
        [Alias("PSPath")]
        [String]$Project,

        # Filter targets by name. Supports wildcards
        [Parameter(Position=2)]
        [String]$Name = "*"

    )
    begin {
        # People do funny things with parameters
        # Let's make sure they didn't pass a Project file as the name ;)
        if(-not $Project -and $Name -ne "*") {
            $Project = Resolve-Path $Name
            if($Project) { $Name = "*" }
        }
        if(-not $Project) {
            $Project = Get-Item *.*proj
        }
    }
    process {
        Write-Host "Project: $_ Target: $Name"
        Resolve-Path $Project | % {
            # Unroll the ReadOnlyDictionary to get the values so we can filter ...
            (New-Object Microsoft.Build.Evaluation.Project "$_").Targets.Values.GetEnumerator()
        } | Where { $_.Name -like $Name }
    }
    end {
        [microsoft.build.evaluation.projectcollection]::globalprojectcollection.UnloadAllProjects()
    }
}
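Usage could then look like this (MyApp.csproj is a hypothetical project file):

Get-Target .\MyApp.csproj              # list all targets
Get-Target .\MyApp.csproj Build*       # filter targets by name
dir *.csproj | Get-Target              # pipe project files in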

Obviously you can add anything you want to the output with that Update-TypeData, for instance, if you wanted to see Conditions, or maybe BeforeTargets or AfterTargets...

You could even pull nested information. For instance, you could replace the Update-TypeData call above with these two:

Update-TypeData -MemberName CallTargets -MemberType ScriptProperty -Value {
    $this.Children | ? Name -eq "CallTarget" | %{ $_.Parameters["Targets"] } 
} -TypeName Microsoft.Build.Execution.ProjectTargetInstance

Update-TypeData -DefaultDisplayPropertySet Name, DependsOnTargets, CallTargets -TypeName Microsoft.Build.Execution.ProjectTargetInstance

You see, the first one adds a calculated CallTargets property that enumerates the direct children, looks for CallTarget tasks, and prints their Targets; then we just include that in the DefaultDisplayPropertySet.

NOTE: It would take a lot of logic on top of this to see every target that's going to be executed when you build any particular target (for that, we'd have to recursively process the DependsOnTargets and we'd also need to look for any targets that have this target in their BeforeTargets or AfterTargets (also recursively), and that's before we get to tasks that can actually just call targets, like CallTargets and MSBuild ... and all of these things can depend on conditions so complex that it's impossible to tell what's going to happen without actually executing it ;)

Your solution does indeed provide more complete reporting... but: you show DependsOnTargets, yet you should also include CallTargets to get a full accounting. (Some time ago I adapted @knut's answer and added CallTargets to it.) Is it possible to report CallTargets with your solution?

For myself, I don't think I care about CallTargets -- I'm not sure what you're trying to accomplish, but I just want to list the targets available to call on the command-line. Having said that, let me append something to the answer for you, before I run out of space in this box.

Does that help? I'm pretty sure that as a general rule, you should not care about CallTargets though -- it is part of the nitty-gritty of how a target does whatever it's supposed to do. Why do you care about this particular task? I mean, you're not going to look at all the tasks from the command-line, and (just for instance), MSBuild is required instead of CallTargets if you just want to call "the default targets" in your build.

To my mind, DependsOnTargets tell me that when I call target A, before it processes anything it will call target B and target C. Whereas CallTargets tell me that when I call target A, sometime during its processing it will call target B and target C. They seem to be equivalently-useful pieces of information. Or is my view of this skewed in some fashion...?

The thing is that there are lots of ways to do the same thing differently ;) A can DependsOnTargets B or A can After B or B can Before A ... or B can CallTarget A or it can MSBuild Project="$(MSBuildProjectFile)" Targets="A" ... or it could use Exec or there could be third-party tasks ...

How to query MSBUILD file for list of supported targets? - Stack Overf...


This should give you the desired output (reformatted whitespace for readability):

$OSInfo `
    | Format-Table -AutoSize -Property `
        @{ Name = "Tag # Entered";  Expression = { $tag1 } }, `
        @{ Name = "Resolved Tag #"; Expression = { $_.CSName } };

Note that if $OSInfo contains multiple items the value of the first column will be the same for all of them ("c63001").

Works! Is there a way to add more space between them or is that not possible?

Remember that the output consists of objects. When you have objects, the actual appearance of how PowerShell outputs the objects is of minor importance (the data is all there; the spacing really isn't important).

You can control the spacing by adding a Width entry to each Hashtable you pass to the -Property parameter of Format-Table, e.g. @{ Name = "Tag # Entered"; Expression = { $tag1 }; Width = 10 }.

Hmm, I tried that but it didn't do anything, and I also didn't get any errors.

@Aaron Make sure you remove the -AutoSize parameter because that will likely override the Width values.
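Put together, that might look like this (a sketch based on the answer's table, with -AutoSize dropped so the Width values take effect):

$OSInfo `
    | Format-Table -Property `
        @{ Name = "Tag # Entered";  Expression = { $tag1 };     Width = 20 }, `
        @{ Name = "Resolved Tag #"; Expression = { $_.CSName }; Width = 20 };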

powershell - Output two times in same line - Stack Overflow


$CompInfo `
    | Format-Table -AutoSize -Property `
        @{ Name = "Domain\user";  Expression = { $CompInfo.username } }, `
        @{ Name = "Full Name"; Expression = { $fullname } };

The reason it didn't work was that I was using $OSInfo, which is the win32_operatingsystem class.

$OSInfo = get-wmiobject -class win32_operatingsystem -computername $tag1;
$CompInfo = get-wmiobject -class win32_computersystem -Computer $tag1;

The Domain I was trying to output is pulled with the win32_computersystem class and I was trying to do it under win32_operatingsystem.

powershell - Output two times in same line - Stack Overflow


If it were me, I would use Add-Content for this with a pipe.

$Users = Get-Content -Path "C:\Users\XXXX\Desktop\Password Reset\Users5.txt"
$Users | Add-Content -Path "C:\Users\XXXX\Desktop\Password Reset\RefUsers.txt"

Pay attention to encoding: Add-Content uses ASCII by default, I believe. Also, if you are not doing anything else with the data, you can skip the variable altogether.

GC "C:\Users\XXXX\Desktop\Password Reset\Users5.txt" | 
    AC "C:\Users\XXXX\Desktop\Password Reset\RefUsers.txt"
Get-Content
Add-Content
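If the default encoding bites you, you can be explicit (a sketch; UTF8 chosen as an example):

Get-Content "C:\Users\XXXX\Desktop\Password Reset\Users5.txt" |
    Add-Content -Encoding UTF8 "C:\Users\XXXX\Desktop\Password Reset\RefUsers.txt"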

This worked! Thanks for the suggestion. I used a Foreach loop instead of piping the information in.

Powershell Out-File output all on the same line - Stack Overflow
