Capture and Examine Server Certificate from AD Connections

Have you ever deployed an app into a Windows domain that uses Active Directory authentication, only to find it works sometimes and fails other times? This can be an especially annoying issue if you aren’t a domain admin and can’t log into your domain controllers to examine their settings.

Fortunately, if you suspect that the problem may be the SSL certificate the server presents when you connect to it, there is something you can do to conclusively troubleshoot the issue. First let’s talk about the issue I ran into.

When your app wants to talk to Active Directory to validate credentials, it can do so without using SSL if it connects to port 389. If your app is using that port then read on for curiosity’s sake only, because this isn’t your problem. If your connection string to AD specifies port 636, you are using SSL. This will only work if the server you are connecting to hands you a certificate, during the connection attempt, that your machine trusts.
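For example, the port typically rides along at the end of the LDAP path in your connection string (the server and domain names below are placeholders):

LDAP://dc01.yourdomain.com:389    # plain LDAP, no certificate involved
LDAP://dc01.yourdomain.com:636    # LDAP over SSL; the server must present a trusted certificate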

In a normal domain setup this isn’t a problem. The AD server should have a certificate issued either by a domain CA that all machines trust, or by a third-party CA that all machines trust because Windows recognizes it automatically. These are certs issued by VeriSign or the like.

If the certificate you receive from the AD server is not trusted, your connection will fail, authentication will fail, and the smoking gun will be a very easy to read Windows Event Log entry like the one below.

[Screenshot: Windows Event Log entry describing the certificate validation failure]

That error log entry tells you very specifically what the issue was, but it leaves out some important information.

  • What server did I connect to that failed?
    • Connection strings often specify just the domain name, not a specific server to connect to. If you have multiple domain controllers you could end up connecting to any of them on any given connection attempt.
  • What certificate did it give me that just failed?
    • Admins hate it when you just point fingers and say “your thing is broken” with no other details. It would be nice to be able to say to your domain admins “hey, I got this certificate from this issuer and my machine doesn’t trust it. Let’s work this out.”

To get that information, and to get to the point of this article, we are going to do a couple of things. We are going to find all of the domain controllers on the network, and then connect to them one by one and grab the certificate out of the connection to examine it.

To find the domain controllers we can ask directory services to just give us the list.

$controllers = [System.DirectoryServices.ActiveDirectory.Domain]::GetComputerDomain().FindAllDomainControllers()

In a moment we’ll iterate over that list to get the certs, but first we need the function to get it.

function Get-ADCert {
    param(
        [string]$server
    )

    [System.Reflection.Assembly]::LoadWithPartialName('System.DirectoryServices.Protocols') | Out-Null

    # Reset the script-scoped result so a previous server's cert can't linger into this call.
    $Script:adCert = $null

    # This script block is run instead of the normal system cert validation code.    
    $DelegateScriptBlock = {
        PARAM(
            [System.DirectoryServices.Protocols.LdapConnection] $LDAPConnection,
            [System.Security.Cryptography.X509Certificates.X509Certificate2] $Certificate
        )
    
        PROCESS{
            if ($Certificate -eq $null){
                Write-Verbose "Error - No Certificate"
                return $false
            }
    
            $collection = New-Object System.Security.Cryptography.X509Certificates.X509Chain
            $collection.Build($Certificate)
            # populate variable with script scope to make it accessible outside this code block
            $Script:adCert = $collection
    
            return $true
        }
    }
    
    
    $connection=new-object System.DirectoryServices.Protocols.LDAPConnection("$server`:636")
    $options=$connection.SessionOptions;
    $options.ProtocolVersion=3
    $options.SecureSocketLayer=$true
    
    # This property allows us to substitute the normal cert validation for our own code block.
    $options.VerifyServerCertificate = $DelegateScriptBlock
    $connection.AuthType='Basic'
    try{
        # Call bind to establish the connection. Our validation code is run during execution of bind(). We then throw the connection away.
        $connection.Bind()
        $connection.Dispose()
    }
    catch{
        "`r`nBind failed for controller: $server - `r`n With Error: $_"
    }

    if($Script:adCert){
        # passing the cert as output is just an easy way to populate a variable with the result of this function.
        Write-Output $Script:adCert
    } else {
        Write-Error "Certificate Not Found for Controller: $server"
    }

}

Now we can loop over the domain controllers and get the certs.

$certs = @{}

foreach($dc in $controllers)
{
    $certs."$($dc.name)" = Get-ADCert -server $dc.name
}

Each entry of the $certs hash table now holds the server certificate along with each certificate in its certification path. To examine the certs we can issue a command like the one below.

$certs.Values | ForEach-Object {$_.chainelements[0].certificate} | Format-Table issuer,subject,@{L='SelfSigned';E={$_.issuer -eq $_.subject}} -AutoSize

That command will show you very clearly if a domain controller is trying to use a self-signed certificate, and yeah, that happens to me a lot. You can also look at the issuer and check whether you trust that issuer in your local machine’s Root certificate store.
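Because each entry in $certs is the full X509Chain object, you can also ask the chain itself why validation failed. A quick sketch (ChainStatus is a standard property of the .NET X509Chain class):

# Show any chain validation problems Windows recorded for each controller.
foreach ($entry in $certs.GetEnumerator()) {
    "{0}: {1}" -f $entry.Key, (($entry.Value.ChainStatus | ForEach-Object { $_.StatusInformation.Trim() }) -join '; ')
}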

Thanks and I hope you find this useful.

Detect Duplicate MSDTC CID

Any time two Windows Servers need to communicate to support application data requests, there’s a good chance the Distributed Transaction Coordinator (MSDTC) will be involved. When two servers are configured and functioning correctly, like most things, you won’t even notice this layer of coordination between the two exists.

The problem is that in modern VM-based corporate environments, it is very common for these DTCs to be unable to communicate with one another if a VM admin builds machines from templates that were not properly sysprepped.

The bug is that if two MSDTCs have the same CID (a GUID identifier), they cannot communicate with one another. Effectively, they both believe they should have the same name, so they can’t talk to one another.

As you can imagine this can be an annoying bug to track down as some servers will have issues connecting to only some servers, and only if the communication goes through MSDTC.

Fortunately, with PowerShell and a little bit of registry foo, we can test conclusively for this issue, and it’s an easy one to fix. Below is the script to detect it.

$machines = 'Server1','Server2'

$cidCol = @()

ForEach ($machine in $machines) {
    # Open the remote HKEY_CLASSES_ROOT hive and enumerate the CID subkeys.
    $objReg = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('ClassesRoot', $machine)
    $objRegKeys = $objReg.OpenSubKey('CID')
    $keys = $objRegKeys.GetSubKeyNames()

    $objRegKeys = $keys | ForEach-Object { $objRegKeys.OpenSubKey($_) }

    # The CID key whose Description value reads 'MSDTC' holds the identifier we care about.
    $cid = New-Object PSObject
    $cid | Add-Member NoteProperty computername $machine
    $cid | Add-Member NoteProperty id ($objRegKeys | ForEach-Object { $_.OpenSubKey('Description') } | Where-Object { $_.GetValue("") -eq 'MSDTC' } | ForEach-Object { $_.Name.Replace('\Description', '').Replace('HKEY_CLASSES_ROOT\CID\', '') })
    $cidCol += $cid
}

$cidCol | Sort-Object -Property id,computername | Format-Table

The output will be a table showing you the list of servers and their MSDTC CIDs. Any two or more entries with the same CID will be unable to participate in distributed transactions with one another.

The immediate fix is to uninstall and reinstall the DTC on each of the affected machines, or on at least $numberOfAffectedMachines - 1 of them, to ensure they all have unique IDs.
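For reference, the reinstall itself usually comes down to the msdtc.exe command line, run from an elevated prompt. Note that this resets any custom DTC configuration, so review your settings afterwards:

# Run on each affected machine (all but one is enough).
msdtc -uninstall
msdtc -install
# A fresh CID is generated during install; start the service when done.
Start-Service MSDTC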

The long term fix is to have a chat with your VM admin about sysprepping machines so you don’t have to deal with this anymore.

This could easily be turned into a function that will detect and then remotely fix this, but since I don’t run into this problem on a regular basis, I’ll leave that task as an exercise for any readers that do.

Visualizing Operational Tests with Jenkins and Pester

The Problem

I love Pester, and I really want to get on the operational testing bandwagon. But one of the perpetual issues involved with testing is: how do I visualize my test metrics, and can I take action on failed tests automatically?

Enter Jenkins and Pester. You can express a lot of concepts in the form of Pester tests, and Jenkins is more than happy to take Pester’s output and not only visualize it for you, but take downstream action based on failures and show you those results too.

The Tools

Jenkins

There are lots of great tutorials out there for installing Jenkins, but it’s a big subject that I’m not going to cover here. We’re going to use Jenkins for this, but I’ll assume you have an instance running already.

Pester

Installing Pester is very easy; I like to use the Install-Module cmdlet from PSGet, which needs to be installed too if it isn’t already.

[Screenshot: installing the Pester module]
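For reference, the command in the screenshot amounts to a one-liner (assuming PsGet, or PowerShellGet on newer systems, is already installed):

Install-Module Pester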

That was easy. Don’t get complacent though; take a look at the directory the module was installed into. The problem here is that if you try to load the module from Jenkins right now, it won’t be able to find the module. To make it discoverable for Jenkins and for other users we need to copy that module folder to the system-wide module folders.


# Get the Pester Module Path
$pesterPath = Get-Module -ListAvailable Pester | Split-Path

# Create variable for our system wide module paths
$modulePaths = "C:\windows\system32\windowspowershell\v1.0\Modules","C:\windows\SysWOW64\WindowsPowerShell\v1.0\Modules"

# Copy Pester to the module paths. We want it available in both 32 and 64 bit PowerShell
$modulePaths.ForEach({Copy-Item -Path $pesterPath -Destination $_ -Recurse})

Notice that I copied the module to the SysWOW64 modules folder. This is because in your own testing you will mostly use 64-bit PowerShell, but at the same time, most of you will have downloaded and installed 32-bit Jenkins. Using 32-bit Jenkins means you must copy any modules you want to use into the SysWOW64 folder, because it can’t see the 64-bit folder. We are going to run a check shortly to make sure the module will be available within Jenkins before we start running into confusing errors.
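A quick way to sanity check the copy from your own session is to ask the 32-bit PowerShell host directly (the path below assumes a default Windows install):

# Launch 32-bit PowerShell from a 64-bit session and confirm Pester is visible there.
& "$env:windir\SysWOW64\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -Command "Get-Module -ListAvailable Pester"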

Plugins

We are going to need four plugins to make this workflow happen. Go to Manage Jenkins > Manage Plugins and select the Available tab to install the plugins we need: the NUnit plugin, the Parameterized Trigger plugin, the Copy Artifact plugin, and the PowerShell plugin.

Creating the First Project

Our first project is going to be called Test Permissions Job.

[Screenshot: creating the Test Permissions Job project]

The only thing we are going to ask this job to do at first is confirm that the Pester module is going to be available to Jenkins as we expect. It’s worth discussing for just a second though how I intend to accomplish this. Below is a screenshot of the script I’m going to run in the project.

[Screenshot: job configuration with a Windows PowerShell build step calling the script]


That’s not much, right? Here’s the thing: writing actual PowerShell code in that little text box would suck really, really bad. I don’t want to do it. So instead, all I do is use the automatic environment variables that Jenkins gives me to find the path to scripts that I edit in the PowerShell ISE.
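In other words, the text box holds little more than a call like this (the script file name here is hypothetical; it’s whatever you save into the workspace):

# Invoke the real script from the job's workspace (hypothetical file name).
& "$env:WORKSPACE\Get-Modules.ps1"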

Using this strategy buys me a lot of nice things. The config is nice, clean and easy to read. The scripts are easy to edit in the normal PowerShell ISE. Finally, if I want to make a change to the way the job behaves, I can just edit the script, hit save in the ISE, and execute the job again, without ever having to actually change the job configuration at all! It really makes your testing iterations go much much faster. I think you’ll like it.

If you’re working with a brand new project, the workspace won’t exist yet, so you won’t be able to save anything there; just execute the project with no configuration so it doesn’t do anything, and Jenkins will create the workspace folder for you. Using the workspace folder, instead of somewhere else on the drive, is usually a good choice btw, so that if you set up slaves the jobs will continue to run and be able to access the scripts they need. The workspace folder on the master is usually found at C:\Program Files (x86)\Jenkins\workspace\<job name>

Ok, moving on. The contents of the script are as follows:

try {
    Get-Module -ListAvailable
}
catch {
    # Surface the error in the console log, then exit non-zero so Jenkins marks the build failed.
    # (A bare throw would end the script before a following exit statement could run.)
    Write-Error $_
    exit 1
}

And we are going to save it to the workspace folder:

[Screenshot: saving the script to the job's workspace folder]

Get-Module is a very simple cmdlet, but don’t forget to almost always put your code in try/catch blocks when scripting for Jenkins. The reason is that to execute PowerShell, Jenkins is calling PowerShell.exe from cmd.exe. If you, the script author, are not diligent about not only catching errors but also returning non-zero exit codes, then Jenkins doesn’t have any way of knowing that something went wrong. This will result in job steps that encounter errors but do not halt project execution, and do not cause projects to be marked as failures. So what we do is catch the error, write it out so that it makes it to the console for logging, and then return a non-zero exit code to ensure the project gets marked as failed.

Hopefully though, if we’ve done our job right, the console output from this first job run will show us a list of all available modules, including Pester. If you don’t see Pester in the list, try adding a line to the script to output the $env:PSModulePath variable and ensure the Pester module is in one of those directories.

The First Tests

First we’ll save a script to the workspace folder just like the one above. I am going to call it Get-CustomLocalGroupMembership.ps1, with the contents below:

function Get-CustomLocalGroupMembership
{
    param(
        [string[]]
        $computername,
        [string]
        $group = 'Administrators'
    )

    process{
        foreach($computer in $computername)
        {
            $props = @{computername="$computer";members=@()}
            $ADSIcomputer = [adsi]("WinNT://$computer,computer")

            try{$members = $ADSIcomputer.psbase.Children.Find($group,'Group').psbase.invoke("Members")}
            catch{Write-verbose "cannot find memberships for $computer"}

            foreach($member in $members)
            {
                try{$props.members += $member.GetType().InvokeMember("Name",'GetProperty',$null,$member,$null)}
                catch{$props.members += $null}
            }

            Write-Output (New-Object -TypeName PSObject -Property $props)
        }
    }
}

Next, the tests file that will leverage that function. I am going to call it UserPermissionsTests.tests.ps1. It will have the contents below and I’ll save it to the same directory.

. "$env:WORKSPACE\Get-CustomLocalGroupMembership.ps1"

$requiredUsers = 'LocalUser1','LocalUser2','LocalUser3'

Describe "Server Alive Tests" {
    $processes = Get-Process

    it "Should be running things" {
        $processes.count | Should BeGreaterThan 1
    }
}

Describe "Users and groups tests" {

    Context "Group membership context" {
        $members = Get-CustomLocalGroupMembership -computername $env:COMPUTERNAME

        it "Should have returned members" {
            $members | Should Not BeNullOrEmpty
        }

        foreach($user in $requiredUsers){
            it "Should Contain Required User: $user" {
                $members.members -eq $user | Should Be $user
            }
        }
    }
}

So let’s note a couple of things. When I look at that script in the PowerShell ISE, the $env:workspace variable isn’t going to mean anything. To test the script effectively in the ISE, you may need to assign an $env:workspace variable in your session manually before testing, to ensure it executes as you expect without making modifications you might forget to remove, breaking your project.

Next, the list of local users. I actually created them for the purposes of this demo, so feel free to do so yourself to follow along.

Now let’s look at the project’s only build step:

[Screenshot: build step invoking Pester against the workspace]

That was easy. Invoke-Pester will automatically do a recursive search in the current working directory ($env:workspace) for any files with <name>.tests.ps1 as the name format and execute them. This project step is very clear and easy to read.

If you noticed that, right after I got done saying we should almost always put our code in try/catch blocks, I didn’t do it here, you aren’t wrong. The trick is that Invoke-Pester takes care of this for me with the -EnableExit parameter. If any errors occur during execution, Invoke-Pester is kind enough to bubble up the error for me and return a non-zero exit code. Even if there are no unexpected exceptions, if I simply have failed tests, it will report the failures and return an exit code equal to the number of failed tests.
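For reference, the build step in the screenshot above boils down to something like the line below (parameter names are from Pester 3.x, and the output file name matches what the fix job reads later):

# -EnableExit returns the number of failed tests as the exit code, so Jenkins fails the build on any failure.
Invoke-Pester -EnableExit -OutputFile "$env:WORKSPACE\PermissionsTestsOutput.xml" -OutputFormat NUnitXml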

So let’s run the project and see what we get. Our output should look like below:

[Screenshot: Jenkins console output showing the failed tests]

This is really fantastic. We can see not only that tests failed, but exactly which user we expected to be present and wasn’t found. Take some time to click around in the job and look at all the nice results you get, and realize it only gets better as tests start to pass later.

Now let’s see if we can get the output file and read it to figure out how we can make some use of it.

[Screenshot: exploring the NUnit XML output in a PowerShell console]

Well, that’s ugly, but it will work to help us figure out what to do next. We can use this kind of exploration to figure out exactly how to query the XML file for the data we need, and what we see there is very close. I don’t know the NUnit XML format well enough to tell you what query will get you the data you need; I just know XML well enough to keep querying until I have what I need. Later you’ll see the query I came up with to make the fix-permissions project work. So let’s move on and set up the fix.
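The exploration itself is just a matter of loading the file as XML and poking at nodes until the shape makes sense, something like:

[xml]$NUnit = Get-Content "$env:WORKSPACE\PermissionsTestsOutput.xml"
# List each failed test case along with its failure message.
$NUnit.SelectNodes('//test-case[@result = "Failure"]') | Select-Object name, @{L='Message'; E={$_.failure.message}}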

Before you attempt to implement the configuration below, create a new Free Style Project called “Fix Permissions Job Step” if you want to follow along, and then add the Post-Build actions shown below to the Permissions Testing project.

[Screenshot: post-build actions triggering the fix job and copying the results file]

Next, in the Fix Permissions Job project we will tell it to copy the XML result file from the permissions test project. You will also see the build step that invokes a Pester test. The code that follows is the content of the AddUsersToAdmins.tests.ps1 file that the build step invokes, along with the code for a helper function it needs.

[Screenshot: Fix Permissions Job configuration]

. "$env:JENKINS_HOME\userContent\PowershellScripts\Add-DomainUserToLocalGroup.ps1"

[xml]$NUnit = Get-Content "$env:WORKSPACE\PermissionsTestsOutput.xml"

$users = $NUnit.SelectNodes('//test-case[@result = "Failure"]').failure.message | ForEach-Object{($_ -split '{([a-zA-Z\d\s]+)')[1]}

Describe "Adding Users to Admins" {

    foreach($user in $users){

        it "Should add user to admins: $user" {
            {Add-DomainUserToLocalGroup -domain $env:COMPUTERNAME -user $user -computer $env:COMPUTERNAME -group Administrators} | Should Not Throw
        }
    }
}

The helper function below is the content of the Add-DomainUserToLocalGroup.ps1 file that the tests dot-source:

Function Add-DomainUserToLocalGroup
{
    [cmdletBinding()]
    Param(
    [Parameter(Mandatory=$True)]
    [string]$computer,
    [Parameter(Mandatory=$True)]
    [string]$group,
    [Parameter(Mandatory=$True)]
    [string]$domain,
    [Parameter(Mandatory=$True)]
    [string]$user
    )
    Write-Host "Adding $user to group: $Group"

    $de =  [ADSI]("WinNT://$computer/$Group,group" )

    $de.psbase.Invoke("Add",([ADSI]("WinNT://$domain/$user")).path)
    Write-host "$user successfully added to $group`n"
}

Make sure that the test users are NOT part of the admins group, and then run the test job. What you should see is a test job that runs and fails three tests, then executes the fix job, which will add the users and mark those tests as passed.

Once you do that, go ahead and kick off the permissions test project again and you will see the tests not only pass this time, but Jenkins knows the tests failed last time and doesn’t mark them as just passed, but as fixed.

Let’s say that those user permissions going missing is a serious problem. In the past you might have set an alert on them being missing so someone could fix it. Now, there is only a need to send out an alert if the attempt at FIXING it fails, which Jenkins will know about as soon as it attempts the fix for you.

Conclusion

This kind of operational validation can be extended to testing things like ensuring a web site is up. Don’t just test that the w3wp process is running and that the SQL service is running. You can actually run Invoke-WebRequest and test, using Pester, that you get a return code of 200 and that the elements you expect to find are present in the web page; and if they aren’t, you can run further tests in follow-up projects to make automated attempts at solving some of the common issues you know might cause an outage.

You won’t get a midnight alert because Jenkins fixed it for you, but you can see in your build statuses the next morning that something went wrong and take a look at what it was based on the tests that failed.

Thanks for reading and of course if you have any questions, hit me up on Twitter!

Custom Certificate Based File Encryption

The Problem

I was recently asked to come up with a method to ensure files are encrypted at rest after they are transferred from client servers onto my employer’s servers. We have constraints on the way we do things that can make it difficult to install third-party software to get things like this done, so I often find myself in the position of having to come up with my own code for this kind of thing. That being said, cryptography is hard, even for really smart people, so I have no intention of actually rolling my own crypto.

Thankfully, with PowerShell, it’s easy to turn the .NET crypto classes into a usable tool for my purposes. Over the next few blog posts I’ll be using this MSDN page as the basis for a custom encryption module that can be deployed to servers without having to install any third-party software at all.

How Certificate Based Cryptography is Going to Work

Since you’re reading a blog mostly about PowerShell scripting, I’m going to assume you are working in a Windows environment. The most natural way to encrypt and decrypt files, then, is using certificates managed in the Windows certificate store. The thing is, as you’ll know if you understand how an SSL web session is encrypted, public/private key cryptography isn’t actually very good at encrypting large amounts of data like text files or media files.

If you read the linked MSDN page closely you get something like the following workflow.

  1. Create a certificate with a public and private key pair
  2. Export the public key and copy it to the server that holds the source sensitive files
  3. For each file that we encrypt, do the following
    1. Generate a new AES symmetric encryption key and IV
    2. Use the key and IV to encrypt the data file.
    3. Use the public key to encrypt the key and the IV, and prepend them to the encrypted data file (see the layout sketch after this list).
  4. When the files arrive on the destination server, we can use the private key to decrypt the key and IV for each data file, and then use the decrypted key to decrypt the larger data file.
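The net effect is a small envelope at the front of each encrypted file. Here is a sketch of the layout, matching the header the encryption function later in this post writes:

# [4 bytes]     LenK  - length of the RSA-encrypted AES key
# [4 bytes]     LenIV - length of the AES IV
# [LenK bytes]  the AES key, encrypted with the certificate's RSA public key
# [LenIV bytes] the IV, stored in the clear (an IV is not secret)
# [remainder]   the file contents, encrypted with AES-256 in CBC mode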

Building the Module

The first two functions we need are one to create a certificate and one to check whether the one we are creating already exists. These are easy since we can just use self-signed certificates. Self-signed certs are fine for this use since we aren’t asking browsers or other systems to trust them. They are purely for our own use.


Import-Module PKI

<#
	.SYNOPSIS
		Get an existing client's certificate from the certificate store.
	.DESCRIPTION
		Get an existing client's certificate from the LocalMachine\AddressBook certificate store and return it as an X509Certificate2 .NET object.
	.PARAMETER  Client
		The client name you would like to find.
	.EXAMPLE
		Get-ClientCert -client Testclient

		Directory: Microsoft.PowerShell.Security\Certificate::localMachine\AddressBook

		Thumbprint                                Subject
		----------                                -------
		EAE61338A4F802A989406506DC471A0C3A83F371  CN=Testclient

	.EXAMPLE
		Get-ClientCert -client "testClient2" | ForEach-Object { [IO.File]::WriteAllBytes("$($PWD.Path)\$($_.Subject).cer",($_.export('Cert', 'password'))); Get-Item "$($PWD.Path)\$($_.Subject).cer"}

		Directory: C:\

		Mode                LastWriteTime     Length Name
		----                -------------     ------ ----
		-a---         3/26/2016   5:37 PM       2601 CN=testClient2.cer

		This example shows you how to get a client certificate and export its public key as a certificate file. This allows you to transport the public key to a client server for use encrypting files.
	.EXAMPLE
		"testClient2", "testClient3", "testClient4" | Get-ClientCert | ForEach-Object { [IO.File]::WriteAllBytes("$($PWD.Path)\$($_.Subject).p12",($_.export('PKCS12', 'password'))); Get-Item "$($PWD.Path)\$($_.Subject).p12"}

		Directory: C:\

		Mode                LastWriteTime     Length Name
		----                -------------     ------ ----
		-a---         3/26/2016   5:37 PM       2601 CN=testClient2.p12
		-a---         3/26/2016   5:37 PM       2593 CN=testClient3.p12
		-a---         3/26/2016   5:37 PM       2601 CN=testClient4.p12

		This example shows you how to get a set of client certificates and export their full private and public keys. Most useful for importing into a new key store for server migrations.
		This is only necessary until the Export-ClientCert function is complete.
	.INPUTS
		System.String
	.OUTPUTS
		System.Security.Cryptography.X509Certificates.X509Certificate2
#>

function Get-ClientCert
{
	[outPutType([System.Security.Cryptography.X509Certificates.X509Certificate2])]
	param
	(
		[parameter(Mandatory = $true, ValueFromPipeline = $true)]
		[string]$client
	)

	process
	{
		Write-Output (Get-ChildItem cert:\localMachine\AddressBook | Where-Object Subject -EQ "CN=$client")
	}
}

<#
	.SYNOPSIS
		Create a new encryption certificate for a client.
	.DESCRIPTION
		Create a new self-signed X509 certificate for a named client and output the public key to a file.
	.PARAMETER  client
		Client Name.
	.PARAMETER  outFolder
		Folder to output the Public Key File.
	.EXAMPLE
		PS C:\> New-ClientCert -client NewClient -outFolder c:\ClientPublicKeys

				Directory: C:\ClientPublicKeys

		Mode                LastWriteTime     Length Name
		----                -------------     ------ ----
		-a---         3/26/2016   4:05 PM        802 NewClient.cer

		This example shows how to call the New-ClientCert function with a single client name.

	.EXAMPLE
		PS C:\>$clients = "newClient4", "newClient5", "newClient6"
		PS C:\>$clients | New-ClientCert -outFolder c:\ClientPublicKeys
				 Directory: C:\ClientPublicKeys

		Mode                LastWriteTime     Length Name
		----                -------------     ------ ----
		-a---         3/26/2016   4:09 PM        799 newClient4.cer
		-a---         3/26/2016   4:09 PM        799 newClient5.cer
		-a---         3/26/2016   4:09 PM        799 newClient6.cer

		This example shows how to call the New-ClientCert function with multiple client names via pipeline.

	.INPUTS
		System.String

	.OUTPUTS
		System.io.FileInfo
#>

function New-ClientCert {
	[OutputType([System.IO.FileInfo])]
	param(
		[Parameter(Position=0, Mandatory=$true,ValueFromPipeline=$true)]
		[System.String]
		$client,
		[Parameter(Position=1)]
		[System.String]
		$outFolder
	)

	begin {
		if ($outFolder.IndexOf('.') -gt 0)
		{
			throw "-outFolder Parameter should be a folder, not a file name."
		}
	}
	process {

		if (Get-ClientCert -client $client)
		{
			Write-Warning "A certificate already exists for $client"
		}
		else
		{
			$cert = New-SelfSignedCertificate -DnsName $client -CertStoreLocation Cert:\LocalMachine\My | Move-Item -Destination Cert:\LocalMachine\AddressBook -PassThru

			if ($outFolder)
			{
				$outPath = Join-Path -Path $outFolder -ChildPath "$client.cer"
			}
			else
			{
				$outPath = Join-Path -Path $PWD.Path -ChildPath "$client.cer"
			}

			Export-Certificate -Cert $cert -FilePath $outPath -Type CERT -NoClobber
		}
	}
}

These are pretty thin wrappers over existing cmdlets, but they serve the purpose at hand. For instance, in this case I know that if I’m generating a self-signed cert, I’m always going to want to export the public key for use on a remote server. So why write the commands to do the export myself each time I generate a cert? Wrap it in a function and voila, the cert is conveniently exported each and every time.

Also, notice in the New-ClientCert function that each cert we create gets immediately moved to the AddressBook cert store. This is because we aren’t going to be asking the system to trust these certs in any way. I don’t need a browser to accept connections encrypted with these certs; I just need a place to keep named certificates.
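You can confirm where the certs land with a quick listing of the cert: drive:

Get-ChildItem Cert:\LocalMachine\AddressBook | Format-Table Thumbprint, Subject -AutoSize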

To actually encrypt a file we use the following function.


<#
	.SYNOPSIS
		Encrypt a file using the specified named certificate
	.DESCRIPTION
		Use the public key from the certificate named by the -client parameter to encrypt the data in the file specified by the -path parameter.
		The data is encrypted and copied to a file in the same folder as the source file, with .encrypted appended to the file name.
	.PARAMETER  client
		Find a certificate in the LocalMachine\AddressBook cert store with the Subject set to "CN=$client".
	.PARAMETER  path
		Path to the file to be encrypted
	.EXAMPLE
		PS C:\>ConvertTo-EncryptedFile -path "c:\data.txt" -client "NewClient"

				 Directory: C:\

		Mode                LastWriteTime     Length Name
		----                -------------     ------ ----
		-a---         3/26/2016   4:09 PM        799 data.txt.encrypted

		This example shows how to call the ConvertTo-EncryptedFile with a single client and file.


	.INPUTS
		System.String

	.OUTPUTS
		System.IO.FileInfo

#>

function ConvertTo-EncryptedFile
{
	[outputType([System.IO.FileInfo])]
	param
	(
		[parameter(Mandatory = $true)]
		[string]$path,
		[string]$client
	)

	$cert = Get-ClientCert -client $client

	if(Test-Path $path)
	{
		$file = Get-Item $path
		$folder = $file.DirectoryName
		$Name = $file.Name

		$destination = Join-Path $folder -ChildPath "$Name.encrypted"

		$serviceProvider = [System.Security.Cryptography.RSACryptoServiceProvider]$cert.PublicKey.Key
		$aesManaged = New-Object System.Security.Cryptography.AesManaged

		$aesManaged.KeySize = 256
		$aesManaged.BlockSize = 128
		$aesManaged.Mode = 'CBC'

		$transform = $aesManaged.CreateEncryptor()

		$keyformatter = New-Object System.Security.Cryptography.RSAPKCS1KeyExchangeformatter $serviceProvider

		[byte[]]$keyEncrypted = $keyformatter.CreateKeyExchange($aesManaged.Key, $aesManaged.GetType())

		[byte[]]$lenK = [bitconverter]::GetBytes($keyEncrypted.Length)
		[byte[]]$lenIV = [bitconverter]::GetBytes($aesManaged.IV.Length)

		$outFS = New-Object System.IO.FileStream @($destination, [System.IO.FileMode]::Create)

		# Write the envelope header: the two lengths, then the RSA-encrypted AES key, then the IV in the clear.
		# Without writing the key and IV themselves, the file could never be decrypted later.
		$outFS.Write($lenK, 0, 4)
		$outFS.Write($lenIV, 0, 4)
		$outFS.Write($keyEncrypted, 0, $keyEncrypted.Length)
		$outFS.Write($aesManaged.IV, 0, $aesManaged.IV.Length)

		$outStreamEncrypted = New-Object System.Security.Cryptography.CryptoStream @($outFS, $transform, [System.Security.Cryptography.CryptoStreamMode]::Write)

		$count = 0
		$offset = 0

		$blockSizeBytes = $aesManaged.BlockSize / 8
		$data = New-Object byte[] $blockSizeBytes
		$bytesRead = 0

		$inFS = New-Object System.IO.FileStream @($path, [System.IO.FileMode]::Open)

		do
		{
			$count = $inFS.Read($data, 0, $blockSizeBytes)
			$offset += $count
			$outStreamEncrypted.Write($data, 0, $count)
			$bytesRead += $blockSizeBytes
		}
		while ($count -gt 0)
		$inFS.Close()
		$outStreamEncrypted.FlushFinalBlock()
		$outStreamEncrypted.Close()
		$outFS.Close()

		$inFS.Dispose()
		$outStreamEncrypted.Dispose()
		$outFS.Dispose()

		Remove-Variable transform
		$aesManaged.Dispose()

		Write-Output (Get-Item $destination)

	}
	else
	{
		throw "File to encrypt not found at path: $path"
	}

}

This function is where we see the code from the linked MSDN article translated into PowerShell. Going over how it functions in too much detail could take a while, but it’s basically the workflow detailed above.

That’s it for this post as I’ve run out of time, but in later posts we are going to fill out the support functions we need to make a usable module, and of course, we’ll work on getting the encrypted data back again into plain text or usable data.

Event Based Asynchronous Job Management In Powershell

In my last post I demoed building an event-based GUI app in PowerShell Studio. You probably noticed, though, that some of the code to handle long running tasks in a background job was less than ideal. To recap, it was handled as follows:

  • Create a timer object
  • Assign a code block to the tick event that knows how to poll for the status of all of your background jobs
  • Create and start the jobs running
  • Start the timer ticking
  • The code block checks for job results and hopefully cleans up after itself when the jobs are done.

It certainly works, but it can hardly be called clean. In fairness, without C#’s background workers, Powershell is at a cleanliness disadvantage, but I think there’s still a better way. What if the process looked more like:

  • Register an event handler that just waits for your job to finish without sitting there and cycling over and over again.
  • Create and start your job
  • Inside the code of the job, fire an event when the job is done that calls your registered event handler
  • The event handler consumes the results of the long running task and cleans up the job when it’s done.

Check out the code snippet below and then I’ll go over it in detail.

$session = New-PSSession -ComputerName "Server1"

$tb = New-Object System.Windows.Controls.TextBox

$block = {
    Write-host ($event.SourceArgs | Out-String)

    $tb.Text = $event.SourceArgs | Out-String

    Get-Job -Name $sender | Remove-Job -Force
}

Register-EngineEvent -SourceIdentifier Custom.RaisedEvent -Action $block

$jobName = "Server1EventTest"

Invoke-Command -Session $session -ScriptBlock {
    param([string]$jobName)
    Start-Sleep -Seconds 10
    Register-EngineEvent Custom.RaisedEvent -Forward
    New-Event Custom.RaisedEvent -Sender $jobName -EventArguments (Get-Service)
} -ArgumentList $jobName -AsJob -JobName $jobName | Out-Null

Considerations in this code

Session Variables: Creating a session variable takes a little time, but it saves you time later. If you are making a GUI, find an unobtrusive place in your app’s execution, like the FormShown event, to create session objects, or even show a loading spinner before the user is allowed to start if you need to. Create sessions on the local server and to any remote servers you want to execute jobs on. That way they will be ready to go when you need to pass them to Invoke-Command.

Job Names: There might be a better way to do this, but to allow the code to clean up after itself without looping through all background jobs, it’s important that the event handling code have a way to know which job it should delete. In typical use you would probably build a string inside a loop for each job name you need, and pass it into the job.
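For example, the loop version of the snippet above might look like this sketch (server names are placeholders):

foreach ($server in 'Server1','Server2','Server3') {
    # Build a unique job name per server so the callback knows which job to clean up.
    $jobName = "${server}EventTest"
    $session = New-PSSession -ComputerName $server
    Invoke-Command -Session $session -ScriptBlock {
        param([string]$jobName)
        Register-EngineEvent Custom.RaisedEvent -Forward
        New-Event Custom.RaisedEvent -Sender $jobName -EventArguments (Get-Service)
    } -ArgumentList $jobName -AsJob -JobName $jobName | Out-Null
}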

Job Name Param: The script block you pass into the job needs to take at least one parameter so it can receive the name you’ve assigned the job. This is important because the job’s result set needs to include this name so the callback code block knows which job it’s receiving results from.

Register-EngineEvent in Remote Code Block: It’s not great that you have to repeat the event name so many times, but it’s important that you register the engine event in the main session, and also in the background job code block. In the background code block, though, you use the -Forward parameter to ensure the event you raise later gets forwarded up to the parent session.

-EventArguments: This should be the result of your long running operation.

-Sender: The sender will be your job name so the callback codeblock knows which job to clean up.

Making use of values: If you have a GUI app it’s really easy to make use of the job’s results. You can find the UI element you want to modify and assign the value where it’s needed. If you are not in a GUI app, you can either create the variable ahead of time and do the assignment in the callback code block, or assign a new variable in the code block, but make sure to scope the variable so that it still exists after the code block completes.

Memory Usage: I looked for a decent way to do this using runspaces because I think the memory usage is probably lower, but I didn’t find a decent way to make it happen. So keep in mind that this works well, but keep an eye on RAM usage in testing. If you start a loop over a large number of objects I can imagine memory consumption getting out of control pretty quickly. But of course, using the looping method the same consideration applies.

Why Not Runspaces?: Runspaces have a lot of advantages besides lower memory usage. One of them is persistence. The runspace doesn’t need to be cleaned up when it’s done with a task. You can also give it lots of function definitions and just ask it to execute them as needed. The problem I ran into was that it was a lot harder to get the runspace to communicate results and data back up to the parent runspace than it was to communicate downward.

Anyway, I hope you like it and please let me know if you think there are ways to improve it! You can reach me on Twitter @RandomNoun7

Event Driven App with Sapien Powershell Studio

I was on Twitter the other day and saw that @juneb_get_help tweeted asking people to talk about things they had built with Powershell Studio. I responded, but I thought it really deserved a blog post to talk about the app and the kinds of things you can do with it.

The Business Problem

The company I work for installs software inside their clients’ networks. We provide the client with a detailed spec of what we would like the servers they give us to look like, but it is very common for servers to be just a little bit wrong. Since we use Puppet for configuration management, it’s important to verify that servers are in the correct state before we accept them.

The app I wrote is dropped onto a single server in the environment, and when provided a list of servers to test, invokes a series of remote jobs that instruct each box to test itself for correct state and report back the results.

Why Powershell Studio

As we will see shortly, PowerShell Studio is a good solution for this because it allows you to take advantage of the convenience of PowerShell for administering servers, while also taking advantage of event-based programming and easily wrapping scripts in a GUI that suits the needs of slightly less technical users.

The App

[Screenshot: PowerShell Studio design view of the full app]

In the screenshot above you can see the design view for the app. The app consists of a main form and a tab control with a tab for each stage of testing, with sub-controls for data. In the tab you see here I have a couple of text boxes for Active Directory account names. Before the machines are tested, I use the app to ensure that the Active Directory accounts we asked for have been created.

Notice on the right side the name says textBoxSQLAccount. That is going to be the name of a variable created by the Studio to reference that text box.

[Screenshot: adding a Leave event handler to the text box]

In this screenshot we are adding an event handler to the lower text box. When you click OK you get a code block for the object’s event handler.

[Screenshot: the event handler code block with autocomplete suggestions]

If you’re familiar with PowerShell syntax, you will recognize this as a variable being assigned a script block. In the background, when PowerShell Studio builds the app, it ensures that code is executed when that event fires, just as the name implies. To keep things neat, the code is factored out into a function and I simply pass the textbox in, since both boxes need to run the same check when the Leave event fires.

One of the cool things about Powershell Studio is how the really great auto complete incentivises you to write good code. The function Validate-Textbox was defined with a parameter of type System.Windows.Forms.Textbox. Powershell Studio knows it, so when auto complete comes up, it only shows me the variables of the correct type.

The validation code is as follows:

function Verify-ADObject
{
	param (
		[string]$name
	)

	if ((([ADSISearcher]"Name=$($name)").FindOne(), ([ADSISearcher]"SAMAccountName=$($name)").FindOne() -ne $NULL)[0])
	{
		Write-Output $true
	}
	else
	{
		Write-Output $false
	}
}

function Validate-TextBox
{
	param (
		[System.Windows.Forms.TextBox]$textBox
	)

	if (Verify-ADObject -name $textBox.Text)
	{
		$textBox.BackColor = 'LimeGreen'
	}
	else
	{
		$textBox.BackColor = 'Red'
	}
}

With these event handlers in place, every time my cursor leaves the text box the Validate-TextBox function is called with the current text box as the parameter, which then passes the text value to Verify-ADObject. If an object is returned we know it exists and the textbox turns green; if not, we get a red box. Since this is a demo app this is enough, but in reality we would want some checks in place, such as ensuring a value actually exists in the text box in case someone just clicked on it by accident and then left.
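A minimal guard at the top of Validate-TextBox would cover the accidental-click case:

# Skip validation (and leave the box uncolored) if the user never typed anything.
if ([string]::IsNullOrWhiteSpace($textBox.Text)) { return }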
[Screenshot: the text boxes after the first validation run]

Some of you at this point may be wagging your fingers at me, and believe me I know. I shouldn’t be making network calls on the UI thread. I don’t disagree, but I’m also lazy, and this check happens very quickly so I’m not worried about it for these text boxes. In the next tab though I’m going to do it right.

[Screenshot: the IIS tab with the installer running]

This tab is going to install the roles we need on the server. In reality Puppet can take care of this for us, but this is a decent way to demo the next concept I want to cover. Combining Powershell jobs and event driven programming we can spin off long running processes into a background thread that keeps our UI from blocking while the servers go about their business.

The IIS tab consists of little more than a DataGridView control, a progress bar, and a button. After we add servers to the list in the DataGridView control, we handle the button’s Click event to start installing the Web Server roles that we need.

$buttonInstall_Click={
	Begin-Install $datagridview1 -bar $progressbar1
}

Earlier in the form code that you don’t see here I created a timer object, but didn’t start it. In the next code section I’ll get the list of servers from the DataGridView control, spin off the jobs to the background, and then add a code block to the timer’s tick event before starting the timer. The effect is that the invocation of Begin-Install completes and the UI thread is unblocked. In the background however, the timer is still running, and with each tick it calls a code block that checks on the status of the jobs we spun off. The Get-JobStatus function that gets called knows how to find the DataGridView control and update the appropriate rows as the servers report their status back to the background job. Here’s the code.

function Begin-Install
{
	# Note: this function reads the form controls ($datagridview1, $progressbar1) from the
	# enclosing form scope, so the arguments passed at the call site are effectively ignored.
	$scriptBlock = {
		Add-WindowsFeature Web-WebServer, Web-Mgmt-Console, Web-App-Dev, Web-Asp-Net45, Web-Mgmt-Console
	}

	$style = New-Object System.Windows.Forms.DataGridViewCellStyle

	$style.BackColor = 'Yellow'

	$datagridview1.Rows | Where-Object{$_.Cells[0].Value.length -gt 0} | ForEach-Object{ $_.Cells[1, 2] } | ForEach-Object{ $_.style = $style }

	$progressbar1.Style = 'Marquee'
	$progressbar1.Visible = $true

	foreach ($server in ($datagridview1.Rows | Where-Object{ $_.Cells[0].value.length -gt 0 } | ForEach-Object{ $_.Cells[0].Value }))
	{
		Invoke-Command -ScriptBlock $scriptBlock -ComputerName $server -AsJob -JobName "installRoles_$server"
	}

	$timer.add_Tick({ Get-JobStatus })
	$timer.Start()
}

function Get-JobStatus
{

	if ($jobs = Get-Job | where state -NE 'running')
	{
		foreach ($job in $jobs)
		{
			$results = Receive-Job $job
			Remove-Job $job

			$row = $datagridview1.Rows | Where-Object{ $_.Cells[0].Value -eq $results.PScomputername }
			$style = New-Object System.Windows.Forms.DataGridViewCellStyle

			if ($results.success)
			{
				$style.BackColor = 'LimeGreen'
				$row.cells[1].style = $style
				$row.cells[1].value = 1
			}
			else
			{
				$style.BackColor = 'Red'
				$row.cells[1].style = $style
			}
		}
	}
	else
	{
		if (!(Get-Job))
		{
			$timer.Stop()
			$timer.Dispose()
			$progressbar1.Visible = $False
		}
	}
}

In the real version the process of just finding the list of servers and spinning off the jobs can take a noticeable amount of time, so really the entire operation should be in the background, but for a demo, just spinning off the remote portion makes it easier to follow what’s happening.

Troubleshooting

One of the advantages of using PowerShell Studio forms apps like this is that it allows me to export the entire app, not as a finished executable, but just as one big long script. The reason this can be nice is that if there are bugs in the program, I don’t have to install the entire PowerShell Studio on a client machine to debug. I export the script, set my breakpoints in PowerShell ISE, and I can debug on the client’s machines using only the tools freely available on any Windows Server. With that in mind, the entire demo app is pasted below as a script. It’s very, very rough, just thrown together over the course of a few hours to make this blog post happen, so please understand that there are bugs and there’s no error handling, etc. I know. But if you want to let me know of any better ways to do this stuff, or just have general thoughts, don’t hesitate to reach out. You can find me on twitter @RandomNoun7

#------------------------------------------------------------------------
# Source File Information (DO NOT MODIFY)
# Source ID: 1d9b2aea-ddc6-4129-a0a5-07da6d38202b
# Source File: C:\Users\bhurt\Documents\SAPIEN\PowerShell Studio 2015\Projects\Tabbed App Demo\Tabbed App Demo.psproj
#------------------------------------------------------------------------
<#
    .NOTES
    --------------------------------------------------------------------------------
     Code generated by:  SAPIEN Technologies, Inc., PowerShell Studio 2015 v4.2.99
     Generated on:       3/7/2016 10:25 AM
     Generated by:        
     Organization:        
    --------------------------------------------------------------------------------
    .DESCRIPTION
        Script generated by PowerShell Studio 2015
#>


#region Source: Startup.pss
#----------------------------------------------
#region Import Assemblies
#----------------------------------------------
[void][Reflection.Assembly]::Load('mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089')
[void][Reflection.Assembly]::Load('System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089')
[void][Reflection.Assembly]::Load('System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089')
[void][Reflection.Assembly]::Load('System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089')
[void][Reflection.Assembly]::Load('System.Drawing, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a')
[void][Reflection.Assembly]::Load('System.Xml, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089')
[void][Reflection.Assembly]::Load('System.DirectoryServices, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a')
[void][Reflection.Assembly]::Load('System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089')
[void][Reflection.Assembly]::Load('System.ServiceProcess, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a')
#endregion Import Assemblies

#Define a Param block to use custom parameters in the project
#Param ($CustomParameter)

function Main {
<#
    .SYNOPSIS
        The Main function starts the project application.
    
    .PARAMETER Commandline
        $Commandline contains the complete argument string passed to the script packager executable.
    
    .NOTES
        Use this function to initialize your script and to call GUI forms.
		
    .NOTES
        To get the console output in the Packager (Forms Engine) use: 
		$ConsoleOutput (Type: System.Collections.ArrayList)
#>
	Param ([String]$Commandline)
		
	#--------------------------------------------------------------------------
	#TODO: Add initialization script here (Load modules and check requirements)
	
	
	#--------------------------------------------------------------------------
	
	if((Call-MainForm_psf) -eq 'OK')
	{
		
	}
	
	$global:ExitCode = 0 #Set the exit code for the Packager
}






#endregion Source: Startup.pss

#region Source: MainForm.psf
function Call-MainForm_psf
{
	#----------------------------------------------
	#region Import the Assemblies
	#----------------------------------------------
	[void][reflection.assembly]::Load('mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089')
	[void][reflection.assembly]::Load('System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089')
	[void][reflection.assembly]::Load('System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089')
	[void][reflection.assembly]::Load('System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089')
	[void][reflection.assembly]::Load('System.Drawing, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a')
	[void][reflection.assembly]::Load('System.Xml, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089')
	[void][reflection.assembly]::Load('System.DirectoryServices, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a')
	[void][reflection.assembly]::Load('System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089')
	[void][reflection.assembly]::Load('System.ServiceProcess, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a')
	#endregion Import Assemblies

	#----------------------------------------------
	#region Generated Form Objects
	#----------------------------------------------
	[System.Windows.Forms.Application]::EnableVisualStyles()
	$MainForm = New-Object 'System.Windows.Forms.Form'
	$TabControl = New-Object 'System.Windows.Forms.TabControl'
	$ADTab = New-Object 'System.Windows.Forms.TabPage'
	$tablelayoutpanel1 = New-Object 'System.Windows.Forms.TableLayoutPanel'
	$textBoxSQLAccount = New-Object 'System.Windows.Forms.TextBox'
	$textBoxAppPool = New-Object 'System.Windows.Forms.TextBox'
	$labelSQLServiceAccount = New-Object 'System.Windows.Forms.Label'
	$labelAppPoolServiceAccoun = New-Object 'System.Windows.Forms.Label'
	$IISTab = New-Object 'System.Windows.Forms.TabPage'
	$buttonInstall = New-Object 'System.Windows.Forms.Button'
	$progressbar1 = New-Object 'System.Windows.Forms.ProgressBar'
	$datagridview1 = New-Object 'System.Windows.Forms.DataGridView'
	$ServerName = New-Object 'System.Windows.Forms.DataGridViewTextBoxColumn'
	$IISInstalled = New-Object 'System.Windows.Forms.DataGridViewCheckBoxColumn'
	$InitialFormWindowState = New-Object 'System.Windows.Forms.FormWindowState'
	#endregion Generated Form Objects

	#----------------------------------------------
	# User Generated Script
	#----------------------------------------------
	$timer = New-Object System.Windows.Forms.Timer
	
	$MainForm_Load={
	#TODO: Initialize Form Controls here
	
	}
	
	$MainForm_Shown={
	
	}
	
	$textBoxSQLAccount_Leave={
		Validate-TextBox -textBox $textBoxSQLAccount
	}
	
	$textBoxAppPool_Leave={
		Validate-TextBox -textBox $textBoxAppPool
	}
	
	
	#region Control Helper Functions
	function Load-DataGridView
	{
		<#
		.SYNOPSIS
			This functions helps you load items into a DataGridView.
	
		.DESCRIPTION
			Use this function to dynamically load items into the DataGridView control.
	
		.PARAMETER  DataGridView
			The DataGridView control you want to add items to.
	
		.PARAMETER  Item
			The object or objects you wish to load into the DataGridView's items collection.
		
		.PARAMETER  DataMember
			Sets the name of the list or table in the data source for which the DataGridView is displaying data.
	
		#>
		Param (
			[ValidateNotNull()]
			[Parameter(Mandatory=$true)]
			[System.Windows.Forms.DataGridView]$DataGridView,
			[ValidateNotNull()]
			[Parameter(Mandatory=$true)]
			$Item,
		    [Parameter(Mandatory=$false)]
			[string]$DataMember
		)
		$DataGridView.SuspendLayout()
		$DataGridView.DataMember = $DataMember
		
		if ($Item -is [System.ComponentModel.IListSource]`
		-or $Item -is [System.ComponentModel.IBindingList] -or $Item -is [System.ComponentModel.IBindingListView] )
		{
			$DataGridView.DataSource = $Item
		}
		else
		{
			$array = New-Object System.Collections.ArrayList
			
			if ($Item -is [System.Collections.IList])
			{
				$array.AddRange($Item)
			}
			else
			{	
				$array.Add($Item)	
			}
			$DataGridView.DataSource = $array
		}
		
		$DataGridView.ResumeLayout()
	}
	
	function ConvertTo-DataTable
	{
		<#
			.SYNOPSIS
				Converts objects into a DataTable.
		
			.DESCRIPTION
				Converts objects into a DataTable, which are used for DataBinding.
		
			.PARAMETER  InputObject
				The input to convert into a DataTable.
		
			.PARAMETER  Table
				The DataTable you wish to load the input into.
		
			.PARAMETER RetainColumns
				This switch tells the function to keep the DataTable's existing columns.
			
			.PARAMETER FilterWMIProperties
				This switch removes WMI properties that start with an underline.
		
			.EXAMPLE
				$DataTable = ConvertTo-DataTable -InputObject (Get-Process)
		#>
		[OutputType([System.Data.DataTable])]
		param(
		[ValidateNotNull()]
		$InputObject, 
		[ValidateNotNull()]
		[System.Data.DataTable]$Table,
		[switch]$RetainColumns,
		[switch]$FilterWMIProperties)
		
		if($Table -eq $null)
		{
			$Table = New-Object System.Data.DataTable
		}
	
		if($InputObject-is [System.Data.DataTable])
		{
			$Table = $InputObject
		}
		else
		{
			if(-not $RetainColumns -or $Table.Columns.Count -eq 0)
			{
				#Clear out the Table Contents
				$Table.Clear()
	
				if($InputObject -eq $null){ return } #Empty Data
				
				$object = $null
				#find the first non null value
				foreach($item in $InputObject)
				{
					if($item -ne $null)
					{
						$object = $item
						break	
					}
				}
	
				if($object -eq $null) { return } #All null then empty
				
				#Get all the properties in order to create the columns
				foreach ($prop in $object.PSObject.Get_Properties())
				{
					if(-not $FilterWMIProperties -or -not $prop.Name.StartsWith('__'))#filter out WMI properties
					{
						#Get the type from the Definition string
						$type = $null
						
						if($prop.Value -ne $null)
						{
							try{ $type = $prop.Value.GetType() } catch {}
						}
	
						if($type -ne $null) # -and [System.Type]::GetTypeCode($type) -ne 'Object')
						{
			      			[void]$table.Columns.Add($prop.Name, $type) 
						}
						else #Type info not found
						{ 
							[void]$table.Columns.Add($prop.Name) 	
						}
					}
			    }
				
				if($object -is [System.Data.DataRow])
				{
					foreach($item in $InputObject)
					{	
						$Table.Rows.Add($item)
					}
					return  @(,$Table)
				}
			}
			else
			{
				$Table.Rows.Clear()	
			}
			
			foreach($item in $InputObject)
			{		
				$row = $table.NewRow()
				
				if($item)
				{
					foreach ($prop in $item.PSObject.Get_Properties())
					{
						if($table.Columns.Contains($prop.Name))
						{
							$row.Item($prop.Name) = $prop.Value
						}
					}
				}
				[void]$table.Rows.Add($row)
			}
		}
	
		return @(,$Table)	
	}
	#endregion
	
	$buttonInstall_Click={
		Begin-Install $datagridview1 -bar $progressbar1
	}
	
	#region AD Functions
	
	function Verify-ADObject
	{
		param (
			[string]$name
		)
		
		if ((([ADSISearcher]"Name=$($name)").FindOne(), ([ADSISearcher]"SAMAccountName=$($name)").FindOne() -ne $NULL)[0])
		{
			Write-Output $true
		}
		else
		{
			Write-Output $false
		}
	}
	
	function Validate-TextBox
	{
		param (
			[System.Windows.Forms.TextBox]$textBox
		)
		
		if (Verify-ADObject -name $textBox.Text)
		{
			$textBox.BackColor = 'LimeGreen'
		}
		else
		{
			$textBox.BackColor = 'Red'
		}
	}
	
	#endregion
	
	#region Install Tab Functions
	function Begin-Install
	{
		param (
			[System.Windows.Forms.DataGridView]$grid,
			[System.Windows.Forms.ProgressBar]$bar
		)
		
		$scriptBlock = {
			Add-WindowsFeature Web-WebServer, Web-Mgmt-Console, Web-App-Dev, Web-Asp-Net45
		}
		
		$style = New-Object System.Windows.Forms.DataGridViewCellStyle
		$style.BackColor = 'Yellow'
		
		# Mark the status cell of every row that has a server name as pending (yellow)
		$grid.Rows | Where-Object{ $_.Cells[0].Value.length -gt 0 } | ForEach-Object{ $_.Cells[1].Style = $style }
		
		$bar.Style = 'Marquee'
		$bar.Visible = $true
		
		# Install the roles on each listed server as a background job; Get-JobStatus polls the results
		foreach ($server in ($grid.Rows | Where-Object{ $_.Cells[0].Value.length -gt 0 } | ForEach-Object{ $_.Cells[0].Value }))
		{
			Invoke-Command -ScriptBlock $scriptBlock -ComputerName $server -AsJob -JobName "installRoles_$server"
		}
		
		$timer.add_Tick({ Get-JobStatus })
		$timer.Start()
	}
	
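	# Called on each timer tick: collect any finished jobs, color the matching grid
	# row green on success or red on failure, and stop the timer once no jobs remain.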
	function Get-JobStatus
	{
		
		if ($jobs = Get-Job | where state -NE 'running')
		{
			foreach ($job in $jobs)
			{
				$results = Receive-Job $job
				Remove-Job $job
				
				$row = $datagridview1.Rows | Where-Object{ $_.Cells[0].Value -eq $results.PScomputername }
				$style = New-Object System.Windows.Forms.DataGridViewCellStyle
				
				if ($results.success)
				{
					$style.BackColor = 'LimeGreen'
					$row.cells[1].style = $style
					$row.cells[1].value = 1
				}
				else
				{
					$style.BackColor = 'Red'
					$row.cells[1].style = $style
				}
			}
		}
		else
		{
			if (!(Get-Job))
			{
				$timer.Stop()
				$timer.Dispose()
				$progressbar1.Visible = $False
			}
		}
	}
	#endregion
		# --End User Generated Script--
	#----------------------------------------------
	#region Generated Events
	#----------------------------------------------
	
	$Form_StateCorrection_Load=
	{
		#Correct the initial state of the form to prevent the .Net maximized form issue
		$MainForm.WindowState = $InitialFormWindowState
	}
	
	$Form_StoreValues_Closing=
	{
		#Store the control values
		$script:MainForm_textBoxSQLAccount = $textBoxSQLAccount.Text
		$script:MainForm_textBoxAppPool = $textBoxAppPool.Text
		$script:MainForm_datagridview1 = $datagridview1.SelectedCells
	}

	
	$Form_Cleanup_FormClosed=
	{
		#Remove all event handlers from the controls
		try
		{
			$textBoxSQLAccount.remove_Leave($textBoxSQLAccount_Leave)
			$textBoxAppPool.remove_Leave($textBoxAppPool_Leave)
			$buttonInstall.remove_Click($buttonInstall_Click)
			$MainForm.remove_Load($MainForm_Load)
			$MainForm.remove_Shown($MainForm_Shown)
			$MainForm.remove_Load($Form_StateCorrection_Load)
			$MainForm.remove_Closing($Form_StoreValues_Closing)
			$MainForm.remove_FormClosed($Form_Cleanup_FormClosed)
		}
		catch [Exception]
		{ }
	}
	#endregion Generated Events

	#----------------------------------------------
	#region Generated Form Code
	#----------------------------------------------
	$MainForm.SuspendLayout()
	$TabControl.SuspendLayout()
	$ADTab.SuspendLayout()
	$tablelayoutpanel1.SuspendLayout()
	$IISTab.SuspendLayout()
	#
	# MainForm
	#
	$MainForm.Controls.Add($TabControl)
	$MainForm.ClientSize = '476, 452'
	$MainForm.Name = 'MainForm'
	$MainForm.StartPosition = 'CenterScreen'
	$MainForm.Text = 'Tabbed App Demo'
	$MainForm.UseWaitCursor = $True
	$MainForm.add_Load($MainForm_Load)
	$MainForm.add_Shown($MainForm_Shown)
	#
	# TabControl
	#
	$TabControl.Controls.Add($ADTab)
	$TabControl.Controls.Add($IISTab)
	$TabControl.Location = '12, 35'
	$TabControl.Name = 'TabControl'
	$TabControl.SelectedIndex = 0
	$TabControl.Size = '452, 405'
	$TabControl.TabIndex = 0
	#
	# ADTab
	#
	$ADTab.Controls.Add($tablelayoutpanel1)
	$ADTab.Location = '4, 22'
	$ADTab.Name = 'ADTab'
	$ADTab.Padding = '3, 3, 3, 3'
	$ADTab.Size = '444, 379'
	$ADTab.TabIndex = 0
	$ADTab.Text = 'Active Directory'
	$ADTab.UseVisualStyleBackColor = $True
	#
	# tablelayoutpanel1
	#
	$tablelayoutpanel1.Controls.Add($textBoxSQLAccount, 1, 0)
	$tablelayoutpanel1.Controls.Add($textBoxAppPool, 1, 1)
	$tablelayoutpanel1.Controls.Add($labelSQLServiceAccount, 0, 0)
	$tablelayoutpanel1.Controls.Add($labelAppPoolServiceAccoun, 0, 1)
	$tablelayoutpanel1.ColumnCount = 2
	$System_Windows_Forms_ColumnStyle_1 = New-Object 'System.Windows.Forms.ColumnStyle' ('Percent', 50)
	[void]$tablelayoutpanel1.ColumnStyles.Add($System_Windows_Forms_ColumnStyle_1)
	$System_Windows_Forms_ColumnStyle_2 = New-Object 'System.Windows.Forms.ColumnStyle' ('Percent', 50)
	[void]$tablelayoutpanel1.ColumnStyles.Add($System_Windows_Forms_ColumnStyle_2)
	$tablelayoutpanel1.Location = '6, 6'
	$tablelayoutpanel1.Name = 'tablelayoutpanel1'
	$tablelayoutpanel1.RowCount = 2
	$System_Windows_Forms_RowStyle_3 = New-Object 'System.Windows.Forms.RowStyle' ('Percent', 50)
	[void]$tablelayoutpanel1.RowStyles.Add($System_Windows_Forms_RowStyle_3)
	$System_Windows_Forms_RowStyle_4 = New-Object 'System.Windows.Forms.RowStyle' ('Percent', 50)
	[void]$tablelayoutpanel1.RowStyles.Add($System_Windows_Forms_RowStyle_4)
	$tablelayoutpanel1.Size = '432, 373'
	$tablelayoutpanel1.TabIndex = 0
	#
	# textBoxSQLAccount
	#
	$textBoxSQLAccount.Anchor = 'Bottom, Left'
	$textBoxSQLAccount.Location = '219, 163'
	$textBoxSQLAccount.Name = 'textBoxSQLAccount'
	$textBoxSQLAccount.Size = '210, 20'
	$textBoxSQLAccount.TabIndex = 0
	$textBoxSQLAccount.add_Leave($textBoxSQLAccount_Leave)
	#
	# textBoxAppPool
	#
	$textBoxAppPool.Location = '219, 189'
	$textBoxAppPool.Name = 'textBoxAppPool'
	$textBoxAppPool.Size = '210, 20'
	$textBoxAppPool.TabIndex = 1
	$textBoxAppPool.add_Leave($textBoxAppPool_Leave)
	#
	# labelSQLServiceAccount
	#
	$labelSQLServiceAccount.Anchor = 'Bottom, Right'
	$labelSQLServiceAccount.Location = '3, 163'
	$labelSQLServiceAccount.Name = 'labelSQLServiceAccount'
	$labelSQLServiceAccount.Size = '210, 23'
	$labelSQLServiceAccount.TabIndex = 2
	$labelSQLServiceAccount.Text = 'SQL Server Service Account'
	$labelSQLServiceAccount.TextAlign = 'MiddleRight'
	#
	# labelAppPoolServiceAccoun
	#
	$labelAppPoolServiceAccoun.Anchor = 'Top, Right'
	$labelAppPoolServiceAccoun.Location = '3, 186'
	$labelAppPoolServiceAccoun.Name = 'labelAppPoolServiceAccoun'
	$labelAppPoolServiceAccoun.Size = '210, 23'
	$labelAppPoolServiceAccoun.TabIndex = 3
	$labelAppPoolServiceAccoun.Text = 'App Pool Service Account'
	$labelAppPoolServiceAccoun.TextAlign = 'MiddleRight'
	#
	# IISTab
	#
	$IISTab.Controls.Add($buttonInstall)
	$IISTab.Controls.Add($progressbar1)
	$IISTab.Controls.Add($datagridview1)
	$IISTab.Location = '4, 22'
	$IISTab.Name = 'IISTab'
	$IISTab.Padding = '3, 3, 3, 3'
	$IISTab.Size = '444, 379'
	$IISTab.TabIndex = 1
	$IISTab.Text = 'IIS'
	$IISTab.UseVisualStyleBackColor = $True
	#
	# buttonInstall
	#
	$buttonInstall.Location = '7, 207'
	$buttonInstall.Name = 'buttonInstall'
	$buttonInstall.Size = '75, 23'
	$buttonInstall.TabIndex = 2
	$buttonInstall.Text = 'Install'
	$buttonInstall.UseVisualStyleBackColor = $True
	$buttonInstall.add_Click($buttonInstall_Click)
	#
	# progressbar1
	#
	$progressbar1.Location = '7, 311'
	$progressbar1.Name = 'progressbar1'
	$progressbar1.Size = '431, 23'
	$progressbar1.TabIndex = 1
	$progressbar1.Visible = $False
	#
	# datagridview1
	#
	$System_Windows_Forms_DataGridViewCellStyle_5 = New-Object 'System.Windows.Forms.DataGridViewCellStyle'
	$System_Windows_Forms_DataGridViewCellStyle_5.Alignment = 'MiddleCenter'
	$System_Windows_Forms_DataGridViewCellStyle_5.BackColor = 'Control'
	$System_Windows_Forms_DataGridViewCellStyle_5.Font = 'Microsoft Sans Serif, 8.25pt'
	$System_Windows_Forms_DataGridViewCellStyle_5.ForeColor = 'WindowText'
	$System_Windows_Forms_DataGridViewCellStyle_5.SelectionBackColor = 'Highlight'
	$System_Windows_Forms_DataGridViewCellStyle_5.SelectionForeColor = 'HighlightText'
	$System_Windows_Forms_DataGridViewCellStyle_5.WrapMode = 'True'
	$datagridview1.ColumnHeadersDefaultCellStyle = $System_Windows_Forms_DataGridViewCellStyle_5
	$datagridview1.ColumnHeadersHeightSizeMode = 'AutoSize'
	[void]$datagridview1.Columns.Add($ServerName)
	[void]$datagridview1.Columns.Add($IISInstalled)
	$datagridview1.Location = '6, 32'
	$datagridview1.Name = 'datagridview1'
	$datagridview1.ScrollBars = 'None'
	$datagridview1.Size = '432, 150'
	$datagridview1.TabIndex = 0
	#
	# ServerName
	#
	$ServerName.HeaderText = 'Server Name'
	$ServerName.Name = 'ServerName'
	#
	# IISInstalled
	#
	$IISInstalled.AutoSizeMode = 'Fill'
	$IISInstalled.HeaderText = 'Roles Installed'
	$IISInstalled.Name = 'IISInstalled'
	$IISInstalled.ReadOnly = $True
	$IISTab.ResumeLayout()
	$tablelayoutpanel1.ResumeLayout()
	$ADTab.ResumeLayout()
	$TabControl.ResumeLayout()
	$MainForm.ResumeLayout()
	#endregion Generated Form Code

	#----------------------------------------------

	#Save the initial state of the form
	$InitialFormWindowState = $MainForm.WindowState
	#Init the OnLoad event to correct the initial state of the form
	$MainForm.add_Load($Form_StateCorrection_Load)
	#Clean up the control events
	$MainForm.add_FormClosed($Form_Cleanup_FormClosed)
	#Store the control values when form is closing
	$MainForm.add_Closing($Form_StoreValues_Closing)
	#Show the Form
	return $MainForm.ShowDialog()

}
#endregion Source: MainForm.psf

#region Source: Globals.ps1
	#--------------------------------------------
	# Declare Global Variables and Functions here
	#--------------------------------------------
	
	
	#Sample function that provides the location of the script
	function Get-ScriptDirectory
	{
	<#
		.SYNOPSIS
			Get-ScriptDirectory returns the proper location of the script.
	
		.OUTPUTS
			System.String
		
		.NOTES
			Returns the correct path within a packaged executable.
	#>
		[OutputType([string])]
		param ()
		if ($hostinvocation -ne $null)
		{
			Split-Path $hostinvocation.MyCommand.path
		}
		else
		{
			Split-Path $script:MyInvocation.MyCommand.Path
		}
	}
	
	#Sample variable that provides the location of the script
	[string]$ScriptDirectory = Get-ScriptDirectory
	
	#endregion Source: Globals.ps1

#Start the application
Main ($CommandLine)

Examining and Manipulating Cross Platform Text Files

Have you ever had to transfer files over FTP between Windows and Linux systems and had to deal with an administrator who just could not comprehend why the files they are giving you aren’t coming out right?

Maybe they are transferring the files to you and they keep showing up with no line breaks. The problem, of course, is pretty simple on the surface. Windows uses a Carriage Return (Char(13))/Line Feed (Char(10)) pair to represent line breaks, while Linux and most other Unix-like operating systems use a Line Feed alone. And if you’re not very lucky, you are dealing with an OS or an import program that expects something even weirder, like RS (Char(30)).
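
One quick way to see those byte values for yourself is straight from a PowerShell prompt:

[int][char]"`r"    # 13 - Carriage Return (CR)
[int][char]"`n"    # 10 - Line Feed (LF)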

So yeah, the problem is simple to understand, and usually simple to fix. Use ASCII transfer mode in FTP instead of Binary and most of the time the problem goes away. But what do you do when the admin on the other side doesn’t believe you when you say the file is fine from your perspective? Or maybe they insist that they DID FIX IT!!! How do you convince someone that their encoding is wrong, or that yours is fine? Maybe it’s not worth the struggle. How do you fix the file so that they can consume it no matter what?

Examining the file

<#
.SYNOPSIS
    Get a specified number of bytes from a text file for display.
.DESCRIPTION
    Read in a specified number of bytes from a text file. Display the bytes as a table that shows each character along with its decimal and hex values.
.EXAMPLE
    Get-TextBytesTable -path c:\textFile.txt -count 100
#>
function Get-TextBytesTable
{
    [CmdletBinding()]
    [Alias('gbt')]
    Param
    (
        # Path to file to read.
        [Parameter(Mandatory=$true,
                   Position=0)]
        $Path,

        # Number of bytes to read
        [int]
        $count
    )

    Process
    {
        # Read the file as raw bytes and show each one as character / decimal / hex
        (Get-Content $Path -Raw -Encoding Byte)[0..($count - 1)] |
            ForEach-Object{
                $props = @{
                    Character = [char]$_
                    Decimal   = $_
                    Hex       = "0x$('{0:x}' -f $_)"
                }
                New-Object -TypeName PSObject -Property $props
            } |
            Format-Table Character, Decimal, Hex
    }
}

Let’s create a couple files and take a look at some of the output this function will give us.

$string = "Hello World!`r`nAnd here we have another line!"
$string | Set-Content c:\TestFile.txt -Encoding Ascii
$string | Set-Content c:\TestFile2.txt -Encoding UTF32
(Get-Content C:\TestFile.txt).trim() -join [char]10 | Set-Content c:\testFile3.txt -Encoding Ascii -NoNewline

Get-TextBytesTable -path c:\TestFile.txt -count 100
Get-TextBytesTable -path c:\testFile2.txt -count 100
Get-TextBytesTable -path c:\testFile3.txt -count 100

With the command and output below we see that the first file is a very plain ASCII-encoded file. No byte order mark or anything silly like that. Notice the Char 13 and Char 10. That’s our line break.

PS C:\> Get-TextBytesTable -Path .\testFile.txt -count 100

character Decimal Hex
--------- ------- ---
        H      72 0x48
        e     101 0x65
        l     108 0x6c
        l     108 0x6c
        o     111 0x6f
               32 0x20
        W      87 0x57
        o     111 0x6f
        r     114 0x72
        l     108 0x6c
        d     100 0x64
        !      33 0x21
      ...      13 0xd
      ...      10 0xa
        A      65 0x41
        n     110 0x6e
        d     100 0x64
               32 0x20
        h     104 0x68
        e     101 0x65
        r     114 0x72
        e     101 0x65
               32 0x20
        w     119 0x77
        e     101 0x65
               32 0x20
        h     104 0x68
        a      97 0x61
        v     118 0x76
        e     101 0x65
               32 0x20
        a      97 0x61
        n     110 0x6e
        o     111 0x6f
        t     116 0x74
        h     104 0x68
        e     101 0x65
        r     114 0x72
               32 0x20
        l     108 0x6c
        i     105 0x69
        n     110 0x6e
        e     101 0x65
        !      33 0x21
      ...      13 0xd
      ...      10 0xa

With UTF32 it gets a little more complicated. We start with the byte order mark and then have three padding bytes after each character, since UTF32 spends four bytes on every character. Our Char 13 and Char 10 are still there though. We’ve taken up more space, but this is still a fairly plain Windows file.

PS C:\> Get-TextBytesTable -Path .\testFile2.txt -count 100

character Decimal Hex
--------- ------- ---
        ÿ     255 0xff
        þ     254 0xfe
                0 0x0
                0 0x0
        H      72 0x48
                0 0x0
                0 0x0
                0 0x0
        e     101 0x65
                0 0x0
                0 0x0
                0 0x0
        l     108 0x6c
                0 0x0
                0 0x0
                0 0x0
        l     108 0x6c
                0 0x0
                0 0x0
                0 0x0
        o     111 0x6f
                0 0x0
                0 0x0
                0 0x0
               32 0x20
                0 0x0
                0 0x0
                0 0x0
        W      87 0x57
                0 0x0
                0 0x0
                0 0x0
        o     111 0x6f
                0 0x0
                0 0x0
                0 0x0
        r     114 0x72
                0 0x0
                0 0x0
                0 0x0
        l     108 0x6c
                0 0x0
                0 0x0
                0 0x0
        d     100 0x64
                0 0x0
                0 0x0
                0 0x0
        !      33 0x21
                0 0x0
                0 0x0
                0 0x0
      ...      13 0xd
                0 0x0
                0 0x0
                0 0x0
      ...      10 0xa
                0 0x0
                0 0x0
                0 0x0

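As an aside, if all you need is to identify the encoding, you don’t have to dump the whole file. Peeking at the first four bytes is enough, since FF FE 00 00 is the little-endian UTF32 byte order mark we see above:

$bom = Get-Content c:\TestFile2.txt -Encoding Byte -TotalCount 4
'{0:x2} {1:x2} {2:x2} {3:x2}' -f $bom[0], $bom[1], $bom[2], $bom[3]    # ff fe 00 00
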
Lastly, we see what the bytes look like in a file that looks fine to the Linux admin but looks like one long blob of text to us. This simple file is easy to fix manually, but if you’re trying to set up automated data imports on a Windows system, this can be a real pain.

PS C:\> Get-TextBytesTable -Path .\testFile3.txt -count 100

character Decimal Hex
--------- ------- ---
        H      72 0x48
        e     101 0x65
        l     108 0x6c
        l     108 0x6c
        o     111 0x6f
               32 0x20
        W      87 0x57
        o     111 0x6f
        r     114 0x72
        l     108 0x6c
        d     100 0x64
        !      33 0x21
      ...      10 0xa
        A      65 0x41
        n     110 0x6e
        d     100 0x64
               32 0x20
        h     104 0x68
        e     101 0x65
        r     114 0x72
        e     101 0x65
               32 0x20
        w     119 0x77
        e     101 0x65
               32 0x20
        h     104 0x68
        a      97 0x61
        v     118 0x76
        e     101 0x65
               32 0x20
        a      97 0x61
        n     110 0x6e
        o     111 0x6f
        t     116 0x74
        h     104 0x68
        e     101 0x65
        r     114 0x72
               32 0x20
        l     108 0x6c
        i     105 0x69
        n     110 0x6e
        e     101 0x65
        !      33 0x21

Fixing the File

So now that we’ve seen how we can inspect the file, what can we do if the admin on the other end just doesn’t know how to fix this? And by the way, that doesn’t always mean they are incompetent. I’ve been told by a very smart admin that getting this right transferring in and out of AIX is just hard.

That last command really shows us how we can make the other guy’s life easier for very little effort on our part. If you aren’t super familiar with PowerShell, it’s worth looking at exactly how it works.

(Get-Content C:\TestFile.txt) -join [char]10 | Set-Content c:\testFile3.txt -Encoding Ascii -NoNewline

Get-Content reads a file’s content, but it breaks the file up into discrete string objects, one per line, stripping the line endings in the process. The parentheses force the entire file to be read before the newly created array of string objects is handed off to the -join operator. We join with Char 10 in this case to give us Linux line endings. We pass the resulting string to Set-Content, choosing ASCII as our encoding (the encoding can be whatever the recipient wants) and using -NoNewline so a Windows line ending isn’t appended at the very end of the file. Now you can do a binary file transfer and the Linux system is happy.
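
If it helps, here is the same pipeline pulled apart into discrete steps (using the same files as above):

$lines = Get-Content C:\TestFile.txt     # array of strings, line endings stripped
$joined = $lines -join [char]10          # one string with LF (Char 10) between lines
$joined | Set-Content c:\testFile3.txt -Encoding Ascii -NoNewline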

Need to terminate lines with a “~”? Yeah, I’ve seen it. Just use -join [char]126. Any crazy line terminator they want, you can provide.
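
For instance (the output file name here is just illustrative):

(Get-Content C:\TestFile.txt) -join [char]126 | Set-Content c:\testFileTilde.txt -Encoding Ascii -NoNewline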

This also shows us how to repair incoming files with Linux line endings when the folks on the other side can’t fix them for us.

Get-Content c:\BrokenFile.txt | Set-Content c:\FixedFile.txt

In this case we take advantage of the fact that while many older Windows programs adhere slavishly to Windows CR/LF line endings, PowerShell really does attempt to be smarter: many cmdlets like Get-Content understand Linux line endings by default. Again, it strips the line endings as it breaks the file’s lines into an array of strings. As those string objects are passed on to Set-Content, it adds them to the file one at a time, but this time with standard Windows line endings, and just like that, a file that a second ago looked like one long line of gibberish is fixed.
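
You can confirm the repair with the byte table function from earlier; a Char 13 now precedes every Char 10:

Get-TextBytesTable -path c:\FixedFile.txt -count 20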

One last thing, just to save you a minute of frustration: notice that I did not write to the same file I read the data from. When PowerShell starts reading a file and breaking the lines into string objects, the first of those strings reaches Set-Content before Get-Content has finished reading the file. If you want to convert a file in place you will have to stage the data somewhere else first, or Set-Content will encounter a file that is still open for reading and throw an error because the file is locked.

PS C:\>$contents = Get-Content c:\BrokenFile.txt
PS C:\>$contents | Set-Content c:\FixedFile.txt

PS C:\>Get-Content c:\SourceFile.txt | Set-Content c:\temp.txt
PS C:\>Move-Item c:\temp.txt -destination c:\SourceFile.txt

If the file is small and you have the RAM to spare you can use the first method. If you want to conserve RAM and be nice to the other processes on the box, use the second.