Pages

Wednesday, September 19, 2018

Using a Free AWS box as Web Proxy Box

Throughout the year, there is always a need for a remote SSH box. SSH is a reasonable way to manage external/remote computers (command line, bash, etc.), run DNS/network lookups against Internet DNS servers, and even test web content without the interference of internal spam filters. Taking on this project helped me learn about Amazon Web Services (AWS) and some of the automation functionality available in the environment.

Configure AWS SSH Proxy box

NimrodFlores does a wonderful job explaining his process for spinning up a box in AWS and configuring the PuTTY client to connect to it. I had three issues with his final answer, though.

  1. The post suggests changing the default proxy settings for your browser. This is a less than ideal solution for me, as I must use the same browser for accessing corporate resources, and a browser-wide proxy would make that impossible. I've found that the Chrome extension "FoxyProxy" is free and lets me turn the proxy on and off with one click. 
  2. The box is never shut down. There is a point at which I would incur charges due to the amount of time the server runs. I really want the box to start on weekday mornings and stop when I am not using it. 
  3. By stopping the instance each night, Amazon assigns a new IP address each time, which requires updating the SSH session each time. My first idea was to attach an Elastic IP address to the instance, but that generated a few cents/day in charges for the 'static' address; I am at $0.86 for the first two weeks of the month. By getting rid of the EIP, I believe I can stop that recurring charge. 
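PuTTY isn't a requirement here. A minimal OpenSSH equivalent of the same SOCKS setup would look like this sketch (the key path, user, and hostname are placeholders, not values from the original post):

```shell
# Open a SOCKS proxy on local port 1080 through the EC2 box.
# -N: no remote command, just the tunnel; -D: dynamic (SOCKS) forwarding.
ssh -i ~/.ssh/aws-proxy.pem -N -D 1080 ubuntu@ec2-proxy.example.com
```

FoxyProxy (or the browser) is then pointed at a SOCKS5 proxy on localhost:1080.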

I posted these in his comments section, but my post was never approved and eventually deleted. /shrug

Part 1: Stopping and Starting the instance magically.

Starting the server: These instructions from Amazon are good. You can configure Lambda functions to stop and start your instance on scheduled cron events. The hardest thing about configuring the script was adjusting GMT to local time. I set up both scripts but decided to use a different stop process.

Stopping the server: Instead of a cron job, I decided to use a CloudWatch alarm to stop the instance when CPU utilization drops below a certain threshold. These AWS instructions are useful. The biggest trick was finding the baseline CPU utilization that separates one user actively browsing the web from the idle chatter between my workstation and the server when I left the browser open. I currently stop the instance when CPU utilization drops below 0.117 for 2 hours.
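As a sketch, the same alarm can also be created from the AWS CLI; the instance ID and region below are placeholders (the `arn:aws:automate:...:ec2:stop` action is the built-in stop action the console wires up for you):

```shell
aws cloudwatch put-metric-alarm \
  --alarm-name "proxy-idle-stop" \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 24 \
  --threshold 0.117 \
  --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:stop
# 24 evaluation periods of 300 seconds = the 2-hour window described above
```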

Part 2: Dynamically updating the IP address

The instructions from NimrodFlores use the dynamically generated IP address from the EC2 instance page. While this is easy, it requires logging into the AWS portal each time I want to capture this, then update the putty session. (I only want to click on my shortcut on my toolbar and auto-connect to the proxy.) To work around this, I plan to use a dynamic DNS entry instead of the IP address in the putty session. 

ZoneEdit
I use ZoneEdit for a couple of reasons. First off, it's free for the two websites I host there. Plus, they offered dynamic DNS options back when I wanted to host this blog on my home PC over a dial-up DSL line (pre-Blogger).

For the proxy box, I decided to spin up a DYN record for the new hostname. Enable it and take note of the "DYN Authentication Token" field at the bottom. Use this in place of a password, not your ZoneEdit password.

I tried both of their options for automated dynamic DNS updates, then found a post in their forum saying they no longer support the ez-ipupdate option. I learned that AFTER I installed everything required to compile the solution. The JavaScript option is 'interesting' if you have a browser running on the Ubuntu server, but I don't want to incur any additional costs, so smaller is better. Luckily I found ddclient.


Dynamic DNS - DDClient
Here is a wonderful set of instructions posted on linhost. A few screens don't match the wizard, but it steps through the install and basic configuration options. After the wizard is complete, you need to modify the configuration file. (Don't do like me and reboot your server thinking it's complete right after the install.)
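For ZoneEdit specifically, the resulting /etc/ddclient.conf ends up looking roughly like this sketch (the login and hostname are placeholders; the password is the DYN Authentication Token noted above, and ddclient ships a `zoneedit1` protocol):

```
# /etc/ddclient.conf (sketch; values are placeholders)
daemon=300                        # check every 5 minutes
use=web                           # discover the public IP via a web lookup
protocol=zoneedit1
server=dynamic.zoneedit.com
login=your-zoneedit-username
password=your-dyn-authentication-token
proxy.example.com                 # the DYN hostname to update
```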

This post has some other tweaks, specifically regarding ZoneEdit, that you could add. No one responded to the question, but I believe that was about the time ZoneEdit was purchased by another provider.

Final
Since the first of the month, I have racked up less than $1 in charges on my AWS server. Of this, 80% was incurred while the EIP was attached. Besides the 80 cents, I am being charged roughly a third of a cent for each 'configuration item'. At last check, I have 39 configuration items (i.e. $0.12). I've gone through and deleted a number of those items, but I don't think I can shrink it much below $0.10.

Wednesday, April 4, 2018

Powershell Hash table of Arrays

The Problem

I am working on a project that has me reviewing all our groups in AD. These groups are nested in places multiple layers deep.

All Staff 
-> Technical
- - > Server Team
- - -> Server Architecture & Build
- - -> Server M&O
- - > Network
-> HR
...

With this, I've wanted to modify one of those nested groups so it could be replicated to O365 and used as a distribution list. Unfortunately, to do this, I need to mail-enable the on-premise copy, then have that replicate up. As this is a legacy group, it was created as a Global group. You can't mail-enable Global groups, and you can't convert a group to Universal while it is a member of a Global group. So you need to determine all the 'parent' groups of a specific child.

The Process

This script creates a hash table with 2 properties (Parents and Children) and attempts to create a flat hierarchy.
  • "Technical" 
    • parents are "All Staff" 
    • children are "Server Team", "Server Architecture & Build", "Server M&O" and "Network". 

Once the script is run, it returns the hash table for all groups in the environment with these properties populated. I am still working out a bug to populate parents and children several layers up/down (for example, if Server Architecture & Build has more layers underneath it). I have worked around this by simply running the meat of the script twice. I haven't verified whether a 3rd or 4th iteration is needed to capture deeply nested groups yet.

A Hash table of Arrays:
My hash table starts off as a fairly basic hash table. 

$MyGroups = @{}

I then populate it with two different array objects. I pulled this concept from the Hey Scripting Guy Blog.

#Capture all groups. (Uncomment the filter to limit to groups with email addresses populated.)
$AllGroups = Get-Group -resultsize unlimited  ### For DLs ## -filter {WindowsEmailAddress -like "*"} 
$AllGroups | %{$MyGroups[$_.identity] = @{Parents=@(); Children=@()}}

Now you can access 

$MyGroups["Technical"]

and get something like: 

Name                      Value
Parents                   {}
Children                  {}

As the script loops through all the groups, it 
  1. populates 'Children' with all member/child groups (groups that are members of this group), 
  2. loops through each child group and sets itself as a parent, 
  3. copies this group's existing parents to all of those children, and 
  4. copies children up from all of its children. 
So depending on the order the groups are processed, one pass appears to do 80% of the copying, but not all. If "Server Team" was reviewed before "All Staff", they won't know about each other. As I hinted, running this loop twice appears to pick these changes up. 
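Rather than guessing at a second (or third) pass, the propagation can be repeated until a pass makes no changes, i.e. until it reaches a fixed point. This is a hypothetical sketch, not the script itself; Resolve-GroupHierarchy is a made-up name, and it assumes the hash-table-of-arrays shape built above:

```powershell
# Sketch: repeat the parent/child propagation until nothing new is found.
# $Groups is a hash table of @{Parents=@(); Children=@()} entries per group.
function Resolve-GroupHierarchy ($Groups) {
    do {
        $changed = $false
        foreach ($name in @($Groups.Keys)) {
            foreach ($child in @($Groups[$name].Children)) {
                if (-not $Groups.ContainsKey($child)) { continue }
                # Push this group (and its own parents) down onto the child.
                foreach ($p in (@($Groups[$name].Parents) + $name)) {
                    if ($Groups[$child].Parents -notcontains $p) {
                        $Groups[$child].Parents += $p
                        $changed = $true
                    }
                }
                # Pull the child's children up into this group.
                foreach ($c in @($Groups[$child].Children)) {
                    if ($Groups[$name].Children -notcontains $c) {
                        $Groups[$name].Children += $c
                        $changed = $true
                    }
                }
            }
        }
    } while ($changed)   # stop once a full pass changes nothing
    $Groups
}
```

Each pass can only add relationships, and the total number of possible pairs is finite, so the loop always terminates.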

The Script




The final step is something like:

$Groups = .\Report-GroupHierarchy.ps1
$Groups["Technical"].Parents | Set-Group -Universal

Note: In on-premise AD, the object identity is the fully qualified object path, while in O365 the object identity is the same as the object name. So depending on where you are gathering the data, you may need to enter different values.

$MyGroups["Technical"]
$MyGroups["example.com/Groups/Technical"]

My Own Grandpa

With a little playing around, you could use the same construct to look for "I'm My Own Grandpa" situations.

Let's see, this works:

$grandpa = get-group | ?{Compare-Object $Groups[$_.identity].parents $Groups[$_.identity].children -ExcludeDifferent -IncludeEqual}


The real problem revisited:

The reason I was reviewing groups was to see why a user wasn't receiving any email. With 1,500 distribution lists, it's actually very easy for an end-user to not end up on any of the 195 distribution lists that are children of the 'All Staff' distribution list.

So where should this employee go? Let's find out who they work for. First, let's start with a little query about direct reports. Over on LazyWinAdmin there's a nice recursive script to pull all direct reports for a manager.

Find manager's subordinates -> find groups these people belong to -> find the group that has a parent of "All Staff".

Get-ADdirectReports -samaccountname "Manager" | %{(.\memberof-O365DL.ps1 -identity $_).memberof } | select -unique identity | ?{$Groups[$_.identity].parents -eq "All Staff"}




Tuesday, March 13, 2018

Export Exchange Properties From Resource Forest.

This is part of an on-going article for migrating mailboxes from an on-premise Exchange 2010+ environment to Microsoft's O365. See post here.

Creating the Identity Package - Mailboxes

As an Exchange admin, one of your first steps will be to sync the mailbox properties from the resource forest into the authentication domain. These mailbox properties are applied to the mailbox linked master account and synchronized to O365.

When performing a migration from on-premise Exchange to O365, the migration batch/move-request process will look for specific properties (Exchange GUID, Primary SMTP Address) and fail the moves if they do not match. In addition, you'll want to modify the UserPrincipalName on the AD objects so that clients log on to OWA using their email address.
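As a sketch of that UPN change (assuming the mail attribute already holds each user's primary SMTP address; the filter and scope are placeholders, not part of the migration scripts):

```powershell
# Align UserPrincipalName with the primary SMTP address so users can
# sign in to OWA with their email address. Test with -WhatIf first.
Import-Module ActiveDirectory
Get-ADUser -Filter 'mail -like "*"' -Properties mail |
    Where-Object { $_.UserPrincipalName -ne $_.mail } |
    ForEach-Object { Set-ADUser -Identity $_ -UserPrincipalName $_.mail }
```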

Exchange properties and ADObject name:

  1. #Name (name)
  2. #DisplayName (displayname)
  3. #SamAccountName (SamAccountName)
  4. #WindowsEmailAddress (mail)
  5. #PrimarySMTPAddress (from ProxyAddresses)
  6. #LegacyExchangeDN (legacyExchangeDN)
  7. #EmailAddresses (proxyaddresses)
  8. #ExchangeGUID (msExchMailboxGUID)
  9. #GrantSendOnBehalfTo (publicDelegates)
  10. #ExternalEmailAddress (TargetAddress)
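On the AD side, the mapped attributes above can be pulled with Get-ADUser. This is just an illustrative sketch, not the export script itself; the identity and the PrimarySMTPAddress calculation are assumptions:

```powershell
Import-Module ActiveDirectory
Get-ADUser -Identity someuser -Properties displayName, mail, proxyAddresses,
        legacyExchangeDN, msExchMailboxGUID, publicDelegates, targetAddress |
    Select-Object Name, SamAccountName, displayName, mail, legacyExchangeDN,
        # The primary SMTP is the proxy address with the upper-case SMTP: prefix.
        @{n='PrimarySMTPAddress'; e={($_.proxyAddresses -clike 'SMTP:*') -replace '^SMTP:'}},
        @{n='ExchangeGUID';       e={[guid]$_.msExchMailboxGUID}},
        @{n='EmailAddresses';     e={$_.proxyAddresses -join ';'}},
        publicDelegates, targetAddress
```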

Part 1: Exporting Exchange properties from Exchange:

This script (testing out Pastebin) is the latest revision of my script to do this. When it is run in your RF, you'll need to provide it a list of mailbox objects, the current accepted email domain (@example.com), and the tenant email domain (i.e. @ExampleTenant.onmicrosoft.com). The script will export a CSV file in the current directory containing all of the required properties.

$ExportNames = get-mailbox -organizationalunit "Finance" -resultsize unlimited
.\Export-RFToAD.ps1 -identity $ExportNames -acceptedDomain "Example.COM" -O365Domain "Contoso.OnMicrosoft.com" -department "Finance"

The script:



Part 2: Importing the Exchange Properties

Once the CSV has been created, copy it over to the Exchange box in the authentication domain. The admin account running the import will need sufficient permissions to modify all objects in this domain.

Point the following script at the CSV and the Organizational Unit where new user objects should be created. Note: New users will typically be resource-type mailboxes in the Exchange environment. They can be moved after creation to the correct folders in your AD structure. Just make sure that this other OU is also synced up to the O365 tenant.

.\Import-O365IdentityPackageCSV.ps1 -CSVPath .\Finance.CSV -NewUserOU "Contoso.local\Resource Accounts"




If the script is unable to finish the steps, it will generate a new CSV called ".\FailedToImport.csv". Review each of these users in the authentication AD to see why they may have failed.

Possible Failure scenarios:

  1. Legacy Exchange properties - Unable to apply new Exchange properties because the object still has properties from a former Exchange install: "Can't enable mail user as it's already a mailbox", or "Can't contact (some server long gone) to mail-enable user". I attempt to resolve this issue by checking for the setting (~ line 92). If this does not resolve the issue, check the ADUC Attribute Editor for references to the legacy environment and (you may have to) remove them. 
  2. Non-Inherited permissions - If the script was unable to modify an account "ACCESS DENIED", the issue is typically because the account is not inheriting permissions. 
    1. Active Directory Users and Computers
    2. enable Advanced Features (view menu)
    3. Open properties of the AD account for failed user.
    4. Security tab -> Advanced button
    5. Check the "Include inheritable permissions from this object's parent" checkbox.
For a quick check, I recommend doing two things:
  1. Make sure all of the objects that you mail-enabled match the CSV. We had an odd issue where a GUID got applied to the wrong mailbox. Not a BIG deal, as after migration the missed GUID is still on-premise and the wrong account just has the wrong displayname/email address.  For example:

    import-CSV .\csvpath | ?{(get-mailuser $_.exchangeguid).displayname -ne $_.displayname}
  2. Make sure mail-enabled accounts are in OUs that are sync'd via AAD Connect.

    import-CSV .\csvpath | %{get-mailuser $_.exchangeguid} | group organizationalUnit

Friday, March 2, 2018

Migration from on-premise Exchange 2010 resource forest to EXO.

We've just finished a year-long project migrating 80,000 mailboxes from our single on-premise Exchange 2010 solution to multiple O365 tenants. The on-premise environment is/was an Exchange resource forest. This means that users logged into their local authentication domain, then, via an AD trust, gained access to their mailbox hosted with us.

The only difference from this diagram: instead of the Internet cloud, substitute a WAN cloud. Clients could only access mail via Outlook over the VPN to our data center. I plan to use this space to post each of the scripts that I generated for this project.

image source: MS Blog - you had me at ehlo

The goal: migrate from a shared, multi-tenant, on-premise resource forest Exchange environment to individual O365 tenants.

  1. Create O365 tenant
  2. Configure Authentication domain
    1. Update AD to 2012 R2 or better. 
    2. Install Exchange 2013/2016 
      1. Clean-up legacy Exchange properties
      2. Extend schema for Exchange
      3. Install Exchange software
      4. Remove SCP record reference for auto-discover
    3. Cleanup 
      1. Consolidate (if possible) OUs so that they are easy to manage.
      2. Remove dead accounts.
      3. Cleanup mailboxes in Exchange.
      4. Update workstations so they're using the latest Outlook (2013/2016) as available in Windows Update. Don't forget other Office applications.
      5. Public Folders
  3. Configure ADFS and AAD Connect to O365 tenant.
  4. Do Mailbox Identity Sync
    1. Export identity information from resource forest (RF). 
    2. Import identity info into similar objects in account forest.
  5. Sync mail enabled accounts and groups from account forest to O365.
    1. Review mailboxes for permissions assigned to Auth domain security groups. Include these groups in the sync.

  6. Migrate mailboxes from RF to O365.
    1. Create migration end-point from O365 to Exchange on-premise
    2. Create migration batches/move requests 
    3. Monitor / complete move requests.
  7. License mailboxes once migrated. 
    1. Convert any resource mailboxes that came over as USERMAILBOX to shared/equipment so as to avoid using a mailbox license.
  8. Create groups and external contacts in O365 tenant. 
  9. Review/ fix shared mailbox permissions (security groups)
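Step 6 in the outline above might look roughly like this sketch when run from Exchange Online PowerShell; the endpoint name, server, CSV path, and delivery domain are placeholders:

```powershell
# 6.1 Create the migration endpoint back to the on-premise MRS proxy.
$cred = Get-Credential CONTOSO\migadmin
New-MigrationEndpoint -ExchangeRemoteMove -Name "OnPremMRS" `
    -RemoteServer mail.example.com -Credentials $cred

# 6.2 Create a migration batch from a CSV of mailboxes (EmailAddress column).
New-MigrationBatch -Name "Finance-Wave1" -SourceEndpoint "OnPremMRS" `
    -CSVData ([System.IO.File]::ReadAllBytes(".\Finance.csv")) `
    -TargetDeliveryDomain "ExampleTenant.mail.onmicrosoft.com" -AutoStart

# 6.3 Monitor the batch, then finalize when ready.
Get-MigrationBatch -Identity "Finance-Wave1" | Format-List Status,*Count*
Complete-MigrationBatch -Identity "Finance-Wave1"
```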

Updates:

  • 3.14 - extended outline, added page for Identity Sync scripts.


Thursday, February 15, 2018

Powershell: Indeterminate Inline Progress Bar

I am currently working on removing a replica from all of our on-premise public folders. There are quite possibly 40,000 public folders nested inside the structure. We are looking to remove 2 copies from all replicas, and the Exchange 2010 tools, while working, didn't provide any progress.

So, I grabbed all the 'good' replicas.

$firstPF = get-publicfolder ".\Top" 

$GoodReplicas = $firstPF.Replicas | ?{$_ -notlike "remove this db name"} 

Get-PublicFolder -recurse | ?{$_.replicas -ne $GoodReplicas} | Set-PublicFolder -replicas $GoodReplicas

This was taking forever and I couldn't tell if it was even running. I modified one of my previous progress bars so that it would simply count to 100, then reset back to 1. Next, I modified the code to return the object currently being parsed. This lets me put the progress bar in-line with the code above while it keeps running.

Get-PublicFolder -recurse | ?{$_.replicas -ne $GoodReplicas} | %{wp3 -passthru $_} | Set-PublicFolder -replicas $GoodReplicas



You can see I use a few global variables. This helps maintain the counter between iterations. It also means the counter will start where it left off from the last run.


function wp3 {
 [CmdletBinding()] param(  
  [Parameter()][String]$JobName="Counter",
  [Parameter()]$Passthru
 )

 $envVar = get-Variable -Name $JobName -Scope Global -ErrorAction SilentlyContinue -ValueOnly
 $Times = $JobName+"_count"
 $envVar2 = get-Variable -Name $Times -Scope Global -ErrorAction SilentlyContinue -ValueOnly
 if ($EnvVar -eq $null) { 
  #Global Variable doesn't exist, create one called based on $JobName
  $Env_WPIndex = 0
  New-Variable -Name $JobName -Scope Global -Value 0 #-Visibility Private
 } else {
  #Use current global variable value.
  $env_WPIndex = [double]$EnvVar
 } 
 if ($envVar2 -eq $null) { 
  #Global Variable doesn't exist, create one called based on $JobName
  $envTimeThru = 0
  New-Variable -Name $Times -Scope Global -Value 0 #-Visibility Private
 } else {
  #Use current global variable value.
  $envTimeThru = [double]$envVar2
 }
 Write-Progress -Activity ($JobName+"("+$envTimeThru+")") -Status $([string]$Env_WPIndex+"%") -PercentComplete $Env_WPIndex 
 $env_WPIndex = $env_wpIndex + 1
 
 if ($env_wpIndex -lt 100) { 
  #If less than max, save the incremented value back to the global variable.
  Set-Variable -Name $JobName -Scope Global -ErrorAction SilentlyContinue -Value $env_WPIndex
 } else {
  $envTimeThru = $envTimeThru + 1
  #Already at max: bump the pass counter and reset the percentage back to zero.
  Set-Variable -Name $JobName -Scope Global -Value 0 # -ErrorAction SilentlyContinue
 }

 Set-Variable -Name $Times -Scope Global -ErrorAction SilentlyContinue -Value $envTimeThru
 return $Passthru
}

Monday, January 8, 2018

MegaMillions Script - i.e. playing with Invoke-webrequest

My co-worker likes to read through the PowerShell subreddit and found this interesting little challenge. 

I had an idea for a function to generate Powerball and Megamillions numbers - multiple ticket generation segmented into separate objects and then converted into valid JSON; I did come up with something but figured this would be a nice short script challenge since it doesn't rely on an external API.
Enjoy! (source)
I thought I'd take a crack at it. So far, of the 12 responses, most are variants of a random number generator. After one post about 'unique' numbers, the submissions started getting error checking.

The results returned look like this:
Balls:  1(16%) 58(12%) 6(16%) 61(12%) 64(12%) Mega: 22


#megamillions
$MegaMillionsWebPage = Invoke-WebRequest http://www.megamillions.com/winning-numbers/last-25-drawings
#Extract table from web page
$mmTable = ($MegaMillionsWebPage.ParsedHtml.getElementsByTagName("TABLE") | % {$_.innertext} | % {$_.split("`n")})[1..25]
#Return mid-five columns from table (ball column)
#DrawDate Balls MegaBall Megaplier Details
$balls = $mmTable | % {$_.split(" ")[1..5]}
#Get only megaball values
$Megaball = $mmTable | % {$_.split(" ")[6]}
#Group ball values by number of appearances in the table
$GrpBalls = $balls | group | Sort-Object -property count -Descending 
#Baseline statistics of ball number occurrence.
$BallStats = $GrpBalls | Measure-Object -Property count -min -max -average
# Have won more than 'average' number of times. 
$avg = [int]$BallStats.average
While ($mostPopular.count -lt 5 -and $avg -gt 0) {
    $mostPopular = $GrpBalls | ? {$_.count -gt $avg} | select -ExpandProperty Name    
    $avg-- #broaden scope if less than 5 results returned. 
}
#Return 5 numbers from the most popular results.
$MyBalls = $mostPopular | Sort-Object {Get-Random} | select -first 5 | % {[int]$_} | Sort-Object
#Show number of appearance of each value.. 
$WeightedBalls = $GrpBalls | ? {$myballs -eq $_.name} | select name, @{Name = "Weight"; Expression = {[string](($_.count / 25) * 100) + "%"}}
$BallReport = $WeightedBalls | sort -Property name | %{$_.name+"("+$_.weight+")"}
write-host -NoNewline "Balls: ", ($BallReport -join (" "))
$megaGrp = $megaball  | group | Sort-Object -property count -Descending 
$MegaBallStats = $megaGrp | Measure-Object -Property count -min -max -average
$MegaAVG = [int]$MegaBallStats.average
#Randomly pick one of the most popular mega numbers.. 
$MegamostPopular = $megaGrp | ? {$_.count -eq $MegaBallStats.maximum} | select -ExpandProperty Name | Sort-Object {Get-Random} | select -first 1
write-host " Mega:",$MegamostPopular

Monday, September 11, 2017

Powershell CSV Join

I am currently working on a fairly large project migrating mailboxes from on-premise Exchange 2010 up to Microsoft's O365. One of the issues we're having is that mailbox permissions don't always migrate correctly. So I've developed a process to capture the mailbox permissions in a CSV prior to migration, then reapply them after the move.

One of the issues experienced is that on-premise account information doesn't always match what's in O365. For example, I have the local Exchange/AD account information, but not necessarily what the account looks like in O365. So I wrote this script to join two CSV files based on similar properties. This way I can join a CSV containing mailbox information (i.e. SamAccountName) to the Get-MailboxPermission "USER" field.

.\CSVJoin.PS1 -csv1Name "c:\bin\All-Mailboxes.CSV" -csv1Property Samaccountname -csv2Name "c:\bin\All-MailboxPermissions.csv" -csv2Property USER -JoinedCSV "c:\BIN\Joined-CSV.csv"

The script will export a CSV containing all fields from CSV1 plus all the fields from CSV2 that don't overlap in name.

#CSVJoin.PS1
[CmdLetBinding()]
param(
    [parameter(Mandatory=$true,HelpMessage="Path and filename for source CSV 1")][ValidateScript({Test-Path $_ })][string]$csv1Name,
    [parameter(Mandatory=$true,HelpMessage="Property from CSV1 to join files on.")][string]$csv1Property,
    [parameter(Mandatory=$true,HelpMessage="Path and filename for source CSV 2")][ValidateScript({Test-Path $_ })][string]$Csv2Name,
    [parameter(Mandatory=$true,HelpMessage="Property from CSV2 to join files on.")][string]$csv2Property,
    [parameter(Mandatory=$true,HelpMessage="Path and Name for combined CSV file")][string]$JoinedCSV
)

$csv1Data = Import-CSV $csv1Name | ?{$_.$CSV1Property -ne $null}
$csv2Data = Import-csv $Csv2Name | ?{$_.$CSV2Property -ne $null}

#Capture all the column values for each CSV file and compare them.
$csv1Members = $csv1Data[0] | Get-Member | ?{$_.membertype -eq "NoteProperty"}
$csv2Members = $csv2Data[0] | Get-Member | ?{$_.membertype -eq "NoteProperty"}
$AddCSV2members = Compare-Object $csv1Members $csv2Members | ?{$_.sideindicator -eq "=>"} | %{$_.inputobject}

#Populate HashTable with First CSV based on JOIN fields
$csv1HashTable = @{}
$csv1Data | %{$csv1HashTable[$_.$csv1Property.trim()] = $_}


#Loop through Second CSV and join fields to first CSV. 
$newCSV = @()
ForEach ($c in $csv2Data) {
    $Row = $csv1HashTable[$c.$csv2property.trim()]
    if ($row ) {
        ForEach ($m in $AddCSV2members) {
            $Row | add-member -Membertype NoteProperty -Name $m.name -value $C.$($M.name) -force
        }
        $newCSV += $Row
    }
}

if ($newCSV) {
    $newCSV | Export-csv $JoinedCSV -notypeinformation
}