
Friday, July 11, 2025

Aria Project Unicorn: Cloud template to deploy elastic vm on new dynamically allocated network

🧪 Rapid-Provisioning with Aria Assembler: Disposable Infrastructure at Scale

A few months back, I found myself with some rare downtime—so naturally, I returned to experimenting with Aria Assembler’s more advanced provisioning capabilities. There’s something elegant about temporary infrastructure that fulfills its purpose and then disappears. The concept: build what’s needed, let it serve, and allow Aria to automate its own teardown.

🎯 Use Case

My first goal was to develop a disposable environment that end-users could spin up at will and retire just as easily. By letting Aria handle both provisioning and decommissioning, we eliminate orphaned resources and ensure clean exits.

⚙️ What This Template Does

  • Creates a new /28 subnet for isolation (16 addresses; AWS reserves 5 per subnet, leaving 11 usable)
  • Deploys a Windows virtual machine with elastic scaling configured
  • Places a load balancer in front of the VM for basic traffic distribution

This ephemeral environment lives for the duration of its TTL (time-to-live). Once expired, Aria deletes all associated components without human intervention.

🧱 Prerequisites

  • AWS Cloud Account and Project: Ready and authorized for resource deployment.
  • Custom Network Profile: Tagged for unique network constraints:
    project:projectname and network:Dynamic28
    • Isolation Policy: On-demand network
    • Network Domain: AWS default VPC
    • External Subnet: selected from a /20 block
    • Subnet Size: /28
  • Image and Flavor Profiles: Configured to deploy a Windows-based VM

🛠️ Suggestions for Making It Functional

This is a concept template, but here’s how it could evolve:

  • Add Security Groups: Ensure appropriate access controls and network protection.
  • Add Inputs: Introduce dynamic variables (e.g., project name, image type) for repeatable deployments.
  • Auto-Tag Resources: Help with auditability and lifecycle tracking.
  • Lifecycle Hooks: Optional alerts or logic before deletion.
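
To make the "Add Inputs" suggestion concrete, the template's empty inputs: {} block could grow into something like this (the input names, enum values, and defaults here are illustrative, not part of the original template):

```yaml
inputs:
  projectName:
    type: string
    title: Project Name
    description: Project tag used for placement constraints
    default: MyAriaProject
  osImage:
    type: string
    title: OS Image
    enum:
      - Windows 2022
      - Windows 2019
    default: Windows 2022
```

Resources can then bind to these values, e.g. image: ${input.osImage} on the machine, or - tag: project:${input.projectName} in the constraints, making the same template reusable across projects.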

🚀 Why Disposable Infrastructure Matters

Ephemeral environments like these promote resource efficiency, sandbox agility, and cleaner cloud hygiene. With Aria handling both launch and sunset operations, teams can iterate rapidly while maintaining control and governance.

Thinking about scaling this concept further? I’ve got a few ideas, maybe a series around intelligent disposability, runtime flexibility, and audit-forward automation.


<code>

formatVersion: 1
inputs: {}
resources:
  front-end-load_balancer1:
    type: Cloud.LoadBalancer
    properties:
      internetFacing: false
      network: ${resource.dynamic-network.id}
      instances: ${resource["web_vm"].id}
      routes:
        - protocol: HTTP
          port: '80'
          instanceProtocol: HTTP
          instancePort: '80'
          healthCheckConfiguration:
            protocol: HTTP
            port: '80'
            urlPath: /index.pl
            intervalSeconds: 60
            timeoutSeconds: 30
            unhealthyThreshold: 5
            healthyThreshold: 2
  web_vm:
    type: Cloud.Machine
    properties:
      image: Windows 2022
      flavor: t3.small
      constraints: # tags define multiple projects, might break if associated with both.
        - tag: project:MyAriaProject
        - tag: az:a
      autoScaleConfiguration:
        policy: Metric
        minSize: 1
        maxSize: 10
        desiredCapacity: 1
        metricScaleRules: # note: these demo thresholds overlap (CPU between 1% and 3% matches both rules); tune before real use
          - action:
              type: ChangeCount
              value: -2
              cooldown: 60
            trigger:
              metric: CPUUtilization
              period: 60
              operator: LessThan
              statistic: Average
              threshold: 3
              evaluationPeriods: 1
          - action:
              type: ChangeCount
              value: 2
              cooldown: 60
            trigger:
              metric: CPUUtilization
              period: 60
              operator: GreaterThan
              statistic: Average
              threshold: 1
              evaluationPeriods: 3
      networks:
        - network: ${resource.dynamic-network.id}
          securityGroups: []
  dynamic-network: # network profile will create a /28 subnet, see profile for where.
    type: Cloud.Network
    properties:
      networkType: private
      name: esw-dyna-app
      tags:
        - key: placement
          value: new-dynaNetwork
        - key: deployment
          value: ${env.deploymentName}
      constraints:
        - tag: project:MyAriaProject
        - tag: network:Dynamic28
</code>

 

Wednesday, July 2, 2025

Simple and Smart: Turn on the Porch Light When Someone’s Detected

Here’s one of those little automations that feels like magic when it works: your porch light turns on automatically when someone walks up after sunset. It’s a practical example of using Google Home’s new automation script editor—and it’s ridiculously easy to set up.

Part of the inspiration for this automation came from watching delivery drivers show up late at night or early in the morning, using their phone flashlights just to find the steps to my front door. I wanted to provide a safe, welcoming spot for them to drop off packages; no fumbling in the dark, no guesswork. A little light goes a long way toward making everyone feel more comfortable.

What You’ll Need

  • A camera that supports person detection — I used a Nest Doorbell/Cam with a subscription to enable person alerts.
  • A smart light — either a connected smart bulb or, in my case, a smart switch controlling the porch light.
  • Access to the public preview of Google Home’s script editor — you'll need this to write YAML-style automations.

The Automation Script

metadata:
  name: Turn on porch light when a person is seen
  description: Turn on light when a person is seen
automations:
  - starters:
      - type: device.event.PersonDetection
        device: Front Door - Front Porch
    condition:
      type: time.between
      after: sunset
      before: sunrise
    actions:
      - type: device.command.OnOff
        devices:
          - Porch - Front Porch
        on: true

How It Works

  • Trigger: A person is detected by the camera on the front porch.
  • Condition: It only runs between sunset and sunrise (so it doesn’t waste energy during the day).
  • Action: Turns on the porch light immediately.

Why I Love It

This kind of automation is simple but useful. It boosts security, saves you from fumbling for the switch, and just feels like your home is paying attention. Google’s new automation editor makes it easy to build and customize these routines with YAML-style clarity.

Want to Try It?

If you've got a Google Home-compatible camera and smart light, just open the Google Home script editor and paste this in as a custom routine. You can always tweak the device names or time settings to fit your setup.

More to come as I play with presence sensors, weather triggers, and multi-room responses. Smart homes are finally getting... smarter.

Wednesday, June 25, 2025

Supercharging Patch Compliance Checks with PowerShell 7 Parallelism

When you're managing thousands of systems, checking for patch compliance across all of them can become a real slog, especially if you’re still looping through them sequentially or juggling a pile of background jobs. That used to be me.

Recently, I rewrote one of my older patch check scripts using PowerShell 7’s ForEach-Object -Parallel feature, and the results were night and day. Here’s a look at how I did it and why you might want to make the leap, too.

The Legacy Job-Based Approach (Worked... but Clunky)

$servers = Get-Content .\serverlist.txt
$jobs = foreach ($server in $servers) {
    Start-Job -ScriptBlock {
        $s = $using:server
        Invoke-Command -ComputerName $s -ScriptBlock {
            Get-HotFix -Id KB5008380
        }
    }
}

# Wait, collect, and clean up the jobs
$results = $jobs | Wait-Job | Receive-Job
$jobs | Remove-Job

It got the job done, but there was too much scaffolding: tracking, collecting, cleaning up jobs, and dealing with throttling manually.

Enter PowerShell 7: A One-Liner Game Changer

$servers = Get-Content .\serverlist.txt

$results = $servers | ForEach-Object -Parallel {
    try {
        # -ErrorAction Stop makes a missing hotfix a catchable error;
        # $null = keeps the hotfix object from polluting $results
        $null = Invoke-Command -ComputerName $_ -ScriptBlock {
            Get-HotFix -Id KB5008380 -ErrorAction Stop
        } -ErrorAction Stop
        [PSCustomObject]@{ Server = $_; Patched = $true }
    }
    catch {
        [PSCustomObject]@{ Server = $_; Patched = $false }
    }
} -ThrottleLimit 10

  • ✔️ Built-in throttling
  • ✔️ Fewer moving parts
  • ✔️ Much easier to maintain and explain to others

And yes—runtime improved significantly when scaled out.

Bonus: Reporting

You can easily integrate this with your preferred reporting pipeline—CSV, HTML, or even Power BI. Here's a quick CSV export:

$results | Export-Csv .\PatchResults.csv -NoTypeInformation

Final Thoughts

The job-based model served us well, but with PowerShell 7's baked-in parallelism, most of my automation scripts just got faster and cleaner. This is one of those “glad I made the switch” moments—especially when time-to-resolution matters.

Have you tried this approach yet? I’d love to hear how you're using -Parallel in your environment.

Tuesday, June 3, 2025

Blizzard API Project

The Blizzard team has realized that WoW players love to reminisce about their game. We love to play old dungeons, level more characters, and collect appearances from this legacy content. Last summer, Blizzard introduced a gameplay mode called "Remix". This limited-time playstyle encouraged leveling new characters in order to collect all of the available items. Each of these collectable items provided a cosmetic appearance that all of your characters could use. The interesting dilemma here is tracking appearances your account still needed to collect.

For many, the UI was the easiest method to track. You could walk up to a vendor and compare what your account 'knew' versus what was available. "Easy," except the UI required that you change your class to see what you had collected. Does my druid have this set? Open appearances, change class to druid, scroll until finding the Mists of Pandaria collections, check the various raid-tier levels. The second option was a spreadsheet variant. Some diligent people went through all the vendor offerings and recorded the names and costs of all the items; when you visited the vendor, you'd mark off all the items you had purchased. Neither of these options really appealed to me.

In an effort to find a better way, I investigated coding against the API. Could a Google Sheet make a call and populate the completion statistics for me? Would it be easier to code against WoWhead's API and query data? I even investigated learning Lua and writing my own addon to help. This led to my own analysis paralysis and, soon after, the end of the Remix event.

Fast forward six months, and Blizzard introduces two new achievements based on past expansions. "A Farewell to Arms" and "A World Awoken" are meta-achievements that require completing a number of events, reputation grinds, and treasure collections in the game. Completing each of these achievements rewards the grand prize of an iconic mount from that expansion. I mean, who wouldn't want to transform into "Jani, Lord of Thieves, God of Garbage, Master of Minions, and Ruler of Rubbish"? This, along with my current work with Python, piqued my interest.

Project plan: Using a local copy of Python, connect to Blizzard's API and query the character achievement tree. Look for any achievements that are marked incomplete and present them as a simple HTML page.

Step 1: Create an account on Blizzard's developer portal. After agreeing to all the legal things, you're provided with a client ID and secret. These credentials are passed in the headers to the authentication portal, and a refresh token is provided. This refresh token is put into a JSON structure to create a bearer token, and the bearer token is passed with each call to the Blizzard API. All of this is identical to how you authenticate with Aria. So far so good.
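
As a rough sketch (not my exact code), the handshake in Python can be done with just the standard library. Blizzard's public OAuth host is oauth.battle.net; the function names here are my own, and the exact grant details may differ from your setup:

```python
import base64
import json
import urllib.request

TOKEN_URL = "https://oauth.battle.net/token"  # Blizzard's OAuth endpoint

def fetch_token(client_id: str, client_secret: str) -> str:
    """Exchange developer-portal credentials for a bearer access token."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    req = urllib.request.Request(
        TOKEN_URL,
        data=b"grant_type=client_credentials",
        headers={
            "Authorization": f"Basic {creds}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]

def auth_header(token: str) -> dict:
    """Header passed on every subsequent Blizzard API call."""
    return {"Authorization": f"Bearer {token}"}
```

From there, every API call just attaches auth_header(token) to the request.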

Step 2: Read the character's achievement stack. This is an easy call to the API. The returned JSON provides (almost) every available achievement and its status. Issues I found:
  • Achievements are presented in a flat format. The main meta-achievement has 8 child criteria. The child criteria, while listed, are not achievement IDs; they are criteria IDs. It took me a while to figure that out.
  • There are holes in the results. None of my characters have completed the "Storm Chaser" achievement. This meta-achievement is to complete all 4 different 'elemental storms' events in all 4 zones. This achievement doesn't show in the API results for the characters. (I bet if I complete one of the 4 zone achievements, Storm Chasers will appear in my query.)
  • Last, some of the achievements are character-based. So, while my main character may have petted all the dogs in DragonFlight, my new alt still shows the achievement incomplete. 
New Step 2: Read the generic achievement stack from the API. Using a recursive loop, go through the achievements ID by ID and pull all children. Return all of the child achievement IDs in an array. Loop through the character achievement results for completed achievements and report back on what is incomplete. This solution avoided the holes in the API; I was able to query the entire tree and present a high-level result. Unfortunately, it looks like I still have close to 70 achievements to complete. I decided to implement a few optimizations:
  1. While querying the achievement tree, I now build a dictionary (i.e., Python's equivalent of a PowerShell hash table). The dictionary maps parent achievements to their children. This allowed me to optimize the code to say: if you completed this parent, all the children can be assumed completed.
  2. Everything is written to the local drive. I was having issues where the time between runs was several minutes, especially when recursing the achievement tree. So on the initial run, each achievement JSON, the achievement tree, and the achievement dictionary are written locally. At the moment, I am also reusing old character achievement JSON dumps, but that may be updated. The final report now completes in seconds instead of minutes.
  3. The final report only includes items with no children. Using the dictionary, I look for achievement entries that don't have children listed and focus on those. My report used to show that I needed "Storm Chasers," "Chasing Storms in Waking Shores," and "Firestorms in Waking Shores"; now it only shows the "Firestorms" entry.
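The recursion and the dictionary optimization above can be sketched like this. The hard-coded TREE stands in for what the live API calls would return, and the function names are mine, not from my actual script:

```python
# Stand-in for fetched achievement data: each id maps to its
# direct child achievement ids (a real tree comes from the API).
TREE = {
    1: [2, 3],   # meta-achievement with two children
    2: [4, 5],   # child with its own children
    3: [],       # leaf
    4: [],
    5: [],
}

def build_child_map(tree, root):
    """Recursively walk the tree from root, building the
    parent -> children dictionary described in optimization 1."""
    child_map = {}
    def walk(aid):
        kids = tree.get(aid, [])
        child_map[aid] = kids
        for kid in kids:
            walk(kid)
    walk(root)
    return child_map

def incomplete_leaves(child_map, completed, root):
    """Report only leaf achievements still incomplete; a completed
    parent's whole subtree is assumed complete (optimizations 1 & 3)."""
    out = []
    def walk(aid):
        if aid in completed:
            return  # parent done -> children assumed done
        kids = child_map.get(aid, [])
        if not kids:
            out.append(aid)
        for kid in kids:
            walk(kid)
    walk(root)
    return out
```

With achievement 2 completed, only leaf 3 is reported, since 2's children (4 and 5) are skipped along with it.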
This leaves one issue unresolved. Some of the incomplete achievements are character-based. As of this writing, my solution is to run the same code against my alternates that played through that expansion and manually compare notes. The druid leveled via hunts, where the paladin favored the PvP-type activities, and so on. My plan is to pull all my max-level toons from my account, query their completion status, and combine the results into a master list. If any of these 8 characters says an achievement is complete, then consider it complete. I am running into a permissions issue, as my developer credentials do not allow querying private account details.

With the Legion Remix on the books for "soon"(tm), I am that much closer to achieving my goal of a dynamic shopping list. Once I have the code, I need to consider how I can share it. I personally like the idea of an external website that I can review from outside the game (while here in the office). I'll try to keep you posted.



Wednesday, April 2, 2025

VRO Action to find tagged network

In Aria Automation, one of the most frustrating aspects of using constraint tags is the unknown. On my vCenter builds, I have created a simple cloud template that utilizes 4 tags to constrain new virtual machine builds to a specific network. To make sure the dropdown options are legitimate, I have automation that queries the tags based on the selections. Once the engineer selects a project, it populates the tenant dropdown. When they select a tenant, it proceeds to populate the application, environment, and tier tags.

Unfortunately, some tenants (like IT) have hundreds of options for tag combinations, and not all of them are legitimate. This causes builds to fail only after the engineer has populated the template and hit submit: "no matching fabric network with these tags..."


This JavaScript action takes the provided constraint tags and returns any networks that match that combination. I added a radio-button element, set to read-only, to my custom form and pointed its source at this VRO action. As I didn't want it to list ALL networks when the form loads, I have it update only after the tenant tag is populated. Functionally, it currently only returns results when all 4 tags are populated, but it should return each time an additional tag is selected.
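
The core of the action is just matching each fabric network's tags against the selected constraint tags. In VRO the network list would come from the vRA plugin or a REST call to /iaas/api/fabric-networks; in the sketch below a sample payload stands in for that call so the matching logic is easy to follow (network names and tags are invented):

```javascript
// Sample stand-in for discovered fabric networks and their tags;
// in VRO this list would come from the vRA integration.
var networks = [
    { name: "net-it-prod-web", tags: ["project:IT", "tenant:IT", "env:prod", "tier:web"] },
    { name: "net-it-dev-web",  tags: ["project:IT", "tenant:IT", "env:dev",  "tier:web"] },
    { name: "net-hr-prod-db",  tags: ["project:HR", "tenant:HR", "env:prod", "tier:db"]  }
];

// Return names of networks whose tags include every selected
// constraint tag; empty selections are ignored, so the result
// narrows as the engineer fills in each dropdown.
function matchingNetworks(networks, selectedTags) {
    var wanted = selectedTags.filter(function (t) { return t; });
    return networks
        .filter(function (net) {
            return wanted.every(function (tag) {
                return net.tags.indexOf(tag) !== -1;
            });
        })
        .map(function (net) { return net.name; });
}
```

Wiring this into a real VRO action means replacing the sample array with the live network query and returning the array for the form element to render.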

Friday, September 30, 2022

VRA ABX Action to set Deployment Lease

 Running vRealize Automation (VRA), I wanted to create a new catalog item in Service Broker that would basically self-destruct after a certain time. My intent is to allow our Windows engineers to quickly spin up a new machine in AWS with a lease applied to it. 

I couldn't find a catalog template setting that allowed me to configure this, but I found that I could do it via a REST call to VRA. So I decided to create an extensibility action using PowerShell.

Note, once a lease is set, I don't think it's possible to remove the lease. 

Pre-requisites:

  • Extensibility Actions on Prem integration server configured.
  • VRA refresh token defined. This will be used in step 0.
  • BlueprintID of the catalog template that the lease will be applied to. We'll need this for step 2. The blueprint ID can be captured easily from the URL when viewing it in the design menu. Copy everything after the %2F, for example (the bolded portion): 

    https://www.mgmt.cloud.vmware.com/automation-ui/#/blueprint-ui;ash=%2Fblueprint%2Fedit%2F123456-7890123456-123456789
Step 0: setup Action Constant
  1. Inside VRA - Extensibility menu, select Actions on the sidebar. 
  2. Select Action Constants and add a + New Action Constant
  3. Enter the name "refreshToken", paste the refresh token string into the value field, and select the toggle to Encrypt the action constant value
  4. Click SAVE.

Step 1: create the action 

  1. Extensibility menu, Actions on sidebar and click + New Action
  2. Define the new action Name "SetDeploymentLease" and select the VRA Project that this will run against then click Next.
  3. On the top, change the dropdown from PYTHON to POWERSHELL. 
  4. Copy the pastebin code below and paste it into the body of the action.
  5. Under Default Inputs, add two more input fields (for a total of 3). These fields will be used during the run-time of the action. 

    1. Default - LeasePeriod - Positive integer for the number of days to define the lease.
    2. Default - deploymentId - can be anything, this will be sent to the action from the subscription event defined in step 2. I copied the deployment id off an existing 'test' deployment directly from the URL. (like prerequisites, it's everything after the %2F). 
    3. Action Constant - refreshToken - this will pull the value from Step 0.
  6. SAVE the action
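For reference, the action body follows this general shape. This is a from-memory sketch rather than the original pastebin code: the login endpoint matches VRA Cloud's IaaS API, but the day-2 action ID ("Deployment.ChangeLease") and its input field name are my best recollection and should be verified against your instance's API docs (or by inspecting GET .../deployments/{id}/actions):

```powershell
function handler($context, $inputs) {
    # 1. Exchange the refresh token (action constant) for a bearer token
    $body  = @{ refreshToken = $inputs.refreshToken } | ConvertTo-Json
    $login = Invoke-RestMethod -Method Post -ContentType "application/json" `
                -Uri "https://www.mgmt.cloud.vmware.com/iaas/api/login" -Body $body
    $headers = @{ Authorization = "Bearer $($login.token)" }

    # 2. Build the lease expiry timestamp from the LeasePeriod input
    $expiry = (Get-Date).ToUniversalTime().AddDays([int]$inputs.LeasePeriod).
                ToString("yyyy-MM-ddTHH:mm:ss.fffZ")

    # 3. Submit the ChangeLease day-2 action against the deployment
    $payload = @{
        actionId = "Deployment.ChangeLease"   # verify on your version
        inputs   = @{ "Lease Expiration Date" = $expiry }  # field name assumed
    } | ConvertTo-Json -Depth 5
    Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" `
        -Uri "https://www.mgmt.cloud.vmware.com/deployment/api/deployments/$($inputs.deploymentId)/requests" `
        -Body $payload
}
```

The handler($context, $inputs) signature is the standard entry point for PowerShell extensibility actions, and the three inputs map to the fields defined in step 1, item 5.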
Step 2: create a subscription - The subscription will trigger after the blueprint is run.
  1. Extensibility menu, Subscriptions option on sidebar and click + NEW SUBSCRIPTION
  2. Give subscription name. 
  3. For Event Topic select "Deployment Resource Completed"
  4. Enable the Condition and enter a (case matters) single value of: 
    event.data.blueprintId == "value from pre-requisites"
  5. Action/Workflow: select the new action you created in Step 1.
  6. Projects: Select the project you specified in step 1, item 2.
  7. SAVE



Testing: 
  1. Option: If valid values are entered in step 1 part 5, the test button on the Action will run. 


  2. Option: Deploy a new machine using the catalog item defined in the prerequisites. Within a minute after the deployment completes the build, the lease should be applied via the subscription. 





Monday, October 11, 2021

Using VRO to query tags for a VRA-Cloud Assembly dropdown

We've just started building servers in our vSphere with NSX-T environment using VRA to deploy the servers. The security group placement is controlled via NSX policies that are determined by the machine's tagging. I wanted to provide a dynamic dropdown in VRA so that I did not have to recode the cloud template and subsequent service broker custom form each time a new tag value was added to vSphere. I noticed in VRA that the NSX tags were being discovered there. 

This routine runs from VRO to read the tags in VRA (/iaas/api/tags) with a specific KEY and returns the VALUEs in a sorted array.

Prerequisites:

  1. VRO server integrated with VRA-Cloud Assembly. 
  2. VRA Plug-in to VRO (reference: here) - need to download and install on your VRO server.
  3. Cloud template that needs a dropdown of tag values. 
Warning:  Without a filter, we have over 7,000 tags coming from NSX. When I tried to query all our tags, I crashed my VRO server and needed to reboot it to kill the query. 
   
Configuration:
  1. Set default VRA host 
    1. VRO - select Configurations - vRA plug-in
    2. Variables tab - click on defaultHost
    3. Click on the value field and select the VRA host you want to query.
    4. Save
  2. Create new Action
    1. VRO - select Library - Actions menu item 
    2. New Action
    3. type in a name and module name 
      1. typing in a 'new' name for the module will create a new module. 
    4. Click on the Script tab and make sure JavaScript is selected for Runtime.
    5. On the right, click Add New Input button and create a new input
      1. name: tagFilter
      2. type: string
    6. Change the Return type to string and check the checkbox for Array.
    7. Paste the code below.
    8. Save.
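The action body itself is short. Since the original snippet isn't pasted here, this sketch shows its shape: in VRO the tag list would come from the vRA plugin's /iaas/api/tags call against defaultHost, while here a sample response stands in for that call so the filter-and-sort logic stands on its own (tag keys and values are invented):

```javascript
// Stand-in for the JSON returned by GET /iaas/api/tags;
// in VRO this comes from the vRA plug-in against defaultHost.
var response = {
    content: [
        { key: "Application", value: "web"  },
        { key: "Application", value: "db"   },
        { key: "Environment", value: "prod" },
        { key: "Application", value: "app"  }
    ]
};

// Keep only tags whose KEY matches tagFilter, then return the
// VALUEs as a sorted array for the dropdown.
function tagValues(response, tagFilter) {
    var values = [];
    for (var i = 0; i < response.content.length; i++) {
        var tag = response.content[i];
        if (tag.key === tagFilter) {
            values.push(tag.value);
        }
    }
    return values.sort();
}
```

Given the warning above about tag volume, it's worth pushing the filter into the API query itself (the IaaS API accepts an OData-style $filter parameter) rather than pulling all 7,000+ tags and filtering client-side.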
Last step, TEST! Click the RUN option up top before closing the Action screen, and enter a tagFilter value that matches a tag shown in your VRA - Configure - Tags. 

Replication:
  1. Log into your VRA instance
  2. Scroll down to Integrations and open the VRO integration
  3. Click the Start Data Collection button and wait...


Use:
  1. Log into your VRA instance
  2. Open a cloud template
  3. Browse to the Inputs menu. 
  4. Locate or create a new input. Mine:
    1. Name: ApplicationTag
    2. Display name: Application
    3. Type: string
    4. Default Value: leave blank.
  5. Scroll to the bottom of the screen and expand More Options
    1. Pairs: External Source
    2. Action: click Select and search for the name you input into 2.3 in VRO above. 
    3. Unselect BIND and type in the tag KEY value to filter on. 
    4. SAVE to save the selection.
    5. SAVE to close Input properties
  6. Test cloud template - The dropdown should load for a few seconds then complete.