Thursday, July 30, 2020

VRA API Programming Lessons Learned

I've been working for so long inside the closed environment of Exchange PowerShell that when a co-worker suggested programming against a REST API, I sort of cringed in fear. Here was a brand-new programming method, and I had no concept of where to start. Luckily, VMware vRealize Automation (reference) has a full-featured API that you can easily write PowerShell commands against, both to pull information down and to push configurations up. The documentation is well rounded, with examples (written in cURL format) that can be easily translated to PowerShell.

Our friend on this journey will be invoke-restMethod (msdn link). (Yes, I have started writing my commands in camelCase.) This command has a few basic components:
  1. headers - a variable containing the credentials you'll need to access the URI
  2. uri - the URL of the object you are looking to read from/post to.
  3. method - GET to read information, POST to create information, PATCH to update information, etc.
  4. body - the information you'd like to post up to the interface. Usually empty when doing a GET.
Headers = Authorization - This differs per API. VMware requires that a 'refresh' token be generated in the UI, then exchanged for a bearer token. The bearer token can then be used for up to 30 minutes on API calls. AWS needs an Access key and Secret key encoded in a basic credential (i.e. 'get-credential') object.
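On the VMware side, the headers end up as a small hash table. A minimal sketch, assuming you already hold a bearer token (the token value below is a made-up placeholder):

```powershell
# $bearerToken stands in for the token VMware hands back after you
# exchange your refresh token -- this value is fake, for illustration.
$bearerToken = "my-bearer-token-value"

# Every later invoke-restMethod call passes this hash via -Headers
$headers = @{"Authorization" = "Bearer " + $bearerToken}
```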

URI - This is the base URL to reach the object type. From my experience, this URL follows a basic format.

    API URL / object

For example, to query all of the VRA Cloud Accounts (reference) in my environment, the URI would be 

$myURL = "https://api.mgmt.cloud.vmware.com" + "/iaas/api/cloud-accounts"

Method - The action telling invoke-restMethod what is to be performed at the URL. 90% of the time I used only GET to query and POST to create. PATCH can be used to update an existing entry. (see the msdn link for more info)

Note: On a GET method, PowerShell will return an object that you can extract information from. Unfortunately, I was never able to read, modify, and post the same object back to a site. I ended up reading it, then creating a new body and posting that.

$Results = Invoke-RestMethod -Uri $myURL -Method get -ContentType "application/json" -UseBasicParsing -headers $headers 
 
$results | convertto-json

(if the query is successful, the results should come back as a block of JSON)

From there, you can manipulate the object:

$results.content.name 

and get "my-name"
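The read-then-recreate pattern from the note above can be sketched like this. The $existing mock mirrors the $results.content.name shape from the GET example; the description property and the PATCH call are hypothetical illustrations:

```powershell
# $existing stands in for the object a GET returns; in practice it
# comes back from invoke-restMethod as shown above.
$existing = [pscustomobject]@{ content = [pscustomobject]@{ name = "my-name" } }

# Build a fresh body instead of re-posting the object PowerShell returned
$newBody = @{ "name" = $existing.content.name; "description" = "updated by script" } | convertTo-Json

# then post it back up, e.g.:
# invoke-restMethod -Uri $myURL -Method PATCH -ContentType "application/json" -headers $headers -Body $newBody
```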


Body - This was the largest part of the learning curve. The body variable is looking for a JSON-formatted variable. For example, peek at the cloud-accounts POST method. The completed body JSON should look something like this:

Sample Cloud-Account Body from code.vmware.com

In PowerShell, you can attempt to create this two different ways. The simple way is a giant text string that you format to contain all of the properties and values. It seems easy enough, but it will be rather hard to support later on. The second option is to create a PowerShell hash table and convert it to JSON using the (obviously named) convertTo-Json command. Once we identify some clues, I feel you'll agree the hash-table-to-JSON method is much easier.

Observation one: Items are enclosed in curly brackets. These enclose a concept or object. Each will be replaced with a hash table: @{ }
Observation two: A few items are enclosed in square brackets as well. These are arrays of values, often used where there is a one-to-many relationship: @( )
Observation three: Backslashes "\" surround almost all of the values in the sample. They are just escaping the quotes; ignore them.
Observation four: Concepts are separated by commas. Replace these with semicolons.
Observation five: A colon separates a key from its value ("name": "string"). Replace colons with equals signs.
Observation six: Key names are in 'camelCase' (lowercase first word, then uppercase the first letter of each following word) and are case sensitive. This is EXTREMELY important when doing REST calls with JSON. You can build the prettiest JSON in the world, but if your key names don't perfectly match the case shown in the documentation, the REST call will fail: regionIds -ne RegionIDs. Coming from PowerShell, where case doesn't matter, this was a shock!
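Putting those observations together, here's a minimal sketch. The key names mirror the cloud-accounts sample; the values are placeholders. (One bonus: in a multi-line hash table, the newlines take the place of the semicolons.)

```powershell
# JSON from the docs:              PowerShell equivalent:
#   { "name": "string",       ->   @{ "name" = "string";
#     "regionIds": ["..."] }          "regionIds" = @("...") }
$body = @{
    "name"      = "my-account"
    "regionIds" = @("us-west-2", "us-east-1")
} | convertTo-Json

$body
```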


This next example includes a multi-value array for the tags:
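A sketch of that shape, with made-up tag keys and values (note the -Depth parameter, which keeps convertTo-Json from truncating the nested hash tables):

```powershell
# "tags" is an array of objects, each with its own key/value pair --
# an array of hash tables on the PowerShell side. The tag names here
# are invented for illustration.
$body = @{
    "name" = "my-account"
    "tags" = @(
        @{ "key" = "env";  "value" = "dev" },
        @{ "key" = "team"; "value" = "cloud-ops" }
    )
} | convertTo-Json -Depth 4

$body
```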

The hardest part of the process is making sure that the hash tables and arrays are closed correctly. 

Here's a snippet of what I use to create new AWS Cloud Accounts in VRA. I pass a number of variables to this JSON, but I think you can guess how they correlate to your AWS account. 

$CloudRegionArray = @("us-west-2") # the only region we deploy into
$NewCloudAccount = @{
    "accessKeyId"        = $strCloudAccountAccessKey
    "secretAccessKey"    = $strCloudAccountSecretKey
    "createDefaultZones" = "true"
    "name"               = $strCloudAccountName
    "regionIds"          = $CloudRegionArray
    "tag"                = @(@{ "key" = "ca"; "value" = $($strCloudAccountName + "-aws") })
} | convertto-json

Invoke-RestMethod -uri "https://api.mgmt.cloud.vmware.com/iaas/api/cloud-accounts-aws" -Method POST -ContentType "application/json" -headers $headers -Body $NewCloudAccount



Monday, February 10, 2020

Using Powershell against vRealize Automation Cloud Assembly

Over the last few weeks, I have been working to automate the steps to onboard a new customer into vRealize Automation (VRA) Cloud Assembly. The plan: as a new customer signs up with our solution, we will create a new AWS account that all of their resources will be deployed into. VRA will then be configured with a new 'cloud account'. Since we could potentially have different support areas needing access, we will create a new project to grant the permissions. With the new cloud account in place, we then need to touch each of the flavor mappings and image mappings, create network profiles for each AWS region, and create storage profiles.

Prerequisite 1: Get a refresh token
A refresh token is used to initiate the authenticated communications with your environment. This refresh token is exchanged by VRA for a bearer token. You then pass this bearer token in the header of each REST call to VRA.

  1. Log into your VMware Cloud Services console.
  2. Click the down caret by your name in the upper right corner.
  3. Select My Account from the dropdown menu.
  4. Select API Tokens from the top menu.
  5. Populate the form so that it includes the permissions you want to include.
  6. Click the Generate button and it will pop up a screen with a 64-character token.
  7. Copy down this code; you'll need it to populate the variable in the AuthenticateWithVRA function. (line 53)
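The token exchange those steps feed into looks roughly like this. The /iaas/api/login endpoint and the 'token' property on the response are assumptions from my reading of the docs; verify them against your environment:

```powershell
# Exchange a refresh token for a bearer token and return the headers
# hash used on every later REST call. Endpoint and response property
# names are assumptions -- check them against the current API docs.
function authenticateWithVRA {
    param([string]$refreshToken)
    $body = @{"refreshToken" = $refreshToken} | convertTo-Json
    $response = invoke-restMethod -Uri "https://api.mgmt.cloud.vmware.com/iaas/api/login" -Method POST -ContentType "application/json" -Body $body
    # every later call passes this hash via -Headers
    return @{"Authorization" = "Bearer " + $response.token}
}
```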

Prerequisite 2: AWS Programmatic Access Token
The script configures the AWS cloud account at the very beginning. Therefore, the script requires an AWS access key and secret key for a programmatic access account. The account will need power user role access to the AWS environment if it is to actually provision objects there.



Prerequisite 3: Master Account configured
To limit the scope of the script, I clone most of the configuration of an existing 'master account'. This account has the flavor and image mappings configured similar to how all new projects would be configured. To simplify the script I've embedded this value in the script.  (line 33)

# VRA Dev Account ID -> Script copies this account for new customers to be.
$DevAccountID = 'abc12345667890'

This value can be pulled from the UI fairly easily. Log into the VRA Cloud console, open the Cloud Account that you want to replicate, then look at the URL. The end of the URL, after the last %2F, contains the cloud account ID to use. Mine is 29 hex digits long.
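If you'd rather not eyeball it, splitting the URL on %2F does the trick. The URL below is a made-up example; the ID matches the placeholder value in the script:

```powershell
# Hypothetical URL copied from the browser's address bar -- the real
# one will contain your org and account identifiers.
$accountUrl = "https://console.cloud.vmware.com/cloud-accounts%2Fabc12345667890"

# The cloud account ID is everything after the last %2F
$DevAccountID = ($accountUrl -split '%2F')[-1]
$DevAccountID
```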


Copy down that code, then locate (see reference above) and replace the ID value in the code.



The full script can be pulled from GitHub.

Wednesday, September 19, 2018

Using a Free AWS box as Web Proxy Box

Throughout the year, there is always a need for a remote SSH box. SSH is a reasonable way to manage external/remote computers (command line, bash, etc.), run DNS/network lookups against Internet DNS servers, and even test web content without the interference of internal spam filters. Taking on this project has helped me learn about Amazon Web Services (AWS) and some of the automation functionality available in the environment.

Configure AWS SSH Proxy box

NimrodFlores does a wonderful job explaining his process for spinning up a box in AWS and configuring the PuTTY client to connect. I had 3 issues with his final answer, though.

  1. The post suggests changing the default proxy settings for your browser. This is a less-than-ideal solution for me, as I must use the same browser for accessing corporate resources, and a browser-wide proxy would make that impossible. I've found that the Chrome extension "FoxyProxy" is free and lets me turn the proxy on and off with one click.
  2. The box is never shut down. There is a hypothetical point when I would incur charges due to the amount of time the server is running. I really want the box to start on weekday mornings and stop when I am not using it.
  3. By stopping the instance each night, Amazon assigns a new IP address each time, which requires updating the SSH session each time. My first idea was to attach an Elastic IP address to the instance, but that generated a few cents/day in charges for the 'static' address. I am now at $0.86 for the first 2 weeks of the month. I believe that by getting rid of the EIP, I can stop that recurring charge.

I posted these in his comments section, but my post was never approved and eventually deleted. /shrug

Part 1: Stopping and Starting the instance magically.

Starting the server: These instructions by Amazon are good. You can configure Lambda scripts to stop and start your instance on scheduled CRON events. The hardest thing about configuring the script was adjusting from GMT to local time. I set up both scripts but decided to use a different stop process.

Stopping the server: Instead of a cron job, I decided to use a CloudWatch alarm to stop the instance when CPU utilization dropped below a certain threshold. These AWS instructions are useful. The biggest trick came in finding the baseline that separates one user actively browsing the web from the idle chatter between my workstation and the server when I left the browser open. I currently stop the instance when the CPU utilization drops below 0.117 for 2 hours.

Part 2 - Dynamically updating IP address

The instructions from NimrodFlores use the dynamically generated IP address from the EC2 instance page. While this is easy, it requires logging into the AWS portal each time to capture the address, then updating the PuTTY session. (I only want to click my toolbar shortcut and auto-connect to the proxy.) To work around this, I plan to use a dynamic DNS entry instead of the IP address in the PuTTY session.

ZoneEdit
I use ZoneEdit for a couple of reasons. First off, they're free for the two websites I host there. Plus, they offered dynamic DNS options back when I wanted to host this blog on my home PC over a dial-up DSL line (pre-Blogger).

For the proxy box, I decided to spin up a DYN record for the new hostname. Enable it and take note of the "DYN Authentication Token" field at the bottom. Use this token in place of your ZoneEdit password.

I tried both of their options for automated dynamic DNS updates, then found a post in their forum saying they no longer support the ez-ipupdate option. I learned that AFTER I installed all the packages required to compile the solution. The JavaScript option is 'interesting' if you have a browser running on the Ubuntu server, but I don't want to incur any additional costs, so smaller is better. Luckily I found DDClient.


Dynamic DNS - DDClient
Here is a wonderful set of instructions posted on linhost. There are a few screens that don't match the wizard, but it steps through the install and basic configuration options. After the wizard is complete, you need to modify the configuration file. (Don't do like me and reboot your server thinking it's complete right after the install.)
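For reference, the configuration file ends up shaped roughly like this. The hostname, username, and token are placeholders, and the protocol/server values are assumptions based on ddclient's ZoneEdit support; check them against your ddclient version and ZoneEdit's current docs:

```
# /etc/ddclient.conf -- sketch for ZoneEdit dynamic DNS
daemon=300                                      # check every 5 minutes
use=web                                         # discover public IP via the web
protocol=zoneedit1                              # ddclient's ZoneEdit protocol name (verify)
server=dynamic.zoneedit.com
login=your-zoneedit-username
password=your-dyn-authentication-token          # the DYN token, NOT your account password
proxy.example.com                               # hostname to update
```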

This post has some other tweaks specific to ZoneEdit you could add. No one responded to the question, but I believe that is about the time ZoneEdit was purchased by another provider.

Final
Since the first of the month, I have racked up less than $1 in charges on my AWS server. Of this, 80% was incurred during the time the EIP was attached. Besides the 80 cents, I am being charged roughly 1/3 of a cent for each 'configuration item'. At last check, I have 39 configuration items (i.e. $0.12). I've gone through and deleted a number of those items, but I don't think I can shrink it much below $0.10.