Archive
Creating AWS S3 buckets for webpage redirection
I recently needed to solve a particular problem:
1. I had a DNS domain name
dns.name.here
2. I needed to point it to an HTTPS URL hosted on another domain:
https://other.dns.name.here/path/to/site/goes/here
3. The DNS server for dns.name.here does not support HTTP Redirect records.
To address this, I decided to use S3 buckets hosted on Amazon Web Services to handle the redirection to the HTTPS URL. In this scenario, what I’m doing is pointing the relevant dns.name.here domain name at the S3 bucket’s AWS domain name. The S3 bucket performs an HTTP 301 redirect, which sends the requesting web browser to the URL of the site I want to connect to. For those interested, Amazon’s documentation of how to use an S3 bucket for URL redirection is linked below:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-page-redirect.html
After doing it the first time manually, I decided to see if anyone had scripted this task. It turns out the answer is “no”, at least for what I wanted to do, so I’ve written a script which handles this task. For more details, please see below the jump.
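As a rough illustration of what the script automates, here’s a minimal sketch of the same setup done with the AWS CLI. The bucket name, region and redirect target come from the example scenario above, and the routing-rule configuration is my own assumption about how to carry the path along; it’s a sketch of the idea rather than the script itself.

#!/bin/bash
# Sketch: create an S3 bucket that issues HTTP 301 redirects for a domain.
# The bucket must be named after the source domain so the website endpoint
# works behind a CNAME. Bucket, region and target values are placeholders
# from the example scenario above.

bucket="dns.name.here"
region="us-east-1"

# Create the bucket (us-east-1 does not take a LocationConstraint).
aws s3api create-bucket --bucket "$bucket" --region "$region"

# Configure static website hosting with a routing rule that redirects
# every request to the target host and path over HTTPS (301 by default).
aws s3api put-bucket-website --bucket "$bucket" --website-configuration '{
  "IndexDocument": {"Suffix": "index.html"},
  "RoutingRules": [
    {
      "Redirect": {
        "Protocol": "https",
        "HostName": "other.dns.name.here",
        "ReplaceKeyWith": "path/to/site/goes/here"
      }
    }
  ]
}'

After that, dns.name.here gets pointed via a CNAME record at the bucket’s S3 website endpoint, which for this region would look like dns.name.here.s3-website-us-east-1.amazonaws.com.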
Amazon Web Services’s new EC2 metadata tag option doesn’t allow spaces in tag names
Beginning on January 6th, Amazon Web Services added a new option to include your instance’s tags as part of the instance’s metadata when the instance is launched.
By including this data in the instance metadata, you no longer need to make DescribeInstances or DescribeTags API calls to retrieve tag information. For shops which use tag information extensively, this will cut down on the number of API calls you need to make and allow tag retrieval to scale better.
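Once the option is enabled, tags can be read from inside the instance via the instance metadata endpoint. Here’s a minimal sketch using IMDSv2; the Name tag is just an example key.

# Sketch: read instance tags from the metadata service (IMDSv2).
# "Name" is just an example tag key; any tag enabled for metadata works.

# Request a session token, then use it for the metadata calls.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# List the tag keys available in metadata.
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/tags/instance/

# Read the value of a single tag key.
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/tags/instance/Name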
There is one limitation: tag keys stored in metadata cannot contain spaces. If you have the “tags in metadata” option enabled and one of your tag keys includes a space, you’ll see a message similar to the one below:
‘Tag Name Here’ is not a valid tag key. Tag keys must match pattern ([0-9a-zA-Z-_+=,.@:]{1,255}), and must not be a reserved name (‘.’, ‘..’, ‘_index’)
This was an issue for me yesterday because I’m using AWS’s Patch Manager to keep my instances updated and that uses the following tag:
Patch Group
This tag must be used by patching groups and is referenced in the documentation this way:
Patch groups require use of the tag key Patch Group. You can specify any tag value, but the tag key must be Patch Group.
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-patch-group-tagging.html
The result was that I set up a new instance yesterday with my tags, including the Patch Group tag, and received the following message when I tried to launch the instance:
‘Patch Group’ is not a valid tag key. Tag keys must match pattern ([0-9a-zA-Z-_+=,.@:]{1,255}), and must not be a reserved name (‘.’, ‘..’, ‘_index’)
I put in a ticket to AWS Support and the fix is the following:
When setting up new EC2 instances, make sure that the Allow tags in metadata setting under the Advanced Details section is set to Disabled.
This stops your instance’s tags from being included in the instance’s metadata at launch. Because tag information is never added to the metadata, the metadata tagging limitations no longer apply to the instance creation process. Now your tags can include spaces again, though you’re also back to having to retrieve tag information via the API.
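The same setting can be controlled when launching instances from the command line. Here’s a hedged example; the AMI ID, instance type and tag value below are placeholders:

# Sketch: launch an instance with the "tags in metadata" option disabled,
# so tag keys with spaces (like "Patch Group") are accepted.
# The AMI ID and tag value are placeholders.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --metadata-options "InstanceMetadataTags=disabled" \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Patch Group,Value=Production}]'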
As of Monday, January 10th 2022, the Allow tags in metadata setting was Enabled by default. I suspect AWS got enough support calls about this particular issue that they changed the default: as of Tuesday, January 11th 2022, the Allow tags in metadata setting is now Disabled by default.
Session videos from Jamf Nation User Conference 2021 now available
Jamf has posted the session videos from Jamf Nation User Conference 2021, including the video for my AutoPkg In The Cloud session.
For those interested, all of the JNUC 2021 session videos are available on YouTube. For convenience, I’ve linked my session here.
Slides from the “AutoPkg in the Cloud” session at Jamf Nation User Conference 2021
For those who wanted a copy of my talk on cloud-hosted AutoPkg at the Jamf Nation User Conference 2021, here are links to the slides in PDF and Keynote format.
Slides and video from the “AutoPkg in the Cloud” session at MacSysAdmin 2021
For those who wanted a copy of my talk on cloud hosting for AutoPkg at the MacSysAdmin 2021 conference, here are links to the slides in PDF and Keynote format.
The video of my session is available for download from here:
Identifying an AWS RDS-hosted database by its tag information
Recently, I was working on a task where I wanted to set up an automated process to create manual database snapshots for a database hosted in Amazon’s RDS service. Normally this is a straightforward process because you can use the database’s unique identifier when requesting the database snapshot to be created.
However in this case, the database was being created as part of an Elastic Beanstalk configuration. This meant that there was the potential for the database in question to be removed from RDS and a new one set up, which meant a new unique identifier for the database I wanted to create manual database snapshots from.
The Elastic Beanstalk configuration does tag the database, using a Name tag specified in the Elastic Beanstalk configuration, so the answer seemed obvious: Use the tag information to identify the database. That way, even if the database identifier changed (because a new database had been created), the automated process could find the new database on its own and continue to make snapshots.
One hitch: within the AWS API, RDS provides only three API calls for interacting with tags: AddTagsToResource, ListTagsForResource and RemoveTagsFromResource.
ListTagsForResource would seem to be the answer, but the hitch there is that you have to have the database’s Amazon Resource Name (ARN) identifier available first and then use that to list the tags associated with the database:
aws rds list-tags-for-resource --resource-name arn:aws:rds:us-east-1:123456789102:db:dev-test-db-instance
I was coming at it from the other end – I wanted to use the tag information to find the database. RDS’s API doesn’t support that.
Fortunately, the RDS API is not the only way to read tags from an RDS database. For more details, please see below the jump.
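Here’s a simplified sketch of the general idea, using the Resource Groups Tagging API to turn a tag into an ARN and then requesting a snapshot. The tag value and snapshot name below are placeholders:

#!/bin/bash
# Sketch: find an RDS database by its Name tag, then request a manual
# snapshot. The tag value "my-eb-database" is a placeholder; the
# Resource Groups Tagging API handles the tag-to-ARN lookup that the
# RDS API itself does not offer.

db_arn=$(aws resourcegroupstaggingapi get-resources \
  --resource-type-filters "rds:db" \
  --tag-filters "Key=Name,Values=my-eb-database" \
  --query 'ResourceTagMappingList[0].ResourceARN' \
  --output text)

# The DB instance identifier is the last field of the ARN
# (arn:aws:rds:region:account:db:identifier).
db_identifier=$(echo "$db_arn" | awk -F: '{print $NF}')

# Create a manual snapshot with a timestamped name.
aws rds create-db-snapshot \
  --db-instance-identifier "$db_identifier" \
  --db-snapshot-identifier "manual-snapshot-$(date +%Y-%m-%d-%H%M)"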
Connecting to AWS EC2 instances via Session Manager
When folks have needed command line access to instances running in Amazon Web Services’ EC2 service, SSH has been the usual method. However, in addition to using SSH to connect to EC2 instances in AWS, it is also possible to connect remotely via Session Manager, one of the services provided by AWS’s Systems Manager tool.
Session Manager uses the Systems Manager agent to provide secure remote access to the Mac’s command line interface without needing to change security groups and allow SSH access to the instance. In fact, Session Manager allows remote access to EC2 instances which have security groups configured to allow no inbound access at all. For more details, please see below the jump.
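Here’s a simple example of what that looks like from the AWS CLI, assuming the Session Manager plugin for the AWS CLI is installed and the instance’s profile allows Systems Manager access; the instance ID is a placeholder:

# Sketch: open an interactive shell on an instance via Session Manager.
# Requires the Session Manager plugin for the AWS CLI, plus an instance
# profile that grants the instance access to Systems Manager.
# i-0123456789abcdef0 is a placeholder instance ID.
aws ssm start-session --target i-0123456789abcdef0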
Setting up AutoPkg, AutoPkgr and JSSImporter on an Amazon Web Services macOS EC2 instance
One of the outcomes of the recent Amazon Web Services re:Invent conference was AWS’s announcement that, as of November 30th, macOS EC2 instances were going to be available as on-demand instances or as part of one of AWS’s reduced cost plans for those who needed them long-term.
There are a few differences between AWS’s macOS offerings and their Linux and Windows offerings. macOS EC2 instances run on actual Apple hardware, as opposed to being completely virtualized. This means that there are the following dependencies to be aware of:
- macOS EC2 instances must run on dedicated hosts (AWS has stated these are Mac Minis)
- One macOS EC2 instance can be provisioned per dedicated host.
AWS has also stipulated that dedicated hosts for macOS EC2 instances have a minimum billing duration of 24 hours. That means that even if your dedicated host was only up and running for one hour, you will be billed as if it had been running for 24 hours.
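As a rough sketch of what provisioning looks like from the AWS CLI (the availability zone, AMI ID and key pair below are placeholders):

# Sketch: allocate a mac1 dedicated host, then launch a macOS instance
# onto it. The availability zone, AMI ID and key pair are placeholders.

host_id=$(aws ec2 allocate-hosts \
  --instance-type mac1.metal \
  --availability-zone us-east-1a \
  --quantity 1 \
  --query 'HostIds[0]' --output text)

aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type mac1.metal \
  --key-name my-keypair \
  --placement "Tenancy=host,HostId=$host_id"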
For now, only certain AWS regions have EC2 Mac instances available. As of December 20th, 2020, macOS EC2 instances are available in the following AWS Regions:
- US-East-1 (Northern Virginia)
- US-East-2 (Ohio)
- US-West-2 (Oregon)
- EU-West-1 (Ireland)
- AP-Southeast-1 (Singapore)
The macOS EC2 instances at this time support two versions of macOS:
- macOS Mojave 10.14
- macOS Catalina 10.15
macOS Big Sur is not yet supported as of December 20th, 2020, but AWS has stated that Big Sur support will be coming shortly.
By default, macOS EC2 instances will include the following pre-installed software:
- ENA drivers
- EC2 macOS Init
- EC2 System Monitoring for macOS
- Systems Manager SSM Agent for macOS
- AWS Command Line Interface (AWS CLI) version 2
- Xcode Command Line Tools
- Homebrew
For folks looking to build services or do continuous integration testing on macOS, it’s clear that AWS went to considerable lengths to have macOS EC2 instances be as fully-featured as their other EC2 offerings. Amazon has either installed the tools you’re likely to need or made it possible to install them yourself. They’ve also included drivers for their faster networking options and made it possible to manage and monitor Mac EC2 instances using AWS’s tools, just like their Linux and Windows EC2 instances.
That said, all of this comes with a price tag. Here’s how it works out (all figures expressed in US dollars):
mac1 Dedicated Hosts (on-demand pricing):
- $1.083/hour (currently with a 24 hour minimum charge, after which billing is by the second)
- $25.99/day
- $181.93/week
- $9493.58/year
Now, you can sign up for an AWS Savings Plan and save some money by paying up-front for one year or three years. Paying for three years, all cash up front, is the cheapest option currently available:
- $0.764/hour
- $18.33/day
- $128.31/week
- $6697.22/year
Now some folks are going to look at that and have a heart attack, while others are going to shrug because the money involved amounts to a rounding error on their existing AWS bill. I’m mainly going through this to point out that hosting Mac services on AWS is going to come with costs. None of AWS’s existing Mac offerings are part of AWS’s Free Tier.
OK, so we’ve discussed a lot of the background but let’s get to the point: How do you set up AutoPkg to run in the AWS cloud? For more details, please see below the jump.
Resizing an AWS macOS EC2 instance’s boot drive to use all available disk space
I’ve started working with Amazon Web Services’ new macOS EC2 instances and after a while, I noticed that no matter how much EBS drive space I assigned to an EC2 instance running macOS, the instance would only have around 30 GB of usable space. In this example, I had assigned around 200 GB of EBS storage, but the APFS container was only using around 30 GB of the available space.
After talking with AWS Support, there’s a fix for this using APFS container resizing. This is a topic I’ve discussed previously in the context of resizing boot drives for virtual machines. For more details, see below the jump.
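As a rough sketch of the general approach (the disk identifiers below are examples; check diskutil list on your instance before running anything):

# Sketch: let the APFS container grow into the newly available EBS space.
# disk1 / disk1s2 are example identifiers; confirm them with "diskutil list"
# before running anything.

# Review the current disk and container layout.
diskutil list

# Repair the partition map so it reflects the larger underlying disk.
yes | sudo diskutil repairDisk disk1

# Grow the APFS container to use all remaining free space (0 = maximum).
sudo diskutil apfs resizeContainer disk1s2 0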
Remotely gathering sysdiagnose files and uploading them to S3
One of the challenges for helpdesks, with folks now working remotely instead of in offices, has been that it’s now harder to gather logs from users’ Macs. A particular challenge for those folks working with AppleCare Enterprise Support has been requests for sysdiagnose logfiles.
The sysdiagnose tool is used for gathering a large amount of diagnostic files and logging, but the resulting output file is often a few hundred megabytes in size. This is usually too large to email, so alternate arrangements have to be made to get it off of the Mac in question and upload it to a location where the person needing the logs can retrieve them.
After needing to gather sysdiagnose files a few times, I’ve developed a scripted solution which does the following:
- Collects a sysdiagnose file.
- Creates a read-only compressed disk image containing the sysdiagnose file.
- Uploads the compressed disk image to a specified S3 bucket in Amazon Web Services.
- Cleans up the directories and files created by the script.
For more details, please see below the jump.
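Here’s a simplified sketch of that workflow; the bucket name and paths are placeholders, and the sysdiagnose flags should be verified against the man page on your version of macOS.

#!/bin/bash
# Sketch of the workflow described above. Bucket name and paths are
# placeholders, not the values used by the actual script.

s3_bucket="logs-bucket-name-goes-here"
work_dir=$(mktemp -d /tmp/sysdiagnose_run.XXXXXX)
dmg_path="/tmp/sysdiagnose-$(hostname)-$(date +%Y-%m-%d).dmg"

# 1. Collect a sysdiagnose file into the working directory.
#    (-f sets the output directory; -u and -b run it non-interactively,
#    per my reading of the sysdiagnose man page.)
sudo /usr/bin/sysdiagnose -f "$work_dir" -u -b

# 2. Package the result as a read-only compressed disk image.
hdiutil create -srcfolder "$work_dir" -format UDZO -volname "sysdiagnose" "$dmg_path"

# 3. Upload the disk image to the specified S3 bucket.
aws s3 cp "$dmg_path" "s3://$s3_bucket/"

# 4. Clean up the directories and files created by this script.
rm -rf "$work_dir" "$dmg_path"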