
AWS Resource Access Manager

Why Infrastructure Resource Sharing is not always a good idea...

One of the more recent service offerings that has confused some former colleagues of mine is AWS Resource Access Manager, also known as AWS RAM. After looking into it, I think its benefits genuinely make life easier when sharing resources such as Transit Gateways, AppRegistry applications, CodeBuild projects, License Manager configurations and more.

Whilst sharing EC2 Image Builder resources (no more Packer if you don't want it) or AWS Glue features for data projects can only be added to the list of amazing graces AWS has offered us via AWS RAM, there are aspects of the service that should come with a 'handle with care' sign on them. Anybody who has copied AMIs (Amazon Machine Images) across regions will tell you it's a tedious, multistage snapshotting job when the number of AMIs is substantial. Here, AWS RAM is your friend: you can share components, AMIs and more across accounts, solving your resource needs through access rather than replication. The same granular control extends to AWS Glue, where Data Catalogs and their metadata can be shared in the same manner, whether the object is a catalog, a database or a table. Having finished a data project recently, I can only salute the usefulness of this feature when thinking about design and how tables in particular are shared between accounts. Embraced fully, AWS RAM can give the architecture of data mesh projects a real shot in the arm, managing project resources from development all the way to production.
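To make the idea concrete, here is a minimal CloudFormation sketch of a resource share covering an AMI and a Glue database. The account IDs, region and ARNs are all placeholders I've made up for illustration; substitute your own.

```yaml
# Hypothetical example: one AWS RAM resource share exposing an AMI and a
# Glue database to another account. All IDs and ARNs are placeholders.
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  SharedImageAndCatalog:
    Type: AWS::RAM::ResourceShare
    Properties:
      Name: shared-ami-and-glue-db
      AllowExternalPrincipals: false   # keep the share inside your AWS Organization
      Principals:
        - '111122223333'               # placeholder consumer account ID
      ResourceArns:
        - 'arn:aws:ec2:eu-west-1::image/ami-0123456789abcdef0'        # placeholder AMI
        - 'arn:aws:glue:eu-west-1:999988887777:database/analytics_db' # placeholder Glue database
```

Note that `AllowExternalPrincipals: false` restricts consumers to accounts in your own Organization, which is usually the safer default for this kind of share.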

Looking through the list of shareable services, I came to a grinding halt at subnets. The sharing of subnets has, for some reason, been seen as a good idea to allow cross-account sharing within an AWS Organization. A subnet owned by account 1 (dev) can be shared with account 2 (prod), provided no Service Control Policies (SCPs) at the AWS Organizations level prohibit it. Imagine the scenario: you run EC2 instances in the subnet from account 1 (dev), while account 2 (prod) has deployed a database into that same shared subnet, for reasons that should be considered very bad practice. With AWS RAM, it's possible. When viewing the subnet from account 1 (dev), you cannot see the database belonging to account 2 (prod). Yet if you log into one of those dev EC2 instances, you can ping the prod database directly: the traffic never leaves the subnet, so the network ACLs, routing and VPC boundaries you may rely on to separate dev from prod never come into play. This is an example of why AWS RAM should be handled with due care and caution.
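It is worth seeing just how little configuration opens this hole. The sketch below shares a dev-owned subnet into another account; again, the subnet ARN and account IDs are placeholders of my own invention, not values from a real environment.

```yaml
# Hypothetical example: sharing a dev subnet into a second account.
# This is all it takes to let another account place workloads alongside yours.
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  SharedDevSubnet:
    Type: AWS::RAM::ResourceShare
    Properties:
      Name: dev-subnet-share
      AllowExternalPrincipals: false
      Principals:
        - '444455556666'   # placeholder: the "prod" account receiving the share
      ResourceArns:
        - 'arn:aws:ec2:eu-west-1:111122223333:subnet/subnet-0abc1234def567890'  # placeholder dev subnet
```

If you want a guardrail against this pattern, one option is an SCP that denies the RAM sharing actions (such as `ram:CreateResourceShare`) outside an approved set of organizational units.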

Don't get me wrong, overall I think it's a great idea from AWS and certainly worth exploring when you are looking at horizontals such as data mesh projects that greatly benefit from AWS RAM. Like all things new and great, handling them with care is always a good idea and prevents those gotcha moments that often overtake us when we least expect it. Stay tuned for more on infrastructure in this blog, along with articles on other areas of interest in the writing and DevOps arenas. To not miss out on any updates on my availability, tips on related areas or anything of interest to all, sign up for one of my newsletters in the footer of any page on Maolte. I look forward to us becoming pen pals!
