How to Optimize AWS Costs with FinOps

Amazon is locked in a race against rising energy and infrastructure costs. Its per-instance pricing has halted its freefall after years of relentless reductions; at the same time, external pressures are threatening a big squeeze on organizations’ budgets. Identifying and understanding AWS spend is a largely analytical exercise – controlling it is wholly cultural. Uniting the two requires a fluency in FinOps that many organizations still need to develop.

What Causes AWS Costs to Increase?

AWS and the other hyperscalers drive an immense amount of innovation, with total cloud expenditure now predicted to account for $84.6 billion of corporate spend. Having placed itself at the cornerstone of emerging technologies such as artificial intelligence, AWS grants teams the tools to incorporate immense compute power into everyday business operations. Yet for all the performance increases and reduced latency, even AWS’ market dominance does not make it immune to external factors.

Heavy investment in infrastructure around Europe sits uncomfortably close to the war in Ukraine, highlighting a dependency on Russian energy supplies. And even after heavy investment in its high-efficiency Graviton3 chip – which Amazon CFO Brian Olsavsky credited with a billion-dollar drop in expenses in the first quarter of 2022 – international tensions surrounding Taiwan’s chip industry still cast a shadow over cost prospects.

With cloud vendors battling increased spend, it’s up to the customer to navigate increasingly complex cost models. With over 160 cloud services on offer, the sheer variety – alongside the nitty-gritty demands of each unique integration – can derail the most well-intentioned cost-saving effort.

There are three fundamental drivers of cost within AWS: compute, storage, and outbound data transfer. Understanding the impact of each is one of the primary ways that organizations can begin to reclaim control over their cloud cost.

Compute 

The first cost driver within AWS is compute: Elastic Compute Cloud (EC2), for example, provides secure, resizable compute to developers, offering a variety of virtual hardware across CPU, memory, storage, and network capabilities. Each instance type carries its own cost, which also depends on the geographic region it’s placed in – the price is proportional to the energy and storage requirements of the associated hardware, alongside the overall demand currently being placed on that data center.
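
To see how much the same instance costs from region to region, you can query AWS’ Price List API directly. Below is a minimal sketch using Python and boto3; the instance type, operating system, and region names are illustrative placeholders, and the response parsing assumes the standard on-demand price structure.

import json
import boto3

# The Price List API is only served from a handful of regions, e.g. us-east-1.
pricing = boto3.client("pricing", region_name="us-east-1")

def on_demand_price(instance_type: str, location: str) -> str:
    """Return the on-demand USD/hour price for a Linux instance in one region."""
    resp = pricing.get_products(
        ServiceCode="AmazonEC2",
        Filters=[
            {"Type": "TERM_MATCH", "Field": "instanceType", "Value": instance_type},
            {"Type": "TERM_MATCH", "Field": "location", "Value": location},
            {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
            {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
            {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
            {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
        ],
        MaxResults=1,
    )
    product = json.loads(resp["PriceList"][0])
    term = next(iter(product["terms"]["OnDemand"].values()))
    dimension = next(iter(term["priceDimensions"].values()))
    return dimension["pricePerUnit"]["USD"]

# Compare the same instance type across two regions (illustrative values).
for region in ("US East (N. Virginia)", "EU (Frankfurt)"):
    print(region, on_demand_price("m5.large", region))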

While the wide selection of instance types grants DevOps teams unmatched flexibility, it also provides the first clue to how AWS costs can increase. With direct access to cloud compute power – and therefore cost – a siloed development team can easily disregard budget constraints in favor of higher availability or power. With no guardrails on cloud spending, this newfound autonomy can result in unexpected and unexplainable costs. Compute is one area that quickly accrues extra cost, partly thanks to the speed with which AWS introduces new product families – building a technical debt of outdated compute instances – and partly thanks to the sheer speed with which DevOps can spin up and release new projects.

Storage

With compute power outsourced, many organizations choose to make further use of AWS’ data storage. Amazon Simple Storage Service (S3) lets endpoints access stored files from any location. Now an established backbone of cloud-native applications and AI data lakes, S3 lends incredible configurability to data storage. One of the more intuitive cost drivers within S3 is the size of the objects being stored; this facet of cloud cost is often simply accepted as a cost of doing business.

However, many DevOps teams overlook or underestimate the importance of access frequency. AWS offers storage classes based on how often data is accessed – S3 Standard’s low latency and high throughput make it perfect for rapid-access fields such as content distribution, gaming applications, and big data analytics. At the other end of the range sits Glacier Deep Archive, built for low-cost storage of data that is very rarely accessed – for example, backups kept in case of widespread compromise.

Lifecycle policies can automatically transition objects between storage classes, and S3 Intelligent-Tiering even does this for you. The assumption that storage classes are simply set-and-forget can be a major driver of cost.
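
Lifecycle rules are only a few lines of configuration. Below is a minimal sketch using Python and boto3; the bucket name, prefix, and day thresholds are placeholders that should reflect your own access patterns.

import boto3

s3 = boto3.client("s3")

# Tier objects under the (hypothetical) "logs/" prefix down over time.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    # Move to infrequent access after 30 days...
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    # ...then to Glacier Deep Archive after a year.
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)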

Outbound Data

With storage and compute costs accounted for, the last major consideration is AWS data transfer fees. One of the more overlooked areas of cost impact, the quantity of data being transferred is as important as its destination. Keep in mind that, in general, only outbound transfers incur fees – inbound data travels into AWS free of charge.

The first ‘layer’ of cost impact is based on whether your data is going from AWS to the public internet or to another Amazon-based workload. Compute is more than just remote servers: once the third-party server has processed a request, each byte must be sent back to the user’s device over the public internet. As the standard AWS compute service, outbound EC2 pricing can shed some light on the sheer variety of cost. Up to one gigabyte is transferable for free every month; from there, the following 9.9 terabytes in that month are charged at $0.108 per GB. Given that the average AWS customer manages a total of 883 terabytes of data, today’s enterprise demands are light-years beyond even the most inexpensive tier. To reflect this, AWS offers economy-of-scale tiered pricing, with each tier offering a better price per GB of transferred data. This is yet another factor in the sheer unpredictability of AWS cost: even if you’re relying on one service in one region, changes in demand mean that your cloud spend is subject to constant fluctuation.
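
To get a feel for how the tiers compound, here is a back-of-the-envelope sketch in Python. The first tier matches the figures quoted above; the later rates are illustrative stand-ins, since actual prices vary by region and change over time.

# Tier table: (size of tier in GB, price per GB). Later rates are illustrative.
TIERS_GB = [
    (1, 0.0),              # first 1 GB each month is free
    (10_239, 0.108),       # the next ~9.9 TB, as quoted above
    (40_960, 0.085),       # illustrative rate for the next 40 TB
    (float("inf"), 0.07),  # illustrative rate beyond that
]

def monthly_egress_cost(total_gb: float) -> float:
    """Walk the tiers, billing each slice of traffic at its own rate."""
    cost, remaining = 0.0, total_gb
    for tier_size, rate in TIERS_GB:
        billed = min(remaining, tier_size)
        cost += billed * rate
        remaining -= billed
        if remaining <= 0:
            break
    return cost

# e.g. 50 TB of egress in one month
print(f"${monthly_egress_cost(50 * 1024):,.2f}")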

As we zoom out and look at services within their surrounding networks, the view of data transfer costs becomes even more cluttered. Consider the architectural best practice of setting up multiple Availability Zones: with a primary RDS database in place, both the ingress and egress of data to the second Availability Zone are charged. This means that – should your organization suddenly have to rely on the secondary Availability Zone – getting data to your consumers could suddenly cost a great deal more.

What is Cost Optimization in AWS?

Cost optimization starts with visibility. In this way, controlling AWS cost differs significantly from the traditional approach used for physical servers and on-premises software licenses. The traditional spending model can be neatly segmented into a few major roles: finance teams authorize budgets; procurement teams oversee vendor relationships; and IT teams handle installation and provisioning. In contrast, cloud computing has enabled virtually any end user from any business sector to independently and rapidly acquire technology resources. As IT and development teams are pushed to the end of this procurement chain, there’s very little incentive left to optimize resources that have already been acquired.

AWS cost optimization recognizes that this approach leaves IT teams chronically unprepared. One on-the-ground consequence is the popularity of on-demand instances – the highest-cost form of resource fulfillment. If siloed teams are the barrier to individual cost responsibility, then democratized access to real-time cost information is the key. While efficient DevOps teams help the organization remain agile and competitive, a solid foundation of Financial DevOps (FinOps) is vital as cloud service adoption expands. Without it, an aspiring cost optimization project risks slipping back into cloud confusion.

AWS Cost Optimization Best Practices

Since its inception in 2006, Amazon Web Services (AWS) has slashed its compute prices over 67 times. This may come as a surprise to more than a few customers, as real-world spend has increased astronomically: cloud usage has grown to the point that the cost reductions are negated, and expenses have steadily eaten up increasing swathes of revenue. Regaining control over AWS costs demands visibility, resource reservation, and a new level of team cohesion.

To effectively manage and optimize AWS costs, consider the following AWS cost optimization best practices and strategies:

Decommission Intelligently

First and foremost among cost management best practices is a proactive approach to provisioning and decommissioning resources. Regularly identify and remove unneeded applications, idle instances, unused EBS volumes and snapshots, and unattached Elastic IP addresses.
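
Unattached Elastic IPs are among the easiest of these to find programmatically. A minimal sketch with Python and boto3, assuming a single region (the region name is a placeholder):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# An Elastic IP with no AssociationId isn't attached to anything,
# meaning it bills hourly while doing no work.
for addr in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in addr:
        print("Unattached EIP:", addr["PublicIp"], addr.get("AllocationId"))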

What makes this a challenge are fluctuating monthly expenditures and the speed with which new environments are created. Discovering unused resources demands continuous refinement and a commitment to discipline. AWS CloudWatch offers one way to identify unused DynamoDB tables by measuring active reads and writes on each table and its global secondary indexes (GSIs). Identifying which tables have seen no activity over the last 30 days gives you a decent idea of what you can axe. Decommissioning them can place new strain on lean teams, so make sure to set reasonable KPIs.
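
The sketch below shows one way this check might look with Python and boto3: it sums each table’s consumed read and write capacity over the last 30 days and flags tables with zero activity. It checks table-level metrics only; per-index activity would need an additional GlobalSecondaryIndexName dimension.

from datetime import datetime, timedelta, timezone
import boto3

dynamodb = boto3.client("dynamodb")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

def total_activity(table: str) -> float:
    """Sum consumed read + write capacity for a table over the window."""
    activity = 0.0
    for metric in ("ConsumedReadCapacityUnits", "ConsumedWriteCapacityUnits"):
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/DynamoDB",
            MetricName=metric,
            Dimensions=[{"Name": "TableName", "Value": table}],
            StartTime=start,
            EndTime=end,
            Period=86400,  # one datapoint per day
            Statistics=["Sum"],
        )
        activity += sum(point["Sum"] for point in stats["Datapoints"])
    return activity

for table in dynamodb.list_tables()["TableNames"]:
    if total_activity(table) == 0:
        print("Candidate for decommissioning:", table)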

Support Devs to Decommission

Further decommissioning can be supported at the development level – for instance, unused EBS volumes can be avoided by selecting the Delete on Termination box when EC2 instances are created. Implementing guidelines at the development level helps prevent the need for decommissioning entirely.
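
The same setting can be baked into launch code rather than clicked in the console. A minimal sketch with Python and boto3; the AMI ID is a placeholder, and the device name assumes a typical Amazon Linux root volume.

import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",  # root device on many Amazon Linux AMIs
            # The root volume is deleted when the instance terminates,
            # so no orphaned EBS volume is left behind to bill silently.
            "Ebs": {"DeleteOnTermination": True},
        }
    ],
)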

Be Proactive About Performance Tuning

FinOps isn’t all about reducing cloud resources – it also covers how you provision more. Waiting until application performance drops is a surefire way to overspend on cloud resources. Instead, use this as an opportunity to implement robust tracking tools – Cost Explorer’s built-in forecasting can be a great first step toward seeing how much your application will need.
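
That forecasting is also exposed programmatically. A minimal sketch with Python and boto3 that pulls a month-ahead spend estimate from the Cost Explorer API:

from datetime import date, timedelta
import boto3

# Cost Explorer is a global service served from us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

start = date.today() + timedelta(days=1)  # forecasts must start in the future
end = start + timedelta(days=30)

forecast = ce.get_cost_forecast(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)
print("Next 30 days, forecast USD:", forecast["Total"]["Amount"])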

This allows you to be judicious with the resources you add, setting up a solid foundation for the following best practices. 

Reserve to Save

Now that you know how many resources are truly necessary, Reserved Instances are a cost management best practice with immediate effect.

For specific services such as Amazon EC2 and Amazon RDS, Reserved Instances can achieve savings of up to 75% compared to the corresponding on-demand capacity. RIs are offered in three payment varieties: all upfront, partial upfront, and no upfront.

Purchasing Reserved Instances involves a trade-off between upfront payment and discount: the larger the initial payment, the bigger the discount. For maximum savings, pay entirely upfront; partial-upfront Reserved Instances provide smaller discounts but require less initial expenditure.
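
The trade-off is easy to quantify. A back-of-the-envelope sketch in Python; every number here is an illustrative assumption, not a quoted AWS price:

HOURS_PER_YEAR = 8760

on_demand_rate = 0.096    # $/hour, illustrative on-demand rate
all_upfront_cost = 500.0  # illustrative 1-year all-upfront RI price

on_demand_annual = on_demand_rate * HOURS_PER_YEAR
effective_ri_rate = all_upfront_cost / HOURS_PER_YEAR

print(f"On-demand for a year: ${on_demand_annual:,.2f}")
print(f"Effective RI rate:    ${effective_ri_rate:.4f}/hour")
print(f"Savings:              {1 - all_upfront_cost / on_demand_annual:.0%}")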

Sell Unused Reservations

Should you find yourself hanging onto unused reservations, it’s crucial to put them to work – or else waste your investment. When you identify a Reserved Instance that is not being fully used, consider deploying it for a new application, or for an existing one currently operating on costlier on-demand instances. Alternatively, you can sell your Reserved Instances on the RI Marketplace.

Prioritize Optimization Strategies

When first exploring FinOps adoption, the sheer scale of change can be intimidating. One way around this analysis paralysis is to evaluate and rank optimization techniques based on their impact and the effort required. Tackle them systematically, with the following framework:

Seek out the right stakeholders

Consider their cost optimization blockers, the impact of their unoptimized spending, and what tools they would need for optimal visibility and cost reduction.

Map out the change network

By visualizing the change process, it becomes possible to identify which teams would be impacted by the potential optimization method, and whether new communication structures are required.

Put a cost to the project

Balance the time and tools that each optimization project will require against the potential savings. This helps provide a clear metric for project prioritization.

Regularly Review Your Architectural Choices

Quarterly or annually, reassess applications for architectural efficiency. Seek assistance from your AWS account team for an AWS Well-Architected Review. Its recommendations may, for instance, help you keep on top of infrequently accessed data by moving it to S3 Glacier, and long-term archives to Glacier Deep Archive.

Upgrade to Latest-Generation Instances

Stay informed about AWS updates and new features. Upgrading to the latest-generation instances can offer improved performance and functionality, often at a lower cost per unit of work.

FinOps Adoption Success Stories

When GlobalDots partnered with a major eCommerce giant, the client’s cloud operations were spread across 74 different accounts, and inefficient development habits had snowballed into a multi-million-dollar cloud spend. GlobalDots spent several months consolidating the client’s cloud resources, doubling the number of machines running on reservation contracts, and systematically streamlining outdated architecture. The eCommerce giant went on to enjoy a 20% reduction in its cloud bill – while growing by a third thanks to increased feature development and web traffic.

Large-scale cultural changes can feel glacially slow at times. However, FinOps adoption sits at the divide between analysis and action – with just a few major stakeholders on board, it becomes possible to transform your organization’s approach from the inside out.
