12 Aug, 2024
Threat modeling with MITRE ATT&CK – Keeping Up with Evolving AWS Security Best Practices and Threat Landscape

Threat modeling with MITRE ATT&CK

The MITRE ATT&CK framework has emerged as a valuable tool for organizations using AWS to understand, anticipate, and counteract cyber threats. This globally recognized framework offers a comprehensive matrix of tactics and techniques that are commonly employed by cyber adversaries. The MITRE ATT&CK for Cloud matrix, specifically, is tailored to address cloud environments. It provides insights into potential cloud-specific threats and vulnerabilities, which are particularly useful for AWS users.

Incorporating the MITRE ATT&CK framework into AWS security practices offers numerous benefits, as it provides a structured methodology for understanding and anticipating potential threats within your AWS landscape. Here are the key ways to integrate it:

  • Mapping to AWS services: By aligning the ATT&CK framework with AWS services, organizations can gain detailed insights into potential attack vectors. This involves understanding how specific ATT&CK tactics and techniques can be applied to or mitigated by AWS services, such as EC2, S3, or IAM (see the sketch after this list).
  • Utilization in security assessments: Incorporating the framework into security assessments allows for a more thorough evaluation of AWS environments. It helps in identifying vulnerabilities that could be exploited through known attack methodologies, thus enabling a more targeted approach to securing cloud assets. For instance, organizations can use the framework to simulate attack scenarios, such as a credential access attack, where an attacker might attempt to obtain AWS access keys through phishing or other methods.
  • Enhancing incident response: The framework can significantly improve incident response strategies. By mapping ongoing attacks to the ATT&CK matrix, incident response teams can more quickly understand the attacker’s Tactics, Techniques, and Procedures (TTPs), leading to faster and more effective containment and remediation.
  • Feeding continuous monitoring: The framework aids in the development of continuous monitoring strategies that are more aligned with the evolving threat landscape. It allows security teams to proactively look for indicators of attack tactics and techniques, enabling early detection of potential threats.
  • Developing customized threat models: Creating threat models based on ATT&CK scenarios tailored to AWS can significantly enhance defense strategies. For example, building a model around the exfiltration techniques can help in preparing defenses against potential data breaches from S3 buckets.
  • Developing red team exercises: Conducting red team exercises using ATT&CK-based scenarios provides a realistic test of AWS defenses. For example, simulating an attack where a red team uses lateral movement techniques to move between EC2 instances can test the effectiveness of network segmentation and access controls.
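
To make the service mapping concrete, here is a minimal Python sketch that pairs a few cloud-relevant ATT&CK technique IDs with AWS services that could detect or mitigate them. The technique IDs come from the public ATT&CK catalog, but the service pairings are illustrative choices for discussion, not an official mapping.

# Illustrative sketch: pairing ATT&CK techniques with AWS services.
# The service pairings below are example choices, not an official mapping.

ATTACK_TO_AWS = {
    "T1078.004": {  # Valid Accounts: Cloud Accounts
        "tactic": "Initial Access / Privilege Escalation",
        "detect_with": ["GuardDuty", "CloudTrail"],
        "mitigate_with": ["IAM (MFA, least privilege)"],
    },
    "T1530": {  # Data from Cloud Storage
        "tactic": "Collection",
        "detect_with": ["CloudTrail data events", "Macie"],
        "mitigate_with": ["S3 Block Public Access", "bucket policies"],
    },
    "T1098": {  # Account Manipulation
        "tactic": "Persistence",
        "detect_with": ["CloudTrail", "AWS Config"],
        "mitigate_with": ["IAM permission boundaries", "SCPs"],
    },
}

def coverage_report(mapping):
    """Print which AWS services cover each mapped technique."""
    for technique, info in mapping.items():
        print(f"{technique} ({info['tactic']}): "
              f"detect={', '.join(info['detect_with'])}; "
              f"mitigate={', '.join(info['mitigate_with'])}")

if __name__ == "__main__":
    coverage_report(ATTACK_TO_AWS)

A table like this can seed security assessments and red team scenarios: each technique row suggests both a detection to verify and an attack to simulate.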

Building on our discussion of MITRE ATT&CK and of handling emerging threats in general, we will next explore the wealth of resources available for continuous learning in AWS security.

20 Jul, 2024
Storage optimization – Introduction to Serverless on AWS

Storage optimization

Modern cloud applications ingest huge volumes of data—operational data, metrics, logs, etc. Teams that own the data might want to optimize their storage (to minimize cost and, in some cases, improve performance) by isolating and keeping only business-critical data.

Managed data services provide built-in features to remove or transition unneeded data. For example, Amazon S3 supports per-bucket data retention policies to either delete data or transition it to a different storage class, and DynamoDB allows you to configure the Time to Live (TTL) value on every item in a table. The storage optimization options are not confined to the mainstream data stores; you can specify the message retention period for each SQS queue, Kinesis stream, API cache, etc.
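
As an example of the S3 option mentioned above, here is a minimal boto3 sketch that configures a lifecycle rule on a bucket. The bucket name, prefix, and day counts are placeholder assumptions to adjust to your retention requirements.

import boto3

s3 = boto3.client("s3")

# Illustrative lifecycle rule: move objects under logs/ to Standard-IA
# after 30 days and delete them after 365 days. The bucket name and
# periods are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"}
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)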

DynamoDB manages the TTL configuration of the table items efficiently, regardless of how many items are in a table and how many of those items have a TTL timestamp set. However, in some cases, it can take up to 48 hours for an item to be deleted from the table. Consequently, this may not be an ideal solution if you require guaranteed item removal at the exact TTL time.
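
To adopt TTL, you designate one numeric attribute that holds each item’s expiry time in epoch seconds. A minimal boto3 sketch follows; the table name, attribute name, and retention period are placeholder assumptions.

import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on the table, designating "expires_at" as the attribute
# that holds the expiry time in epoch seconds.
dynamodb.update_time_to_live(
    TableName="Orders",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Write an item that DynamoDB may delete roughly 7 days from now.
# As noted above, deletion is not guaranteed at the exact TTL instant.
dynamodb.put_item(
    TableName="Orders",
    Item={
        "order_id": {"S": "order-123"},
        "expires_at": {"N": str(int(time.time()) + 7 * 24 * 3600)},
    },
)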

AWS Identity and Access Management (IAM)

AWS IAM is a service that controls authentication and authorization for access to AWS services and resources. It helps define who can access which services and resources, and under which conditions. Access to a service or resource can be granted to an identity, such as a user, or to a resource, such as a Lambda function. The object that holds the details of the permissions is known as a policy and is stored as a JSON document, as shown in Example 1-1.

Example 1-1. IAM policy to allow read actions on DynamoDB Orders table

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:BatchGet*",
        "dynamodb:Get*",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:eu-west-1:12890:table/Orders"
    }
  ]
}
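
A policy takes effect only once it is attached to an identity or a resource. As a hedged sketch of one way to do that, the following boto3 call attaches the policy from Example 1-1 as an inline policy to an execution role; the role and policy names are hypothetical.

import json
import boto3

iam = boto3.client("iam")

# The policy from Example 1-1, expressed as a Python dict.
orders_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:BatchGet*", "dynamodb:Get*", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:eu-west-1:12890:table/Orders",
        }
    ],
}

# Attach it as an inline policy to an existing role. The role name is a
# placeholder for whatever execution role your function or user assumes.
iam.put_role_policy(
    RoleName="orders-service-role",
    PolicyName="OrdersTableReadAccess",
    PolicyDocument=json.dumps(orders_read_policy),
)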

29 May, 2024
Incremental and Iterative Development – Introduction to Serverless on AWS

Incremental and Iterative Development

Iterative development empowers teams to develop and deliver products in small increments but in quick succession. As Eric Ries says in his book The Startup Way (Penguin), you start simple and scale fast. Your product constantly evolves with new features that benefit your customers and add business value.

Event-driven architecture (EDA), which we’ll explore in detail in Chapter 3, is at the heart of serverless development. In serverless, you compose your applications with loosely coupled services that interact via events, messages, and APIs. EDA principles enable you to build modular and extensible serverless applications.1 When you avoid hard dependencies between your services, it becomes easier to extend your applications by adding new services that do not disrupt the functioning of the existing ones.
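
To make this concrete, here is a hedged boto3 sketch of one service publishing a domain event to Amazon EventBridge. The event source, detail type, and payload are hypothetical; any number of downstream services can subscribe via EventBridge rules without the publisher knowing about them.

import json
import boto3

events = boto3.client("events")

# Publish an OrderCreated event. The source, detail type, and bus name
# are placeholders; subscribers attach via EventBridge rules, so the
# publisher needs no knowledge of who consumes the event.
events.put_events(
    Entries=[
        {
            "Source": "com.example.orders",
            "DetailType": "OrderCreated",
            "Detail": json.dumps({"orderId": "order-123", "total": 42.5}),
            "EventBusName": "default",
        }
    ]
)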

Multiskilled, Diverse Engineering Teams

Adopting new technology brings changes as well as challenges in an organization. When teams move to a new language, database, SaaS platform, browser technology, or cloud provider, changes in that area often require changes in others. For example, adopting a new programming language may call for modifications to the development, build, and deployment processes. Similarly, moving your applications to the cloud can create demand for many new processes and skills.

Influence of DevOps culture

The DevOps approach removes the barriers between development and operations, making it faster to develop new products and easier to maintain them. Adopting a DevOps model takes a software engineer who would otherwise focus solely on developing applications and involves them in operational tasks as well. You no longer work in a siloed software development cycle but are involved in its many phases, such as continuous integration and delivery (CI/CD), monitoring and observability, commissioning the cloud infrastructure, and securing applications, among other things.

  1. A module is an independent and self-contained unit of software.

Adopting a serverless model takes you many steps further. Though it frees you from managing servers, you are now programming the business logic, composing your application using managed services, knitting them together with infrastructure as code (IaC), and operating them in the cloud. Just knowing how to write software is not enough. You have to protect your application from malicious users, make it available 24/7 to customers worldwide, and observe its operational characteristics to improve it continually. Becoming a successful serverless engineer thus requires developing a whole new set of skills, and cultivating a DevOps mindset (see Figure 1-12).

Figure 1-12. Traditional siloed specialist engineers versus multiskilled serverless engineers

15 Feb, 2024
The AWS Well-Architected Framework – Introduction to Serverless on AWS

The AWS Well-Architected Framework

The AWS Well-Architected Framework is a collection of architectural best practices for designing, building, and operating secure, scalable, highly available, resilient, and cost-effective applications in the cloud. It consists of six pillars covering fundamental areas of a modern cloud system:

Operational Excellence

The Operational Excellence pillar provides design principles and best practices for defining organizational objectives and for preparing, operating, observing, and improving workloads in the cloud. Failure anticipation and mitigation plans, evolving applications in small but frequent increments, and continuous evaluation and improvement of operational procedures are some of the core principles of this pillar.

Security

The Security pillar focuses on identity and access management, protecting applications at all layers, ensuring data privacy and control as well as traceability and auditing of all actions, and preparing for and responding to security events. It instills security thinking at all stages of development and is the responsibility of everyone involved.

Reliability

An application deployed and operated in the cloud should be able to scale and function consistently as demand changes. The principles and practices of the Reliability pillar include designing applications to work with service quotas and limits, preventing and mitigating failures, and identifying and recovering from failures, among other guidance.

Performance Efficiency

The Performance Efficiency pillar is about selecting and using the right technology and resources to build and operate an efficient system. Monitoring and data metrics play an important role here, supporting constant review and trade-offs to maintain efficiency at all times.

Cost Optimization

The Cost Optimization pillar guides organizations to operate business applications in the cloud in a way that delivers value and keeps costs low. The best practices focus on financial management, creating cloud cost awareness, using cost-effective resources and technologies such as serverless, and continuously analyzing and optimizing based on business demand.

Sustainability

The Sustainability pillar is the latest addition to the AWS Well-Architected Framework. It focuses on contributing to a sustainable environment by reducing energy consumption; architecting and operating applications that reduce the use of compute power, storage space, and network round trips; using on-demand resources such as serverless services; and optimizing to the required level and no further.

12 Nov, 2023
AWS security knowledge landscape – Keeping Up with Evolving AWS Security Best Practices and Threat Landscape

AWS security knowledge landscape

AWS provides an extensive array of resources that can be leveraged to support your ongoing education in AWS security. These resources are designed to cater to a wide range of learning needs, from foundational knowledge to advanced security techniques. Let’s learn more about them:

  • AWS whitepapers: Dive into AWS whitepapers for comprehensive insights into security best practices, architectural recommendations, and service-specific security advice. They serve as an excellent starting point, laying a solid foundation for understanding the principles behind AWS security measures and how they can be applied in real-world scenarios.
  • AWS security blog: This dynamic platform is regularly updated with insights from AWS security experts, offering deep dives into new features, security enhancements, and detailed guides on implementing AWS security services. The blog’s content is organized by product and by level (100, 200, 300, or 400), making it easy for readers to filter articles based on their expertise and areas of interest. Whether you are a novice in cloud security or an experienced professional, the AWS Security Blog provides valuable knowledge tailored to your learning curve.
  • AWS security announcements: Staying updated with the latest security announcements from AWS is crucial for maintaining the security integrity of your AWS environment. These updates include information on security patches, vulnerabilities, and compliance issues that may affect your services. Integrating these bulletins into an RSS aggregator is a proactive way to ensure you receive timely updates, enabling you to implement necessary security measures swiftly (a small polling sketch follows this list).
  • AWS documentation: The official AWS documentation is an indispensable resource for anyone working with AWS services. It covers detailed instructions on configuring and using AWS services, including security protocols and best practices. The documentation is regularly updated to reflect the latest service features and security guidelines, making it an essential reference for day-to-day operations and troubleshooting.
  • AWS Well-Architected Framework: This framework is pivotal for learning how to design and operate secure, high-performing, and resilient cloud applications. Its Security pillar in particular offers guidance for protecting data and systems. Learning through this framework enables you to assess and improve your architecture in alignment with AWS best practices, enhancing your ability to identify and mitigate risks.
  • AWS workshops: AWS workshops are interactive learning sessions that offer hands-on experience with AWS services, facilitating the practical application of theoretical knowledge. These workshops, categorized by level and topic, allow learners to select sessions that match their current skills and learning objectives. Opt for higher-level (300 or 400) security workshops to gain practical knowledge on implementing AWS security services and features.
  • AWS re:Post: As a user-driven Q&A community, AWS re:Post is a platform where AWS users can ask questions and share knowledge about AWS services, including security. It is an excellent resource for getting insights from real-world experiences and expert advice on specific security challenges.
  • Amazon Q as your virtual mentor: Tap into Amazon Q for an engaging and responsive learning experience, obtaining swift solutions to your AWS security questions. Whether you are addressing a particular challenge or looking for advice on security best practices, Amazon Q serves as your on-demand guide, breaking down complex AWS concepts into understandable information. With its comprehensive coverage of AWS knowledge, including the topics discussed earlier, Amazon Q is an essential resource for AWS learners of every level.
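
As a small sketch of the RSS idea mentioned above, the following Python script polls the AWS security bulletins feed with the third-party feedparser library. The feed URL is the publicly advertised one at the time of writing; verify it before depending on it in automation.

import feedparser  # third-party: pip install feedparser

# Publicly advertised AWS security bulletins RSS feed; confirm it is
# still current before relying on it.
FEED_URL = "https://aws.amazon.com/security/security-bulletins/rss/feed/"

def latest_bulletins(limit=5):
    """Return (title, link, published) tuples for the newest bulletins."""
    feed = feedparser.parse(FEED_URL)
    return [
        (entry.title, entry.link, entry.get("published", "n/a"))
        for entry in feed.entries[:limit]
    ]

if __name__ == "__main__":
    for title, link, published in latest_bulletins():
        print(f"{published}  {title}\n    {link}")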

Incorporating these resources into your learning path allows for a structured and comprehensive approach to mastering AWS security. By staying informed about the latest developments and engaging with AWS security-related content, you can ensure that your skills remain sharp and relevant. Prioritizing your learning based on your career goals—whether you are aiming to be a cybersecurity generalist, an AWS expert, or a specialized AWS security professional—facilitates a focused and rewarding professional development journey in the realm of cloud security.

16 May, 2023
Personal advice from an experienced exam taker – Keeping Up with Evolving AWS Security Best Practices and Threat Landscape

Personal advice from an experienced exam taker

Having taken more than 20 exams in my career and only failing two on the first attempt, I recommend the following strategies for effective exam preparation:

  • Diversify learning resources: Use at least two different types of learning resources to cover all bases and limit the risk of bias from specific authors – for example, a Udemy video course and a book, or a video course and a Cloud Academy learning path.
  • Develop a detailed learning plan: Structure your study time and topics to cover comprehensively. Set achievable goals, regularly assess your progress, and adjust your plan as necessary to stay on track.
  • Schedule your exam date and time: Booking your exam in advance gives you a clear deadline to work toward. Schedule your exam for when you are most alert and focused. For many like myself, the morning is ideal. Additionally, after booking your exam, consider calling or emailing the exam center. Some centers are infrequently used, and issues such as no staff being present on the exam day can occur, leading to lengthy rescheduling frustrations.
  • Practice with exam quizzes: Engage in extensive quiz practice, focusing on retaking failed questions and undertaking full new quizzes to simulate the exam experience and improve time management. Aim to consistently score above 80% on fresh quizzes to gauge your readiness for the real exam day.
  • Manage time: Always aim to answer all questions, as guessing yields a better success rate than leaving answers blank. Calculate the time per question and set easy-to-remember milestones. As an example, the SCS exam requires 65 questions to be answered within 170 minutes, which works out to about 2 minutes and 37 seconds per question. Aim to answer 12 questions after the first 30 minutes, 24 questions by the 60-minute mark, and so on.
  • Prefer in-person exams for lengthy tests: While online proctoring allows for taking an exam from home, it can be overly strict, risking disqualification for minor infractions. My preference is to opt for test centers to avoid these issues.

Moving on from the AWS certification, let’s delve into continuous professional development, emphasizing the power of participating in events and networks.

15 Dec, 2022
Keeping abreast with new technologies – Keeping Up with Evolving AWS Security Best Practices and Threat Landscape

Keeping abreast with new technologies

The AWS ecosystem is dynamic, with the landscape of cloud technology and security threats constantly evolving. AWS and third-party vendors frequently introduce new security services, features, and tools designed to address these challenges. For AWS professionals, staying updated with these innovations is not just about enhancing security postures—it is also about ensuring their knowledge remains current. This section delves into effective strategies for keeping pace with these developments and seamlessly integrating them into your environment.

Staying informed

Information is power in the realm of security. Security professionals must proactively seek out information on the latest AWS security announcements and security blog posts related to their topics of interest. Regular participation in AWS-specific webinars and workshops provides opportunities to gain firsthand knowledge of the latest updates and how they can be applied in practice. These resources can help you to anticipate and mitigate emerging threats with the latest AWS security technologies.

The next crucial phase involves evaluating and testing these advancements within AWS.

Evaluating and testing

Upon discovering new security technologies, evaluating their potential impact and testing their effectiveness is key. This involves doing the following:

  • Defining security enhancement objectives: Evaluate how new services or features can bolster your security posture. Establish clear metrics for evaluating new security tools, including performance, compatibility with existing systems, and overall security enhancement.
  • Testing in a controlled environment: Implement new technologies in a sandbox or development environment first. This allows you to gauge their performance, identify any integration issues, and understand their operational implications without risking production systems.
  • Security testing: Determine the actual security benefits by simulating attack scenarios or using penetration testing tools. Evaluate how the technology improves your defenses against these simulated threats.
  • Performance testing: Measure how the new technology performs under different scenarios. Look at its responsiveness, speed, and resource consumption during peak and off-peak hours.

Following evaluation and testing, let’s transition to ensuring compatibility and compliance as our next essential step.

12 Oct, 2022
Ensuring compatibility and compliance – Keeping Up with Evolving AWS Security Best Practices and Threat Landscape

Ensuring compatibility and compliance

As new security technologies are adopted, ensuring compatibility with existing systems and compliance with relevant regulations is essential. This step requires a thorough review of how new tools interact with current architectures and an assessment of compliance implications, especially for industries subject to strict regulatory standards. This requires doing the following:

  • Mapping the current infrastructure: Start by creating an updated map or diagram of your current AWS infrastructure and security setup. This will help you pinpoint where new tools will be integrated.
  • Identifying integration points: Highlight specific areas within your infrastructure where the new security solutions will interact with existing systems. This could include network connections, data flows, APIs, and Lambda functions.
  • Compatibility assessment: Conduct a detailed analysis of how new security solutions integrate with your current AWS setup. Look for potential conflicts or dependencies that might affect their function.
  • Compliance evaluation: For organizations subject to industry regulations, it is crucial to ensure that new technologies do not compromise compliance. Review the security and compliance documentation provided by AWS or third-party vendors to understand their implications (see the sketch below for one way to check compliance status programmatically).
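
For AWS-native compliance checks, here is a hedged boto3 sketch that summarizes the compliance status of your AWS Config rules. It assumes AWS Config is enabled and rules are already deployed in the account and Region.

import boto3

config = boto3.client("config")

# Walk all pages of compliance results and print one status line per
# AWS Config rule (COMPLIANT, NON_COMPLIANT, etc.).
kwargs = {}
while True:
    resp = config.describe_compliance_by_config_rule(**kwargs)
    for rule in resp.get("ComplianceByConfigRules", []):
        name = rule["ConfigRuleName"]
        status = rule["Compliance"]["ComplianceType"]
        print(f"{name}: {status}")
    token = resp.get("NextToken")
    if not token:
        break
    kwargs["NextToken"] = token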

Next, let’s discuss how to effectively integrate these solutions into your environment.

Integrating into existing environments

The successful adoption of new security technologies depends on their integration into your existing AWS environment without causing disruptions. This involves doing the following:

  • Incremental deployments: Gradually introduce new technologies, starting with non-critical systems to minimize disruption and allow for adjustments based on initial observations.
  • Automating where possible: Leverage automation for the integration process to reduce manual errors and streamline deployment. Automation can also assist in maintaining configuration standards across your environment.
  • Updating security documentation: Revise your existing security documentation. This update should cover any new Standard Operating Procedures (SOPs) introduced by the integration.
  • Monitoring and adjusting: After deployment, continuously monitor for operational and security performance. Be prepared to make adjustments based on the outcome.

Integration lays the groundwork for our final focus—planning for future-proof security.

16 Jun, 2022
The essence of AWS security mastery – Closing Note

The essence of AWS security mastery

Our journey began with the shared responsibility model, a foundational concept that sets the stage for understanding the balance of security tasks between AWS and its users. We ventured deeper into infrastructure security, IAM, data protection, and the vast arsenal of AWS security services. We dissected VPC design, IAM intricacies, the power of encryption, the nuances of securing microservices, serverless deployments, multi-tenancy, automating security, and many more topics. Each chapter was built upon the last, brick by brick, layering knowledge to equip you with a robust framework for AWS security. With practical examples and in-depth discussions, the goal was to illuminate the path toward a comprehensive and resilient security posture.

The continuous evolution of cloud security

In the realms of AWS, cloud computing, and cybersecurity, change is the only constant. New threats emerge with increasing sophistication, prompting AWS to continuously roll out new services and features. These advancements are designed not just to combat emerging threats but also to meet the evolving demands of businesses. This ever-shifting landscape demands a proactive and informed security approach – an approach agile enough to embrace change and integrate cutting-edge technologies.

A significant part of AWS security’s future lies in the advancements of artificial intelligence (AI), machine learning (ML), and the burgeoning field of generative AI. These technologies are already reshaping our security strategies, from enhancing threat detection and prediction to automating security responses. AWS has begun this transformative journey, integrating AI and ML into services such as Amazon GuardDuty for intelligent threat detection and Amazon SageMaker for building ML models to predict and prevent security vulnerabilities. As AWS continues to innovate, your ability to harness these technologies is crucial for staying ahead in the security domain.

The journey of continuous learning

As the landscape of cloud security continuously evolves, so does the need for perpetual learning and adaptation. True mastery in AWS security transcends theory; it thrives on application. This book has provided the seeds, but the garden of your expertise must be continually cultivated. The vast expanse of cloud security demands relentless curiosity, exploration, and hands-on experience. To refine your skills, actively engage with the vibrant AWS community, dive into forums and workshops, and embark on real-world projects that challenge and expand your knowledge. Remember, the path to proficiency is one of perpetual learning, where each challenge conquered is a stepping stone toward mastery.

In conclusion

This book marks a significant milestone in your journey toward AWS security mastery, but it is merely the beginning. The road ahead is filled with opportunities to expand your knowledge and refine your skills. As you progress, carry the principles and insights from these pages with you. May this knowledge serve as a foundation upon which you will continue to build, innovate, and secure within the ever-evolving AWS ecosystem.

Keep learning, keep exploring, and keep securing your cloud with passion and expertise.

12 Mar, 2022
The emergence of networking – Introduction to Serverless on AWS

The emergence of networking

Early mainframes were independent and could not communicate with one another. The idea of an Intergalactic Computer Network or Galactic Network to interconnect remote computers and share data was introduced by computer scientist J.C.R. Licklider, fondly known as Lick, in the early 1960s. The Advanced Research Projects Agency (ARPA) of the United States Department of Defense pioneered the work, which was realized in the Advanced Research Projects Agency Network (ARPANET). This was one of the early network developments that used the TCP/IP protocol, one of the main building blocks of the internet. This progress in networking was a huge step forward.

The beginning of virtualization

The 1970s saw another core technology of the modern cloud taking shape. In 1972, IBM released its Virtual Machine (VM) operating system, allowing a single mainframe to host multiple operating environments. Building on the early time-sharing and networking concepts, virtualization filled in the other main piece of the cloud puzzle. The rapid technology iterations of the 1990s brought those ideas to realization and took us closer to the modern cloud. Virtual private networks (VPNs) and virtual machines (VMs) soon became commodities.

The term cloud computing originated in the mid to late 1990s. Some attribute it to computer giant Compaq Corporation, which mentioned it in an internal report in 1996. Others credit Professor Ramnath Chellappa and his lecture at INFORMS 1997 on an “emerging paradigm for computing.” Regardless, with the speed at which technology was evolving, the computer industry was already on a trajectory for massive innovation and growth.

The first glimpse of Amazon Web Services

As virtualization technology matured, many organizations built capabilities to automatically or programmatically provision VMs for their employees and to run business applications for their customers. An ecommerce company that made good use of these capabilities to support its operations was Amazon.com.

In the early 2000s, engineers at Amazon were exploring how their infrastructure could efficiently scale up to meet increasing customer demand. As part of that process, they decoupled common infrastructure from applications and abstracted it as a service so that multiple teams could use it. This was the start of the concept known today as infrastructure as a service (IaaS). In the summer of 2006, the company launched Amazon Elastic Compute Cloud (EC2) to offer virtual machines as a service in the cloud for everyone. That marked the humble beginning of today’s mammoth Amazon Web Services, popularly known as AWS!