December 14, 2022 By Chas Clawson

Defragging database security in a fragmented cloud world

Security can often be distilled down to protecting data. And with microservice-driven applications, the approach to cloud database security has evolved quite dramatically. Beyond just securing data in the cloud, it’s now also difficult to know where the data resides, where the data is flowing, and how this data should be classified.

Looking back: Monolithic application architectures

Let’s start by winding the clock back to what monolithic applications looked like not too long ago. Typically they were built as a “stack” using a two-tier architecture, where the application interfaced directly with a database. LAMP, for example, stands for Linux, Apache, MySQL, and PHP: all the components you need to build a basic web application on a single node. While this is simple to develop and deploy, it quickly becomes unsustainable as an application grows in scale and complexity. In that model, all the traditional security best practices rest on the shoulders of the application owner.

This includes:

  • Hardening the operating system

  • Patching and updating the OS, database, and server applications

  • Ensuring all communication between nodes is encrypted

  • Ensuring data is encrypted at rest

  • Handling identity, access, and authentication

It’s easy for a developer or system admin to unintentionally make a mistake, and it only takes one vulnerability for the entire stack and its data to be compromised. Additionally, running an entire application as a tightly coupled monolith introduces problems with data latency, data scalability, and multiple points of failure. Good luck finding the bottleneck in an application written as one sprawling Java JAR file, or taking down the entire system to patch it when a vulnerability is discovered. Innovation slows to a crawl.

From a business perspective, this approach just doesn't meet the needs of today's customers. Regardless of the business vertical, customers demand highly interactive apps in the palm of their hand. If a business can’t provide that, customers will go elsewhere. In order to deliver the real-time user experience customers expect, applications have to be architected from the outset to solve four distinct challenges:

  • Fast data ingest

  • Low latency experience

  • High concurrency & scalability

  • Highly secured data (in transit and at rest)

What does the modern application stack look like?

First and foremost, many of the components are “purpose-built”, often on top of open-source technology components hosted or managed by cloud providers. Developers are free to choose the language and database technology that best fit the use case at hand, and to package these services as standalone containers exposed via documented APIs. The database team can now focus solely on the database component, while other developers work on their own components. Services and technologies can be swapped in and out, as long as the exposed APIs remain the same.

Second, all the services and components are built and delivered “as code”. This means security and hardening have to be implemented at the earliest stages by the builders. Gone are the days when “all hands are on deck” as a new monolithic app is deployed over the weekend, hoping that all the new code works. Also gone are the days when an app dev team throws code over the wall to a security team to “make it secure”. Real-time monitoring and operations are also part of the code, with logs, metrics, and tracing ingrained at the deepest levels of the app before it is pushed to production. Thus, development, security, and operations are unified as DevSecOps.

So if LAMP is antiquated, what does a modern stack actually look like? Well, the beauty of cloud-native apps is that every stack is, again, purpose-built and can be different. Here is just one example:

  • Network Infrastructure: VPC, Elastic Load Balancing

  • Network Service Mesh/API Gateway: Istio or AWS App Mesh

  • Application Services: NodeJS, Kubernetes

  • Data Processing & Message Streaming: Kafka, RabbitMQ

  • Data Persistence: MongoDB Atlas or AWS DocumentDB

  • Logging and observability across all layers: Sumo Logic, OpenTelemetry

As with all things cloud, it’s critical to understand the shared responsibility model. Nearly every IT component you can shake a stick at is also offered “as a Service”. For the purposes of this article we’re focusing on database technology. In the example stack above, MongoDB can be used as a stand-alone self-managed database, or customers may opt for MongoDB Atlas, thus taking advantage of shared responsibility and offloading management to an external provider. This holds true for most modern database technologies, whether they are hosted in AWS, GCP or Azure.

So how do customers go through the “digital transformation” from old to new? How, for example, is an existing application rebuilt on top of modern database technology? There are established and proven patterns for these migrations. Insert propellerhead terms like rehosting, re-platforming, and automated migration services, but generally a database goes through this type of journey over its life: Commercial proprietary database (like Oracle on-prem) → Open-source database → Cloud-hosted open source → Cloud-managed database service

Amazing! Now we’ve solved the challenges listed above by moving to cloud technology (latency, scalability). Not so fast… what about security? One could argue that managing and manipulating data today has never been easier, but the flip side of that is securing the data spread across countless data stores has never been more difficult.

Before you can secure this modern data, two things must be done right out of the gate—discovery and classification. Fortunately, cloud providers have APIs for almost everything, and solutions like Cloud Security Posture Management (CSPM) and Data Security Posture Management (DSPM) automatically map and interrogate services to provide better visibility and protection across your cloud-native data assets, preventing data vulnerabilities and compliance violations.
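To make the discovery-and-classification step concrete, here is a minimal, self-contained sketch of the kind of check a CSPM/DSPM tool automates. A real tool would pull its inventory from cloud provider APIs; the inventory, field names, and keyword list below are illustrative assumptions, not any vendor's actual schema.

```python
# Hypothetical sketch of CSPM/DSPM-style discovery and classification.
# A real tool would build this inventory from cloud provider APIs;
# the field names and keywords here are illustrative only.

SENSITIVE_KEYWORDS = {"customer", "payment", "pii", "credential"}

def classify_data_store(store: dict) -> list[str]:
    """Return a list of findings for a single data store."""
    findings = []
    if not store.get("encrypted_at_rest", False):
        findings.append(f"{store['name']}: not encrypted at rest")
    if store.get("publicly_accessible", False):
        findings.append(f"{store['name']}: publicly accessible")
    if any(k in store.get("tags", "").lower() for k in SENSITIVE_KEYWORDS):
        findings.append(f"{store['name']}: holds a sensitive data class")
    return findings

inventory = [
    {"name": "orders-db", "encrypted_at_rest": True,
     "publicly_accessible": False, "tags": "payment"},
    {"name": "scratch-db", "encrypted_at_rest": False,
     "publicly_accessible": True, "tags": ""},
]

all_findings = [f for store in inventory for f in classify_data_store(store)]
```

The value of running this continuously, rather than as a one-off audit, is that new data stores are classified the moment they appear in the inventory.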

The common reaction here is, “Great! Another security tool.” However, keep in mind that all of the assets, inventory, and findings can be integrated into existing tools, like Sumo Logic’s Cloud Security Monitoring & Analytics solution, so system owners and developers can leverage this data without adding another monitoring tool to their daily rotation.

This also holds true for securing the CI/CD pipeline. If everything is delivered as code and pushed into production at light speed, developers are in a powerful (and potentially dangerous) position, so everything must be done in a highly secure way. Again, application security is now part of the development process, not an afterthought. We’ve seen the rise of DevSecOps and shift-left security approaches. This means designing security and observability into the approach right from the beginning: have a strategy for detections at each layer of the stack above, and beyond that, include telemetry and monitoring of code pulls and pushes before they are even deployed. That is true end-to-end security.
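A common shift-left control is a pre-deploy check that blocks commits containing hard-coded credentials. The sketch below shows the idea with two illustrative regular expressions; real scanners (and the patterns they use) are far more extensive, and the sample diff is invented for demonstration.

```python
import re

# Minimal sketch of a "shift-left" check that could run in a CI/CD
# pipeline before code is deployed: scan changed text for secret-like
# strings. The two patterns below are illustrative, not exhaustive.

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                 # AWS access key ID shape
    re.compile(r"(?i)password\s*=\s*['\"].+['\"]"),  # hard-coded password
]

def scan_for_secrets(text: str) -> list[str]:
    """Return secret-like strings found in the given source text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

# Hypothetical diff content a pipeline step might receive.
sample_diff = 'db_password = "hunter2"\nregion = "us-east-1"\n'
violations = scan_for_secrets(sample_diff)
```

Wiring a check like this into the pipeline (failing the build when `violations` is non-empty) is exactly the "before they are even deployed" telemetry described above.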

New tools allow users to view applications from the perspective of individual services or application components. When security meets development meets observability within a single solution, it becomes much easier to do root cause analysis and determine where things are breaking down. For example, Sumo Logic has an entity model that supports database entities for these technologies: MySQL, PostgreSQL, Cassandra, Redis, SQL Server, Oracle, MongoDB, Elasticsearch, Couchbase, and MariaDB. Analysts and developers can use powerful dashboards available in the app catalog and traverse their data using the hierarchical structure and grouping of their databases in an intuitive Explore tab. These entities are visible in an Entity Inspector, allowing users to quickly switch context between the inspector and security alerts, operational alerts, and the underlying logs, metrics, or traces for complete visibility at all application layers.

Cloud database security best practices

1. Understand your model of shared responsibility for each layer of the application stack

Understand liabilities and what risk has been transferred to service providers. This analysis should be done at the various levels of an application and the underlying services being built or leveraged. This includes federated identity and access management, managed database services, and all the underlying infrastructure gluing everything together.

2. Leverage modern micro-service/serverless architectures

Whenever possible, separate or decouple database services/servers, web servers, and other components for a more modular and secure microservice architecture. Services that will communicate with the database should be whitelisted in the network access controls. For example, the database server should reside in a private VPC. Within your log analytics solution, create rules monitoring VPC flow logs to ensure that only allowed systems are communicating directly with the database.
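A detection rule like the one described above can be sketched in a few lines: flag any accepted connection to the database port that does not originate from an allow-listed subnet. Field positions follow the default AWS VPC flow log format; the subnet, port, and sample records are illustrative assumptions.

```python
import ipaddress

# Sketch of a VPC flow log detection rule: alert on accepted traffic to
# the database port from outside the allow-listed app-tier subnet.
# Field positions follow the default VPC flow log record format.

DB_PORT = 5432                                           # e.g. PostgreSQL
ALLOWED_SOURCES = [ipaddress.ip_network("10.0.1.0/24")]  # hypothetical app tier

def flag_unexpected_db_traffic(record: str) -> bool:
    fields = record.split()
    srcaddr, dstport, action = fields[3], int(fields[6]), fields[12]
    if dstport != DB_PORT or action != "ACCEPT":
        return False
    src = ipaddress.ip_address(srcaddr)
    return not any(src in net for net in ALLOWED_SOURCES)

# Invented sample records in the default flow log layout.
logs = [
    "2 123456789010 eni-0a1b 10.0.1.5 10.0.9.7 44321 5432 6 10 840 0 60 ACCEPT OK",
    "2 123456789010 eni-0a1b 198.51.100.9 10.0.9.7 51822 5432 6 4 320 0 60 ACCEPT OK",
]
alerts = [r for r in logs if flag_unexpected_db_traffic(r)]
```

In a log analytics platform the same logic would be expressed as a scheduled search or rule rather than standalone code, but the allow-list comparison is identical.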

3. Continuously discover and verify

Leverage modern cloud security posture and vulnerability tools to identify issues before they become a problem. Stay ahead of your changing attack surface by generating security insights via use-case-driven rules, detections, and dashboard visualizations. Refer to modern Risk Register and SBOM approaches for best practices. Just because we are in a cloud world does not mean tried and tested IT Asset Management (ITAM) practices should not be followed.

4. Leverage database encryption at rest and in transit

Use the latest encrypted transport protocols and ciphers with every service or client accessing the data. Encrypt data at rest so that even data that is stolen is unusable by attackers without the encryption keys. A well-designed encryption and decryption approach should be transparent to users, applications, and services with minimal effect on I/O latency and throughput.
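On the client side, "latest encrypted transport protocols" usually means handing the database driver a TLS context that refuses legacy protocol versions and keeps certificate verification on. The sketch below uses Python's standard `ssl` module; the exact parameter a given driver accepts for the context varies, so treat this as a pattern rather than any specific driver's API.

```python
import ssl

# Sketch of enforcing modern transport encryption on the client side.
# Many database drivers accept an ssl.SSLContext (the parameter name
# varies by driver); the point is to refuse anything older than TLS 1.2
# and keep certificate and hostname verification enabled.

def make_db_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # verifies certs + hostnames
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3/TLS 1.0/1.1
    return ctx

ctx = make_db_tls_context()
```

Encryption at rest is typically enabled on the server or service side (for example, a managed database option backed by a key management service), which is why the client-side sketch covers only the transport half.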

5. Monitor and log database and application activity

Correlate across security logs, authentication logs, application logs, and even CI/CD activities for complete end-to-end visibility. For example, unify visibility across key Amazon Web Services (AWS), such as EC2, ECS, RDS, ElastiCache, API Gateway, Lambda, DynamoDB, Application ELB and Network ELB. If federated authentication or SAML is in use, ensure the logs are also being collected from the identity service providers. Also keep in mind that SQL injection continues to be one of the most dangerous vulnerabilities for online applications. If code allows users to add untrusted data to database queries, attackers can use this user input to steal valuable data, bypass authentication, or corrupt the records in your database. Monitor application and database logs for signs of injection attempts and anomalous query patterns.
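The SQL injection risk mentioned above comes down to whether untrusted input can change a query's structure. This self-contained demonstration uses an in-memory SQLite database with invented table data: the vulnerable version concatenates attacker input into the SQL, while the safe version binds it as a parameter.

```python
import sqlite3

# Demonstration of SQL injection using an in-memory SQLite database.
# The table and rows are invented sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

malicious = "nobody' OR '1'='1"

# Vulnerable: the attacker's input becomes part of the SQL itself,
# so the WHERE clause is always true and every row is returned.
leaked = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'").fetchall()

# Safe: the driver binds the value as data, so the query's structure
# cannot change and no user matches the malicious string.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()
```

Parameterized queries are the prevention; the monitoring advice above is the detection layer for the cases where vulnerable code slips through anyway.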

6. Least privilege user management

Follow the best practice principle of least privilege. Create roles that define the exact access rights required by a set of users. Then create users and assign them only the roles they need to perform their operations. A user can be a person or a client application. Use the SIEM tool to periodically audit all user accounts authenticating to the database. A user can have privileges across different databases. If a user requires privileges on multiple databases, create a single user with roles that grant applicable database privileges instead of creating the user multiple times in different databases. (Related Sumo article: LINK)
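The role model described above can be sketched abstractly: roles are named sets of privileges, and a user's effective access is the union of the roles granted. Real databases (MongoDB's `createRole`/`createUser`, for instance) express this in their own syntax; the role names and privileges below are invented for illustration.

```python
# Self-contained sketch of least-privilege role management: roles are
# named sets of (database, action) privileges, and one user holds roles
# spanning multiple databases instead of duplicate per-database accounts.
# Role names and privileges are hypothetical.

ROLES = {
    "orders_read":  {("orders", "find")},
    "orders_write": {("orders", "find"), ("orders", "insert"),
                     ("orders", "update")},
    "reports_read": {("reports", "find")},
}

def effective_privileges(user_roles: list[str]) -> set[tuple[str, str]]:
    """Union the privileges of every role assigned to one user."""
    privileges = set()
    for role in user_roles:
        privileges |= ROLES[role]
    return privileges

def can(user_roles: list[str], database: str, action: str) -> bool:
    return (database, action) in effective_privileges(user_roles)

# One user, two databases, read-only roles on both.
analyst = ["orders_read", "reports_read"]
```

Auditing then reduces to reviewing role definitions and role assignments, which is far more tractable than reviewing ad hoc per-user grants.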

Many of the best practices above can be monitored and validated using tools like Sumo Logic. This simplifies your internal compliance monitoring as well as helps organizations beholden to industry regulatory requirements. For example, as part of a PCI/DSS audit, you will need to provide artifacts or evidence to auditors.

Within Sumo Logic, you could:

  • Monitor database performance, error rates and failures for both operational and security use cases.

  • Provide a developer workbench that includes vulnerability scan results, misconfigurations, and benchmark deviations.

  • Provide access to the real-time database monitoring dashboards and alerts that show authentications, where they are coming from, and recent configuration changes, with higher severity on changes with security implications.

  • Demonstrate that log redaction is in use and that sensitive data is masked or obfuscated in log artifacts

  • Show complete visibility across the CI/CD toolchain, from development to deployment to operational monitoring.

  • Demonstrate advanced correlation and detection across network, authentication, activity logs, and threat intelligence.

As a SaaS solution, these are the problems Sumo Logic set out to solve over a decade ago as we pioneered moving security analytics into the cloud; something that other vendors thought couldn’t be done successfully. We built our solution on the same best practices we help our customers achieve with their own data. We successfully achieved analytics at cloud-scale in a modern multi-tenant application for real-time security monitoring and alerting.

We now ingest petabytes of data per day into our AWS-native SaaS platform. Thousands of customers log in every day and search across huge sets of data with minimal latency. And this is all mission-critical data meeting extremely high SLAs—stored and accessed in the most secure ways possible. This could only be achieved using modern application architectures.

Learn more about Sumo Logic’s security platform.


Chas Clawson

Field CTO, Security

As a technologist interested in disruptive cloud technologies, Chas joined Sumo Logic's Cyber Security team with over 15 years in the field, consulting with many federal agencies on how to secure modern workloads. In the federal space, he spent time as an architect designing the Department of Commerce ESOC SIEM solution. He also worked at the NSA as a civilian conducting Red Team assessments and within the office of compliance and policy. Commercially, he has worked with MSSP practices and security consulting services for various fortune 500 companies. Chas also enjoys teaching Networking & Cyber Security courses as a Professor at the University of Maryland Global College.
