Create an Audit Log for DynamoDB Changes | Best Practices for Data Auditing

Elegant Solution for DynamoDB Audit Log


Question

You use DynamoDB to store customer banking data.

You need to create an audit log of all changes to the data.

It's important not to lose any information.

What is an elegant way to accomplish this?

Answers

Explanations


Answer: A

You can use DynamoDB Streams as triggers to execute AWS Lambda functions.

Triggers are custom actions you take in response to updates made to the DynamoDB table.

To create a trigger, first you enable Amazon DynamoDB Streams for your table.
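As a rough sketch, the stream can be enabled on an existing table with the AWS SDK for Python; the table name below is a placeholder, and NEW_AND_OLD_IMAGES is chosen so the audit trail captures both the previous and the updated version of each item:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Enable a stream on an existing table; NEW_AND_OLD_IMAGES records both the
# item as it was before the change and as it is afterwards.
dynamodb.update_table(
    TableName="CustomerBankingData",  # placeholder table name
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)
```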

Then, you write a Lambda function to process the updates published to the stream.
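A minimal handler might look like the following sketch; anything the function prints is written to its CloudWatch Logs log group, so no extra logging setup is required (the field names in the logged entry, other than the standard stream record keys, are illustrative):

```python
import json

def lambda_handler(event, context):
    """Process a batch of DynamoDB stream records and write them to CloudWatch Logs."""
    for record in event["Records"]:
        change = {
            "event": record["eventName"],                 # INSERT, MODIFY, or REMOVE
            "keys": record["dynamodb"].get("Keys"),
            "old_image": record["dynamodb"].get("OldImage"),
            "new_image": record["dynamodb"].get("NewImage"),
        }
        # print output from a Lambda function is captured in CloudWatch Logs.
        print(json.dumps(change))
    return {"processed": len(event["Records"])}
```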

For more information on using DynamoDB with Lambda, please visit the URL below:

http://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html

To create an audit log of all changes to the data in DynamoDB, we need to capture every write operation that occurs in the DynamoDB table. For this purpose, we can use DynamoDB Streams, which captures a time-ordered sequence of item-level modifications in a DynamoDB table and stores this information in a log. We can then use this log to trigger an AWS Lambda function that will log the changes to AWS CloudWatch Logs.
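To wire the stream to the function, the table's stream is registered as an event source for the Lambda function. A hedged sketch with boto3 follows; the function name and stream ARN are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# Point the Lambda function at the table's stream so each batch of changes
# invokes the audit-logging function.
lambda_client.create_event_source_mapping(
    FunctionName="ddb-audit-logger",  # placeholder function name
    EventSourceArn="arn:aws:dynamodb:...:table/CustomerBankingData/stream/...",  # placeholder ARN
    StartingPosition="LATEST",
    BatchSize=100,
)
```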

Option A: Use a DynamoDB Stream and stream all changes to AWS Lambda. Log the changes to AWS CloudWatch Logs, removing sensitive information before logging.

This option is the most elegant and efficient way to capture every write operation that occurs in DynamoDB, without losing any information. We can use a Lambda function to process the stream and extract the relevant information, which can then be logged to CloudWatch Logs. To ensure the security of sensitive information, we can remove it before logging.
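One way to strip sensitive information before a record reaches CloudWatch Logs is to mask the offending attributes inside the Lambda function. The attribute names below are hypothetical examples of what a banking table might contain:

```python
# Hypothetical attribute names considered sensitive in this table.
SENSITIVE_ATTRIBUTES = {"AccountNumber", "CardNumber", "SSN"}

def redact(image):
    """Return a copy of a DynamoDB stream image with sensitive attributes masked."""
    if not image:
        return image
    return {
        name: ("***REDACTED***" if name in SENSITIVE_ATTRIBUTES else value)
        for name, value in image.items()
    }
```

In the handler sketched earlier, redact() would be applied to the OldImage and NewImage values before they are serialized and printed.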

Option B: Before writing to DynamoDB, do a pre-write acknowledgment to disk on the application server, removing sensitive information before logging. Periodically rotate these log files into S3.

This option logs every write operation to the application server's local disk before writing to DynamoDB. While it can capture every write, it adds latency to each request and relies on the server's disk as a single point of failure: if the server or its disk is lost before the log files are rotated into S3, those audit records are gone. It is therefore less reliable than using DynamoDB Streams.

Option C: Use a DynamoDB Stream and periodically flush to an EC2 instance store, removing sensitive information before putting the objects. Periodically flush these logs to S3.

This option also uses a DynamoDB Stream, but instead of logging changes to CloudWatch Logs it buffers them on an EC2 instance store before flushing them to S3. Instance store volumes are ephemeral, so any records still held on the instance are lost if it stops or fails before the flush, and the setup is more complex to build and maintain than Option A.

Option D: Before writing to DynamoDB, do a pre-write acknowledgment to disk on the application server, removing sensitive information before logging. Periodically pipe these files into CloudWatch Logs.

This option is similar to Option B, but instead of rotating log files into S3, we periodically pipe them into CloudWatch Logs. Like Option B, it relies on the application server's disk, so audit records can be lost if the server fails before the files are shipped, and the periodic copy to CloudWatch Logs is less efficient than having DynamoDB Streams deliver changes directly.

In conclusion, the most elegant and reliable way to create an audit log of all changes to data in DynamoDB is to use DynamoDB Streams and stream all changes to an AWS Lambda function, which can log the changes to AWS CloudWatch Logs while removing sensitive information before logging.