This sample demonstrates a way to ingest the NAS audit logs from an FSx for Data ONTAP file system into a CloudWatch log group without having to NFS or CIFS mount a volume to access them. It gathers the audit logs from all the SVMs within all the FSx for Data ONTAP file systems in a specified region. It skips any file systems whose credentials aren't provided in the supplied AWS Secrets Manager secret, or that do not have the appropriate NAS auditing configuration enabled. It maintains a "stats" file in an S3 bucket that keeps track of the last time it successfully ingested audit logs from each SVM, to help ensure it doesn't process an audit file more than once. You can run this script as a standalone program or as a Lambda function. These directions assume you are going to run it as a Lambda function. NOTE: There are two ways to install this program: either with the CloudFormation script found in this repo, or by following the manual instructions in this file.
- An FSx for Data ONTAP file system.
- An S3 bucket to store the "stats" file and, optionally, a copy of all the raw NAS audit log files. It will also hold the Lambda layer file needed to add a Lambda layer from a CloudFormation script.
- You will need to download the Lambda layer zip file from this repo and upload it to the S3 bucket. Be sure to preserve the name `lambda_layer.zip`.
- The "stats" file is maintained by the program. It is used to keep track of the last time the Lambda function successfully ingested audit logs from each SVM. Its size will be small (i.e. less than a few megabytes).
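The exact schema of the stats file is internal to the script, but conceptually it maps each SVM to the timestamp of the last audit log file that was successfully ingested. A minimal sketch of that bookkeeping, with hypothetical field and function names (not the script's actual ones), might look like:

```python
import json
from datetime import datetime, timezone

def should_ingest(stats: dict, svm_id: str, log_mtime: datetime) -> bool:
    """Return True if this audit log file is newer than the last one
    ingested for the given SVM (hypothetical stats schema)."""
    last = stats.get(svm_id, {}).get("lastIngested")
    if last is None:
        return True
    return log_mtime > datetime.fromisoformat(last)

def record_ingest(stats: dict, svm_id: str, log_mtime: datetime) -> None:
    """Update the stats entry after successfully ingesting a log file."""
    stats.setdefault(svm_id, {})["lastIngested"] = log_mtime.isoformat()

# The stats file itself is just this dictionary serialized as JSON
# and stored in the S3 bucket.
stats = {}
record_ingest(stats, "svm-01", datetime(2025, 1, 15, tzinfo=timezone.utc))
print(json.dumps(stats))
```

Because the file is small and only written after a successful ingest, a crash at worst re-processes the files rotated since the last recorded timestamp.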
- A CloudWatch log group to ingest the audit logs into. Each audit log file will get its own log stream within the log group.
- Have NAS auditing configured and enabled on the SVM within an FSx for Data ONTAP file system. Ensure you have selected the XML format for the audit logs. Also, ensure you have set up a rotation schedule, since the program will only act on audit log files that have been finalized, not the "active" one. You can read this knowledge base article for instructions on how to set up NAS auditing.
- Have the NAS auditing configured to store the audit logs in a volume with the same name in all SVMs on all the FSx for Data ONTAP file systems that you want to ingest the audit logs from.
- An AWS Secrets Manager secret that contains the credentials to use to obtain the NAS audit logs from all of the FSxN file systems.
- The secret should be in the form of key/value pairs where the key is the file system ID and the value is a dictionary with the keys `username` and `password`. For example:

  ```
  {
    "fs-0e8d9172fa5411111": {"username": "fsxadmin", "password": "superSecretPassword"},
    "fs-0e8d9172fa5422222": {"username": "service_account", "password": "superSecretPassword"}
  }
  ```
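Once retrieved from Secrets Manager, the secret string above is ordinary JSON, so looking up the credentials for a given file system is a parse and a dictionary lookup. A sketch (the call to Secrets Manager itself is omitted, and `lookup_credentials` is an illustrative helper, not part of the script):

```python
import json

def lookup_credentials(secret_string: str, fs_id: str):
    """Return (username, password) for a file system ID, or None if the
    secret has no credentials for it (such file systems are skipped)."""
    creds = json.loads(secret_string).get(fs_id)
    if creds is None:
        return None
    return creds["username"], creds["password"]

# Example secret string in the format described above.
secret = '''{
  "fs-0e8d9172fa5411111": {"username": "fsxadmin", "password": "superSecretPassword"},
  "fs-0e8d9172fa5422222": {"username": "service_account", "password": "superSecretPassword"}
}'''
print(lookup_credentials(secret, "fs-0e8d9172fa5411111"))
```

Returning `None` for an unknown file system ID mirrors the documented behavior of skipping file systems whose credentials aren't in the secret.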
- You have applied the necessary SACLs to the files you want to audit. The knowledge base article linked above provides guidance on how to do this.
- AWS Endpoints. Since the Lambda function runs within your VPC, it will not have access to the Internet, even if you can access the Internet from the subnet it runs in. Therefore, there needs to be a VPC endpoint for each of the AWS services that the Lambda function uses. Specifically, the Lambda function needs to be able to access the following AWS services:
- FSx.
- Secrets Manager.
- CloudWatch Logs.
- S3 - Note that typically there is a Gateway type VPC endpoint for S3, so you should not need to create a VPC endpoint for S3.
- Role for the Lambda function. Create a role with the necessary permissions to allow the Lambda function to do the following:
| Service | Actions | Resources |
|---|---|---|
| FSx | fsx:DescribeFileSystems | * |
| EC2 | ec2:DescribeNetworkInterfaces | * |
| EC2 | ec2:CreateNetworkInterface<br>ec2:DeleteNetworkInterface | arn:aws:ec2:<region>:<accountID>:* |
| CloudWatch Logs | logs:CreateLogGroup<br>logs:CreateLogStream<br>logs:PutLogEvents | arn:aws:logs:<region>:<accountID>:log-group:* |
| S3 | s3:ListBucket | arn:aws:s3:::* |
| S3 | s3:GetObject<br>s3:PutObject | arn:aws:s3:::*/* |
| Secrets Manager | secretsmanager:GetSecretValue | arn:aws:secretsmanager:<region>:<accountID>:secret:<secretName>* |
- <accountID> - is your AWS account ID.
- <region> - is the region where the FSx for ONTAP file systems are located.
- <secretName> - is the name of the secret that contains the credentials for the fsxadmin accounts.
Notes:
- Since the Lambda function runs within your VPC, it needs to be able to create and delete network interfaces.
- The AWS Security Group Policy builder incorrectly generates the resource lines for the `CreateNetworkInterface` and `DeleteNetworkInterface` actions. The correct resource line is `arn:aws:ec2:<region>:<accountID>:*`.
- It needs to be able to create log groups so it can create a log group for the diagnostic output from the Lambda function itself.
- Since the ARN of any Secrets Manager secret has random characters at the end of it, you must add the `*` at the end, or provide the full ARN of the secret.
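Pulling the table and notes together, the permissions side of the role is a standard IAM policy document. The sketch below assembles one as a Python dictionary; `build_policy` is an illustrative helper, and the region, account ID, and secret name are placeholders you would substitute. (Note that S3 ARNs do not include a region or account ID.)

```python
import json

def build_policy(region: str, account_id: str, secret_name: str) -> str:
    """Sketch of the IAM permissions policy described above.
    The region, account ID, and secret name are caller-supplied placeholders."""
    statements = [
        {"Effect": "Allow", "Action": "fsx:DescribeFileSystems", "Resource": "*"},
        {"Effect": "Allow", "Action": "ec2:DescribeNetworkInterfaces", "Resource": "*"},
        {"Effect": "Allow",
         "Action": ["ec2:CreateNetworkInterface", "ec2:DeleteNetworkInterface"],
         "Resource": f"arn:aws:ec2:{region}:{account_id}:*"},
        {"Effect": "Allow",
         "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
         "Resource": f"arn:aws:logs:{region}:{account_id}:log-group:*"},
        # S3 ARNs have no region or account ID component.
        {"Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::*"},
        {"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"],
         "Resource": "arn:aws:s3:::*/*"},
        # Secrets Manager appends random characters to the ARN, hence the trailing "*".
        {"Effect": "Allow", "Action": "secretsmanager:GetSecretValue",
         "Resource": f"arn:aws:secretsmanager:{region}:{account_id}:secret:{secret_name}*"},
    ]
    return json.dumps({"Version": "2012-10-17", "Statement": statements}, indent=2)

print(build_policy("us-east-1", "123456789012", "fsx-credentials"))
```

You could paste the printed JSON into the IAM console, or tighten the S3 resources to a specific bucket if you prefer least privilege.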
- Create a Lambda deployment package by:
  - Downloading the ingest_nas_audit_logs.py file from this repository and placing it in an empty directory.
  - Renaming the file to `lambda_function.py`.
  - Installing a couple of dependencies that aren't included with AWS's base Lambda runtime by executing the following command:
    ```
    pip install --target . xmltodict requests_toolbelt
    ```
  - Zipping the contents of the directory into a zip file:
    ```
    zip -r ingest_nas_audit_logs.zip .
    ```
- Within the AWS console, or using the AWS API, create a Lambda function with:
  - Python 3.11, or higher, as the runtime.
  - The permissions set to the role created above.
  - Under `Additional Configurations`, select `Enable VPC` and choose a VPC and subnet that will have access to all the FSx for ONTAP file system management endpoints that you want to gather audit logs from. Also, select a security group that allows TCP port 443 outbound. Inbound rules don't matter since the Lambda function is not accessible from the network.
  - Click `Create Function` and, on the next page under the `Code` tab, select `Upload From -> .zip file` and provide the .zip file created by the steps above.
  - From the `Configuration -> General` tab, set the timeout to at least 30 seconds. You may need to increase that if it has to process a lot of audit entries and/or a lot of SVMs.
- Configure the Lambda function by setting the following environment variables. For a Lambda function, you do this by clicking on the `Configuration` tab and then the `Environment variables` sub-tab.

  | Variable | Required | Description |
  |---|---|---|
  | fsxRegion | Yes | The region where the FSx for ONTAP file systems are located. |
  | s3BucketRegion | Yes | The region of the S3 bucket where the stats file is stored. |
  | s3BucketName | Yes | The name of the S3 bucket where the stats file is stored. |
  | copyToS3 | No | Set to `true` if you want to copy the raw audit log files to the S3 bucket. |
  | fsxnSecretARNsFile | No | The name of a file within the S3 bucket that contains the secret ARNs for each of the FSxN file systems. The format of each line should be just `<fsID>=<secretARN>`. For example: `fs-0e8d9172fa5411111=arn:aws:secretsmanager:us-east-1:123456789012:secret:fsxadmin-abc123` |
  | fileSystem1ID | No | The ID of the first FSxN file system to ingest the audit logs from. |
  | fileSystem1SecretARN | No | The ARN of the secret that contains the credentials for the first FSx for Data ONTAP file system. |
  | fileSystem2ID | No | The ID of the second FSx for Data ONTAP file system to ingest the audit logs from. |
  | fileSystem2SecretARN | No | The ARN of the secret that contains the credentials for the second FSx for Data ONTAP file system. |
  | fileSystem3ID | No | The ID of the third FSx for Data ONTAP file system to ingest the audit logs from. |
  | fileSystem3SecretARN | No | The ARN of the secret that contains the credentials for the third FSx for Data ONTAP file system. |
  | fileSystem4ID | No | The ID of the fourth FSx for Data ONTAP file system to ingest the audit logs from. |
  | fileSystem4SecretARN | No | The ARN of the secret that contains the credentials for the fourth FSx for Data ONTAP file system. |
  | fileSystem5ID | No | The ID of the fifth FSx for Data ONTAP file system to ingest the audit logs from. |
  | fileSystem5SecretARN | No | The ARN of the secret that contains the credentials for the fifth FSx for Data ONTAP file system. |
  | statsName | Yes | The name you want to use for the stats file. |
  | logGroupName | Yes | The name of the CloudWatch log group to ingest the audit logs into. |
  | volumeName | Yes | The name of the volume, on all the FSx for ONTAP file systems, where the audit logs are stored. |

  NOTE: You only need to set either the `fsxnSecretARNsFile` variable or the `fileSystemXID` and `fileSystemXSecretARN` variables. If both are provided, then `fsxnSecretARNsFile` will be used and the `fileSystemXID` and `fileSystemXSecretARN` variables will be ignored.
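Inside the function, these variables arrive through the process environment. A sketch of how the required/optional split and the `<fsID>=<secretARN>` file format might be handled (the helper names are illustrative, not the script's actual ones):

```python
import os

# The variables the table above marks as required.
REQUIRED = ["fsxRegion", "s3BucketRegion", "s3BucketName",
            "statsName", "logGroupName", "volumeName"]

def load_config() -> dict:
    """Read the environment variables, raising if a required one is unset."""
    missing = [name for name in REQUIRED if not os.environ.get(name)]
    if missing:
        raise ValueError(f"Missing required environment variables: {missing}")
    return {name: os.environ[name] for name in REQUIRED}

def parse_secret_arns(text: str) -> dict:
    """Parse fsxnSecretARNsFile contents: one '<fsID>=<secretARN>' per line.
    Blank lines are ignored (an illustrative choice, not documented behavior)."""
    arns = {}
    for line in text.splitlines():
        line = line.strip()
        if line:
            fs_id, _, arn = line.partition("=")
            arns[fs_id.strip()] = arn.strip()
    return arns
```

Splitting on the first `=` only (via `partition`) matters because the ARN itself never contains one, but guarding against future formats costs nothing.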
- Test the Lambda function by clicking on the `Test` tab and then clicking on the `Test` button. You should see "Executing function: succeeded". If not, click on the "Details" button to see what errors there are.
- After you have tested that the Lambda function is running correctly, add an EventBridge trigger to have it run periodically. You can do this by clicking on the `Add Trigger` button within the AWS console on the Lambda page and selecting `EventBridge (CloudWatch Events)` from the drop-down menu. You can then configure the schedule to run as often as you want. How often depends on how often you have set up your FSx for ONTAP file systems to rotate audit logs, and how up-to-date you want the CloudWatch logs to be.
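EventBridge schedules are expressed as either `rate(...)` or `cron(...)` expressions. A small helper for generating rate expressions (this is just the standard EventBridge syntax, nothing specific to this sample):

```python
def rate_expression(value: int, unit: str) -> str:
    """Build an EventBridge rate() expression, e.g. rate(15 minutes).
    EventBridge requires the singular unit name when the value is 1."""
    if unit not in ("minute", "hour", "day"):
        raise ValueError(f"unsupported unit: {unit}")
    return f"rate({value} {unit if value == 1 else unit + 's'})"

# For example, if the audit logs rotate hourly, running every 15 minutes
# keeps the CloudWatch log group reasonably up to date:
print(rate_expression(15, "minute"))  # rate(15 minutes)
```

Running more often than the rotation schedule only wastes invocations, since the script ignores the active (not yet finalized) audit log file.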
This repository is maintained by the contributors listed on GitHub.
Licensed under the Apache License, Version 2.0 (the "License").
You may obtain a copy of the License at apache.org/licenses/LICENSE-2.0.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" basis, without WARRANTIES or conditions of any kind, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
© 2025 NetApp, Inc. All Rights Reserved.