January 5, 2023
Serverless computing is a modern cloud-based application architecture in which the application's infrastructure and support-services layer is completely abstracted from the software layer. Any computer program needs hardware to run on, so serverless applications are not truly "serverless"; they do run on servers, it's just that the servers are not exposed as physical or virtual machines to the developer running the code.
In a truly serverless architecture, program code runs on infrastructure hosted and managed by a third party, typically a cloud service, which not only takes care of provisioning, scaling, load balancing, and securing the infrastructure, but also installs and manages operating systems, patches, runtimes, code libraries, and all necessary support services. As far as the user is concerned, a serverless back end scales and load-balances automatically as the application load increases or decreases, all while keeping the application online. The user only pays for the resources consumed by a running application.
Market researchers expect the global serverless architecture market to grow significantly over the next years, from USD 7.3 billion in 2020 to USD 36.9 billion by 2028, with a CAGR of 21.71% from 2021 to 2028.
You can read more about how to get logs, platform metrics and platform traces from AWS Lambda directly into Sumo Logic using the new Telemetry API. Through this new integration between Sumo Logic and AWS, you gain enhanced observability, making it easier to monitor your Lambda functions.
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use.
These functions respond to events such as a message arriving on an SQS queue, a change in the state of a file within Amazon S3, or a request through Amazon API Gateway. The event is passed into the function as the first parameter. Lambda functions themselves are completely stateless, meaning you have no guarantee of where a function will execute, nor any notion of how many times it has already executed on a particular server, if at all.
Amazon Web Services (AWS) Lambda allows a developer to create a Lambda function, which can be uploaded and configured to execute in the AWS Cloud. Lambda functions can be written in various languages; this post will specifically address how to create an AWS Lambda function with Java 8. We'll go through the function itself, and then walk through the process of uploading and testing your function through the AWS Console.
The Internet is rife with "Hello, World!" examples, which generally do a poor job of explaining how a language works, and provide little help in solving actual problems in your Java application. For this example, we're going to create a simple ZIP code validator that responds to a new address being added to a DynamoDB table. Admittedly, this is definitely not the smartest use for a Lambda function, but it should demonstrate how to wire a workable Java function solution together.
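Before wiring anything up in AWS, the heart of the validator is just a regular expression matching five-digit ZIP codes, with an optional four-digit extension. Here is a minimal, standalone sketch of that logic (the class and method names are illustrative, not part of the AWS example itself):

```java
import java.util.regex.Pattern;

public class ZipCodeCheck {
    // Matches a five-digit ZIP code, optionally followed by a four-digit extension.
    private static final Pattern ZIP_CODE_PATTERN =
            Pattern.compile("^[0-9]{5}(?:-[0-9]{4})?$");

    public static boolean isValidZip(String zipcode) {
        return zipcode != null && ZIP_CODE_PATTERN.matcher(zipcode).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidZip("97229"));      // true
        System.out.println(isValidZip("97229-1234")); // true
        System.out.println(isValidZip("9722"));       // false
    }
}
```

The Lambda function we build below wraps this same pattern match in the plumbing needed to read from, and write back to, DynamoDB.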
We’ll need to take care of some basic setup before we start with this example. First, you’ll need access to the AWS management console. If you don’t already have access, you can sign up for a free account.
Creating and deploying this example shouldn’t cost you much, if anything, from within the free tier. However, please ensure you disable any triggers when you’re done, and scale back the provisioning or delete the DynamoDB table. You are responsible for understanding what charges you may incur by using AWS. This blog is merely a demonstration of potential uses.
Log into the AWS Console, and complete the following steps.
You’ll need to create a new table and enable a stream on it. To do this, navigate to the DynamoDB console and click on create table. Choose a name for your table and set the primary key. For this example, we set up a table called US_Address_Table with a primary key of id and type string, and no sort key. Ensure the use default setting is selected, and then click on create.
Ensure your table is selected, and then, from the overview tab, click on manage stream. Select new image from the view type, and then click on enable.
Before you leave this page, look for the Amazon Resource Name (ARN) under table details. It should look like this: arn:aws:dynamodb:us-west-2:321593873910:table/US_Address_Table
Copy it to your clipboard or a text document, as you’ll need it for the next step.
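An ARN is simply a colon-delimited string (arn:partition:service:region:account-id:resource), so if you ever need to pull pieces out of one programmatically, a plain split will do. A quick illustrative sketch, using the example ARN above (the class name here is hypothetical):

```java
public class ArnParts {
    // Returns the region segment of an ARN, e.g. "us-west-2".
    public static String region(String arn) {
        return arn.split(":", 6)[3];
    }

    // Returns the resource name from a DynamoDB table ARN,
    // e.g. "US_Address_Table" from "...:table/US_Address_Table".
    public static String tableName(String arn) {
        String resource = arn.split(":", 6)[5];
        return resource.substring(resource.indexOf('/') + 1);
    }

    public static void main(String[] args) {
        String arn = "arn:aws:dynamodb:us-west-2:321593873910:table/US_Address_Table";
        System.out.println(region(arn) + " / " + tableName(arn));
    }
}
```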
The function will need permission to execute, and it will also need permission to update the DynamoDB table. To set this up, navigate to the Identity and Access Management (IAM) console and click on roles. Click the create new role button, and choose a name for your role. (For this example, we called ours lambda-validator.) Click next step.
Click on the select button next to AWS Lambda. Use the filter to find the AWSLambdaDynamoDBExecutionRole. Check the box next to the role, and then click next step. On the next page, you’ll be shown the role details. Click on create role. You now have a role that allows your function to run and access DynamoDB streams.
Click on the name of your role to edit it. Click on the inline policies section at the bottom of the page. Since we don’t have any policies yet, this should be empty. You should see something similar to “There are no inline policies to show. To create one, click here.” Click the link. Click select under policy generator. Enter the following information:
Effect: Allow
AWS Service: Amazon DynamoDB
Actions: Update item
Amazon Resource Name (ARN): the ARN for your DynamoDB table that you copied a few paragraphs ago
Click add statement, and then next step at the bottom of the page. You can choose a new name for your policy at this point if you would like. I’m going to change mine to lambda-validator-update-dynamodb.
Click apply policy.
That’s enough configuration for now. Let’s move on to the fun stuff.
We're going to have a few dependencies for this function, so start by setting up a project that includes the following libraries. At the time of writing, we used version 1.1.0 for each.
aws-lambda-java-core
aws-lambda-java-events
Next, you’ll need an address object. We created the one below, and then used IntelliJ to automatically create accessor functions, which are excluded for brevity.
public class Address {
private String address1;
private String address2;
private String city;
private String state;
private String zipcode;
private Boolean validated;
}
And now, the Lambda function itself. We'll go over each part in detail afterward.
package com.example;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.UpdateItemSpec;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
import com.amazonaws.services.dynamodbv2.model.ReturnValue;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent.DynamodbStreamRecord;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.util.regex.Pattern;
public class AddressValidator implements RequestHandler<DynamodbEvent, String> {

    private static final String ADDRESS = "address";
    private static final String ID = "id";
    private static final String INSERT = "INSERT";
    private static final String TABLE_NAME = "US_Address_Table";
    private static final String ZIP_CODE_REGEX = "^[0-9]{5}(?:-[0-9]{4})?$";

    private DynamoDB dynamoDB;
    private Table table;

    public String handleRequest(DynamodbEvent ddbEvent, final Context context) {
        LambdaLogger logger = context.getLogger();
        Pattern pattern = Pattern.compile(ZIP_CODE_REGEX);
        ObjectMapper objectMapper = new ObjectMapper();
        for (DynamodbStreamRecord record : ddbEvent.getRecords()) {
            try {
                // Only validate newly inserted records; ignore updates and deletes.
                if (record.getEventName().equals(INSERT)) {
                    // Lazily create the client so we can use the region the event came from.
                    if (null == dynamoDB) {
                        dynamoDB = new DynamoDB(new AmazonDynamoDBClient()
                                .withRegion(Regions.fromName(record.getAwsRegion())));
                        table = dynamoDB.getTable(TABLE_NAME);
                    }
                    // The address attribute is stored as a JSON string; map it onto our POJO.
                    Address address = objectMapper.readValue(
                            record.getDynamodb().getNewImage().get(ADDRESS).getS(),
                            Address.class);
                    if (!Boolean.TRUE.equals(address.getValidated())) {
                        address.setValidated(pattern.matcher(address.getZipcode()).matches());
                        UpdateItemSpec updateItemSpec = new UpdateItemSpec()
                                .withPrimaryKey(ID, record.getDynamodb().getKeys().get(ID).getS())
                                .withUpdateExpression("set address = :a")
                                .withValueMap(new ValueMap()
                                        .withString(":a", objectMapper.writeValueAsString(address)))
                                .withReturnValues(ReturnValue.UPDATED_NEW);
                        table.updateItem(updateItemSpec);
                    }
                }
            } catch (IOException e) {
                logger.log("Exception thrown when validating Zip Code. " + e.getMessage());
            }
        }
        return "Validated " + ddbEvent.getRecords().size() + " records.";
    }
}
If we take a quick walk through that code, there are a few things to note. The name of the handler method and the implementation of the RequestHandler interface are integral to this function being imported into AWS as a Lambda function.
At first, we initialized the AmazonDynamoDBClient and table up front. We discovered that if you don't specify a region, the SDK assigns one at run time, which may or may not match where your DynamoDB table exists, and we don't know where the data is coming from until the function is invoked. Moving the initialization inside the handler also means these objects are only instantiated if validation is actually required.
The DynamodbEvent actually includes a list of DynamodbStreamRecords, which may be updates, inserts, or deletes. Because of our admittedly poor decision to write the validated record back into the same table, the same record was sent through the stream multiple times to be validated. Given that Lambda functions are charged per execution, this could get expensive fairly quickly. In hindsight, it would have made more sense to write invalid records to a different table, or submit them to an SQS queue for further processing elsewhere. As it stands, we only run the validation for inserted records that have not already been validated, ignoring all others.
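That guard condition can be sketched without any AWS dependencies. In this hypothetical, simplified model, eventName and alreadyValidated stand in for the real stream-record fields, and the method decides whether an update should be issued at all:

```java
public class ValidationGuard {
    // Decide whether a stream record should trigger a table update.
    // Only inserts are considered, and only if they were not validated yet,
    // so our own write-back does not trigger an endless validation loop.
    public static boolean shouldUpdate(String eventName, Boolean alreadyValidated) {
        return "INSERT".equals(eventName) && !Boolean.TRUE.equals(alreadyValidated);
    }

    public static void main(String[] args) {
        System.out.println(shouldUpdate("INSERT", null));  // first pass: update
        System.out.println(shouldUpdate("MODIFY", null));  // our own write-back: skip
        System.out.println(shouldUpdate("INSERT", true));  // already validated: skip
    }
}
```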
So we have a function, and now we're ready to upload it to AWS. With a function written in Java 8, you can upload it as a zip file or as a fat jar. At the time of writing, there appear to be some complications deploying the function from within a jar, so we would recommend a zip file if you have the option. The zip file should contain all the class files for your project, as well as any included libraries.
Let’s head to the Lambda console. From here, you can create your function and configure it. Click on create a Lambda function. At this point, you are presented with a selection of blueprints. At the time of writing, only Python and Node.js were represented in the samples. Click skip, and then on the triggers page, click next. We’ll configure the triggers when we’ve tested the function.
Choose a name and description for your Lambda, and select Java 8 as the runtime environment. We simply named ours AddressValidator and left the description blank. Go down to the next section, and click upload. Select the zip file containing your function.
For the handler, you want the full name of your function. If you copied our example exactly, that would be com.example.AddressValidator. For the role, select choose an existing role, and then select the role we created back at the beginning. (Ours was called lambda-validator.)
Under the advanced settings, you can select memory and timeout for the function. We found the function used a maximum of about 80MB of memory, but if we configured it with less than 256MB, it exceeded the 15-second timeout we had configured. We set ours up to use 512MB of memory with a 15-second timeout. Billing is calculated based on requests x memory usage x time for execution. Learn more about optimizing AWS Lambda cost and performance through monitoring. Ultimately, you’ll want to experiment to find your optimal combination.
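To get a feel for that billing formula, you can sketch the arithmetic directly. The prices below are placeholders for illustration only, not authoritative figures; check the current AWS Lambda pricing page for real numbers in your region:

```java
public class LambdaCostEstimate {
    // Hypothetical prices for illustration; actual prices vary by region and over time.
    static final double PRICE_PER_GB_SECOND = 0.0000166667;
    static final double PRICE_PER_REQUEST = 0.0000002; // i.e. $0.20 per million requests

    // cost = requests * pricePerRequest
    //      + requests * avgDuration * memory * pricePerGbSecond
    public static double monthlyCost(long requests, double avgDurationSeconds, double memoryGb) {
        double computeCost = requests * avgDurationSeconds * memoryGb * PRICE_PER_GB_SECOND;
        double requestCost = requests * PRICE_PER_REQUEST;
        return computeCost + requestCost;
    }

    public static void main(String[] args) {
        // e.g. 1 million invocations, 2-second average duration, 512MB (0.5GB)
        System.out.printf("Estimated monthly cost: $%.2f%n",
                monthlyCost(1_000_000, 2.0, 0.5));
    }
}
```

Plugging in different memory sizes and durations this way makes it easy to compare configurations before committing to one.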
Finally, we left the VPC field set to NoVPC, and then clicked on next. The function will now be uploaded and converted into a Java Lambda function.
To test the function, click actions, and then select configure test event. Here you can use an existing event template, but you'll need to tweak it to match your schema. Alternatively, you can simply use the JSON below if you're working in us-west-2.
{
  "Records": [{
    "eventID": "1",
    "eventVersion": "1.0",
    "dynamodb": {
      "Keys": {
        "id": {
          "S": "111-222-333"
        }
      },
      "NewImage": {
        "address": {
          "S": "{\"address1\":\"123 Main St\",\"city\":\"Portland\",\"state\":\"OR\",\"zipcode\":\"97229\"}"
        },
        "id": {
          "S": "111-111-111"
        }
      },
      "StreamViewType": "NEW_IMAGES",
      "SequenceNumber": "111-222-333",
      "SizeBytes": 26
    },
    "awsRegion": "us-west-2",
    "eventName": "INSERT",
    "eventSourceARN": "arn:aws:dynamodb:us-west-2:account-id:table/ExampleTableWithStream/stream/2015-06-27T00:48:05.899",
    "eventSource": "aws:dynamodb"
  }]
}
Click on save and test, and your function should execute. You may notice that a new record will appear in your database table at this point.
If you run into null pointer exceptions, check your code and your test event to ensure your field names are properly cased. If you have permission errors, you'll want to look at the ARN for the table and check your roles again. Finally, adding some logger.log statements like logger.log("Creating the updateItemSpec") to the function will prove invaluable when trying to find bugs in your code.
Updating the function is as easy as clicking the code tab, and uploading a new zip file.
When you have it running perfectly, you can then enable a trigger on your DynamoDB table. Go back to the DynamoDB console and select your table. Click on the triggers tab. Click on create trigger, and then choose existing Lambda function. Choose the validation function from the drop-down, select a batch size, and check enable trigger. For this example, we chose a batch size of one, since we planned to manually enter addresses into the table.
Finally, click on create, and you should be good to go. Manually enter an address record, and within a few seconds, you should see it updated to include the validated property in the address. To watch it run, you can also open CloudWatch and look at the logs.
If you're new to Sumo Logic, read on for how we can support the full breadth of your AWS monitoring and observability needs.