First, let's answer the question: why do we really need cloud services nowadays? Why not keep everything on the company's private (on-premises) servers? There are numerous benefits to using the cloud, and we will go through them one by one in this post.
There is no need to focus on infrastructure, hardware, or running and managing big data centres. All of this simply runs for you and shouldn't take up much of your attention.
There are different service models available for you:
No need to buy and maintain hardware anymore to run your applications.
Infrastructure is more 'liquid': it can be treated as a short-term asset, one that may last only for the development phase.
Infrastructure around the globe is at your disposal now. You can deploy to multiple so-called Regions. Every Region is composed of 2-3 Availability Zones (AZs). This means your application can be reached by customers quickly and with great resilience. For example, S3 is designed to deliver 99.999999999% durability and 99.99% availability.
No need to make capacity calculations upfront. Your application can scale automatically up and down according to current traffic. This also means you can save a lot of money, because you are not obliged to buy infrastructure that is needed only, let's say, a couple of times per year.
Infrastructure resources needed by your development team are just a few clicks away, not weeks away. Additionally, your company gets access to a vast range of ready-to-use managed services, so some in-house development can be reduced.
Lambda cold starts on Java used to be the worst
With the introduction of SnapStart, Java cold starts are reduced significantly, by around 10x, with no development effort and at no extra cost. The magic is based on taking a snapshot of the memory and disk state and caching it for future reuse. More information can be found here: https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html
Recently, AWS released SDK version 2 for Java, which allows you to use AWS services like S3, EC2, and DynamoDB inside your application. We will run sample code from AWS's GitHub examples repository and play around with the storage service S3, but many more services can be used later via the AWS SDK.
First, there are a couple of steps to complete before using AWS services with Java.
Please follow the official instructions:
Note: the 'free' account means you are allowed to use AWS services within certain limits for one year after creation. More information can be found here: https://aws.amazon.com/free
For example, you are given: 750 hours of running an EC2 or RDS server; 5 GB of S3 storage; 1 million calls to Lambda or SNS; and more.
Now that you have a working AWS account, you have your private cloud playground ready.
This is required to make API calls through the SDK, whether from the CLI or from your application.
More information on how to proceed can be found here: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html
Remember to enable 'programmatic access' and attach a policy allowing access only to the S3 service.
After this step you will have to copy two variables, 'access key' and 'secret access key', into your local file: C:\Users\<your_login>\.aws\credentials
aws_access_key_id = <your_access_key>
aws_secret_access_key = <your_secret_access_key>
Important tip: keep this file and these variables accessible only to your local user, because whoever holds them can use your AWS services. It is strongly advised to keep this file safe.
As a side note, there are many other ways to configure credentials: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
Let's check that your access to AWS S3 works from your local PC.
First, install the AWS CLI. This is a straightforward task: just run the installer and you have it: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
Then check the installation by running: aws --version
Lastly, check the connection to your AWS S3 by listing all buckets: aws s3 ls
All CLI commands available are described here: https://docs.aws.amazon.com/cli/latest/index.html
AWS has a great repository with short examples: https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/javav2 Please download it and take some time to check the contents of 'example_code' and 'usecases'.
Take a look at the dependencies we need in pom.xml and compile the project:
Finally, let's focus on 'example_code/s3' in the AWS GitHub repository.
In all our tests the interface software.amazon.awssdk.services.s3.S3Client will be used. Take a look directly in IntelliJ at the many methods it provides.
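As a first taste of that interface, a minimal sketch that lists your buckets might look like this (the region is a placeholder; pick the one you actually use):

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.Bucket;

public class ListBuckets {
    public static void main(String[] args) {
        // Credentials are picked up from the ~/.aws/credentials file configured earlier
        try (S3Client s3 = S3Client.builder().region(Region.EU_WEST_1).build()) {
            for (Bucket bucket : s3.listBuckets().buckets()) {
                System.out.println(bucket.name());
            }
        }
    }
}
```

This is the programmatic equivalent of the `aws s3 ls` check above.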
To make changes on S3, we first need to set a couple of variables in the config.properties file:
bucketName = andrewbuckettest1 - the name has to be globally unique, so choose your own
objectPath = c:/file1.txt - the local file that will be uploaded to S3
objectKey = Documents/file1.txt - the location where the uploaded file will be saved on S3
path = d:/file1_downloaded_from_s3.txt - the location where the downloaded file will be saved locally
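The tests can load these values with plain java.util.Properties; a minimal sketch of such a config reader (the class name is illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.Properties;

public class S3TestConfig {
    private final Properties props = new Properties();

    public S3TestConfig(InputStream in) {
        try {
            props.load(in); // parses "key = value" lines, as in config.properties
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public String bucketName() { return props.getProperty("bucketName"); }
    public String objectPath() { return props.getProperty("objectPath"); }
    public String objectKey()  { return props.getProperty("objectKey"); }
    public String path()       { return props.getProperty("path"); }
}
```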
Lastly, run the Java test methods below to trigger file operations on S3:
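For orientation, the upload and download operations boil down to calls like these (a sketch using the config values above, not the repository code verbatim):

```java
import java.nio.file.Paths;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class S3FileOperations {

    private final S3Client s3 = S3Client.create();

    // Uploads the local file at objectPath under objectKey in bucketName
    public void upload(String bucketName, String objectKey, String objectPath) {
        s3.putObject(PutObjectRequest.builder().bucket(bucketName).key(objectKey).build(),
                RequestBody.fromFile(Paths.get(objectPath)));
    }

    // Downloads objectKey from bucketName into the local file at path
    public void download(String bucketName, String objectKey, String path) {
        s3.getObject(GetObjectRequest.builder().bucket(bucketName).key(objectKey).build(),
                Paths.get(path));
    }
}
```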
That's it. After running each test, check the changes directly on your AWS S3!
Example code from the AWS official repository to list objects in a bucket looks like this:
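A sketch mirroring that repository example (bucket name and region are placeholders):

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;
import software.amazon.awssdk.services.s3.model.S3Object;

public class ListBucketObjects {
    public static void main(String[] args) {
        String bucketName = "andrewbuckettest1"; // use your own bucket name
        try (S3Client s3 = S3Client.builder().region(Region.EU_WEST_1).build()) {
            ListObjectsV2Request request = ListObjectsV2Request.builder()
                    .bucket(bucketName)
                    .build();
            // Print each object's key and size
            for (S3Object object : s3.listObjectsV2(request).contents()) {
                System.out.println(object.key() + " (" + object.size() + " bytes)");
            }
        }
    }
}
```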
When we run code from the AWS GitHub repository, we use the real AWS S3 service. But what if you don't want to rely on any external resource, no matter how reliable? You can do it on your laptop by simulating an AWS environment. Mocking and testing cloud applications has never been easier or faster: you only need to run one more Docker container and connect to it from your application or the CLI.
Run LocalStack via docker:
Check from the CLI whether LocalStack is running:
In Java, within your test profile, you need to point the client at the locally simulated AWS endpoint 'http://localhost:4566'.
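A sketch of such a client setup (forcePathStyle is available in recent SDK v2 versions; LocalStack accepts any static credentials):

```java
import java.net.URI;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class LocalStackClients {
    // S3 client pointed at LocalStack instead of real AWS
    public static S3Client localS3() {
        return S3Client.builder()
                .endpointOverride(URI.create("http://localhost:4566"))
                .region(Region.EU_WEST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("test", "test"))) // dummy values for LocalStack
                .forcePathStyle(true) // avoids DNS-style bucket addressing against localhost
                .build();
    }
}
```

The rest of your test code stays unchanged; only the client construction differs between the real and the simulated environment.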
AWS Lambda, in essence, is an event-driven processing service. With it, you can run serverless processes triggered by various types of events such as Amazon S3, API Gateway, Kafka, Alexa, IoT, SNS, SQS, and many more. Moreover, there's no need to manage any servers or operating systems to execute your Java code. The application will scale automatically, and you only pay for the compute time consumed. This service is ideal for building serverless backends for web, mobile, and IoT applications. Creating a new Lambda function on AWS can be quick and efficient for development teams, provided they know how to do it.
Pros of using Lambda:
Cons of using Lambda:
Below we will show the steps to create and run a Java AWS Lambda triggered by saving a PDF file to S3. Then we will convert this file to text with the Apache PDFBox library and save the result to a new output file on S3.
Step 1. Create Java handler class
There are a couple of ways you can implement an AWS Lambda handler in Java:
A. A custom POJO class with a dedicated method.
B. Classes implementing the AWS interfaces RequestHandler and RequestStreamHandler.
C. Using the Spring Cloud Function library.
Each solution has its own pros:
Option A is great when you want to run a Lambda with Java code fast. You only need to provide package.ClassName::handlerMethodName when creating the Lambda on AWS.
Option B gives you AWS interfaces that help you write code integrated with AWS events. Two dependencies will then be needed: aws-lambda-java-core and aws-lambda-java-events.
Option C can be used when you would like to run your Lambda code on different cloud providers (AWS, GCP, Azure) or even as a local REST endpoint. You only need to replace the development-time dependency spring-cloud-starter-function-web with a cloud-specific deployment dependency such as spring-cloud-function-adapter-aws.
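As an illustration of option A, a plain handler class can be as small as this (class and method names here are purely illustrative; on AWS you would register it as e.g. example.Handler::handleRequest):

```java
// Option A: a plain POJO handler with no AWS dependencies at all.
public class Handler {
    public String handleRequest(String input) {
        return "Processed: " + input;
    }
}
```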
Step 2. Add needed dependencies in pom.xml.
Let's assume we want to create Java code for option B (with the AWS interfaces); then we need to add dependencies like the following:
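For example, the relevant pom.xml fragment could look like this (version numbers are illustrative; check Maven Central for current ones):

```xml
<dependencies>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-core</artifactId>
        <version>1.2.2</version>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-events</artifactId>
        <version>3.11.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.pdfbox</groupId>
        <artifactId>pdfbox</artifactId>
        <version>2.0.27</version>
    </dependency>
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>s3</artifactId>
        <version>2.20.0</version>
    </dependency>
</dependencies>
```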
There is also a dependency on the Apache PDFBox library, to convert the input PDF into text, and a dependency on S3, to be able to read and save files inside our new Java Lambda handler method. You will also need to add the 'shade' plugin to produce a single jar with all the classes needed when the Lambda runs on AWS.
Step 3. Create a Java handler method.
It will read the PDF file uploaded to S3, convert it to text with PDFBox, and save the result in an output file on S3.
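A sketch of such a handler (assuming PDFBox 2.x and a trigger wired to the bucket's /input prefix; the class name and key layout are illustrative):

```java
import java.io.InputStream;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class PdfToTextHandler implements RequestHandler<S3Event, String> {

    private final S3Client s3 = S3Client.create();

    @Override
    public String handleRequest(S3Event event, Context context) {
        // The S3 notification carries the bucket and key of the uploaded PDF
        String bucket = event.getRecords().get(0).getS3().getBucket().getName();
        String key = event.getRecords().get(0).getS3().getObject().getKey();

        try (InputStream pdf = s3.getObject(GetObjectRequest.builder()
                     .bucket(bucket).key(key).build());
             PDDocument document = PDDocument.load(pdf)) {

            String text = new PDFTextStripper().getText(document);

            // e.g. input/file1.pdf -> output/file1.txt
            String outputKey = key.replaceFirst("^input/", "output/")
                                  .replaceFirst("\\.pdf$", ".txt");

            s3.putObject(PutObjectRequest.builder().bucket(bucket).key(outputKey).build(),
                    RequestBody.fromString(text));
            return outputKey;
        } catch (Exception e) {
            throw new RuntimeException("PDF conversion failed for " + key, e);
        }
    }
}
```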
Step 4. (optional, but good to have) Install the “AWS Toolkit” plugin for IntelliJ and SAM CLI.
These tools allow you to browse AWS resources inside IntelliJ and run operations like running/creating/deploying a Lambda function or uploading/deleting a file on S3 with a few clicks. Great to have. See https://aws.amazon.com/intellij/ and https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html.
Step 5. Create and configure Lambda on AWS.
Create an AWS Lambda:
Set permissions to allow your new Lambda to use the S3 service:
Add a trigger for your Lambda; it will fire for each PDF file saved in the given bucket:
Upload the Java code with the handler:
Set Java handler method:
Set SnapStart to reduce cold start time by around 10x, for free:
Now you should be in good shape to make some tests!
Step 6. Run and test your new Java AWS Lambda.
You can trigger your Java AWS Lambda functions in many ways, to name a few:
In our case, let's do a real-world test. Please upload one or more PDF files to the /input directory of your bucket. Then check the /output directory for the converted TXT file; it should be there as the result of your Java Lambda processing. Additionally, you can check the logs via the 'CloudWatch' service, or even better via the AWS Toolkit plugin.
Lastly, using Java and Cloud services for your business can provide numerous benefits, such as increased focus on core business, lower infrastructure costs, global access, scalability, and faster development.
With the simple steps outlined in this post, you can quickly integrate AWS services into your Java applications and take advantage of the cloud's powerful features.
Furthermore, the LocalStack option allows you to locally simulate an AWS environment for testing purposes. Don't pass up the opportunity to improve your company's efficiency and agility by utilizing Java and Cloud services. Contact us today to learn how we can assist your company in smoothly transitioning into the world of cloud computing with Java.