Mike Mackrory

Mike Mackrory is a global citizen who has settled down in the Pacific Northwest — for now. By day he works as a Lead Engineer on a DevOps team, and by night he writes and tinkers with other technology projects. When he's not tapping on the keys, he can be found hiking, fishing and exploring both the urban and rural landscape with his kids. Always happy to help out another developer, he has a definite preference for helping those who bring gifts of gourmet donuts, craft beer and/or single-malt Scotch.

Posts by Mike Mackrory


Monitoring HAProxy logs and metrics with Sumo Logic


Ensure cloud security with these key metrics


Extend AWS observability beyond CloudWatch


Analyze JMX to better assess the health of your Java applications


How to monitor Amazon DynamoDB performance


How to Monitor Akamai Logs


Deploying AWS Microservices


Service Mesh Comparison: Istio vs. Linkerd


Log Analysis on the Microsoft Cloud

The Microsoft Cloud, also known as Microsoft Azure, is a comprehensive collection of cloud services that developers and IT professionals use to deploy and manage applications in data centers around the globe. Managing applications and resources can be challenging, especially when the ecosystem involves many different types of resources, and perhaps multiple instances of each. Being able to view logs from those resources and perform log analysis is critical to managing an environment hosted in the Microsoft Cloud effectively. In this article, we'll look at which logging services are available within the Microsoft Cloud environment, and then at the tools available to help you analyze those logs.

What Types of Logs are Available?

The Microsoft Cloud infrastructure supports different logs depending on the types of resources you are deploying. Let's look at the two main log types gathered within the ecosystem, and then investigate each in more depth:

Activity Logs
Diagnostic Logs

Application logs are also gathered within the Microsoft Cloud. However, these are limited to compute resources, and they depend on the technology used within the resource and on the applications and services deployed with that technology.

Activity Logs

All resources report their activity within the Microsoft Cloud ecosystem in the form of Activity Logs. These logs are generated by several categories of events:

Administrative – Creation, deletion and updating of the resource.
Alerts – Conditions which may be cause for concern, such as elevated processing or memory usage.
Autoscaling – When the number of resources is adjusted due to autoscale settings.
Service Health – Related to the health of the environment in which the resource is hosted. These logs contain information about events occurring external to the resource.

Diagnostic Logs

Complementary to the activity logs are the diagnostic logs, which provide a detailed view into the operations of the resource itself. Examples of actions which would be included in these logs are:

Accessing a secret vault for a key
Security group rule invocation

Diagnostic logs are invaluable for troubleshooting problems within the resource and for gaining additional insight into the resource's interactions with external resources. This information is also valuable in determining the overall function and performance of the resource. Providing this data to an analysis tool can offer important insights, which we'll discuss more in the next section.

Moving Beyond a Single Resource

Log viewing tools, including complex search filters, are available within the Microsoft Cloud console. However, these are only useful if you are interested in learning more about the current state of a specific instance. While there are times when this level of log analysis is valuable and appropriate, sometimes it can't accomplish the task. If you find yourself managing a vast ecosystem consisting of multiple applications and supporting resources, you will need something more powerful. Log data from the Microsoft Cloud is available through a Command Line Interface (CLI), a REST API and PowerShell cmdlets. The real power in the logs lies in being able to analyze them to determine trends, identify anomalies and automate monitoring, so that engineers can focus on developing additional functionality, improving performance and increasing efficiencies.
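Because the Activity Log is exposed through a REST API, it is straightforward to pull entries into your own scripts and tooling. The following Node.js sketch is a minimal illustration of that idea; it assumes you supply your own subscription ID and a bearer token (for example, one obtained with the Azure CLI command az account get-access-token) through the AZURE_SUBSCRIPTION_ID and AZURE_TOKEN environment variables, and it is a starting point rather than production code.

// activity-logs.js - a minimal sketch that pulls the last 24 hours of Activity Log
// entries from the Azure Monitor REST API. AZURE_SUBSCRIPTION_ID and AZURE_TOKEN
// are placeholders you would supply yourself.
const https = require('https');

const subscriptionId = process.env.AZURE_SUBSCRIPTION_ID;
const token = process.env.AZURE_TOKEN;

// Request activity log events from the last day for the subscription.
const start = new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString();
const filter = encodeURIComponent(`eventTimestamp ge '${start}'`);
const requestPath = `/subscriptions/${subscriptionId}/providers/Microsoft.Insights` +
  `/eventtypes/management/values?api-version=2015-04-01&$filter=${filter}`;

const options = {
  hostname: 'management.azure.com',
  path: requestPath,
  headers: { Authorization: `Bearer ${token}` },
};

https.get(options, (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => {
    const events = JSON.parse(body).value || [];
    // Print a one-line summary per event: timestamp, category and operation name.
    events.forEach((e) => {
      const category = e.category && e.category.value;
      const operation = e.operationName && e.operationName.value;
      console.log(`${e.eventTimestamp} [${category}] ${operation}`);
    });
  });
});

Running it with node activity-logs.js prints a one-line summary for each management event recorded over the last day, which you can then feed into whatever analysis tooling you prefer.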
Several companies, including Sumo Logic, have developed tools for aggregating and analyzing logs from the Microsoft Cloud. You can learn more about the value Sumo Logic can extract from your log data by visiting their Microsoft Azure Management page, but I'd like to touch on some of the benefits here in conclusion. Centralized aggregation of all your log data, both from the Microsoft Cloud and from other environments, makes it easier to gain a holistic view of your resources. In addition to helping employees find the information they need quickly, it also enhances your ability to ensure adherence to best practices and maintain compliance with industry and regulatory standards. The Sumo Logic platform also lets you leverage tested and proven algorithms for anomaly detection, and allows you to segment your data by source, user-driven events and many other categories to gain better insight into which customers are using your services, and how they are using them.


Configuring Your ELB Health Check For Better Health Monitoring


ALB vs ELB: Choosing Between an ELB and an ALB on AWS


Using AWS Config Rules to Manage Resource Tag Compliance


Tuning Your ELB for Optimal Performance


Apache Log Analysis with Sumo Logic


Jenkins, Continuous Integration and You: How to Develop a CI Pipeline with Jenkins

Continuous Integration, or CI for short, is a development practice wherein developers can make changes to project code and have those changes automatically trigger a process which builds the project, runs any test suites and deploys the project into an environment. The practice enables teams to rapidly test and develop ideas and bring innovation to market faster, and it allows teams to detect issues much earlier in the process than with traditional software development approaches.

With its roots in Oracle's Hudson server, Jenkins is an open source integration server written in Java. The server can be extended through the use of plugins and is highly configurable. Automated tasks are defined on the server as jobs, and can be executed manually on the server itself, or triggered by external events, such as merging a new branch of code into a repository. Jobs can also be chained together to form a pipeline, taking a project all the way from code to deployment, and in some cases even to monitoring of the deployed solution.

In this article, we're going to look at how to set up a simple build job on a Jenkins server, and at some of the features available natively on the server to monitor and troubleshoot the build process. This article is intended as a primer on Jenkins for those who have not used it before, or who have never leveraged it to build a complete CI pipeline.

Before We Get Started

This article assumes that you already have a Jenkins server installed on your local machine or on a server to which you have access. If you have not yet done so, the Jenkins community and documentation are an excellent source of information and resources to assist you. Jenkins is published under the MIT License and is available for download from its GitHub repository, or from the Jenkins website.

Within the Jenkins documentation, you'll find a Guided Tour which walks you through setting up a pipeline on your Jenkins box. One of the advantages of taking this tour is that it shows you how to create a configuration file for your pipeline, which you can store in your code repository, side-by-side with your project. The one downside of the examples presented is that they are very generic. For a different perspective on Jenkins jobs, let's look at creating a build pipeline manually through the Jenkins console.

Creating a Build Job

For this example, we'll be using a project on GitHub that creates a Lambda function to be deployed on AWS. The project is Gradle-based and will be built with Java 8, but the principles we're using could be applied to other code repositories and other build and deployment situations.

Log in to your Jenkins server, and select New Item from the navigation menu. Choose a name for your project (I'll be naming mine Build Example Lambda), select Freestyle project, and then scroll down and click OK. When the new project screen appears, follow the steps below. Not all of them are strictly necessary, but they'll make maintaining your project easier.

Enter a Description for your project, explaining what this pipeline will be doing.

Check Discard old builds, and select the Log Rotation strategy with Max # of builds to keep set to 10. These are the settings I use, but you may select different numbers. Having this option in place prevents old builds from taking up too much space on your server.

Next, we'll add a parameter for the branch to build, and default it to master. This will allow you to build and deploy from a different branch if the need arises. Select This project is parameterized, click on Add Parameter and select String Parameter, then enter a Name of BRANCH, a Default Value of master, and a Description such as "The branch from which to pull. Defaults to master."

Scroll down to Source Code Management and select Git. Enter the Repository URL; in my case, I entered https://github.com/echovue/Lambda_SQSMessageCreator.git. You may also add credentials if your Git repository is secured, but setting that up is beyond the scope of this article. For the Branch Specifier, we'll use the parameter we set up previously. Parameters are referenced by enclosing the parameter name in curly braces and prefixing it with a dollar sign, so update this field to read */${BRANCH}

For now, we'll leave Build Triggers alone. Under Build Environment, select Delete workspace before build starts, to ensure that each build starts with a clean environment.

Under Build, select Add build step and choose Invoke Gradle script. When I want to build my project locally, I enter ./gradlew build fatJar on the command line. To accomplish the same thing as part of the Jenkins job, select Use Gradle Wrapper, check From Root Build Script Dir, and for Tasks, enter build fatJar

Finally, I want to save the fat JAR which is created in the /build/libs folder of my project, as this is what I'll be uploading to AWS in the next step. Under Post-build Actions, select Add post-build action, choose Archive the artifacts, and in Files to archive, enter build/libs/AWSMessageCreator-all-*

Click on Save, and your job will have been created. To run it, simply click on Build with Parameters. If the job completes successfully, you'll have a JAR file which can then be deployed to AWS Lambda. If the job fails, you can click on the build number and then on Console Output to troubleshoot it.

Next Steps

If your Jenkins server is hosted on a network that can be reached from the network hosting your code repository, you may be able to set up a webhook to trigger the build job whenever changes are merged into the master branch. (A minimal sketch of triggering the job remotely through the Jenkins API appears at the end of this post.)

The next logical step is to automate the deployment of the new build to your environment if it builds successfully. Install the AWS Lambda Plugin and the Copy Artifact Plugin on your Jenkins server, and use them to create a job which deploys your Lambda to AWS by copying the JAR file we archived as part of the build job above. When the deployment job has been created, open the build job and click on the Configure option. Add a second Post-build Action to Build other projects, enter the name of the deployment project, and select Trigger only if build is stable. At this point, a successful run of the build job will automatically start the deployment job.

Congrats! You've now constructed a complete CI pipeline with Jenkins.
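As promised above, here is a minimal Node.js sketch of triggering the parameterized job we just built through the Jenkins remote access API. The JENKINS_URL, JENKINS_USER and JENKINS_API_TOKEN environment variables are placeholders for your own server and credentials, and depending on how your Jenkins security is configured you may also need to account for CSRF protection; treat this as an illustration rather than a drop-in solution.

// trigger-build.js - a minimal sketch that starts the "Build Example Lambda" job
// remotely, passing the BRANCH parameter defined in the job configuration.
// JENKINS_URL, JENKINS_USER and JENKINS_API_TOKEN are placeholders for your own values.
const https = require('https');

const jenkinsUrl = new URL(process.env.JENKINS_URL);   // e.g. https://jenkins.example.com
const auth = `${process.env.JENKINS_USER}:${process.env.JENKINS_API_TOKEN}`;
const branch = process.argv[2] || 'master';             // branch to build, defaults to master

const options = {
  hostname: jenkinsUrl.hostname,
  // Adjust the job name in the path if you named your job differently.
  path: `/job/Build%20Example%20Lambda/buildWithParameters?BRANCH=${encodeURIComponent(branch)}`,
  method: 'POST',
  auth: auth,                                           // HTTP basic auth with user + API token
};

const req = https.request(options, (res) => {
  // Jenkins normally answers with 201 Created and a Location header for the queued item.
  console.log(`Jenkins responded with ${res.statusCode}`);
  if (res.headers.location) {
    console.log(`Queued item: ${res.headers.location}`);
  }
});

req.on('error', (err) => console.error(`Failed to trigger build: ${err.message}`));
req.end();

Invoking it as node trigger-build.js feature/my-branch would queue a build of that branch; with no argument it falls back to master, just like the job's default.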


Building Java Microservices with the DropWizard Framework


AWS CodePipeline vs. Jenkins CI Server


5 Patterns for Better Microservices Architecture

Microservices have become mainstream in building modern architectures. But how do you actually develop an effective microservices architecture? This post explains how to build an optimal microservices environment by adhering to the following five principles:

Cultivate a Solid Foundation
Begin With the API
Ensure Separation of Concerns
Production Approval Through Testing
Automate Deployment and Everything Else

Principle 1: Great Microservices Architecture is Based on a Solid Foundation

No matter how great the architecture, if it isn't based on a solid foundation, it won't stand the test of time. Conway's law states that "...organizations that design systems ... are constrained to produce designs which are copies of the communication structures of these organizations..." Before you can architect and develop a successful microservice environment, it is important that your organization and corporate culture can nurture and sustain a microservice environment. We'll come back to this at the end, once we've looked at how we want to design our microservices.

Principle 2: The API is King

Unless you're running a single-person development shop, you'll want an agreed-upon contract for each service. Actually, even with a single developer, having a specific set of determined inputs and outputs for each microservice will save you a lot of headaches in the long run. Before the first line of code is typed, determine a strategy for developing and managing API documents. Once your strategy is in place, focus your efforts on developing and agreeing on an API for each microservice you want to develop. With an approved API for each microservice, you are ready to start development, and to begin reaping the benefits of your upfront investment.

Principle 3: Separation of Concerns

Each microservice needs to have and own a single function or purpose. You've probably heard of separation of concerns, and microservices are prime examples of that principle in action. Additionally, if your microservice is data-based, ensure that it owns that data and exists as the sole access point for that data. As additional requirements come to light, it can be very tempting to add an endpoint to your service that kind of does the same thing (but only kind of). Avoid this at all costs. Keep your microservices focused and pure, and you'll avoid the nightmare of trying to remember which service handled that one obscure piece of functionality. (A minimal sketch of such a narrowly scoped service appears at the end of this post.)

Principle 4: Test-Driven Approval

Back in the old days, when you were supporting a large monolithic application, you'd schedule your release weeks or months in advance, including an approval meeting which may or may not have involved a thumbs up/thumbs down vote, or fists of five to convey approval or confidence in the new release. With a microservices architecture, that changes. You're going to have a significant number of much smaller applications, and if you follow the same release process, you'll be spending a whole lot more time in meetings. If, however, you practice test-driven development (TDD) and write comprehensive contract and integration tests as you develop each application, you'll finish each service with a full test suite which you can automate as part of your build pipeline. Use these tests as the basis for your production deployment approval process, rather than relying on the approval meetings of yore.
Principle 5: Automate, Automate, Automate

As developers and engineers, we're all about writing code which can automate and simplify the lives of others. Yet, too often, we find ourselves trapped in a world of manual deployments, manual testing, manual approval processes and manual change management. When it comes to microservices, automating these processes is less a convenience and more of a necessity, especially as your code base and repertoire of microservices expands and matures. Automate your build pipelines so that they trigger as soon as code is merged into the master branch. Automate your tests, static code analysis, security scans and any other process you run your code through, and then, on condition of all checks completing successfully, automate the deployment of the updated microservice into your environment. Automate it all! Once your microservice is live, ensure that you have a means by which the service can be automatically configured as well. An automated configuration process has the added benefit of making it easier to both troubleshoot and remember where it was that you set that "obscure" property for that one microservice.

Conclusion: Back to the Foundation

At the start of this article, I mentioned Conway's law and said I'd come back to the kind of organization you need in order to facilitate successful microservice development. This is what your organization should look like: Each team should have a specific aspect of the business on which to focus, and be assigned development of functionality within that area. Determine the interfaces between teams right up front. The key is to encourage active and engaging communication and collaboration between the teams. Doing so will help avoid broken dependencies and ensure that everyone remains focused. Empower teams with the ability to make their own decisions and to own their services. Common sense may dictate some basic guidelines, but empowering teams is a lot like automating processes: it will both fuel innovation and make your life easier.

Now, with all that said, you probably don't have the luxury of building a brand new development department from the ground up. More likely, you're going to be implementing changes within a group of folks who are used to doing things a different way. Just as with microservices, or any software project for that matter, focus on incremental improvements. Pick an aspect of the culture or the organization that you'd like to change, determine how to implement that change and how to measure success, and test it out. When you succeed, pick another incremental change to work on; when you fail, try a different approach or a different aspect of the business to improve.
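To make Principles 2 and 3 a little more concrete, here is a minimal, hypothetical Node.js sketch of a narrowly scoped microservice built with Express. The service exposes an agreed-upon contract of exactly two endpoints for a single concern, customer profiles, and acts as the sole access point for that data. The route names and the in-memory store are illustrative assumptions, not a prescription.

// profile-service.js - a hypothetical, narrowly scoped microservice sketch.
// It owns exactly one concern (customer profiles) and is the sole access point
// for that data. The route names and in-memory store are illustrative only.
const express = require('express');

const app = express();
app.use(express.json());

// In a real service this would be a datastore owned exclusively by this service.
const profiles = new Map();

// The agreed-upon contract: two endpoints, nothing that "kind of" does something else.
app.get('/profiles/:id', (req, res) => {
  const profile = profiles.get(req.params.id);
  if (!profile) {
    return res.status(404).json({ error: 'profile not found' });
  }
  res.json(profile);
});

app.put('/profiles/:id', (req, res) => {
  profiles.set(req.params.id, { id: req.params.id, ...req.body });
  res.status(204).end();
});

app.listen(3000, () => console.log('profile service listening on port 3000'));

The point of the sketch is the shape, not the implementation: one concern, one owner for the data, and a contract small enough to document and test before the first line of business logic is written.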


Logging Node.js Lambda Functions with CloudWatch

When AWS Lambda was introduced in 2014, the first language to be supported was Node.js. Deploying a Node.js function as an AWS Lambda allows it to execute in a highly scalable, event-driven environment. Because the user has no control over the hardware or the systems which execute the function, however, logging from within Node.js Lambda functions is of utmost importance for monitoring, as well as for diagnosing and troubleshooting problems within the function itself. This post will consider the options for logging within a Node.js function. It will then briefly outline the process of uploading a Node.js function as a Lambda function, configuring the function to send its logs to CloudWatch, and viewing those logs following the execution of the Lambda.

Logging with Node.js

Within a Node.js function, logging is accomplished by executing the appropriate log function on the console object. Possible log statements include:

console.log()
console.error()
console.warn()
console.info()

In each case, the message to be logged is included as a String argument. If, for example, the event passed to the function included a parameter called phoneNumber, and we validated the parameter to ensure that it contained a valid phone number, we could log a validation failure as follows:

console.error("Invalid phone number passed in: " + event.phoneNumber);

Sometimes the best way to learn something is to play with it, so let's create an example Lambda to see logging in action. First, we'll need to set up the environment to execute the function.

Configuring it all in AWS

You'll need access to an AWS environment in order to complete the following steps. All of this should be available if you are using the free tier; see Amazon for more information on how to set up a free account. Once you have an AWS environment you can access, and you have logged in, navigate to the Identity and Access Management (IAM) home page. You'll need to set up a role for the Lambda function to execute in the AWS environment.

On the IAM home page, you should see a section which lists IAM Resources (the number next to each will be 0 if this is a new account). Click on the Roles label, and then click on the Create New Role button on the next screen. You'll need to select a name for the role; in my case, I chose ExecuteLambda. Remember the name you choose, because you'll need it when creating the function. Click on the Next Step button.

You'll now be shown a list of AWS Service Roles. Click the Select button next to AWS Lambda. In the Filter: Policy Type box, type AWSLambdaExecute. This will limit the list to the policy we need for this role. Check the checkbox next to the AWSLambdaExecute policy, and then click on Next Step. You'll be taken to a review screen, where you can review the new role to ensure it has what you want, and then click on the Create Role button to create the role.

Create Your Lambda Function

Navigate to the Lambda home page at https://console.aws.amazon.com/lambda/home, and click on Create Lambda Function. AWS provides a number of sample configurations you can use, but we're going to be creating a very simple function, so scroll down to the bottom of the page and click on Skip. AWS also provides the option to configure a trigger for your Lambda function. Since this is just a demo, we're going to skip that as well; click on the Next button in the bottom left corner of the page. Now we get to configure our function.
Under Configure function, choose a Name for your function and select the Node.js 4.3 Runtime. Under Lambda function code, choose Edit code inline from the Code entry type dropdown, and then enter the following in the code window:

exports.handler = function(event, context) {
    console.log("Phone Number = " + event.phoneNumber);

    if (event.phoneNumber.match(/\d/g).length != 10) {
        console.error("Invalid phone number passed in: " + event.phoneNumber);
    } else {
        console.info("Valid Phone Number.")
    }

    context.done(null, "Phone number validation complete");
}

Under Lambda function handler and role, set the Handler to index.handler, set the Role to Choose an existing role, and for Existing role, select the name of the role you created previously. Click on Next, review your entries, and then click on Create function.

Test the Function and View the Logs

Click on the Test button and enter test data containing a phoneNumber value, then click Save and test. You should see the test results at the bottom of the screen. By default, Lambdas send their logs to CloudWatch. If you click on the Monitoring tab, you can click on the View logs in CloudWatch link to view the logs in CloudWatch. If you return to the Lambda code page, you can click on Actions, select Configure test event, and then enter different data to see how the result changes. Since the validator only checks that the phone number contains 10 digits, you can simply try it with 555-1234 as the phone number and watch it fail.

Please note that this example is provided solely to illustrate logging within a Node.js function, and should not be used as an actual validation function for phone numbers.
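If you would like to exercise the handler and its log statements without touching AWS at all, the short harness below will do the trick. It assumes the function above has been saved locally as index.js, and it stubs out just enough of the context object for this older, context.done-style handler to complete; the phone numbers are arbitrary examples.

// local-test.js - a minimal local harness for the handler above (saved as index.js).
// It passes a stubbed context object whose done() callback simply prints the result.
const lambda = require('./index');

const fakeContext = {
  done: (err, message) => console.log('context.done:', err || message),
};

// A valid ten-digit number, then an invalid one, to exercise both log paths.
lambda.handler({ phoneNumber: '5551234567' }, fakeContext);
lambda.handler({ phoneNumber: '555-1234' }, fakeContext);

Running node local-test.js prints the console.log, console.info and console.error output for a valid and an invalid number, which is exactly what you will later see collected in CloudWatch.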


Logging S3 API Calls with CloudTrail


Optimizing AWS Lambda Cost and Performance Through Monitoring

In this post, I'll be discussing the use of monitoring as a tool to optimize the cost and performance of AWS Lambda. I've worked on a number of teams, and almost without exception, the need to put monitoring in place has featured prominently in early plans. Tickets are usually created and discussed, and then placed in the backlog, where they seem to enter a cycle of being important, but never quite important enough to be placed ahead of the next phase of work. In reality, especially in a continuous development environment, monitoring should be a top priority, and with the tools available from AWS and from organizations like Sumo Logic, setting up basic monitoring shouldn't take more than a couple of hours, or a day at most.

What exactly is AWS Lambda?

AWS Lambda from Amazon Web Services (AWS) allows an organization to introduce functionality into the AWS ecosystem without the need to provision and maintain servers. A Lambda function can be uploaded and configured, and then executed as frequently as needed without further intervention. Contrary to a typical server environment, you don't have any control over, nor do you need insight into, infrastructure details like CPU usage or available memory, and you don't have to worry about scaling your functionality to meet increased demand. Once a Lambda has been deployed, the cost is based on the number of requests and a few key elements we'll discuss shortly.

How to set up AWS Lambda monitoring, and what to track

Before we get into how to set up monitoring and which data elements should be tracked, it is vital that you commit to putting an effective monitoring system in place. And with that decision made, AWS helps you right from the start. Monitoring is handled automatically by AWS Lambda, which means less time configuring and more time analyzing results. Logs are automatically sent to Amazon CloudWatch, where a user can view basic metrics, or harness the power of an external reporting system to gain key insights into how Lambda is performing. The Sumo Logic App for AWS Lambda uses the Lambda logs via CloudWatch and visualizes operational and performance trends across all the Lambda functions in your account, providing insight into executions such as memory and duration usage, broken down by function versions or aliases.

The pricing model for functionality deployed using AWS Lambda is calculated from the number of requests received, and the time and resources needed to process them. The key metrics that need to be considered are therefore:

Number of requests and associated error counts.
Resources required for execution.
Time required for execution, or latency.

Request and error counts in Lambda

The cost per request is the simplest of the three factors. In the typical business environment, the goal is to drive traffic to the business' offerings; thus, increasing the number of requests is key. Monitoring of these metrics should compare the number of actual visitors with the number of requests being made to the function. Depending on the function and how often a user is expected to interact with it, you can quickly determine what an acceptable ratio might be, whether 1:1 or 1:3. Variances from this ratio should be investigated to determine the cause.

Lambda resource usage and processing time

When a Lambda is first configured, you specify the expected memory usage of the function. The actual runtime usage may differ, and is reported on a per-request basis. Amazon then factors the cost of the request based on how much memory is used, and for how long.
Execution time is billed in 100ms segments. If, for example, you find that your function typically completes in between 290ms and 310ms, there would be a definite cost savings if the function could be optimized to complete consistently in less than 300ms. Any such optimization, however, would need to be analyzed to determine whether it increases resource usage over that same time, and whether the gain in performance is worth an increase in cost. For more information on how Amazon calculates costs relative to resource usage and execution time, you can visit the AWS Lambda pricing page.

AWS Lambda: The big picture

Finally, when implementing and reviewing metrics, it is important to consider the big picture. One of my very first attempts at a Lambda function yielded exceptional numbers with respect to performance and utilization of resources. The process was blazingly fast, and barely used any resources at all. It wasn't until I looked at the request and error metrics that I realized that over 90% of my requests were being rejected immediately. While monitoring can't make your business decisions for you, having a solid monitoring system in place will give you an objective view of how your functions are performing, and data to support the decisions you need to make for your business.
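If you would like to pull those same request, error and duration numbers programmatically, perhaps to feed a dashboard or a quick command-line sanity check, the sketch below uses the AWS SDK for JavaScript to query CloudWatch for a single function over the past 24 hours. The function name and region are placeholders for your own values, and this is a starting point rather than a complete monitoring solution.

// lambda-metrics.js - a minimal sketch that reads Invocations, Errors and Duration
// for one Lambda function from CloudWatch over the last 24 hours.
// FUNCTION_NAME and the region are placeholders for your own values.
const AWS = require('aws-sdk');

const cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });
const functionName = process.env.FUNCTION_NAME;
const end = new Date();
const start = new Date(end.getTime() - 24 * 60 * 60 * 1000);

// Invocations and Errors are counts (Sum); Duration is reported in milliseconds (Average).
const metrics = [
  { name: 'Invocations', statistic: 'Sum' },
  { name: 'Errors', statistic: 'Sum' },
  { name: 'Duration', statistic: 'Average' },
];

metrics.forEach(({ name, statistic }) => {
  const params = {
    Namespace: 'AWS/Lambda',
    MetricName: name,
    Dimensions: [{ Name: 'FunctionName', Value: functionName }],
    StartTime: start,
    EndTime: end,
    Period: 3600,                 // one datapoint per hour
    Statistics: [statistic],
  };

  cloudwatch.getMetricStatistics(params, (err, data) => {
    if (err) {
      return console.error(`Failed to fetch ${name}: ${err.message}`);
    }
    // Sum the hourly datapoints, or average them for the Duration metric.
    const total = data.Datapoints.reduce((sum, dp) => sum + dp[statistic], 0);
    const value = statistic === 'Average' && data.Datapoints.length
      ? total / data.Datapoints.length
      : total;
    console.log(`${name} (${statistic}, last 24h): ${value}`);
  });
});

From there it is a small step to compare invocation counts against visitor numbers, or to watch the average duration creep toward the next 100ms billing boundary.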