END USER MONITORING
Real User Monitoring
Insights you need. Performance you want.
With Sumo Logic Real User Monitoring (RUM), you can surface user journey, website performance and application performance data for complete visibility into the end-user experience.
Need to understand your user experience better?
Don’t play guessing games. Improve your end-user experience and get to the bottom of application performance issues faster with website monitoring, enriched data, machine intelligence and distributed tracing.
Monitor single-page application performance
Sumo Logic captures XMLHttpRequest (XHR) calls and in-browser navigation changes, giving you a full picture of how users interact with applications built on single-page application (SPA) frameworks.
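As a rough illustration of how a RUM agent can observe SPA route changes, the sketch below wraps `history.pushState` so every in-app navigation is reported. The function names (`instrumentHistory`, `onRouteChange`) are illustrative, not part of Sumo Logic's API.

```javascript
// Minimal sketch: observe SPA route changes by wrapping pushState.
// `instrumentHistory` and `onRouteChange` are hypothetical names.
function instrumentHistory(historyLike, onRouteChange) {
  const originalPushState = historyLike.pushState;
  historyLike.pushState = function (state, title, url) {
    const result = originalPushState.call(this, state, title, url);
    onRouteChange(url); // report the new in-app route to the RUM collector
    return result;
  };
}
```

In a browser this would be called as `instrumentHistory(window.history, url => report(url))`; a real agent would also listen for `popstate` events to catch back/forward navigation.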
Improve real user experience monitoring
Sumo Logic monitors real user metrics, such as rendering times and Core Web Vitals, to surface performance problems and show how fast pages load, render and become interactive in a single overview dashboard.
Diagnose freezes and increase user satisfaction
Sumo Logic’s RUM solution automatically captures longtask delay events, displaying them as individual spans and dashboard charts, so that you can identify and reduce browser freeze times.
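To make the long task idea concrete, here is a minimal sketch (not Sumo Logic's implementation) of the filtering logic: a long task is a main-thread block longer than 50 ms, and browsers expose such entries through `PerformanceObserver` with the `longtask` entry type.

```javascript
// A long task blocks the browser's main thread for more than 50 ms.
const LONG_TASK_THRESHOLD_MS = 50;

// Pure filtering logic, separated out so it runs anywhere.
// Each entry mimics a PerformanceEntry: { name, startTime, duration }.
function longTaskDelays(entries, threshold = LONG_TASK_THRESHOLD_MS) {
  return entries.filter(e => e.duration > threshold);
}

// In a browser, the entries would come from (illustrative wiring):
// new PerformanceObserver(list => report(longTaskDelays(list.getEntries())))
//   .observe({ type: 'longtask', buffered: true });
```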
Track unhandled browser UI errors
Automatic data collection in the client's browser gives frontend developers full visibility into error categories from browser applications, with contextual drill-down into unhandled exceptions, unhandled promise rejections and console errors.
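A browser RUM script generally hooks the global `error` and `unhandledrejection` events to capture these categories. The sketch below shows one possible categorization; the function name and labels are illustrative, not Sumo Logic's actual schema.

```javascript
// Illustrative categorization of browser-side errors.
function categorizeBrowserError(event) {
  if (event.type === 'unhandledrejection') return 'unhandled_rejection';
  if (event.type === 'error' && event.error) return 'unhandled_exception';
  return 'console_error';
}

// Browser wiring (illustrative; `report` is a hypothetical collector call):
// window.addEventListener('error', e => report(categorizeBrowserError(e), e));
// window.addEventListener('unhandledrejection',
//   e => report(categorizeBrowserError(e), e));
```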
RUM: Everything you need for a great digital customer experience
Track user behavior and online experience to optimize app performance and digital customer experience.
Rapidly pinpoint where application frontend degradation is occurring and the backend services that might be impacting it.
Accelerate the synthesis and analysis of application telemetry with machine intelligence, surfacing answers in charts and histograms within seconds.
Unify and simplify your telemetry pipeline
Use a unified platform to ingest, analyze and correlate all application and infrastructure telemetry based on open-source standards, so you can identify and diagnose issues quickly.
Learn more about Real User Monitoring
Deliver a reliable and secure digital customer experience: read the brief
Setting up Real User Monitoring data collection
What are examples of real user monitoring (RUM)?
RUM provides insights into how end-users experience your web application in their browser. By measuring metrics such as Time to First Paint or Time to Interactive, RUM enables developers to understand the customer experience better and ensure the reliability and performance of SaaS-based services.
It also allows the inspection of each transaction’s end-to-end progress, with data from the browser tied to every service and backend application call. Because RUM covers critical KPIs, like DNS lookup and SSL setup time, as well as how long it took to send the request and receive a full response from the client's browser, observers can compare user cohorts defined by their browser type or geographical location to understand their performance as a group. This information helps performance engineers optimize application response times, rendering performance, network requirements and browser execution to improve the user experience.
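The KPIs mentioned above (DNS lookup, SSL/TLS setup, time to first byte, response download) can all be derived from the browser's navigation timing data. Here is a minimal sketch; the field names follow the W3C Navigation Timing Level 2 specification, while the helper name is hypothetical.

```javascript
// Derive connection-phase KPIs from a PerformanceNavigationTiming-like
// object. All values are in milliseconds relative to navigation start.
function navigationPhases(t) {
  return {
    dnsLookupMs: t.domainLookupEnd - t.domainLookupStart,
    // secureConnectionStart is 0 for plain-HTTP pages, so guard for that.
    tlsSetupMs: t.secureConnectionStart > 0
      ? t.connectEnd - t.secureConnectionStart
      : 0,
    timeToFirstByteMs: t.responseStart - t.startTime,
    responseDownloadMs: t.responseEnd - t.responseStart,
  };
}
```

In a browser the real entry would come from `performance.getEntriesByType('navigation')[0]`.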
What are the basic steps in real user monitoring?
There are six basic steps to RUM:
Data capture of details about requests for pages, images, and other resources from the browser and web servers.
Detecting unusual or problematic behavior, such as slow response times, system problems and web navigation errors for different pages, objects, and visits.
Reporting of individual visit activity with a summary of data or simulation of user experience with synthetic transactions.
Segmenting aggregated data to identify page availability and performance across different browsers and user cohorts.
Alerting whenever a system spots a serious issue.
Tying end-user experience problems to backend performance automatically for each end-to-end transaction.
What are real user monitoring best practices?
Create business objectives to establish overall business goals for RUM. What will the data help you achieve? Concrete goals will ensure you use RUM tools for the right reasons and that there is consistent leadership buy-in.
Ensure that engineering and development teams share the business objectives, and make sure technical teams monitor metrics that support those objectives.
Implement RUM across all user experiences.
Test your RUM on development and staging environments before deployment and release.
What is the difference between real user monitoring and synthetic monitoring?
Synthetic monitoring tests synthetic interaction for web performance insights, while RUM exposes how your actual (real) users interact with your site or app. RUM offers a top-down view of a wide range of frontend browsers, backend databases and server-level issues as your users experience them.
RUM data reflect the experience of current application users, while synthetic monitoring is a more predictive strategy for developers to conduct tests on a hypothetical basis. Additionally, RUM goes beyond the simple up/down availability and page load monitoring of synthetic monitoring. It provides end-to-end transaction reporting and analysis to pinpoint where problems happen and how to resolve them.
How do RUM and application performance monitoring (APM) work together?
RUM and application performance monitoring (APM) are different but related methods of IT monitoring that share a goal: improved application performance. APM is an umbrella term that includes RUM as one of its strategies. RUM supports APM by analyzing how end-user experience informs application optimization strategies.
RUM doesn’t serve purely as part of an APM strategy. Because RUM tracks user activity on the frontend, RUM data can also answer user experience questions about customer satisfaction, helping developers optimize application features.
How does Sumo Logic’s RUM solution work?
Sumo Logic gathers data directly from your end-users' devices and displays it as individual spans representing user-initiated actions (like clicks or document loads) at the beginning of each trace, reflecting each request's journey from the client through the whole application and back. This includes any unhandled errors, exceptions and console errors generated by the browser. The data is then aggregated into high-level KPIs displayed on out-of-the-box dashboards.
What types of KPIs does the Sumo Logic RUM solution track?
Browser traces automatically generate RUM metric aggregates in the Sumo backend. They provide insight into the overall user experience of your website's frontend for automatically recognized top user actions and for user cohorts categorized by browser, operating system and location.
RUM organizes metrics by user action type: document loads (the actual retrieval and execution of web documents in the browser), XHR calls triggered by, for example, form submissions or button presses, and route changes, the typical navigation actions in single-page apps. Metrics are presented as charts and maps on the Website Performance panels of RUM dashboards, and as individual measurements inside each frontend-originated span in the end-to-end traces representing individual user transactions.
Metrics types include:
Document load metrics are collected for document load and document fetch requests and are compatible with W3C navigation timing events. They can help you understand the sequence of events from a user's click to a fully loaded document.
Time to first byte measures the delay between the start of the page load and when the first byte of the response appears. It helps identify when a web server is too slow to respond to requests.
Rendering metrics describe rendering events inside the user's browser. Learn more in our documentation.
Core Web Vitals (CWV) focus on three aspects of the user experience:
First Input Delay (FID): measures interactivity to provide a good user experience.
Largest Contentful Paint (LCP): measures loading performance to provide a good user experience.
Cumulative Layout Shift (CLS): measures visual stability to provide a good user experience.
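Each of these vitals is commonly bucketed against Google's published thresholds. The sketch below applies those thresholds (Google's, not Sumo Logic-specific): LCP is "good" up to 2,500 ms, FID up to 100 ms, CLS up to 0.1, with "poor" beyond 4,000 ms, 300 ms and 0.25 respectively.

```javascript
// Google's published Core Web Vitals thresholds (LCP and FID in ms,
// CLS unitless). Between `good` and `poor` lies "needs improvement".
const CWV_THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 },
  fid: { good: 100, poor: 300 },
  cls: { good: 0.1, poor: 0.25 },
};

// Illustrative helper to bucket a single measured value.
function rateWebVital(metric, value) {
  const t = CWV_THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs improvement';
  return 'poor';
}
```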
XHR monitoring metrics represent how much time was spent in background Ajax/XHR communication with the backend for data retrieval.
Longtask delay indicates when the main browser UI thread is locked for an extended period (greater than 50 milliseconds), blocking other critical tasks (including user input) from executing and degrading the user's experience. Users can perceive this as a "frozen browser", even if communication with the backend completed long ago.