Dynatrace news
https://www.dynatrace.com/news
The tech industry is moving fast, and our customers are moving fast as well. Stay up to date with the latest trends, best practices, thought leadership, and our solution's biweekly feature releases.

Kubernetes made simple? Kelsey Hightower and Andreas Grabner discuss the future of cloud-native technologies
https://www.dynatrace.com/news/blog/kubernetes-made-simple-kelsey-hightower-and-andreas-grabner/
Thu, 17 Feb 2022


Kelsey Hightower and Andreas Grabner talk Kubernetes, simplifying complexity, and the future of cloud-native technologies at Dynatrace Perform 2022.


Kelsey Hightower is no stranger to Kubernetes complexity. Principal engineer at Google and co-founder of KubeCon, Hightower advocates simplicity and automation.

These are two values he shares with DevOps activist Andreas Grabner, who sat down with Hightower at Dynatrace Perform 2022 to talk about taming Kubernetes and the future of cloud-native technologies.


The art—and science—of simplicity

“Making complex things simple is important,” Grabner says, noting that simplicity is a guiding principle in Dynatrace’s own evolution. “For anyone who works in an organization and wants to be a game-changer, you need to convince people about something new by breaking it down in simple terms.”

For Hightower, the ability to explain things in simple terms is part of his own journey to understanding. “I’m one of those people who takes a while to get some of these complex topics, so I attempt to learn in public,” he says. “I’ll make some assumptions, read multiple blog posts, and I need to run it myself a couple of times. I try to make sure I understand things completely in simple terms. So when people hear me explain things, it’s this process of convincing myself I completely understand it.”

How does Kubernetes work?

To explain Kubernetes, Kelsey Hightower turns to the familiar. “Let’s invent the post office.”

In a clip from the Honeypot documentary, Kubernetes: The Documentary (Part 1), Hightower explains that managing containerized environments is like sending a package through the post office. You bring the box, the address, and a stamp, and the post office does the rest, guaranteeing your package arrives where and when it’s supposed to. The touchpoints in between are abstracted, and you can trust the outcome.

But Kubernetes does not exist in a vacuum; it’s part of a larger ecosystem that’s always evolving.

Kubernetes: A place to start, not the endgame

Grabner was struck by a tweet Hightower made three or four years ago: “Kubernetes is a platform for building platforms. It’s a better place to start, and not the endgame.” While Kubernetes has transformed how organizations build and deliver software, it is part of a larger context.


“When we think about Kubernetes, it’s really just one piece of the puzzle,” Hightower says. There are stakeholders, dependencies, and end-users throughout the process. “There’s a big picture and given that Kubernetes is just one part of it, you have to think about what’s missing.”

What’s missing is observability. “You have to have some type of signal you can use to make adjustments,” says Hightower. “For the end-user, I’m thinking, is my workload running? Kubernetes is not something you just install, and you’re done. It’s a good base layer, but you’re going to need to bring in other tools to make it usable.”

Grabner notes that Kubernetes already provides some data of its own. And if you add other tools, like the open-source systems monitoring toolkit Prometheus, you need a solution to make sense of all the data in context.

Achieving observability in a Kubernetes ecosystem at scale

Here’s where the Dynatrace platform, with Dynatrace OneAgent, Smartscape topology mapping, and PurePath distributed tracing, provides the advantage, especially at scale.

Grabner cites one Dynatrace customer that has deployed 200,000 OneAgents to monitor its environment across four hyperscalers and its own data centers. At this massive scale, Dynatrace provides real-time awareness of their Kubernetes, multicloud, and on-premises environments. With automatic and intelligent observability of all their infrastructure, apps, services, and workloads and their dependencies, Dynatrace pinpoints exactly where something is going wrong.

What’s at stake: the real people behind the dots on dashboards

Kelsey Hightower recalls a time earlier in his career when his team met in war rooms to troubleshoot broken systems. “People were taking their sweet time,” Hightower says. “We didn’t have any automation tools. People were checking the logs and doing ad-hoc debugging, but we didn’t have a sense of urgency.”

Then the CTO walked in and made it real. “He explained that one of our customers was in the grocery store with his family and a cart full of groceries,” Hightower recalls. The customer received government assistance through an electronic benefits transfer (EBT) card, but the card system was down. Most often, EBT customers don’t have another way to pay. “There are other people in line looking at someone who can’t afford to pay for their groceries,” he says. “One of those dots on the graph might represent one of our customers unable to buy food.”


“When we talk about SLOs and SLAs, those are the promises we make to our customers, and it’s on us to keep them,” continues Hightower. “If you’re going to have an SLO, you should have a story in mind of why you’re setting up all these alerts and collecting all these metrics. They should tell you why it’s important to do what you’re doing.”
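As a back-of-the-envelope illustration of what an SLO promise translates to in practice, a 99.9% availability target over a 30-day window leaves only about 43 minutes of error budget. A minimal sketch of that arithmetic (the numbers are examples, not Dynatrace defaults):

```python
def error_budget_minutes(slo_percent: float, window_days: int) -> float:
    """Minutes of allowed downtime for an availability SLO over a window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo_percent / 100)

# A 99.9% SLO over 30 days allows roughly 43.2 minutes of downtime.
print(round(error_budget_minutes(99.9, 30), 1))
```

Framing alerts against a concrete budget like this makes it easier to tell the "story" Hightower describes: every alert maps back to minutes of broken promises.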

Infrastructure as code vs infrastructure as data

Grabner notes that Dynatrace just announced the release of software intelligence as code, an enhancement of its API endpoints. This enhancement enables developers to easily incorporate software intelligence capabilities, such as observability, AIOps, and application security data, into their applications. As a result, teams can automate more processes in their software development lifecycle.

Hightower likes to think of it as infrastructure as data. “Not everyone knows how to write code,” he says. “As a developer, I can say these are the metrics I care about and just give them to you. When we say ‘declarative’, we want to boil it down to tell us what you want, and we’ll take care of the rest.”

Tune into the session, Kelsey Hightower explores the future of infrastructure, to hear the full conversation and Andi’s lightning-round Q&A with Kelsey Hightower.

The post Kubernetes made simple? Kelsey Hightower and Andreas Grabner discuss the future of cloud-native technologies appeared first on Dynatrace blog.

Dynatrace SaaS release notes version 1.235
https://www.dynatrace.com/news/blog/dynatrace-saas-release-notes-version-1-235/
Thu, 17 Feb 2022


Dynatrace SaaS release notes version 1.235

Announcements

TLS 1.0 and 1.1 end-of-support for RUM data

As of April 2022, Dynatrace is retiring TLS 1.0 and TLS 1.1 for Dynatrace SaaS RUM data. For more details, see TLS 1.0 and 1.1 end-of-support for RUM data.
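If you want to verify ahead of the retirement that your RUM-reporting clients can negotiate a modern protocol, you can pin a client to TLS 1.2 or newer, so any handshake that would fall back to TLS 1.0/1.1 fails fast. A minimal sketch using Python's standard ssl module (this is an illustration, not Dynatrace-provided tooling):

```python
import ssl

# Build a client context that refuses TLS 1.0/1.1, matching the new
# minimum Dynatrace will accept for SaaS RUM data as of April 2022.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any handshake attempted with this context (e.g. via
# http.client.HTTPSConnection(host, context=context)) now fails
# against servers or clients limited to TLS 1.0/1.1.
print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)
```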

Session Replay masking v1 end-of-life

Starting with Dynatrace version 1.238, Session Replay masking v1 will no longer be supported. For details, check Dynatrace SaaS release notes version 1.233.

Automatic connection of traces and log data

Automatically connecting log data to traces works for all log data, no matter how the log data was ingested by Dynatrace. For details, see Connecting log data to traces.

Export log data in log viewer

Starting with Dynatrace version 1.235, you can download the displayed table data in the log viewer as a JSON or CSV file. The exported table data contains only the 1,000 log records visible in the table, but each record includes its complete log data, even if some of it is not displayed in the table columns. For details, see Log viewer.
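If you export as JSON but later need CSV (or vice versa), the conversion is straightforward. A stdlib-only sketch; the record fields below (timestamp, status, content) are illustrative assumptions, since an actual export carries whatever columns and attributes your logs have:

```python
import csv
import io
import json

# Hypothetical excerpt of a log-viewer JSON export.
records = json.loads("""[
  {"timestamp": "2022-02-17T12:00:00Z", "status": "ERROR", "content": "payment failed"},
  {"timestamp": "2022-02-17T12:00:05Z", "status": "INFO",  "content": "retry scheduled"}
]""")

# Flatten the records into CSV, one column per distinct field.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=sorted({k for r in records for k in r}))
writer.writeheader()
writer.writerows(records)
print(out.getvalue().strip())
```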

New features and enhancements

Application Security

Dynatrace now automatically handles OneAgent reporting of software components, so new customers don’t need to manually enable it anymore.

  • For existing customers with OneAgent versions 1.231+ (Node.js and PHP) and 1.233+ (.NET), no action is necessary.
  • For earlier OneAgent versions, you need to manually activate the Software Component Reporting features. For details, see Troubleshoot Application Security.

Synthetic Monitoring

The Record again feature for browser clickpaths now lets you choose between re-recording a clickpath completely (from the first event URL) or re-recording it after playing back to a specified script event.

Record a clickpath again

Data explorer

You can now create a stacked graph (column) visualization in the Data explorer.

Stacked graph (column) example

Dynatrace API

To learn about changes to the Dynatrace API in this release, see Dynatrace API changelog version 1.235.

Resolved issues

General Availability (Build 1.235.121)

The 1.235 GA release contains 17 resolved issues (including 2 vulnerability resolutions).

Resolved issues by component:
  • Cluster: 14 (1 vulnerability)
  • Cloud Automation: 1
  • Extensions: 1 (1 vulnerability)
  • appsec: 1

Cluster

  • Vulnerability: For security reasons, it is now mandatory to use an HTTPS endpoint to set up user session export and for the access token URL field if you are using OAuth2 authentication. Existing configurations using an HTTP endpoint still work, but they need to be updated to HTTPS if edited. (APM-338346)
  • Fixed a bug that prevented users with write access to the `builtin:anomaly-detection.metric-events` schema from editing custom events for alerting. (APM-349748)
  • Custom process group monitoring rule values are now correctly checked for illegal separator characters. (APM-347408)
  • Kubernetes event date/time is now corrected to the current date/time if the received date/time is in the future due to a host/time issue. (APM-345661)
  • Request naming rules that use request attributes of type integer no longer cast the integer value to double. (APM-348343)
  • Added support for German umlauts in problem notification placeholder values. (APM-348640)
  • Fixed an issue in which performance degraded on some settings write API requests. (APM-354058)
  • Fixed an issue in which Kubernetes workload label and Kubernetes namespace label might be removed when going from a “Service health” tile on a dashboard to service list. (APM-352095)
  • Fixed an issue in which browser exclusion settings 2.0 were shown for applications other than auto-injected applications. (RUM-4394)
  • Resolved an issue where metric data would be incorrectly or only partially displayed when a metric was ingested with a monitored entity dimension. (APM-352822)
  • User ID breadcrumb is now correctly parsed and displayed on the user details page. (APM-347834)
  • Service request naming no longer uses invalid data for custom services. (APM-349598)
  • A 404 error no longer occurs when the user removes a tag on the ESXi host page. (APM-346125)
  • Fixed an issue in which extensions could not be uploaded via the Hub UI for Dynatrace for Government (FedRAMP). (APM-349255)

Cloud Automation

  • The SLO `timeframe` property (mandatory, used to create/update SLOs) was not present in the Swagger documentation with its respective example. It can now be found in the corresponding request body examples for POST /slo and PUT /slo. (APM-348140)

Extensions

  • Vulnerability: The aiohttp library used in the Docker extension has been updated to version 3.8.1. (APM-345850)

appsec

  • Inconsistent host coverage data, which can occur if software component reporting is enabled but the reported hosts are not ensured in Dynatrace, now only logs a warning (downgraded from throwing an exception) to avoid breaking the Application Security overview page. (APM-350518)

The post Dynatrace SaaS release notes version 1.235 appeared first on Dynatrace blog.

Perform 2022: Recognizing customer and partner digital gamechangers
https://www.dynatrace.com/news/blog/recognizing-customer-and-partner-digital-gamechangers/
Tue, 15 Feb 2022


Throughout the year, we have the honor of collaborating with organizations across the world, each with a unique approach to digital transformation. Every year at our annual user conference, Dynatrace Perform, we recognize the most inspiring success stories from our most innovative, transformative customers and partners. During this conference, we had the honor of again hosting our awards ceremony to publicly acknowledge the organizations that leverage observability, automation, and intelligence to manage modern cloud complexity and drive meaningful impact across their business and for their customers.

This annual award ceremony covers three categories:

  1. Community Rock Stars: recognizing the Most Valuable Customer and Partner contributors to the Dynatrace Community. These are individual awards.
  2. Most Innovative Development Partner (R&D Mover and Shaker Award), recognizing organizations who collaborate with our R&D teams to drive significant improvements to the Dynatrace platform.
  3. Software Intelligence Awards, recognizing accomplishments across four key areas:
    • Digital Business Excellence: companies who deliver remarkable experiences across every user journey
    • DevOps: companies who shorten innovation cycles, automate their CI/CD pipelines, and improve code quality for production
    • Automated and Intelligent Observability: companies who harness intelligent observability with contextual information, AI, and automation
    • Digital Transformation Leader: our Best in Show Award, recognizing tangible accomplishments in all three areas of Software Intelligence.

This past week during our Perform 2022 Award Ceremony, the following winners were announced:

Award winners:
  • Most Valuable Customer Contributor: Chad Turner, Senior Systems Engineer, Monitoring & Event Management (AIOps), Geico
  • Most Valuable Partner Contributor: António Sousa, CTO, Marketware
  • R&D Mover and Shaker Award: ADP
  • Software Intelligence – Digital Business Excellence Award: loanDepot
  • Software Intelligence – DevOps Award: The NAIC
  • Software Intelligence – Automated and Intelligent Observability Award: BCLC
  • Digital Transformation Leader – Best in Show Award: Dell Technologies

These awards recognize the industry gamechangers whose outstanding contributions have helped push the boundaries of software intelligence. We hope that these stories will inspire innovation, empower change, and enable the confidence necessary for organizations to accelerate their digital transformation.

Congratulations to the winners!

The post Perform 2022: Recognizing customer and partner digital gamechangers appeared first on Dynatrace blog.

Artificial intelligence: The ultimate technology for game-changers – Max Tegmark at Perform 2022
https://www.dynatrace.com/news/blog/artificial-intelligence-the-ultimate-technology-for-game-changers-max-tegmark-at-perform-2022/
Mon, 14 Feb 2022

MIT physics professor and Future of Life Institute co-founder Max Tegmark shares his big thoughts on the big possibilities of AI to change human innovation.


“Think big. Really big. Cosmically big.” When it comes to artificial intelligence, MIT physics professor and futurist Max Tegmark thinks in terms of 13.8 billion years of cosmic history and the potential of the human race to influence the next 13.8 billion.

As a guest keynote speaker at Dynatrace Perform 2022, Tegmark set the stage for AI as the ultimate technology for game-changers. And more importantly, the role of humans in commanding the power in our grasp.

“When we use technology wisely, we can accomplish things our ancestors could only dream of,” Tegmark says. Through the frame of technological accomplishments in the past half-century, Tegmark laid out the possibilities for AI to transform life on earth. Using AI, we are already accelerating our capacity to bring forth life-saving technologies, like diagnosing cancer and solving the protein-folding problem for biomedical research.

“The technology we’re developing is giving life the opportunity to flourish,” Tegmark says. “Not just for the next election cycle, but for billions of years.”

How far will artificial intelligence go?

Max Tegmark defines artificial intelligence simply as the “ability to accomplish complex goals”. The more complex the goals, the more intelligence is needed. There’s no law of physics that precludes artificial general intelligence (AGI), or the ability for technology to learn and accomplish anything a human can. Polls show that most AI researchers expect AGI within decades.


But if a technology can learn like a human through recursive self-improvement, does that mean AI will leave humanity in the dust? Will self-learning technologies create a superintelligence that far exceeds human capacity? And if so, are we doomed or saved?

To answer these questions, Tegmark suggests it’s a matter of perspective. Through human ingenuity, we’ve improved our computational ability many millions of times since computers were invented. And we have extracted only a minute fraction of the energy that is theoretically available, from known sources like gasoline and coal to sources we have yet to tap.

Max Tegmark sees the enormous benefits of AI as long as we cultivate the wisdom we need to minimize risks.

Winning the wisdom race with artificial intelligence

“I’m confident we can have an inspiring future with high tech, but it’s going to require winning the wisdom race,” Tegmark says. “The race between the growing power of the technology and the wisdom with which we manage it.”

In the analog world, we learn by making mistakes. If we try something and it fails (or someone dies in a car crash), we adjust our approach (and invent seat belts). But at the scale of AGI, a reactive trial-and-error approach can be costly and potentially catastrophic. Instead, we can begin to proactively predict what could go wrong and apply safety engineering principles.

To help win the wisdom race, Tegmark and four colleagues co-founded the Future of Life Institute, designed to keep powerful technologies going in the right direction. Isaac Asimov’s three laws of robotics were too limited, so Tegmark and his colleagues developed the 23 Asilomar AI Principles, a set of practical and ethical guidelines for developing and applying artificial intelligence. More than 1,000 researchers and scientists worldwide have adopted and signed these principles.

Aligning AI’s goals with our own

“Any science can be used as a new way of harming people or a new way of helping people,” Tegmark says. To illustrate, he shared three of the 23 Asilomar Principles:

  • Avoid a destabilizing arms race in lethal autonomous weapons. We shouldn’t allow AI algorithms to decide to kill people.
  • Mitigate AI-fueled inequality. We should share the great wealth artificial intelligence helps produce so everyone is better off.
  • Invest in AI safety research. This effort can make systems robust, secure, and trustworthy.


AGI safety requires what Max Tegmark calls “AI alignment.”

“The biggest threat from AGI is not that it’s going to turn evil, like in some silly movie,” Tegmark says. “The worry is it’s going to turn really competent and accomplish goals that aren’t aligned with our goals.” For example, one way to look at the extinction of the West African black rhino is that humans’ goals weren’t aligned with the rhinos’ goals.

So humanity doesn’t go the way of those rhinos, we must design AI to understand, adopt, and retain our goals. “This way we can steer AGI to accomplish our goals for an inspiring future,” Tegmark says.

Envision an amazing future, not a dystopic one

As any captain of industry knows, a positive vision is essential for business success. Once you know where you want to go, then you can identify the problems and potential pitfalls. Instead of imagining a dystopic future, we should envision an amazing future. The United Nations’ 17 Sustainable Development Goals, for example, provide a roadmap to a future in which humanity thrives.


“These are challenging and noble goals adopted by nearly every country on earth,” Tegmark says. “Artificial intelligence can help us attain these sustainability goals better and faster. As we continue toward AGI and beyond, let’s not just aim toward them by 2030, let’s accomplish all of them and raise our ambition to go beyond them.”

As individuals, we each have an important role to play in figuring out how to steer artificial intelligence and make these changes happen.

As a company, Dynatrace, with its causation-based Davis AI, is building this future for our customers by delivering our vision of a world where software works perfectly.

For the 30,000 AI game-changers and technologists attending Dynatrace Perform across the world, Professor Tegmark gave an assignment. “Be proactive. Think in advance about how to steer technology and where you want to go with it. We will be the masters of our own destiny by actually building it.”

To see Professor Max Tegmark’s full presentation and hear our thought-provoking Q&A with him afterward, tune into the recording, Professor Max Tegmark on being human in the age of artificial intelligence.

The post Artificial intelligence: The ultimate technology for game-changers – Max Tegmark at Perform 2022 appeared first on Dynatrace blog.

New Prometheus-based extensions enable intelligent observability for more than 200 additional technologies
https://www.dynatrace.com/news/blog/new-prometheus-based-extensions-enable-intelligent-observability-for-more-than-200-additional-technologies/
Mon, 14 Feb 2022


Building on its advanced analytics capabilities for Prometheus data, Dynatrace now enables you to create extensions based on Prometheus metrics. This allows teams to extend the intelligent observability Dynatrace provides to all technologies that provide Prometheus exporters. Without any coding, these extensions make it easy to ingest data from these technologies and provide tailor-made analysis views and zero-config alerting.

Many technologies expose their metrics in the Prometheus data format. Among these, you can find essential elements of application and infrastructure stacks, from app gateways (like HAProxy), through app fabric (like RabbitMQ), to databases (like MongoDB) and other infrastructure and storage services (like NetApp, Consul, Memcached, and InfluxDB, just to name a few).

This creates challenges in gaining actionable insights for these technologies across heterogeneous and dynamic environments:

  • Achieving global visibility requires extra components and introduces new requirements for security and access control.
  • Effective analysis of metrics requires more context, especially understanding of the dependencies between applications and respective components and how they relate to other telemetry data.
  • Manual approaches to keeping alerting configurations up to date are impossible at enterprise scale.
  • Multiple Prometheus servers might be required, creating significant maintenance efforts.

Our monitoring coverage already includes Prometheus in Kubernetes and integration with the recently introduced Amazon Managed Service for Prometheus.

Now we’re excited to announce the release of Dynatrace Extensions support for Prometheus metrics. No matter whether you’re using official, custom-made, or third-party Prometheus exporters, you can now easily collect metrics for intelligent observability of your environment, enabling you to:

  • Achieve global visibility and visualize metrics with state-of-the-art dashboards.
  • Reduce alert noise and accelerate your mean time to repair (MTTR) for infrastructure incidents.
  • Analyze Prometheus metrics in full context of topology, traces, logs, user sessions, and more.
  • Leverage auto-baselining to reduce alert noise and minimize configuration efforts.
  • Benefit from the scale, manageability, and security of the Dynatrace platform for large ecosystems.

Prometheus metrics in Dynatrace

Extensions allow you to define relevant metrics as well as add elements such as topology dependencies, dashboards, Unified Analysis views, and pre-defined alerts. Setting up extensions for Prometheus-compatible technologies is as easy as listing Prometheus metrics.

Without any coding skills required, you can declare extensions in a human-readable YAML format and activate them in your environment via the Dynatrace Software Intelligence Hub.
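As a rough illustration, a minimal extension declaration might look like the following. The extension name, metric keys, and field layout here are hypothetical; consult the extension schema in the Dynatrace Software Intelligence Hub documentation for the exact format:

```yaml
# Hypothetical sketch of a Prometheus-based extension declaration.
# Field names are illustrative; verify against the official schema.
name: custom:my.prometheus.extension
version: 1.0.0
minDynatraceVersion: "1.235"
prometheus:
  - group: rabbitmq
    interval: 1m
    subgroups:
      - subgroup: queues
        metrics:
          - key: rabbitmq.queue.messages
            value: metric:rabbitmq_queue_messages
```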


Dynatrace connects straight to the Prometheus exporters on your systems. It’s easy—no intermediaries and no redundant moving parts. You control which metrics are scraped, so you don’t need to worry about redundant metrics.

Alternatively, if you’re already using a Prometheus Server to gather data from your distributed systems, you can now seamlessly make this data work for you in Dynatrace as well—use your Prometheus server as the endpoint for scraping metrics.
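Either way, what gets scraped is the Prometheus text exposition format: one sample per line, with an optional set of labels in braces. A stdlib-only sketch of parsing that format (the RabbitMQ payload below is invented for illustration):

```python
import re

# Invented excerpt of a Prometheus /metrics payload.
SAMPLE = """\
# HELP rabbitmq_queue_messages Messages ready for delivery
# TYPE rabbitmq_queue_messages gauge
rabbitmq_queue_messages{queue="orders",vhost="/"} 42
rabbitmq_queue_messages{queue="billing",vhost="/"} 7
"""

# One sample line: metric name, optional {labels}, then the value.
LINE = re.compile(r'^(?P<name>[a-zA-Z_:][\w:]*)(\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$')

def parse(text):
    """Return (name, labels, value) tuples, skipping comment lines."""
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = LINE.match(line)
        if m:
            labels = dict(re.findall(r'(\w+)="([^"]*)"', m.group("labels") or ""))
            samples.append((m.group("name"), labels, float(m.group("value"))))
    return samples

print(parse(SAMPLE))
```

A full implementation also handles escaping, histograms, and summaries; this sketch only shows why the format is simple enough to scrape without intermediaries.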

The RabbitMQ Prometheus Extension showcases the capabilities of the new extensions and can be used as a template to jumpstart creation of your own extensions.

This extension enables you to monitor RabbitMQ outside of Kubernetes with data obtained either from a Prometheus server or directly from a RabbitMQ Prometheus exporter. By monitoring specific RabbitMQ nodes, you can easily identify and mitigate performance issues. Moreover, you can effortlessly adapt this extension’s capabilities to your specific use cases.


The extension carries all the configurations you need to analyze any possible problems or metric anomalies. As soon as a problem is detected, you have all the tools needed to dive deep and analyze the root cause on the Unified Analysis page. Everything is presented in the context of your RabbitMQ topology, both host and instance.

This extension package contains:

  • The Prometheus data source configuration
  • Customizable dashboard
  • Specialized Unified Analysis page
  • Topology definition and entity extraction rules
  • Predefined alerts

Dynatrace makes it radically simple to ingest all the monitoring data you need by integrating with a wide variety of platforms, applications, programming languages, and data formats.

Setting up a Dynatrace Extension to scrape Prometheus endpoints allows you to monitor all your instrumented applications together with all the infrastructure components and services already integrated into Dynatrace.

We’re further extending the support of extensions for additional protocols and technologies, and improving the process of creating extensions, so be sure to stay tuned.

To start leveraging your Prometheus metrics in Dynatrace, explore the Prometheus extensions in the Dynatrace Software Intelligence Hub.

If you aren’t a Dynatrace user, consider signing up for a free trial to bring all your monitoring data into one platform.

The post New Prometheus-based extensions enable intelligent observability for more than 200 additional technologies appeared first on Dynatrace blog.

What is mobile app monitoring? And the importance of mobile analytics
https://www.dynatrace.com/news/blog/what-is-mobile-app-monitoring/
Mon, 14 Feb 2022


Users downloaded more than 218 billion mobile apps in 2020, according to a Statista report. And they now spend over four hours per day using these apps. With so much competition for user attention, it’s crucial that your apps work perfectly. Mobile app monitoring and mobile analytics make this possible.

By providing insight into how apps are operating and why they crash, mobile analytics lets you know what’s happening with your apps and what steps you can take to solve potential problems. With the right monitoring solution, you can get ahead of problems to help increase overall app adoption and user satisfaction.

What is mobile app monitoring?

Mobile app monitoring is the process of collecting and analyzing data about application performance. This performance is influenced by a variety of factors, including the application code itself, the device being used, the server handling the data, and the network supplying the connection. Mobile analytics and monitoring provide context around your mobile application performance—the better the performance, the better for your bottom line. If apps crash or run into connection or loading issues, users will delete your app and use something else.

Mobile app monitoring provides a way to quantify issues with application performance. For example, you can track the number of crashes on your app over a specific time period to get a sense of how often crashes are occurring. While reports from end-users about crashes are also useful, they’re more difficult to correlate with specific issues over time.

What is mobile analytics?

While mobile app monitoring focuses on collecting application performance data, mobile analytics focuses on collecting and analyzing user-driven data. This includes app downloads and installs, time spent in apps, actions taken while in the application, and the type of devices used to access the app.

Analytics provides insight into the user-facing aspects of your application—such as the user interface (UI) and user experience (UX). This insight makes it possible to better understand user behaviors and discover trends impacting the overall success of your application.

Why use mobile analytics and app monitoring?

Think of mobile analytics and app monitoring as two sides of the same coin. Mobile app monitoring focuses on functions inherent to the app itself plus issues related to connection and networking. Mobile analytics examines the role of the user and the impact of application responses (or lack thereof) on their decision to download, install, and use your application.

One important factor to consider is application crashes. Many users point to a lack of functionality as a reason for uninstalling apps. But regular crashes can have a significant impact on the “stickiness” of your app on user devices. As Tech Beacon notes, some of the most common reasons for application crashes include memory management, lack of testing, exception handling, excessive code, and the speed of the mobile software life cycle. However, the challenge is that user crash reports don’t pinpoint exactly what went wrong. Instead, they serve as a starting point for more in-depth mobile analytics and app monitoring to help identify root causes. As a result, crash analysis tools are critical for companies to get the whole picture.

How do mobile analytics and mobile monitoring work?

Mobile analytics and monitoring work by identifying and collecting key incident data as it occurs. For example, a mobile app monitoring solution can map the connected components and microservices of your application, which delivers in-depth insights about how they interact and where users encounter problems. Mobile analytics capabilities capture and record user-facing information to help companies understand how UI and UX components impact mobile interactions.

Common data captured by mobile monitoring and analysis tools can include:

  • Total page views
  • Number of unique visits and visitors
  • Strings of actions taken by users
  • Login/logout behaviors

Equipped with this information, mobile monitoring solutions can help organizations determine how effective users find their applications to be. This data enables teams to measure behaviors such as the average length of user visits and the features and functions they interact with most. Teams can also measure what specific factors lead to actions, like purchases or newsletter signups. Most importantly, teams can pinpoint where users may encounter problems with mobile apps.
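The behavioral measures described above can be pictured as simple aggregations over session records. A toy sketch follows; the record fields are hypothetical, not any monitoring SDK's actual schema:

```python
from statistics import mean

# Hypothetical session records captured by a mobile monitoring tool.
sessions = [
    {"user": "a1", "duration_s": 180, "actions": ["view", "add_to_cart", "purchase"], "crashed": False},
    {"user": "b2", "duration_s": 45,  "actions": ["view"],                            "crashed": True},
    {"user": "c3", "duration_s": 310, "actions": ["view", "signup"],                  "crashed": False},
]

def avg_visit_length(sessions):
    """Average session duration in seconds."""
    return mean(s["duration_s"] for s in sessions)

def crash_free_rate(sessions):
    """Share of sessions that ended without a crash."""
    ok = sum(1 for s in sessions if not s["crashed"])
    return ok / len(sessions)

def conversion_rate(sessions, goal="purchase"):
    """Share of sessions in which the goal action occurred."""
    hits = sum(1 for s in sessions if goal in s["actions"])
    return hits / len(sessions)
```

Tracking these aggregates over time is what turns raw session data into the feedback loop described below.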

By leveraging the data collected and outputs generated by mobile app monitoring solutions, organizations can create continual feedback loops. These ecosystems can detect issues, identify solutions and reevaluate apps in relation to ongoing user behavior.

What is a mobile monitoring solution, and why do you need it?

A mobile monitoring solution provides a unified platform for monitoring and analytics. Instead of using multiple tools to pinpoint crashes, lag, UI hangs, downloads, installs, and time spent in apps, monitoring solutions make it possible to collect, access, and analyze key data in real time.

As mobile apps continue to proliferate, monitoring solutions are critical for long-term application success. And as development goes faster, the amount of data apps generate outpaces teams’ ability to manually capture and correlate it.

Mobile applications and development are distinct from their web application counterparts. As such, they introduce distinct challenges for monitoring. Here are four critical differences:

  • Separate development teams. Because mobile teams are often separated from web development teams, they need specialized capabilities to make app monitoring part of a broader monitoring platform. This ensures both mobile and web app development teams have insight across all channels.
  • More involved development processes. Given the variety of mobile platforms and devices, the development process of native mobile apps is more involved. It requires more intensive app monitoring to ensure performance meets expectations.
  • User-involved updates. Unlike web apps, which are updated centrally, new mobile app releases must go through an app store and require users to download the new version for updates.
  • Reduced visibility. Lacking URLs and other web identifiers, it’s often more difficult for teams to pinpoint problems in mobile applications.

Best practices for mobile app monitoring

No matter what type of applications you’re monitoring — from iOS to Android to Cordova apps — four best practices are key to delivering the maximum benefits of mobile app monitoring:

  1. Collect data ASAP. The more quickly you can start collecting data, the better understanding you’ll have about where apps are performing and where they need improvement.
  2. Continually review app data. Continual analysis of collected data lets you identify trends and patterns that may suggest the need for more in-depth review.
  3. Incorporate automation. Automating key processes such as crash detection and time spent in app lets teams focus on finding answers.
  4. Focus on root causes. Make the best use of monitoring tools to discover common features of application crashes — which, in turn, makes it possible to address root causes rather than surface symptoms.
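Best practice #4, discovering common features of crashes, often amounts to grouping reports by a stack-trace signature so the most frequent crash sites surface first. A toy illustration, with hypothetical report fields:

```python
from collections import Counter

# Hypothetical crash reports; in practice these come from a crash-reporting SDK.
crashes = [
    {"exception": "NullPointerException", "top_frame": "CartView.render"},
    {"exception": "OutOfMemoryError",     "top_frame": "ImageCache.load"},
    {"exception": "NullPointerException", "top_frame": "CartView.render"},
]

def signature(crash):
    """Group crashes that share an exception type and crash site."""
    return f'{crash["exception"]}@{crash["top_frame"]}'

def most_common_causes(crashes, n=3):
    """Rank crash signatures by frequency, most common first."""
    return Counter(signature(c) for c in crashes).most_common(n)
```

The top signature is where root-cause investigation should start, rather than treating each report in isolation.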

Automatic and intelligent mobile app monitoring

Mobile application monitoring with the Dynatrace platform delivers in-depth insights into the operations, functions, and failures of your app to help drive targeted and effective remediation and optimization strategies.

This starts with end-to-end visibility, all the way to the backend, so you know exactly what’s going on when and why. Next is AI-powered crash and error analysis that delivers deterministic root cause identification to provide answers for any issue. Then, go even further in-depth with mobile session replay to see exactly what users were doing at the time of the crash, and streamline the entire process with auto-instrumentation to focus on what matters most: improving application performance.

In the end, mobile matters more than ever. Make sure your applications are ready to compete in the growing mobile marketplace with robust app monitoring and analytics from Dynatrace.

Looking to learn more about the impact of mobile analytics on app performance? Sign up for our on-demand Performance Clinic, Best practices for utilizing Dynatrace on your mobile apps.

The post What is mobile app monitoring? And the importance of mobile analytics appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/what-is-mobile-app-monitoring/feed/ 0
Modern observability platform is onramp to digital transformation: Dynatrace Perform 2022, reporter’s notebook https://www.dynatrace.com/news/blog/modern-observability-platform-is-onramp-to-digital-transformation-dynatrace-perform-2022-reporters-notebook/ https://www.dynatrace.com/news/blog/modern-observability-platform-is-onramp-to-digital-transformation-dynatrace-perform-2022-reporters-notebook/#respond Thu, 10 Feb 2022 14:42:04 +0000 https://www.dynatrace.com/news/?p=48596 Dynatrace news

At this year’s Perform, CEO Rick McConnell and CMO Mike Maciag unpack the power of modern observability and AIOps as organizations traverse digital transformation.

The post Modern observability platform is onramp to digital transformation: Dynatrace Perform 2022, reporter’s notebook appeared first on Dynatrace blog.

]]>
Dynatrace news

Dynatrace CEO Rick McConnell at Perform 2022 in Las Vegas.

Today, businesses are racing ever faster to accommodate customer demands and innovate without sacrificing product quality or security.

“Organizations are accelerating movement to the cloud, resulting in complex combinations of hybrid, multicloud [architecture],” said Rick McConnell, Dynatrace chief executive officer, at the annual Perform conference in Las Vegas this week. “They bring a scale and complexity that is well beyond that of the data center world, and it isn’t manageable manually.”

The demands of digital transformation can create a difficult tightrope for organizations to walk. As they increase the speed of product innovation and software development, organizations have an increasing number of applications, microservices and cloud infrastructure to manage. That ushers in IT complexity.

COVID-19 has accelerated the benefits—and downsides—of digital transformation. In 2021, Deloitte reported that 77% of CEOs stated that the COVID-19 pandemic has accelerated their digital transformation plans.

Further, many organizations—more than 90%—have turned to cloud computing to navigate the highwire act of balancing speed and quality. But as they turn to cloud environments to develop new products and manage IT infrastructure, they have introduced a host of complex systems that need to be managed and secured. For these vast environments, traditional, manual methods no longer suffice.

McConnell noted that the “game changers” in the audience (engineers, developers, cloud architects, and other IT managers) have their hands full as these environments grow and increase in complexity.

“We know your job has never been more difficult or challenging,” McConnell said. IT practitioners need a platform that can alleviate some of this burden through modern, end-to-end observability and automated problem remediation.

Why modern observability is the onramp to digital transformation

To innovate faster, at scale and more securely, organizations have come to recognize that they need the right kinds of tools to shepherd digital transformation. As a result, modern observability has become a key technology to enable enterprise success as companies digitally transform.

Unlike traditional monitoring, a modern observability platform provides precise data in real time about the root cause of application issues. And it is fueled by AIOps, or artificial intelligence for IT operations, which provides contextualized data—without the time-consuming need to train machine-learning models.

Consider a true self-driving car as an example of how this software intelligence works. A self-driving car needs real-time insight in order not to collide with other cars or cause bodily harm. This contextual insight is akin to how modern observability operates.

“A truly self-driving car constantly and continuously senses its environment,” said Mike Maciag, chief marketing officer, at Perform 2022. “Every other car, every road sign, every pedestrian, every stray ball and much more. It would also need to think about all this input through AI and make decisions in real time with precise accuracy,” Maciag said.

The self-driving car enlists three phases of data ingestion and analysis to generate contextual, real-time insights and act on them.

The three components of modern observability

“Modern observability involves taking in data inputs, analyzing that data, then acting with precision on that information,” Maciag explained. “It starts with deep and broad observability. We gather logs, metrics and traces. We go further to include distributed tracing, code-level detail, user experience and even behavioral data. All in one place, all in context, and all tied to what it means for your business.”

Let’s explore how these three components of observability work.

  1. Sensing. Sensing starts with Dynatrace OneAgent. Because dynamic elements in multicloud environments are always spinning up and down, manual efforts can’t keep up.  With OneAgent, there is no need to configure or script. Getting the data is all automatic with no wasted time and no wasted resources, so teams can focus on what matters: innovation. OneAgent continuously self-discovers what’s new and incorporates it into the whole. With open APIs, OneAgent can collect even more sources of data for the platform to consume.
  2. Thinking. With the data collected, Dynatrace PurePath technology automatically captures and analyzes transactions end-to-end with no code changes from the browser to the code to the database. Dynatrace Smartscape automatically builds and continuously maintains a real-time topology map of how everything works including millions to billions of dependencies. With real-time context from Smartscape, the Dynatrace Davis AI engine provides automatic analysis for precise answers, including root-cause analysis, anomaly detection, and business impact analysis. This precision reduces wasted motion and accelerates response times. With its causation-based approach, Davis AI doesn’t need to learn and doesn’t make guesses. It knows the impact of problems, suppresses noise, and provides precise root causes.
  3. Acting. Put together, the Dynatrace Software Intelligence platform provides teams across the organization with actionable answers. DevSecOps teams can take automated action to increase speed and improve the quality of innovation, the effectiveness of operations, and the security of apps. Teams can resolve issues within minutes before end-users are affected. Instead of being tied up in war rooms, teams can focus on innovation, optimize user experiences, and optimize business outcomes.
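To make the "thinking" step concrete: causation-based root-cause analysis can be pictured as walking the topology map from an alerting service down its unhealthy dependencies until no unhealthy dependency remains. This is a toy sketch of the idea only, not Davis's actual algorithm, and the service names are made up:

```python
# Toy topology: service -> downstream dependencies (illustrative names).
deps = {
    "frontend": ["checkout", "search"],
    "checkout": ["payments", "inventory"],
    "payments": ["db"],
    "search": [],
    "inventory": [],
    "db": [],
}
unhealthy = {"frontend", "checkout", "payments"}

def root_causes(service, deps, unhealthy):
    """Follow unhealthy dependencies downstream; an unhealthy node
    with no unhealthy dependencies is a candidate root cause."""
    bad_children = [d for d in deps.get(service, []) if d in unhealthy]
    if not bad_children:
        return {service}
    causes = set()
    for child in bad_children:
        causes |= root_causes(child, deps, unhealthy)
    return causes
```

Here the frontend and checkout alerts are symptoms; only the payments service is flagged as the cause, which is the noise suppression the text describes.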

Ultimately, as McConnell noted, organizations are charging forward into their future with digital transformation. But they need help to do so securely and at scale.

“Digital transformation at scale requires assistance,” McConnell said. “You need automatic and intelligent observability spanning your applications, infrastructure, and user experience. You also need a solution that continuously maps, analyzes and optimizes the apps, microservices and interdependencies across hybrid and multicloud [architecture]. You need precise answers, not statistical guesses.”

For our complete Perform 2022 conference coverage, check out our guide.

The post Modern observability platform is onramp to digital transformation: Dynatrace Perform 2022, reporter’s notebook appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/modern-observability-platform-is-onramp-to-digital-transformation-dynatrace-perform-2022-reporters-notebook/feed/ 0
Dynatrace Application Security automatically detects and blocks attacks in real time https://www.dynatrace.com/news/blog/automatic-detection-and-blocking-of-attacks/ https://www.dynatrace.com/news/blog/automatic-detection-and-blocking-of-attacks/#respond Thu, 10 Feb 2022 12:50:23 +0000 https://www.dynatrace.com/news/?p=48562 Dynatrace news

Dynatrace Application Security detects and blocks attacks automatically in real time

Dynatrace adds real-time attack protection to the Dynatrace Application Security module. Based on code-level insights and transaction analysis, attacks can be detected and blocked without configuration, achieving a perfect OWASP benchmark score for injection attacks—100% accuracy and zero false positives.

The post Dynatrace Application Security automatically detects and blocks attacks in real time appeared first on Dynatrace blog.

]]>
Dynatrace news

Dynatrace Application Security detects and blocks attacks automatically in real time

In today’s world, the speed of innovation is key to business success. Cloud-native technologies, including Kubernetes and OpenShift, help organizations accelerate innovation and drive agility. Unfortunately, they also introduce risk.

One key element for securing applications in modern environments is vulnerability management. Static Application Security Testing (SAST) solutions are a traditional way of addressing this. They are part of continuous delivery pipelines and examine code to find vulnerabilities. But this approach doesn’t work in new environments and today’s container security scanners fail to provide comprehensive answers to new security threats. That’s why Dynatrace added Application Security to its platform, powered by full production insights and enabling automatic vulnerability management.

Real-time attack protection isn’t the same as fixing security problems

There is another critical aspect that needs to be addressed: how do you protect applications against attacks that exploit vulnerabilities while DevSecOps teams simultaneously work to resolve those issues in the code?

Without real-time attack protection in place, the only possible next step is forensics after the attack to investigate what happened. In the worst case, you have to inform customers and the public about security breaches and stolen data. As important as this is, current approaches to attack detection also leave gaps:

WAFs (web application firewalls) generate massive continuous configuration efforts, create false positives, and don’t cover unknown attacks.

WAFs protect the network perimeter. They monitor, filter, or block HTTP traffic. Compared to intrusion detection systems (IDS/IPS), WAFs are focused on the application traffic. While WAFs work great for some scenarios, they have significant weaknesses for others:

  • WAFs are rule-based. Configuring rules and potential permutations requires significant effort and makes it nearly impossible for teams to keep up with new threats and highly dynamic application landscapes.
  • WAF rules produce false positives that can potentially block valid requests. So they need to be continuously improved.
  • WAFs are often managed by dedicated teams, which creates complexity in end-to-end DevSecOps collaboration.
  • As WAFs are rule-based, they are not able to detect unanticipated attacks.
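The false-positive problem is easy to demonstrate: a signature rule broad enough to catch injection keywords will also match harmless text. An illustrative toy rule, far simpler than real WAF rule sets:

```python
import re

# A naive signature rule in the spirit of a WAF (illustrative only).
SQLI_RULE = re.compile(r"\b(select|union|drop)\b", re.IGNORECASE)

def blocked(request_body: str) -> bool:
    """Block any request body matching the signature."""
    return bool(SQLI_RULE.search(request_body))

# A real injection attempt is caught...
attack = "id=1 UNION SELECT password FROM users"
# ...but so is a harmless sentence mentioning the same words: a false positive.
legit = "Please select the union-branch office from the drop-down menu"
```

Tightening the rule to avoid the false positive risks letting real attacks through, which is why rule maintenance never ends.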

Current Runtime Application Self-Protection (RASP) solutions don’t live up to their promise, nor do they work well in enterprise environments.

RASP solutions sit in or near applications and analyze application behavior and traffic. When issues are detected, RASP solutions can identify and block individual requests. While the promise of RASP is compelling, existing solutions don’t live up to expectations:

  • RASP solutions rely on agent technology, which introduces deployment challenges across large-scale, heterogeneous, and highly dynamic environments.
  • For most enterprises, using a RASP solution means running multiple agents on their production systems, potentially creating risk due to incompatibilities.
  • A key requirement for agent technology is avoiding a negative impact on performance. Existing RASP solutions often introduce significant overhead, which negatively impacts application performance and customer experience.
  • RASP solutions may lack the precision required to confidently apply automatic blocking of attacks.

Dynatrace Application Security adds real-time attack detection and protection

Our customers know that Dynatrace resolves the challenges associated with RASP solutions. Our platform and OneAgent® technology support highly automated deployments with minimal overhead. They provide intelligence and automation for the world’s largest applications and environments every day.

We’re happy to announce that Dynatrace has added real-time attack detection and protection to our Application Security module.

To explain this new capability, let’s look at an example you may have stumbled upon: Log4Shell.

When Log4Shell became public, Dynatrace Application Security customers already had an advantage: literally 10 minutes after information about this vulnerability hit the wire, Dynatrace customers were notified if they had an issue, how severe the issue was, and where to start remediation most effectively.

Of course, as with all vulnerability management solutions, there was and is the risk that vulnerabilities can be exploited while DevSecOps teams are working to fix them.

With the new ability to identify and block attacks, Dynatrace Application Security can protect your applications from the very beginning. Dynatrace now detects attacks like Log4Shell automatically in real time, with no configuration required.

Dynatrace can identify and block attacks in real time, using code-level insights to pinpoint the location and underlying vulnerability, and topology information to assess the impact of the attack.

100% accuracy and zero false positives

With transaction analysis and code-level insights, Dynatrace detects whenever user-generated inputs are sent to vulnerable application components without sanitization. With this approach, you can identify SQL injection attacks, command injection attacks, and JNDI attacks like Log4Shell or the H2 vulnerability.

This means that Dynatrace doesn’t rely on vulnerability databases but is rather able to identify and block such attacks automatically even if they are exploiting unknown weaknesses. A perfect OWASP benchmark score for injection attacks—100% accuracy and zero false positives—impressively proves the precision of our approach.
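The detection described above, flagging user-controlled input that reaches a sensitive sink without sanitization, is the classic taint-tracking idea. Here is a toy sketch of the concept, not Dynatrace's implementation, and the sanitizer is deliberately simplistic:

```python
class Tainted(str):
    """Marks a value as user-controlled until it is sanitized."""

def sanitize(value: str) -> str:
    # Deliberately simplistic: keep only characters a SQL parser
    # cannot abuse. The result is a plain (untainted) str.
    return "".join(ch for ch in value if ch.isalnum() or ch in " _-")

def execute_sql(query):
    """A 'sink': refuse any tainted value that reaches it unsanitized."""
    if isinstance(query, Tainted):
        raise RuntimeError("blocked: tainted input reached the SQL sink")
    return f"executed: {query}"

user_input = Tainted("1; DROP TABLE users")
```

Because the check looks at how data flows rather than at known attack strings, it blocks the malicious input even if the specific exploit has never been seen before.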

This is one step towards delivering the promise of runtime application self-protection. We will further enhance the detection and blocking capability of Dynatrace to cover additional attack types, so stay tuned for updates!

How to get started

Real-time attack detection and blocking for Java will be available in the next 120 days.

  • If you’re already a Dynatrace customer and want to start using the Application Security module, just select Application Security from the main menu in the Dynatrace web UI.
  • If you’re not using Dynatrace yet, it’s easy to get started in under 5 minutes with the Dynatrace free trial.

For more information, visit our website to watch the demo or read our previous Application Security blog posts. To learn more, see Application Security in Dynatrace Documentation.

The post Dynatrace Application Security automatically detects and blocks attacks in real time appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/automatic-detection-and-blocking-of-attacks/feed/ 0
Dynatrace launches DevSecOps partner integrations for context-aware adaptive automation https://www.dynatrace.com/news/blog/devsecops-partner-program-for-context-aware-adaptive-automation/ https://www.dynatrace.com/news/blog/devsecops-partner-program-for-context-aware-adaptive-automation/#respond Thu, 10 Feb 2022 12:50:16 +0000 https://www.dynatrace.com/news/?p=48554 Dynatrace news

Dynatrace launches DevSecOps partner integrations for context-aware adaptive automation

Disparate toolchains and manual tasks that create bottlenecks in the software development lifecycle compromise DevOps teams’ ability to release software faster and with less risk. Now, with the Dynatrace Cloud Automation module and integrations, plus an ecosystem of best-in-breed DevSecOps partners, teams get deep and broad observability, run-time application security, and advanced AIOps to automate toolchains for delivery and remediation.

The post Dynatrace launches DevSecOps partner integrations for context-aware adaptive automation appeared first on Dynatrace blog.

]]>
Dynatrace news

Dynatrace launches DevSecOps partner integrations for context-aware adaptive automation

While digital transformation initiatives have obvious advantages for organizations, they also bring growing complexity to technology and digital services teams. The need for automation and orchestration across the software development lifecycle (SDLC) has increased, but many DevOps and SRE (site reliability engineering) teams struggle to unify disparate tools and cut back on manual tasks. As a result, teams often struggle to advance digital transformation because they’re preoccupied with cobbling together their toolsets. Some organizations even struggle to meet their SLOs (service-level objectives) or find themselves on the verge of penalty payments that undercut their ability to achieve critical business standards.

From an organizational point of view, one can understand digital transformation as a maturity trend where organizations evolve from a toolchain approach to a platform approach. Gartner® states that by 2023, “70% of organizations will use value stream management to improve flow in the DevOps pipeline, leading to faster delivery of customer value.”¹

Platforms allow teams to focus on Customer Value

Observability and AIOps help drive automated delivery and operations processes

Dynatrace provides developers, Security, DevOps, and SRE teams with an integrated, end-to-end observability and application security platform that delivers advanced AIOps capabilities to support organizations from planning and execution to the monitoring of pre-production and production systems. The Dynatrace artificial intelligence engine, Davis, and our Cloud Automation module adaptively trigger these solutions during required steps in the SDLC.

As a result, development teams:

  • Reduce the time to production from 10-15 days to less than 60 minutes. This reflects typical use cases that many organizations face when attempting to integrate disparate tools across departments.
  • Reduce the time from outage discovery to resolution from 60 minutes to 9 seconds. Unification of development tools, extensive automation across the SDLC, and orchestration to adaptively detect, analyze, and react to issues makes this possible.

New out-of-the-box integrations with ecosystem partners

The Dynatrace Cloud Automation module and the Dynatrace Software Intelligence Hub ecosystem provide out-of-the-box integrations with key DevSecOps alliance and solutions partners, all connected and configured with just a few clicks. Now, Security, DevOps, and SRE teams can automate their delivery pipeline. For example, you can:

  • Integrate with Jira in your planning phase.
  • Verify the quality of your application and apply SLO-driven quality gates in conjunction with the load testing tool NeoLoad, or chaos testing tools like Gremlin and ChaosNative.
  • Scan for security vulnerabilities with Snyk and prevent vulnerabilities from making it into production with security quality gates.
  • Keep your team up to date about the delivery progress with notifications via Slack.
  • Connect to JFrog Pipelines, Atlassian Bitbucket, GitLab, or Azure DevOps to automatically deploy applications into various stages, from staging and testing to production.
  • Observe the behavior of your application and, if problems occur, automatically initiate incident management actions with PagerDuty, xMatters, or OpsGenie.
  • Toggle feature flags with LaunchDarkly and Split to proactively optimize application performance.
  • Optimize your Kubernetes clusters to maximize service performance with Akamas.
Out-of-the-box integrations with key DevSecOps alliance and solutions partners, all connected and configured with just a few clicks.
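The SLO-driven quality gates mentioned above boil down to a simple idea: compare a build's metrics against explicit objectives and block promotion on any violation. A minimal sketch in Python; the metric names and thresholds are illustrative, not Cloud Automation's actual gate definition:

```python
# Hypothetical SLOs for a deployment quality gate.
slos = {
    "error_rate_pct":    {"max": 1.0},
    "p95_latency_ms":    {"max": 300},
    "test_coverage_pct": {"min": 80},
}

def evaluate_gate(metrics, slos):
    """Return (passed, violations) for a build's measured metrics."""
    violations = []
    for name, bounds in slos.items():
        value = metrics[name]
        if "max" in bounds and value > bounds["max"]:
            violations.append(f"{name}={value} exceeds {bounds['max']}")
        if "min" in bounds and value < bounds["min"]:
            violations.append(f"{name}={value} below {bounds['min']}")
    return (not violations, violations)
```

A pipeline step would run this after load or chaos tests and promote the build only when the gate passes.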

The Dynatrace DevSecOps partner ecosystem includes best-in-class solutions like:

  • Azure DevOps
  • Atlassian Bitbucket
  • JFrog Pipelines
  • GitLab
  • Jenkins
  • Slack
  • Jira Software
  • Gremlin
  • ChaosNative
  • NeoLoad
  • Akamas
  • PagerDuty
  • xMatters
  • OpsGenie
  • LaunchDarkly
  • Split

Dynatrace Cloud Automation ensures automation for the entire SDLC

The Dynatrace Cloud Automation module is available as a SaaS instance to all Dynatrace SaaS and Managed customers. If you’re already a Dynatrace customer, enabling the Cloud Automation module is easy—simply reach out to your Dynatrace product specialist or our sales experts.

If you would like expert guidance and support throughout your company’s digital transformation, our Autonomous Cloud Enablement (ACE) team can help translate your strategic vision into an executable action plan.

New to Dynatrace?

Visit our free trial page for a free 15-day Dynatrace trial and learn how Dynatrace can help your business. You may also read recent blog posts about Cloud Automation:

__________

¹ Gartner, The Future of DevOps Toolchains Will Involve Maximizing Flow in IT Value Streams, Manjunath Bhat, Daniel Betts, et al, June 4, 2021

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

The post Dynatrace launches DevSecOps partner integrations for context-aware adaptive automation appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/devsecops-partner-program-for-context-aware-adaptive-automation/feed/ 0
Seamless AI-powered observability for multicloud serverless applications https://www.dynatrace.com/news/blog/seamless-ai-powered-observability-for-serverless/ https://www.dynatrace.com/news/blog/seamless-ai-powered-observability-for-serverless/#respond Wed, 09 Feb 2022 12:55:13 +0000 https://www.dynatrace.com/news/?p=48503 Dynatrace news

Seamless AI-powered observability for multicloud serverless applications

Serverless applications often consist of hundreds of loosely coupled services from multiple disparate sources, which can make it hard to ensure observability and automate tasks. Dynatrace has extended its end-to-end observability and advanced AIOps capabilities for the most widely used serverless services across major cloud vendors, including AWS, Microsoft Azure, and Google Cloud. As a result, DevOps and site reliability engineering (SRE) teams can automatically analyze, troubleshoot, and optimize serverless applications to drive innovation at scale.

The post Seamless AI-powered observability for multicloud serverless applications appeared first on Dynatrace blog.

]]>
Dynatrace news

Seamless AI-powered observability for multicloud serverless applications

Cloud vendors such as Amazon Web Services (AWS), Microsoft, and Google provide a wide spectrum of serverless services for compute and event-driven workloads, databases, storage, messaging, and other purposes. Engineers often choose best-of-breed services from multiple sources to create a single application. It’s becoming increasingly difficult, however, to get end-to-end visibility and real-time insights into these heavily distributed, complex environments. With the increase of interconnected functions and other services, end-to-end traceability becomes essential.

AI-powered automation and deep, broad observability for serverless architectures

Dynatrace extends deep and broad observability and advanced AIOps capabilities to cover the most important serverless services. In addition to existing support for AWS Lambda, this support now covers Microsoft Azure Functions and Google Cloud Functions as well as managed Kubernetes environments, messaging queues, and cloud databases across all major cloud providers.

This enables your DevOps teams to get a holistic overview of their multicloud serverless applications.

Multicloud serverless application dashboard at a glance
Fig. 1 Multicloud serverless application dashboard at a glance

Have a look at the full range of supported technologies.

Tracing becomes simple thanks to an easy and extensible approach that leverages existing Dynatrace technology, such as PurePath® distributed tracing for end-to-end, automatic, code-level visibility, and Davis, the Dynatrace AI engine, for root-cause analysis. This enables proactive AI-driven analysis and easy troubleshooting in serverless scenarios.

In addition, Davis provides automatic alerting for service-to-service communication problems involving queues and other event systems. This, in turn, helps DevOps teams pinpoint common problem patterns in their serverless functions rather than having to search across the entire event-driven architecture.

Queue anomaly automatically detected by the Davis AI engine
Fig. 2 Queue anomaly automatically detected by the Davis AI engine

Easy and effortless FaaS insights with a single line of code

Because FaaS services limit the ability to run third-party agents, with restrictions on executing third-party tools and limited access to the underlying infrastructure, open observability standards such as OpenTelemetry are increasingly important in overcoming the hurdles of instrumentation.

Dynatrace uses OpenTelemetry and expands it by adding a language-specific Dynatrace exporter to unlock PurePath distributed tracing capabilities such as automatic service detection and analysis.

Using OpenTelemetry typically requires a fair amount of boilerplate code for initialization and basic instrumentation. While cloud vendors continue to invest in open standards (for example, AWS Distro for OpenTelemetry and Azure Monitor OpenTelemetry Exporter for .NET, Node.js, and Python applications), a lot of setup effort is still required to use them.

To optimize the developer experience, Dynatrace provides FaaS libraries, making it as easy as adding a single line of code to your functions to enable OpenTelemetry-based tracing.

With expanded tracing across your entire stack, full end-to-end visibility enables you to understand and deeply analyze the impact of the serverless tiers in your applications. The example below shows a trace spanning multiple serverless services, including functions, cloud queues, serverless databases, and application services.

End-to-end distributed trace including Azure Functions
Fig. 3 End-to-end distributed trace including Azure Functions

New to Dynatrace?

Within the next 90 days, all enhancements mentioned in this blog post will be available to all Dynatrace customers. Stay tuned for updates.

Visit our trial page for a free 15-day Dynatrace trial.

To learn more about how Dynatrace can help your business, visit Dynatrace.com and read our recent blogs.

The post Seamless AI-powered observability for multicloud serverless applications appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/seamless-ai-powered-observability-for-serverless/feed/ 0
Software intelligence as code enables tailored observability, AIOps, and application security at scale https://www.dynatrace.com/news/blog/software-intelligence-as-code-for-tailored-observability/ https://www.dynatrace.com/news/blog/software-intelligence-as-code-for-tailored-observability/#respond Wed, 09 Feb 2022 12:55:10 +0000 https://www.dynatrace.com/news/?p=48505 Dynatrace news

Software intelligence as code enables tailored observability, AIOps, and application security at scale

Dynatrace enhances API endpoints, its open-source command line interface, and cloud automation configurability to enable organizations to apply observability, AIOps, and application security as code. This enables developers to easily incorporate software intelligence capabilities into their applications' lifecycle and apply service-level objectives (SLOs) for critical metrics, including performance, quality, and security, while adhering to operations standards. With this approach, Dynatrace customers can reduce application onboarding time from hours to just a few minutes.

The post Software intelligence as code enables tailored observability, AIOps, and application security at scale appeared first on Dynatrace blog.

]]>
Dynatrace news

Software intelligence as code enables tailored observability, AIOps, and application security at scale

One of the primary drivers behind digital transformation initiatives is the desire to streamline application development and delivery to bring higher quality, more secure software to market faster.

To accomplish this, organizations have widely adopted DevOps, which encompasses significant changes to team culture, operations, and the tools used throughout the continuous development lifecycle.

More recently, teams have begun to apply DevOps best practices to infrastructure automation, giving developers a more active role with GitOps as an operational framework. Modern infrastructure needs to be elastic, and GitOps approaches automate the provisioning of infrastructure and applications using Git, an open-source version control system that provides change processes, including reviews and approvals. Key components of GitOps are declarative infrastructure as code, orchestration, and observability.
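The reconciliation idea behind GitOps can be sketched in a few lines: a controller diffs the declarative state stored in Git against what is actually running and derives the actions needed to converge the two. All names and structures below are illustrative assumptions, not any particular tool's API:

```python
# Illustrative sketch of GitOps-style reconciliation: compare the declared
# state (checked into Git) with the observed state and compute a plan.
# Hypothetical example, not the API of any specific GitOps tool.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions that would bring `actual` in line with `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name} -> {spec}")
        elif actual[name] != spec:
            actions.append(f"update {name} -> {spec}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

desired = {"web": {"replicas": 3}, "worker": {"replicas": 2}}  # from Git
actual = {"web": {"replicas": 2}, "legacy": {"replicas": 1}}   # observed
plan = reconcile(desired, actual)
```

Because the loop only ever moves the system toward the declared state, reviews and approvals on the Git side become the change process for the infrastructure itself, which is exactly why observability of the resulting state matters.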

Observability is required for effective collaboration and automation

Site Reliability Engineering (SRE) relies on observability and the automated setup of observability to find answers to questions like, “Did my deployment work?”, “Did the change improve our users’ experience?”, or “Did the last update cause the application issue or was it something else?” But this is hard to achieve at scale:

  • Development teams need specific insights into the microservices they are responsible for, reflecting particular metrics, dashboards, custom alerts, service-level objectives (SLOs), or even automatic remediation steps. But setting up the required tooling requires in-depth knowledge and causes massive effort if done manually.
  • Operations and observability teams can’t provide the required customizations for hundreds of other teams. If they aren’t able to provide an automated self-service approach, they will not only fail to provide observability, they’ll fail to establish organizational standards at scale.

Many observability solutions don’t support an “as code” approach. They require manual effort or might even render automated approaches impossible due to:

  • Missing or limited API support for the configuration of the observability platform.
  • Missing capabilities or lack of configuration templates that effectively handle configuration dependencies.
  • Required third-party tools that create additional complexity and massive effort during configuration, maintenance, or automation at scale.

Because of these issues, developers often still lack control over the behavior of their monitoring platform. Configurations, such as custom metrics, service-level indicators (SLIs), SLOs, dashboards, and alerting rules are often created manually without central management and don’t meet corporate requirements.

Dynatrace enables software intelligence as code

Dynatrace uniquely provides software intelligence as code by combining observability, AIOps (AI for IT operations), and application security. Organizations that embrace GitOps can rely on automated software intelligence and bring new features to market faster with higher quality by ensuring insights and common repeatable standards and goals. This enables effective DevSecOps collaboration, as well as observability-driven automation against all critical metrics (speed, security, stability, availability, productivity, and business metrics) at enterprise scale.

As a result, Dynatrace customers can reduce application onboarding time from hours to just a few minutes. Considering that large organizations often have hundreds of applications in ever-changing multicloud environments, this is a massive accelerator and creates a foundation for improved cross-functional collaboration.

Dynatrace provides powerful API endpoints for automated operations and observability so that you can configure the Dynatrace platform at scale. Configurations can be managed centrally as a single source of truth for easier revision, including versioning support.

SRE teams can easily provide templates for customized configuration of specific components, for example, tailored dashboards, metrics, alerts, SLOs, or remediation steps. Application and DevOps teams can use and adapt these templates to get specific insights and drive automation for their applications while adhering to company standards.
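As a hedged illustration of this template approach (the field names are assumptions for the sketch, not the exact Dynatrace SLO schema), an SRE-owned template might be specialized per service and then pushed to the platform's configuration API:

```python
# Hypothetical sketch of "observability as code": an SRE team owns a shared
# template; application teams specialize it per service while inheriting
# organizational defaults. Field names here are illustrative assumptions.

SLO_TEMPLATE = {
    "name": "{service}-availability",
    "target": 99.5,          # org-wide default; teams may tighten it
    "warning": 99.8,
    "timeframe": "-30d",
}

def render_slo(service: str, overrides=None) -> dict:
    """Fill the shared template for one service, applying team overrides."""
    slo = {k: (v.format(service=service) if isinstance(v, str) else v)
           for k, v in SLO_TEMPLATE.items()}
    slo.update(overrides or {})
    return slo

# The checkout team tightens the availability target but keeps the rest:
checkout_slo = render_slo("checkout", {"target": 99.9})
```

The rendered payload would then be versioned in Git and applied through the platform's API, giving a single source of truth with review and rollback for monitoring configuration.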

Teams can easily define relevant SLOs, build them into their GitOps flows, and configure them at scale, getting required insights while adhering to organizational standards.

Dynatrace further extends functionality to easily customize observability, application security, and AIOps as code with the following upcoming enhancements:

  • New API endpoints that provide granular observability customization for containers and processes, web and mobile applications, server-side services, and much more. This builds on existing functionality, including configurable dashboards and business analytics via API.
  • Additional API endpoints that extend AIOps configuration, enabling DevOps teams to fine-tune anomaly detection and alerting based on management-zone permissions, which enable a secure approach to self-service.
  • The open-source command-line interface for monitoring as code with Dynatrace gains new functionality that empowers SRE teams to offer self-service SLO management to DevOps teams.
  • Application security capabilities can be unlocked via API, activating automatic vulnerability management at scale.
  • Additional Dynatrace Cloud Automation integrations-as-code enable the orchestration of DevOps toolchains, as well as the automation of these toolchains based on observability and security metrics.

How to get started

All Dynatrace enhancements mentioned in this blog post will be available within the next 90 days.

If you are a Dynatrace customer and want to get started, you’ll find more information and first steps on GitHub. Otherwise, contact our Services team.

If you’re not using Dynatrace yet, it’s easy to get started in less than five minutes with the Dynatrace free trial.

The post Software intelligence as code enables tailored observability, AIOps, and application security at scale appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/software-intelligence-as-code-for-tailored-observability/feed/ 0
Log4Shell highlights need for secure digital transformation with observability, vulnerability management https://www.dynatrace.com/news/blog/log4shell-highlights-need-for-secure-digital-transformation-with-observability-vulnerability-management/ https://www.dynatrace.com/news/blog/log4shell-highlights-need-for-secure-digital-transformation-with-observability-vulnerability-management/#respond Tue, 08 Feb 2022 21:10:30 +0000 https://www.dynatrace.com/news/?p=48476 Dynatrace news

Avisi uses Dynatrace to find vulnerabilities in production.

The Log4Shell vulnerability highlighted the importance of developing a secure digital transformation strategy. Modern observability, combined with vulnerability management, helped Avisi keep its customers secure as they digitally transform.

The post Log4Shell highlights need for secure digital transformation with observability, vulnerability management appeared first on Dynatrace blog.

]]>
Dynatrace news

Avisi uses Dynatrace for Log4Shell
Avisi uses Dynatrace to find vulnerabilities in production.

In December 2021, a security vulnerability known as Log4Shell emerged with force.

It left the applications, systems, and IT infrastructure of millions of organizations open to widespread exploitation. This zero-day vulnerability enables a remote attacker to take control of a device or Internet-based application if the device or app runs certain versions of Log4j 2, a popular Java library.

In the ensuing hours and days, Log4Shell became a showstopper for many organizations, requiring them to take devices and applications offline to prevent malicious attackers from gaining access to networks and sensitive data.

For Avisi, a software development and cloud services company in the Netherlands, its Log4Shell response was immediate and automatic.

When Avisi’s IT team first learned about Log4Shell from a CVE RSS feed on December 10, Jeroen Veldhorst, Avisi’s chief technology officer, immediately consulted their Dynatrace dashboards.

“Dynatrace gave us an overview of all the places where we used Log4j 2 and might be vulnerable,” Veldhorst said. With Dynatrace Application Security, the Avisi team resolved Log4Shell on all their systems before they went home that night, and no one worked over the weekend.

Ultimately, this precise observability into affected systems enabled Avisi and its customers to pursue secure digital transformation, innovating quickly without sacrificing software quality and security.

Log4Shell: What’s at stake for Avisi and its customers

Avisi presides over a complex and highly changeable cloud environment for itself and its customers. It operates 55 Kubernetes clusters for large corporate and public-sector clients, with as many as 600 applications running on top of each cluster.

Many of Avisi’s customers develop custom applications for their industries, including financial services, transportation, and healthcare. The applications include custom code and, in some cases, sensitive data. When Log4Shell emerged, it put that data at risk.

As a service provider, Avisi is also subject to compliance with regulations such as the General Data Protection Regulation, or GDPR. Some of Avisi’s customers, such as those in the financial industry, “are quite strict on all the processes,” Veldhorst says. “They need certification that risks are mitigated as soon as possible, so they can trust the system.”

How Dynatrace Application Security changed the game for Avisi’s Log4Shell response

Like most technology organizations, Veldhorst’s team used to spend a lot of time manually tracking vulnerabilities with numerous lists, tools, and scans. “If you have to identify a vulnerability manually, you have to know all the components it consists of and what other kinds of attack vectors are there,” Veldhorst says.

The Avisi team’s experience aligns with recent Dynatrace research, which found that nearly 60% of organizations spend the largest amount of time “ensuring security vulnerabilities are detected and eliminated quickly.” While many scanning tools and manual methods are effective, they are designed to detect vulnerabilities earlier in the lifecycle. For existing code already in production, this approach cannot detect whether newly published vulnerabilities are exposed.

Unlike a traditional approach, Dynatrace automatically identified the Log4Shell-vulnerable systems in Avisi’s production environment and provided Veldhorst’s team with a prioritized list of systems to remediate first.

“Since Dynatrace scans our platform continuously, it could tell us if there was a vulnerability [in production],” Veldhorst said.
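One building block of such continuous scanning is matching detected library versions against a published affected range. The sketch below approximates the range reported for CVE-2021-44228 (Log4j 2.0 through 2.14.1) and is illustrative only, not an authoritative advisory or Dynatrace's actual matching logic:

```python
# Simplified sketch of vulnerability-range matching. Real scanners also
# handle beta/rc qualifiers, usage context, and actual exposure; the range
# below approximates CVE-2021-44228 (Log4Shell) for illustration only.

def version_key(version: str) -> tuple:
    """Turn '2.14.1' into (2, 14, 1) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def is_affected(version: str, first: str = "2.0", last: str = "2.14.1") -> bool:
    """True if `version` falls inside the inclusive affected range."""
    return version_key(first) <= version_key(version) <= version_key(last)

# Continuously checking detected versions yields a remediation shortlist:
detected = ["2.11.0", "2.14.1", "2.17.1"]
flagged = [v for v in detected if is_affected(v)]
```

Tuple comparison matters here: a naive string comparison would rank "2.9" above "2.14", which is exactly the kind of subtle mistake that makes manual tracking error-prone.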

Code-level visibility shows what matters—and what can wait

Veldhorst noted that it can be difficult to identify whether a vulnerable component is truly in use in the environment. Prior to turning to a modern observability approach, the team would waste precious time fixing low-priority instances, even if the affected library was used only for testing but not in production.

However, because of its code-level visibility, Dynatrace revealed where Avisi’s systems used the Log4j 2 application programming interfaces and code, indicating which systems required immediate attention. In some cases, “Dynatrace let us know [Log4j 2] was in there, but that it wasn’t a priority issue,” Veldhorst said.

As a result, what would have taken days, weeks, or even months to address using traditional methods, Veldhorst’s team resolved in hours with no costly fallout or follow-up. For their regulation-bound clients, Veldhorst used Dynatrace dashboards to demonstrate they had no Log4Shell-vulnerable systems.

“Dynatrace helps us to be in control and to know we’re secure while not having to spend a ton of money and effort to achieve this control,” Veldhorst said. This capability of control fits with Avisi’s philosophy of embracing change.

Secure digital transformation: Embracing change

Avisi’s services benefit its customers that are on the road to secure digital transformation and use cloud-native technologies to get there. These organizations have come to recognize that they need to keep pace with innovation cycles that have sped up. At the same time, they don’t want to sacrifice software quality or security.

Avisi’s customers “want to iterate as fast as possible and change the software as quickly as possible,” Veldhorst says. “[Even while they are] changing they want everything to perform perfectly. That’s why we developed our Avisi managed environment using Dynatrace for performance monitoring and application security.”

Veldhorst notes that some companies mistakenly equate security with not changing. “We are change-driven, not state-driven,” he says. “Companies sometimes think, ‘We are going to fix this, put it in production, and hope it won’t change for the next three years.’ But if you don’t change, that is the biggest risk of all.”

Companies must embrace the notion that digital transformation is not just about moving faster or digitizing paper-based or otherwise manual processes. Rather, digital transformation is about embracing the reality that change is the new constant in today’s business landscape. Shoring up systems against performance problems and security vulnerabilities requires embracing dynamism rather than expecting things to stay the same.

Having an observability platform that can capture this dynamism and respond intelligently to changing states in real time is now key to success in an ever-changing landscape. This puts IT teams—whether infrastructure, development, operations, or DevOps—on the frontlines of ensuring business success.

For more about how Dynatrace helps organizations address Log4Shell, check out the Dynatrace Log4Shell resource center.

The post Log4Shell highlights need for secure digital transformation with observability, vulnerability management appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/log4shell-highlights-need-for-secure-digital-transformation-with-observability-vulnerability-management/feed/ 0
Dynatrace Managed release notes version 1.234 https://www.dynatrace.com/news/blog/dynatrace-managed-release-notes-version-1-234/ https://www.dynatrace.com/news/blog/dynatrace-managed-release-notes-version-1-234/#respond Tue, 08 Feb 2022 11:47:17 +0000 https://www.dynatrace.com/news/?p=48469 Dynatrace news

Announcements Dynatrace Managed on CentOS/RHEL/Oracle Linux 7.6. Starting with this release, you can no longer install new clusters and new nodes on CentOS/RHEL/Oracle Linux 7.6. See End-of-support news for details. Kubernetes events integration in Dynatrace Managed For full observability into your Kubernetes events, automatic Davis analysis, and custom alerting, you need to enable Log Monitoring v2 and […]

The post Dynatrace Managed release notes version 1.234 appeared first on Dynatrace blog.

]]>
Dynatrace news

Announcements

Dynatrace Managed on CentOS/RHEL/Oracle Linux 7.6

Starting with this release, you can no longer install new clusters and new nodes on CentOS/RHEL/Oracle Linux 7.6. See End-of-support news for details.

Kubernetes events integration in Dynatrace Managed

For full observability into your Kubernetes events, automatic Davis analysis, and custom alerting, you need to enable Log Monitoring v2 and Kubernetes event integration.

New features and enhancements

New dtCookie format and Max user actions per minute limit

With Dynatrace version 1.234, the new dtCookie format as indicated in Cookies will be automatically enabled for all customers. This was previously announced in Dynatrace SaaS release notes version 1.215. If an environment still uses the old dtCookie format, the maximum user actions per minute limit will also become effective.

Actions required

If you haven’t previously enabled the new cookie format and your environment exceeds the default limit of 3,500 user actions per minute, you can adjust Maximum user actions per minute in Cluster Management Console > Environments in the Cluster overload prevention settings section.

Adjusting user action limit in Cluster Management Console

Environment tags

Environment tags are now displayed on the Environment Details page in the Cluster Management Console.

Performance improvements

To take advantage of performance improvements, the latest security-vulnerability enhancements, and bug fixes, we’ve upgraded the JRE for the following cluster node components: Cassandra and Elasticsearch now use JRE 8u312; other components now use JRE 11.0.13.

OpenID Connect (SSO)

  • Signature validation can be configured for OpenID Connect (SSO).
  • User group assignments are now refreshed every 30 minutes for users signed in with OpenID, keeping user permissions up to date.

See Manage users and groups with OpenID in Dynatrace Managed for details.

The post Dynatrace Managed release notes version 1.234 appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/dynatrace-managed-release-notes-version-1-234/feed/ 0
Dynatrace SaaS release notes version 1.234 https://www.dynatrace.com/news/blog/dynatrace-saas-release-notes-version-1-234/ https://www.dynatrace.com/news/blog/dynatrace-saas-release-notes-version-1-234/#respond Tue, 08 Feb 2022 06:02:07 +0000 https://www.dynatrace.com/news/?p=48387 Dynatrace news

Dynatrace SaaS Release Notes thumbnail

Dynatrace SaaS release notes version 1.234 Announcements TLS 1.0 and 1.1 end-of-support for RUM data Starting with April 2022, Dynatrace is retiring TLS 1.0 and TLS 1.1 for Dynatrace SaaS RUM data. For more details, see TLS 1.0 and 1.1 end-of-support for RUM data. Session Replay masking v1 end-of-life Starting with Dynatrace version 1.238, Session Replay […]

The post Dynatrace SaaS release notes version 1.234 appeared first on Dynatrace blog.

]]>
Dynatrace news

Dynatrace SaaS Release Notes thumbnail

Dynatrace SaaS release notes version 1.234

Announcements

TLS 1.0 and 1.1 end-of-support for RUM data

Starting with April 2022, Dynatrace is retiring TLS 1.0 and TLS 1.1 for Dynatrace SaaS RUM data. For more details, see TLS 1.0 and 1.1 end-of-support for RUM data.

Session Replay masking v1 end-of-life

Starting with Dynatrace version 1.238, Session Replay masking v1 will no longer be supported. For details, check Dynatrace SaaS release notes version 1.233.

Service-level objectives

Dynatrace provides additional information about your SLOs in the Details section of the Service-level objectives page.

Dashboards

You can now specify the resolution for graphs and heatmaps.

Example graph (column chart) with timeframe Last 7 days, resolution 1 day:

Example chart resolution: column chart, timeframe Last 7 days, resolution 1 day

Dynatrace API

To learn about changes to the Dynatrace API in this release, see Dynatrace API changelog version 1.234.

Resolved issues

General Availability (Build 1.234.107)

The 1.234 GA release contains 11 resolved issues (including 1 vulnerability resolution).

Component Resolved issues
Cluster 8 (1 vulnerability)
Application Security 2
User interface 1

Cluster

  • Vulnerability: Updated Log4j in Elasticsearch client to version 2.17.1. (APM-345471)
  • Resolved an issue that was causing alerts for infrastructure (for example, Host CPU) to be generated even when disabled. (APM-348563)
  • Fixed an issue in which the “View PurePath” button on the waterfall view might lead to the wrong PurePath. (APM-347284)
  • Resolved display issues with sparkline and value positioning on “Single value” tile. (APM-348240)
  • Service and database lists now display correct value aggregation. Previously, the last value was displayed instead of the average value under certain conditions (non-recent timeframes). (APM-346952)
  • Fixed a bug where some entitySelector queries led to empty results, depending on the order of the filters. (APM-350114)
  • Fixed a bug that resulted in missing information in the Dynatrace web UI Problems and problem notifications. (APM-348206)
  • Improved the error message shown when attempting to create a service metric with a key in the wrong format. (APM-343455)

Application Security

  • Resolved security monitoring rule issue (property “Management zone” was evaluated incorrectly) that caused vulnerabilities to continuously toggle between `OPEN` and `RESOLVED` states. (CASP-13279)
  • Improved permission handling for Application Security process group overview to allow restricted access to users that do not have global permissions. (CASP-12963)

User interface

  • The “Data explorer” page, “Single value” visualization, no longer crashes when “Last value” is turned on. (APM-346388)

The post Dynatrace SaaS release notes version 1.234 appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/dynatrace-saas-release-notes-version-1-234/feed/ 0
OneAgent release notes version 1.233 https://www.dynatrace.com/news/blog/oneagent-release-notes-version-1-233/ https://www.dynatrace.com/news/blog/oneagent-release-notes-version-1-233/#respond Mon, 07 Feb 2022 07:52:42 +0000 https://www.dynatrace.com/news/?p=48455 Dynatrace news

With this release, the oldest supported OneAgent versions are: Dynatrace ONE Dynatrace ONE Premium 1.215 1.209 z/OS Added tracing support for IMS Fast Path transactions. For details, see Install OneAgent on IMS. Java Added support for reactor-core 3.x Added support for Jedis Redis 4.x Go Added support for Go 1.17 Current Dynatrace OneAgent technology support changes Dynatrace OneAgent 1.233 is the last version to support the […]

The post OneAgent release notes version 1.233 appeared first on Dynatrace blog.

]]>
Dynatrace news

With this release, the oldest supported OneAgent versions are:

Dynatrace ONE Dynatrace ONE Premium
1.215 1.209

z/OS

Added tracing support for IMS Fast Path transactions. For details, see Install OneAgent on IMS.

Java

  • Added support for reactor-core 3.x
  • Added support for Jedis Redis 4.x

Go

  • Added support for Go 1.17

Current Dynatrace OneAgent technology support changes

Dynatrace OneAgent 1.233 is the last version to support the following technologies
  • Node.js 15 for Node.js
    • The vendor de-supported this technology and version as of 2021-06-01

Future Dynatrace OneAgent operating systems support changes

The following operating systems will no longer be supported starting 01 March 2022
The following operating systems will no longer be supported starting 01 April 2022
The following operating systems will no longer be supported starting 01 July 2022
The following operating systems will no longer be supported starting 01 August 2022
The following operating systems will no longer be supported starting 01 October 2022

Past Dynatrace OneAgent technology support changes

Dynatrace OneAgent 1.215 was the last version to support the following technologies
  • OpenTelemetry 0.18.0 for Go
Dynatrace OneAgent 1.217 was the last version to support the following technologies
  • OpenTelemetry 0.19.0 for Go
Dynatrace OneAgent 1.221 was the last version to support the following technologies
  • OpenTelemetry 0.20.0 for Go
  • OpenTelemetry 0.18.x, 1.0.0-rc.0, 1.0.0-rc.3 for Node.js
Dynatrace OneAgent 1.227 was the last version to support the following technologies
  • OpenTelemetry 1.0.0-RC1 for Go
  • OpenTelemetry 1.0.0-RC2 for Go

Past Dynatrace OneAgent operating systems support changes

The following operating systems are no longer supported since 01 June 2021
The following operating systems are no longer supported since 01 July 2021
The following operating systems are no longer supported since 01 September 2021
  • Linux: Red Hat Enterprise Linux CoreOS 4.4
    • x86-64
    • Last compatible version: 1.223
  • Linux: openSUSE 15.1
The following operating systems are no longer supported since 01 October 2021
  • Linux: Google Container-Optimized OS 77 LTS
The following operating systems are no longer supported since 01 December 2021
The following operating systems are no longer supported since 01 February 2022

The post OneAgent release notes version 1.233 appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/oneagent-release-notes-version-1-233/feed/ 0
Common SLO pitfalls and how to avoid them https://www.dynatrace.com/news/blog/common-slo-pitfalls-and-how-to-avoid-them/ https://www.dynatrace.com/news/blog/common-slo-pitfalls-and-how-to-avoid-them/#respond Wed, 02 Feb 2022 19:40:40 +0000 https://www.dynatrace.com/news/?p=48378 Dynatrace news

Today, online services require near 100% uptime. This demand creates an increasing need for DevOps teams to maintain the performance and reliability of critical business applications. Architecting service-level objectives (SLOs), along with service-level agreements and service-level indicators, is a great way for teams to evaluate and measure software performance that stays within error budgets. But […]

The post Common SLO pitfalls and how to avoid them appeared first on Dynatrace blog.

]]>
Dynatrace news

Today, online services require near-100% uptime. This demand creates an increasing need for DevOps teams to maintain the performance and reliability of critical business applications. Architecting service-level objectives (SLOs), along with service-level agreements and service-level indicators, is a great way for teams to evaluate and measure software performance while staying within error budgets. But there are SLO pitfalls. When creating your SLOs, it’s important to avoid these common mistakes, which can cause more headaches for your DevOps teams.

SLO pitfalls

Pitfall 1: SLOs not aligned with your business goals

One common pitfall is creating an SLO that is not aligned with your business goals or a service-level agreement (SLA). This can create an unnecessary distraction and steal time away from critical tasks. For example, a bank’s IT team wants to ensure 99.9% service availability with <50 ms latency over a trailing 30-day period, for an application with no revenue impact. Setting a stringent SLO for an application that’s not business-critical can lead to wasted time and resources when it comes to remediating issues or performing tasks to ensure uptime.

If an SLO is not tied back to a key business objective or external SLAs, it is best to reconsider or recalibrate the objective. The best investment is in managing SLOs for customer-facing, revenue-generating, high visibility applications. For example, constant SLO violations of service availability for the check deposit application would create customer dissatisfaction leading to potential revenue impact.

Pitfall 2: SLOs with no ownership or accountability

When SLOs are violated, who do you call? Who owns it? SLOs created by upper management without buy-in from relevant development, operations, and SRE stakeholders can lead to finger-pointing, blaming, and chaotic war rooms when violations occur. A broken SLO with no owner can take longer to remediate and is more likely to recur compared to an SLO with an owner and a well-defined remediation process.

To avoid orphaned SLOs, ensure there are high levels of collaboration between key stakeholders during the creation of an SLO and that SLOs are vetted, viable, and agreed upon. Establish the relevant service level indicators (SLIs) that need to be monitored, the process for remediating any issues, the relevant tools required, and timeframes for resolution. You should discuss and agree upon all these questions before your team adopts an SLO.

Pitfall 3: Using SLOs reactively vs. proactively

Commonly, teams create SLOs because they are simply following what others in the industry are doing, or because they are considered best practices, but many fail to understand the business objectives those SLOs are tied to. In these organizations, IT teams may not pay attention to SLOs until violations happen, after which individual owners scramble to resolve them. This approach is reactive and erodes the value SLOs bring to an organization in maintaining the health, reliability, and resiliency of an application. Being reactive also does not prevent similar violations from recurring; instead, it takes critical time away from your developers.

To avoid this, start the SLO discussion early in the design process. Push for SLO evaluation to be incorporated into the CI/CD pipeline and not just in production. Ensure error budgets are set up and tracked with alerting and root cause analysis, so development teams can understand and triage issues before they become problems and cause violations.
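Error-budget tracking, as suggested above, can also be sketched in a few lines. The target and sample counts below are hypothetical:

```python
def error_budget_remaining(target: float, good: int, total: int) -> float:
    """Fraction of the error budget left: 1.0 = untouched, <= 0 = SLO violated."""
    allowed_bad = (1 - target) * total
    actual_bad = total - good
    return 1 - actual_bad / allowed_bad

# 99.9% target over 43,200 samples; 20 failures so far in the window:
remaining = error_budget_remaining(0.999, 43_200 - 20, 43_200)
# remaining is about 0.537: roughly half the budget is already burned, so an
# alert at this point lets teams react before the SLO is actually violated.
```

Wiring a burn-rate alert to a value like `remaining` is what turns an SLO from a reactive report into a proactive control.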

Pitfall 4: SLO thresholds that are too high or too low

One of the most common SLO pitfalls is overpromising by setting SLO targets too high or underdelivering by setting SLO targets too low. SLOs are important for evaluating how successful your team is at delivering what has been agreed upon, either in the customer-facing SLA or the internally agreed-upon business objective. If you set SLOs so they are in constant violation or in constant compliance, then they become meaningless and do not help you understand the health of your application.

Let’s take service availability, for example. According to Google researchers who studied G Suite availability, a good availability metric should be meaningful (it captures user experience), proportional (a change in the metric should be proportional to the change in user-perceived availability), and actionable (it gives insight into why the metric is low or high).

A good rule of thumb is this: success against your SLOs should correlate with good customer and user experiences, and violations should represent deteriorating services. For example, setting an SLO with a service availability target of 89% can be problematic, as 11% downtime can impact a significant set of users, while DevOps teams would get no alerts and have no reason to worry about customer impact because their SLOs are within the threshold.

To set meaningful thresholds, work with your relevant stakeholders to establish SLOs that are achievable but also impactful for user experiences. Review with owners to calibrate the SLIs that best capture each specific use case. Tailoring SLOs in this way ensures that the resources you spend on meeting them are used efficiently, drive customer value, and help developers improve their QA and resolution processes.

Pitfall 5: Manual evaluation of SLOs through dashboards and spreadsheets

Developing dashboards and spreadsheets to track SLO performance can be extremely useful for organizing and visualizing your SLOs and SLIs. However, another of the common SLO pitfalls is that many organizations assemble these metrics manually using disparate tools, which can take time from innovation. Simply performing eyeball analytics by looking at multiple dashboards slows down the quality evaluation process and introduces a higher risk of failures.

Continuous and automated release validation is the answer. The ability to automatically evaluate test results, leverage key SLIs from your monitoring tools, and calculate quality scores that can automate the go/no-go decision at every stage of the lifecycle is critical in reducing human error and scaling the QA process. The power to automatically stop bad code in its tracks through an intelligent, data-driven approach is significant for development teams that are constantly constrained by manual processes, yet asked to deliver higher quality software at speed.

An automatic and intelligent approach to creating and monitoring SLOs

Avoiding SLO pitfalls and meeting the challenges of creating SLOs can be frustrating, especially with today’s complex IT processes. However, with adequate planning and high collaboration between Biz, Dev, Ops, and Security teams, stakeholders can be better prepared to establish SLOs that ensure you’re delivering software that’s reliable, resilient, and meets customer expectations.

An observability platform like Dynatrace provides all the SLIs you need to build and calibrate effective SLOs. Dynatrace has SLOs natively built into the platform and can automate the evaluation process to enable continuous release validation. Leveraging a platform like Dynatrace is a great boon for modern IT teams that are resource-constrained but looking to be nimble and agile. When implemented successfully, SLOs can provide numerous benefits to your business, including reducing expensive and time-consuming service outages, eliminating silos, and increasing collaboration.

To get started with SLOs in Dynatrace, download the free trial today.

Download an overview of common SLO pitfalls and how to avoid them.

The post Common SLO pitfalls and how to avoid them appeared first on Dynatrace blog.

Dynatrace SaaS on Azure now Generally Available https://www.dynatrace.com/news/blog/dynatrace-saas-on-azure-ga/ https://www.dynatrace.com/news/blog/dynatrace-saas-on-azure-ga/#respond Wed, 02 Feb 2022 14:00:38 +0000 https://www.dynatrace.com/news/?p=48383 Dynatrace news

Dynatrace SaaS Release Notes thumbnail

In September, we announced the availability of the Dynatrace Software Intelligence Platform on Microsoft Azure as a SaaS solution and natively in the Azure portal. Today, we are excited to provide an update that Dynatrace SaaS on Azure is now generally available (GA) to the public through Dynatrace sales channels.

Organizations are continuing to prioritize digital transformation as they race to keep up with evolving customer demands. Dynatrace SaaS availability on Azure helps the world’s largest organizations achieve this by enabling faster cloud adoption and more effective digital transformation. With this solution, customers can apply Dynatrace’s deep observability, advanced AIOps capabilities, and application security to all applications, services, and infrastructure, out of the box. This enables organizations to tame cloud complexity, minimize risk, and reduce manual effort so teams can focus on driving innovation.

Additional benefits of Dynatrace SaaS on Azure include:

  • No infrastructure investment: Dynatrace manages the infrastructure for you, including automatic visibility, problem detection, and smart alerting across virtual networks, virtual infrastructure, and container orchestration. This means you no longer have to procure new hardware, which can be a time-consuming and expensive process, and you avoid the higher long-term cost of managing an on-prem installation.
  • No operational duties: Dynatrace operates the product for you with auto-discovery of your entire stack, end-to-end, including processes running inside containers. Upgrades are automatic as well – you can control when they happen and Dynatrace does the rest. If any issues do arise on your SaaS tenant, Dynatrace is the first to respond and proactively prevent issues before the customer notices.
  • Security: Data is stored securely in the Dynatrace cloud (powered by Azure). Dynatrace captures all your data, including host and application metrics, basic-network metrics, real-user metrics, mobile metrics, cloud-infrastructure metrics, log metrics, and much more. As such data may contain private or sensitive user information, Dynatrace offers data masking features to assist in complying with data privacy and data protection obligations. All data at rest is stored in Azure Storage and is encrypted and decrypted using 256-bit AES encryption (FIPS 140-2 compliant).
  • SaaS-first innovation: Innovative product updates and features are released to SaaS first and then released to managed deployments later. At Dynatrace, we innovate faster and schedule releases more frequently on SaaS vs. our Managed deployments.
  • Scalability: Dynatrace provides easy and virtually limitless horizontal scalability for SaaS deployments, scaling up as monitoring environments grow by simply adding nodes, with built-in failover and automatic load balancing to ensure optimal resource usage.
  • Reduced time-to-benefit: A traditional on-prem deployment model is time-consuming. You must procure hardware, install the OS on the server, install the application, and configure it. With SaaS, the applications and server now reside in the cloud on our Azure tenant. With a push of a button, Dynatrace automatically provisions and deploys your SaaS tenant on-demand in Azure. This provides major time savings, which teams can spend on other business priorities and initiatives.
  • Azure regions: Dynatrace SaaS on Azure will initially be available in two Azure regions: Azure US East (Virginia) and Azure EU West (Netherlands). More Azure regions will be brought online over time as customer demand grows.

Dynatrace is excited about our strategic partnership with Microsoft and the technology integrations and advancements we have made jointly over the past few years, as well as new ones to be announced. Check out Dynatrace’s listing in the Azure Marketplace to see how easy it is to get started. If you would like to learn more about Dynatrace and Microsoft, check out our webinar for a deep dive on how Dynatrace can modernize your Azure operations.

The post Dynatrace SaaS on Azure now Generally Available appeared first on Dynatrace blog.

Advance DevSecOps practices with a vulnerability management strategy https://www.dynatrace.com/news/blog/advance-devsecops-practices-with-a-vulnerability-management-strategy/ https://www.dynatrace.com/news/blog/advance-devsecops-practices-with-a-vulnerability-management-strategy/#respond Tue, 01 Feb 2022 21:24:44 +0000 https://www.dynatrace.com/news/?p=48367 Dynatrace news

AIOps capabilities, DevOps orchestration, DevSecOps practices

As organizations struggle to combat vulnerabilities in their IT environments, they need real-time data on performance problems and security issues. At the annual conference Dynatrace Perform 2022, the theme is “Empowering the game changers.” In the Advancing DevOps and DevSecOps track, sessions aim to help security pros, developers, and engineers as they brace for new threats that are costly and time-consuming to address.

In this preview video for Dynatrace Perform 2022, I talk to Ajay Gandhi, VP of product marketing at Dynatrace, about how adding a vulnerability management strategy to your DevSecOps practices can be key to handling threats posed by vulnerabilities.

Consider the Log4Shell vulnerability, which emerged in December 2021 and is estimated to have affected hundreds of millions of systems worldwide. The vulnerability is located in Log4j 2, an open-source Apache Java library used to provide logging in a host of front-end and backend applications. If exploited, Log4Shell can grant attackers access to internal networks, leaving networks, applications, and devices susceptible to data theft and malware attacks. Because the Log4j 2 library is used so pervasively, the vulnerability has had a dramatic impact on business.

By integrating runtime vulnerability management into DevSecOps practices, teams can immediately detect and remediate exploitable vulnerabilities like Log4Shell in their environments.

Why DevSecOps practices benefit from vulnerability management

Without a centralized approach to vulnerability management, DevSecOps teams waste time figuring out how a vulnerability affects the production environment and which systems are affected.

A real-time observability platform with code-level application insights can automatically identify vulnerabilities in runtime and production environments. Moreover, modern observability capabilities provide context about activity in an IT environment so teams know what is most critical to address first. As a result, IT teams can quickly prioritize remediation efforts, which can make the difference between a successful and an unsuccessful attack.

“The requirements for vulnerability management have evolved, and Log4Shell has crystallized that,” says Gandhi. “You need more context to be effective in addressing vulnerabilities quickly, precisely, and at scale and being able to prioritize which apps and which code segments need to be addressed first.”

Observability is the game-changer. “What we found is that by combining observability context (which apps are affected and infrastructure monitoring) with security intelligence, Dynatrace AI can prioritize what to focus on first, second, and third and automatically generate a risk assessment. Teams can then identify all affected apps in their environment in real-time.”

A key DevSecOps practice in regard to vulnerability management is not only “shifting left” (moving testing early in the development cycle to identify vulnerabilities) but also “shifting right” (continuously testing software in production to ensure security and quality). As DevSecOps practices mature, teams can benefit from observability that spans the software development cycle to identify vulnerabilities in development and in production.

For our complete Perform 2022 conference coverage, check out our guide.

Register for Perform 2022 today, and check out the Advancing DevOps and DevSecOps track.

The post Advance DevSecOps practices with a vulnerability management strategy appeared first on Dynatrace blog.

Shift left vs shift-right: A DevOps mystery solved https://www.dynatrace.com/news/blog/what-is-shift-left-and-what-is-shift-right/ https://www.dynatrace.com/news/blog/what-is-shift-left-and-what-is-shift-right/#respond Mon, 31 Jan 2022 19:39:49 +0000 https://www.dynatrace.com/news/?p=48286 Dynatrace news

shift-left, shift-right

The DevOps approach to developing software aims to speed applications into production by releasing small builds frequently as code evolves. As part of the continuous cycle of progressive delivery, DevOps teams are also adopting shift-left and shift-right principles to ensure software quality in these dynamic environments.

All this shifting may sound abstract, but I’ll explain how this software verification approach benefits DevOps methods and outcomes—and makes software more reliable.

In DevOps, what is shift-left? And what is shift-right?

To understand shift left and shift right, consider the software development cycle as a continuum, or infinity loop, from left to right. On the left side of the loop, teams plan, develop, and test software in pre-production. The main concern in pre-production on the left side of the loop is building software that meets design criteria. When teams release software into production on the right side of the loop, they make the software available to users. The concern in production is to maintain software that meets business goals and reliability criteria.

shift left, shift right

Shift-left is the practice of moving testing, quality, and performance evaluation earlier in the software development process, toward the "left" side of the DevOps lifecycle. This concept has become increasingly important as teams face pressure to deliver software faster and more frequently with higher quality. Shift-left improves development efficiency and reduces costs by detecting and addressing software defects earlier in the development cycle, before they reach production.

Likewise, shift-right is the practice of performing testing, quality, and performance evaluation in production under real-world conditions. Shift-right methods ensure that applications running in production can withstand real user load while maintaining the same high levels of quality. With shift-right, DevOps teams test a built application to ensure performance, resilience, and software reliability. The goal is to detect and remediate issues that would be difficult to anticipate in development environments.

Both shift-left and shift-right testing have become important components of Agile software development, enabling teams to develop and release software incrementally and reliably but also test software at various points in the lifecycle.

We’ve already had some conversations about shift-left, so let’s take a closer look at shift-right.

Want to learn more about DevOps?

Streamline the way IT operates and enterprises grow with observability and AIOps. Read our DevOps eBook: A Beginner’s Guide to DevOps Basics.

Why shift-right is important

With shift-right, teams can test code in an environment that mimics real-world production conditions that can’t be simulated in development. This practice enables teams to catch runtime issues before customers do. To automate part of the process, teams can use application programming interface calls. Organizations can also apply shift-right testing to code that gets configured or is monitored in the field.

Similar to shift-left testing, the objective of shift-right testing is to fail small and fail fast. The assumption is that problems caught early in the pre-deployment environment are easier to solve than issues caught by customers in live production.

Once established, shift-right becomes part of the continuous feedback loop that characterizes DevOps and more closely aligns development and operations activities.

Shift-right testing is especially useful for organizations practicing progressive delivery, wherein developers release new software features incrementally to minimize the impact of unforeseen issues. Testing in a production-ready environment is a crucial final phase before declaring features ready for prime-time.

Why shift to shift-left and shift-right testing?

The shift-left/shift-right mentality differs in some important ways from how testing is handled in traditional “waterfall” methodologies.

The waterfall method follows a structured process in which requirements are translated into specifications and then into code in a series of handoffs. In this scenario, testing is usually left until a project is ready to be released into production. By waiting to test until the end, teams can miss issues that developers could quickly fix while they are still actively working on a feature. This approach wastes time, is error-prone, and often misses the opportunity to address production-environment issues before deploying.

Shift-left testing can reduce software defects and speed software’s time to market. In a shift-left scenario, teams incorporate testing early, often before any code is written, and throughout development. Rather than testing for functionality, shift-left testing checks that software adheres to the specifications created by the business.

On the other side of the equation, shift-right practices can better ensure production reliability by testing software in production and under real-world conditions. As a result, teams get more comprehensive testing coverage that better addresses user experience concerns.

Why shift-right is critical for microservice architecture

Testing in production is especially important for software built from microservices. The performance of microservices-based applications depends on the responsiveness of individual services, which makes testing in a simulated environment difficult. Shifting right enables teams to observe real-world forces and measure their impact.

Shift-right tests typically cover functionality, performance, failure tolerance, and user experience. Teams often automate such production-environment testing and translate feedback into technical specifications for developers. Testers can isolate issues to the greatest degree possible so teams can fix and tackle improvements in parallel. As an application becomes more stable, teams can start testing and optimizing performance.

Types of shift-right tests

A shift-right approach may enlist various types of test suites. Here are a few your team might find useful.

A/B testing. This method is commonly used in web design. Users are presented with one of two versions of a page and the results are measured to determine which generates a greater response. This type of test is almost always conducted in a production environment so real-world feedback can be gathered.
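Determining which variant "generates a greater response" is usually done with a statistical test rather than a raw comparison of counts. A minimal sketch using a two-proportion z-test (the sample numbers are invented):

```python
from math import sqrt

def ab_z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test statistic for variant B's conversion rate vs. A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = ab_z_score(120, 2400, 165, 2400)   # 5.0% vs. ~6.9% conversion
# |z| > 1.96 suggests a real difference at roughly 95% confidence
```

Running the test on production traffic is what makes the measured difference reflect real users rather than a simulated audience.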

Synthetic monitoring. Another variety of shift-right testing is synthetic monitoring, which is the use of software tools to emulate the paths users might take when engaging with an application. Synthetic monitoring can automatically keep tabs on application uptime and tell you how your application responds to typical user behavior. It uses scripts to generate simulated user behavior for various scenarios, geographic locations, device types, and other variables.
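A synthetic monitor is, at its core, a scripted request plus a pass/fail rule, run on a schedule from multiple locations. A stdlib-only sketch, where the URL and thresholds are placeholders:

```python
import time
import urllib.request

def evaluate(status: int, elapsed_s: float, max_latency_s: float = 2.0) -> bool:
    """Decide whether a single synthetic probe counts as 'up'."""
    return status == 200 and elapsed_s <= max_latency_s

def run_probe(url: str, timeout: float = 5.0):
    """Issue one scripted request and time it."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status, time.monotonic() - start

# A real monitor would run this on a schedule and record the results:
# status, elapsed = run_probe("https://example.com/health")  # placeholder URL
# up = evaluate(status, elapsed)
```

Commercial synthetic tools add scripting of multi-step user journeys, geographic distribution, and device emulation on top of this basic probe-and-evaluate loop.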

Chaos testing. With chaos engineering, developers intentionally “break” the application by introducing errors to determine how well it recovers from disruption. DevOps and IT teams set up monitoring tools so they can see precisely how the application responds to different types of stresses. This test is usually performed in a controlled production environment to minimize the impact on mission-critical systems.
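The "intentionally break the application" idea can be illustrated with a tiny fault-injection wrapper. Real chaos tools act on infrastructure (killing pods, adding network latency); this toy only shows the principle:

```python
import random

def with_faults(func, failure_rate: float = 0.2, seed=None):
    """Wrap a callable so a fraction of calls raise, testing callers' resilience."""
    rng = random.Random(seed)
    def wrapper(*args, **kwargs):
        if rng.random() < failure_rate:
            raise RuntimeError("injected fault")
        return func(*args, **kwargs)
    return wrapper

always_fails = with_faults(lambda: "ok", failure_rate=1.0)
never_fails = with_faults(lambda: "ok", failure_rate=0.0)
# never_fails() returns "ok"; always_fails() raises RuntimeError
```

The monitoring tools mentioned above are what turn an injected fault into a learning: they show whether retries, fallbacks, and alerts behaved as designed.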

Canary releases. This strategy is named for the canaries miners once lowered into coal mines to detect toxic gases. Technology has thankfully rendered this inhumane tactic obsolete, but the term survives to describe a slow rollout of changes to a small subset of instances for testing before applying them to the full infrastructure.
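A canary rollout needs a way to send a small, sticky fraction of users to the new version. One common sketch hashes a user ID into buckets; the bucket count and hash choice here are arbitrary illustrations, not any particular product's behavior:

```python
import hashlib

def route(user_id: str, canary_pct: int) -> str:
    """Deterministically assign a user to 'canary' or 'stable' by hash bucket."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_pct else "stable"

# Sticky: a given user always lands on the same side at a given percentage,
# so a bad canary affects the same small cohort until it is rolled back.
assignment = route("user-42", 10)
```

Raising `canary_pct` in stages (e.g., 1, 5, 25, 50, 100) while watching error rates is the slow rollout the paragraph above describes.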

Blue-green deployment. With a blue-green deployment, an organization runs two nearly identical production environments, shifting users (real or synthetic) between the two as they make small changes to one or the other. This practice is important to shift-right methodology as it can minimize downtime and provide a mechanism for rapid rollback should something go wrong with the latest version.

The application security dividend of shift-right and shift-left

An important benefit of shifting right is improved application security. “Scanning a static image, either in a repository or in a development environment, can’t give you the same rich insights you can get if you observe the application running in production,” a Dynatrace report on security evolution in the cloud notes. “For example, you don’t get to see what libraries are actually called, how they are used, whether a process is exposed to the Internet, or whether a process interacts with sensitive corporate data.”

The rapidly proliferating use of software containers has complicated aspects of cybersecurity. Containers can obscure the processes running in them, and attackers even containerize exploits. Production testing exposes the behavior of container-based software, even if the contents of containers are obscured. Shift-right testing can also be used to test for the presence of “zero-day exploits,” which are attacks that haven’t been seen before.

From a shift-left perspective, security testing during development helps identify vulnerabilities as early in the life cycle as possible, when they are easiest to remediate.

Shift-right done right with full-stack monitoring

Automated full-stack monitoring is an important tool in shift-right testing. It gives developers, operations teams, and testers a way to discover and monitor all requests and processes from all services across sprawling and complex multi-cloud applications, from a single interface. Testers can push deployment information and metadata to the monitoring environment using scripts and track builds, revisions, and configuration changes. The better a platform understands the full context of an issue, the better it can detect root causes, flag issues for the proper parties, and even implement self-healing measures.

Whether your organization has shifted testing left to the development phase or right in production — or simply wants to monitor performance in the field — an AI-driven, full-stack observability solution can take your software development to the next level.

To learn more about how Dynatrace helps developers automate testing and release, join us for the on-demand performance clinic, Why Devs Love Dynatrace – Episode 3 – Automated Release Comparison.

The post Shift left vs shift-right: A DevOps mystery solved appeared first on Dynatrace blog.

IT teams seek observability for, and control over, serverless architecture https://www.dynatrace.com/news/blog/it-teams-seek-visibility-control-over-serverless-architecture/ https://www.dynatrace.com/news/blog/it-teams-seek-visibility-control-over-serverless-architecture/#respond Mon, 31 Jan 2022 18:01:53 +0000 https://www.dynatrace.com/news/?p=48313 Dynatrace news

vulnerability management, modern observability, Dynatrace Perform 2022, serverless architecture

Key takeaways from this article on modern observability for serverless architecture:

  • As digital transformation accelerates, organizations need to innovate faster and continually deliver value to customers. Companies often turn to serverless architecture to accelerate modernization efforts while simplifying IT management.
  • While serverless architecture provides benefits, the dynamic containers and distributed microservices that come with them introduce new types of complexity. Gaining visibility to manage the performance and security of serverless applications in distributed public clouds can be difficult.
  • With an intelligent observability platform, operations and site reliability engineering teams can analyze, optimize, and troubleshoot applications in modern, large-scale, and heterogeneous environments.
  • At Perform 2022, Dynatrace will showcase how its observability platform extends AI-powered insights to serverless architecture spanning multiple cloud environments. As a result, instead of losing issues in the noise, DevOps and engineering teams can pinpoint problems and optimize performance across serverless platforms to deliver better software.

Serverless architecture is the default for modern organizations

As digital transformation accelerates, organizations need to innovate faster. To keep pace, these enterprises have turned to serverless architecture on multiple cloud platforms to accelerate without getting bogged down in manual IT management and security tasks.

The cloud-based, on-demand execution model of serverless architecture helps teams innovate more efficiently and effectively by removing the burden of managing the underlying infrastructure. Using serverless architecture, teams can focus on strategic and revenue-generating tasks rather than firefighting break-fix issues.

Simply put, cloud-based serverless architecture helps teams maximize performance while also reducing the cost of maintaining IT infrastructure. According to recent Dynatrace data, some 99% of organizations have adopted cloud architecture.

But what makes serverless cloud architecture nimble can also introduce complexity. Modern multicloud architecture is distributed, dynamic, and highly interconnected, comprising many individual applications and microservices. A single modern application often consists of a serverless architecture that includes services from multiple vendors. To get a handle on observability, teams often adopt open-source observability tools, such as Prometheus, OpenTelemetry, and StatsD. But between multicloud platforms and open-source tools, teams can also experience data silos.

As a result, this distributed application model makes it difficult to get real-time visibility and analytics, and even harder to automate operations.

At Perform 2022: Why AI-based observability aids serverless architecture management

With real-time, causation-based AI, customers can identify issues before they affect users without having to train data models upfront. With Dynatrace’s AI-powered observability, the platform is continually learning, rather than having data fed to it as a bolt-on.

This integrated approach to AIOps also enables teams to accelerate innovation by spending less time on manual troubleshooting tasks.

Unlike traditional monitoring tools, a modern observability platform provides visibility into services running in serverless environments across multiple clouds. With this intelligence, teams can instantly identify issues from one end to the other, and implement time-saving automation.

At Dynatrace Perform 2022, the theme is “Empowering the game changers,” where we explore the benefits of modern observability for IT pros who rely on serverless architecture. In the “Advancing dynamic, cloud-native workloads” track, we’ll explore the ability of AI to automatically discover issues and provide real-time answers on how to resolve them. Themes from this track include:

  • Gaining insights across multiple serverless platforms. Teams use different cloud platforms to take advantage of features for different purposes. But this diversity also leads to data silos. We‘ll explore how an end-to-end view of multicloud environments with context-driven insights enables teams to boost application performance and automate end-to-end processes.
  • Harnessing data from open-source observability tools. Many companies that adopt open-source observability tools such as OpenTelemetry and Prometheus struggle with scattered telemetry data. We’ll discuss how an AI-driven platform approach to observability integrates open-source data with data generated by serverless architectures for comprehensive analysis.
  • Getting the most out of OpenTelemetry. The open-source observability framework, OpenTelemetry, provides a common format for how observability data is collected and sent. We’ll explore how an AI-based platform approach can build on that standard, extend it with context-based analysis, and scale it across the enterprise.
  • Accelerating cloud migration strategically. The myriad interdependencies of modern cloud applications make refactoring a complex monolithic application challenging. We’ll discuss methods for managing and accelerating cloud migration, and how a complete view of all serverless services and their interdependencies helps you migrate more strategically.
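To make the "common format" idea from the OpenTelemetry sessions concrete, here is a rough, self-contained sketch of the kind of span record such a standard defines. The field names below are illustrative assumptions; the actual OpenTelemetry SDKs and the OTLP wire format are richer.

```python
import json
import secrets
import time

# Simplified sketch of a trace span in the spirit of what OpenTelemetry
# standardizes. Field names are illustrative; real OTLP differs.
def make_span(name, parent_span_id=None, attributes=None):
    return {
        "trace_id": secrets.token_hex(16),   # 128-bit trace id, hex-encoded
        "span_id": secrets.token_hex(8),     # 64-bit span id
        "parent_span_id": parent_span_id,    # links the span into a trace tree
        "name": name,
        "start_unix_nano": time.time_ns(),
        "attributes": attributes or {},
    }

# Any backend that understands the shared format can ingest the same record.
span = make_span("GET /checkout", attributes={"http.status_code": 200})
payload = json.dumps(span)
```

Because every producer emits the same shape, a platform can extend the record with its own context-based analysis without re-instrumenting the application.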

For our complete Perform 2022 conference coverage, check out our guide.

Register for Perform 2022 today, and check out the Advancing dynamic, cloud-native workloads track.

The post IT teams seek observability for, and control over, serverless architecture appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/it-teams-seek-visibility-control-over-serverless-architecture/feed/ 0
DevOps orchestration breaks quality-speed stalemate in SDLC https://www.dynatrace.com/news/blog/devops-orchestration-breaks-quality-speed-stalemate-in-sdlc/ https://www.dynatrace.com/news/blog/devops-orchestration-breaks-quality-speed-stalemate-in-sdlc/#respond Fri, 28 Jan 2022 17:16:07 +0000 https://www.dynatrace.com/news/?p=48300 Dynatrace news


DevOps orchestration enables developers, site reliability engineers, and DevOps teams to develop at the pace of business without sacrificing code quality throughout the software development lifecycle.

The post DevOps orchestration breaks quality-speed stalemate in SDLC appeared first on Dynatrace blog.

]]>
Dynatrace news


DevOps orchestration is essential for development teams struggling to balance speed with quality.

In a preview video for Dynatrace Perform, Andreas Grabner, DevOps activist at Dynatrace, and Lauren Horwitz, content director at Dynatrace, talk about how DevOps orchestration helps companies deliver on speed and quality by integrating tools and automating the software delivery life cycle (SDLC).

Why DevOps orchestration needs cloud automation

As the pace of business accelerates, developers are feeling the pain. They struggle to accelerate development cycles, and code quality can suffer. They may also work with a variety of tools that create a fragmented, siloed, and manual environment that slows innovation and impedes code quality.

DevOps orchestration enables teams to reduce friction and quality issues as they develop software and collaborate on software artifacts throughout the SDLC. A modern observability platform that employs cloud automation brings this orchestration to reality, enabling development teams to break down silos, automate tasks, and deliver higher-quality software.

“There doesn’t need to be a tradeoff between quality and speed as long as we use SLOs as our guardrails,” says Grabner in the video conversation.
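Grabner's point about SLOs as guardrails can be sketched with a simple error-budget gate. This is a minimal illustration, not Dynatrace's implementation; the SLO target and request counts are hypothetical.

```python
# Minimal error-budget gate: promote a build only while the error budget
# implied by an availability SLO is unspent. Illustrative only.
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget still unspent (negative = overspent)."""
    allowed_failures = (1 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0 if failed_requests == 0 else -1.0
    return 1 - failed_requests / allowed_failures

def deploy_allowed(slo_target, total_requests, failed_requests):
    return error_budget_remaining(slo_target, total_requests, failed_requests) > 0

# With a 99.9% SLO over 100,000 requests, 100 failures exhaust the budget:
# 50 failures leave half the budget, while 150 overspend it and block the deploy.
```

Wiring a check like this into the delivery pipeline is what lets speed and quality coexist: releases flow freely while the budget holds, and slow down automatically when it doesn't.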

At Perform 2022, we’ll explore various DevOps themes, including the importance of DevOps orchestration.

Register for Perform 2022 today, and check out the Advancing DevOps and DevSecOps track.

For our complete Perform 2022 conference coverage, check out our guide.

The post DevOps orchestration breaks quality-speed stalemate in SDLC appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/devops-orchestration-breaks-quality-speed-stalemate-in-sdlc/feed/ 0
AIOps capabilities drive intelligent cloud observability https://www.dynatrace.com/news/blog/aiops-capabilities-drive-intelligent-cloud-observability/ https://www.dynatrace.com/news/blog/aiops-capabilities-drive-intelligent-cloud-observability/#respond Thu, 27 Jan 2022 14:58:33 +0000 https://www.dynatrace.com/news/?p=48282 Dynatrace news


AIOps capabilities help IT teams cope with the overwhelming complexity of multicloud and hybrid cloud environments. While AIOps that relies on correlation-based machine learning isn’t new, causation-based AIOps is a game-changer.

The post AIOps capabilities drive intelligent cloud observability appeared first on Dynatrace blog.

]]>
Dynatrace news


AIOps capabilities have emerged as the best way to cut through the noise of IT operations, and for good reason.

AIOps helps IT teams cope with the overwhelming complexity of multicloud and hybrid cloud environments. While AIOps that relies on correlation-based machine learning isn’t new, causation-based AIOps is a game-changer.

In a preview video for Dynatrace Perform 2022, Joel Alcon, Dynatrace product marketing director of services, and Lauren Horwitz, content director at Dynatrace, discuss the role of causal AI and AIOps capabilities in cloud observability.

Why AIOps capabilities matter, and how causation-based AI changes the game

Organizations today are pressured to release quality software faster than ever and to ensure seamless digital experiences without downtime. The combination of customer demands and business pressure has fueled the need to build, test, deploy, and manage software with greater flexibility.

To gain this scale and dexterity, many organizations have moved to cloud architectures.

But with that shift comes more data, more systems to monitor, and more processes and services to manage. At the same time, many organizations still rely on manual, error-prone processes. They’re scrambling to coordinate team responses via war rooms when issues occur, or relying on static, moment-in-time dashboards to analyze alerts across their environment.

AIOps capabilities have emerged as a solution to deliver continuous, intelligent automation and help overcome the complexity of these dynamic, multicloud environments.

With causation-based AIOps, IT teams can automatically identify issues. Moreover, with AI at the core of an observability platform, an IT environment can evolve and change and the AI won’t need additional manual intervention to continue delivering value.

Listen in as Alcon and Horwitz preview the themes of Dynatrace’s game-changing approach to AIOps for Perform 2022. And check out our preview of the AIOps track at Perform 2022.

For our complete Perform 2022 conference coverage, check out our guide.

Register for Dynatrace Perform 2022 here and follow the track “Advancing your AIOps agenda.”

The post AIOps capabilities drive intelligent cloud observability appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/aiops-capabilities-drive-intelligent-cloud-observability/feed/ 0
Uplevel your gamechanging skills at Perform 2022 https://www.dynatrace.com/news/blog/uplevel-your-skills-at-perform-2022/ https://www.dynatrace.com/news/blog/uplevel-your-skills-at-perform-2022/#respond Tue, 25 Jan 2022 19:04:01 +0000 https://www.dynatrace.com/news/?p=48243 Dynatrace news


Despite having to reboot Perform 2022 from onsite in Vegas to virtual, due to changing circumstances, we’re still set to offer just the same high-quality training. And, what’s more – Dynatrace offers virtual training year-round in Dynatrace University, our product education platform. This means that despite not being in Vegas, our hands-on training (HOT) session […]

The post Uplevel your gamechanging skills at Perform 2022 appeared first on Dynatrace blog.

]]>
Dynatrace news


Despite having to reboot Perform 2022 from onsite in Vegas to virtual due to changing circumstances, we're still set to offer the same high-quality training. What's more, Dynatrace offers virtual training year-round in Dynatrace University, our product education platform.

This means that despite not being in Vegas, our hands-on training (HOT) session attendees will see minimal changes as we migrate to a virtual Perform 2022. Attendees will have the same great Dynatrace experts teaching the sessions live via webcam (and we'd highly encourage you to turn on your webcam as well), the same virtual classroom for completing the hands-on exercises, and the same great content. The only differences: no flights, no jet lag, and lower travel costs (which some might see as an added benefit!).

At Perform 2022, we're offering 23 unique sessions covering all the areas of technology related to Dynatrace, with something for every experience level.

For those who are new to Dynatrace, or checking out Dynatrace for the first time, we recommend the following HOT sessions:

  1. Getting started with Dynatrace – See how to get started with Dynatrace, including a hands-on look at how to install the OneAgent, understand the full-stack metrics captured, and review key use cases covered by the platform.
  2. Getting started with Digital Experience Management (DEM) analytics – Start building your observability expertise with Log Monitoring in the Dynatrace platform.

If you’re new to the Dynatrace Application Security module specifically, we also recommend registering for our Intelligent vulnerability detection and remediation session with Aleksey Sirenko, Robin Wyss, and Stuart Butcher.

Register now*

If you are paying for your Perform HOT sessions via Flexpoints, you MUST contact your Services Representative BEFORE you register.

In light of the change in location, we've also rebooted our Perform 2022 certification promotion: after attending our Perform 2022 event (Feb 7 – 11), send an email to performcertification@dynatrace.com to request your free Dynatrace Associate attempt. You should receive a response with your special promotion code by February 18, 2022.

Already Dynatrace Associate Certified?

If you already have your Associate Certification and attended at least one HOT session at Perform 2022, you qualify for one free Professional Certification attempt.

After attending Perform 2022, send an email to performcertification@dynatrace.com communicating that you qualified for the Professional Certification attempt as a Perform 2022 HOT session attendee. You should receive a response to your email with your special promotion code by February 18, 2022.

Please note this is a limited-time promotion associated with our Perform 2022 event. Any free Certification attempts must be redeemed by March 31, 2022, 5:00 PM EST. There will be no exceptions, reissuing of codes, or extensions for any reason for this promotion.

We can’t wait to see you virtually during the week of Perform 2022!

The post Uplevel your gamechanging skills at Perform 2022 appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/uplevel-your-skills-at-perform-2022/feed/ 0
Dynatrace Perform 2022: Themes to watch at Dynatrace’s annual conference https://www.dynatrace.com/news/blog/dynatrace-perform-2022-themes-to-watch/ https://www.dynatrace.com/news/blog/dynatrace-perform-2022-themes-to-watch/#respond Mon, 24 Jan 2022 13:17:24 +0000 https://www.dynatrace.com/news/?p=47971 Dynatrace news


At Perform 2022, the theme is “empowering the game changers.” The conference and this guide indicate how modern observability helps IT pros manage complex multicloud environments.

The post Dynatrace Perform 2022: Themes to watch at Dynatrace’s annual conference appeared first on Dynatrace blog.

]]>
Dynatrace news


As strained IT, development, and security teams head into 2022, the pressure to deliver better, more secure software faster has never been more consequential. Modern observability that helps teams securely regain control of complex, dynamic, ever-expanding cloud environments can be game-changing.

At our virtual conference, Dynatrace Perform 2022 on February 7 – 9, the theme is “Empowering the game changers.”

Empowering the game changers at Dynatrace Perform 2022

Managing cloud complexity becomes critical as organizations continue to digitally transform. Over the past 18 months, the need to utilize cloud architecture has intensified. Organizations seek to modernize, reduce costs, and adjust to the realities of globalization, increased competition in virtually every industry, and shifts in economic development since the emergence of COVID-19.

A key arrow in the quiver for game-changers when developing and managing modern software is automatic, intelligent observability. Modern IT and development environments include multiple public clouds, dynamic containers, and widely distributed microservices. But managing and securing these environments can be downright impossible without technology to automatically identify and alert users to issues.

Teams can no longer effectively manage and secure today’s multicloud environments using traditional monitoring tools. While conventional monitoring scans the environment using correlation and statistics, it provides little contextual information for remediating performance or security issues. On the other hand, modern observability enables IT pros to gather real-time information on their environments, identify the root cause of issues, and take prompt, precise action to remediate problems.

IT professionals understand they need a new approach to monitoring and securing their environments. They know these environments are too expansive, porous, and complex for IT teams to manage with human resources alone. Instead, teams need modern observability to automatically discover and fix performance and security problems immediately.

At Dynatrace Perform 2022, we'll explore how a modern observability platform helps IT teams boost performance and ensure application security through vulnerability management. Modern observability frees IT teams to spend their time on revenue-generating tasks rather than firefighting. In what follows, we survey some of the capabilities of a modern observability platform and themes we will highlight at Dynatrace Perform 2022.

Modern observability vs. monitoring

As dynamic systems architectures increase in complexity and scale, IT teams face mounting pressure to track and respond to the activity in their multicloud environments. As a result, teams need a solution that provides immediate, actionable answers to save time and effort.

Monitoring that relies solely on correlation produces a large volume of data teams must sift through to deduce the underlying causes of performance and security issues. But this statistics-based approach generates too much data and not enough context, which requires expert analysts to draw conclusions that amount to educated guesses.

In contrast, a modern observability platform uses artificial intelligence (AI) to gather information in real-time and automatically pinpoint root causes in context. This precision gives teams immediate and reliable insight they can use to automate responses. Teams can understand exactly which systems and services are affected and have a clear path of action.

Check out these resources to learn more about modern observability and how it contrasts with traditional monitoring.

Modern observability platform is onramp to digital transformation: Dynatrace Perform 2022, reporter’s notebook – Blog

At this year’s Perform, CEO Rick McConnell and CMO Mike Maciag unpack the power of modern observability and AIOps as organizations traverse digital transformation.

IT teams seek observability for, control over, serverless architectures – Blog

At Dynatrace Perform 2022, we highlight how modern observability helps augment the advantages and cure the ills of serverless architecture.

Observability vs. monitoring: What’s the difference? – Blog

Understanding the difference between observability and monitoring helps DevOps teams understand root causes and deliver better applications.

Modern approaches to observability and monitoring for multicloud environments – Blog

Observability and monitoring solutions are not created equal. Only observability can transform multicloud data into actionable intelligence.

5 challenges to achieving observability at scale – eBook

To understand highly distributed cloud-native technologies, teams need observability that scales using fewer tools, not more. Explore the five main challenges to achieving observability—and learn how to overcome them.

Upgrade to advanced observability for answers in cloud-native environments – eBook

With automation and AI, observability delivers actionable answers that ensure cloud-native applications work perfectly across the enterprise.

For more about the value of Dynatrace observability, follow the Advancing Dynamic Cloud-Native Workloads ​track at Dynatrace Perform 2022 and check out the Observability resource center.

Application security and vulnerability management

Modern cloud-native environments rely heavily on microservices architectures. This poses a dilemma for application teams responsible for innovation: How can they comply with ever-increasing security requirements while managing fast release cycles for hundreds of microservices? If teams lack an automated approach to application security, it can drastically slow their ability to release new application functionality securely.

As teams adopt DevSecOps practices, which integrate security and vulnerability management into development and operations, they also incorporate a security mindset into their operational culture.

An automatic and intelligent observability platform with runtime vulnerability management capabilities can change the game. With comprehensive vulnerability detection and analysis that span pre-production and production environments, organizations can consolidate and streamline their security, development, and operations toolchains and processes.

As a result, responsible teams across an organization can develop and operate secure, vulnerability-free software. This comprehensive runtime approach to vulnerability management can detect critical security exposures such as Log4Shell (a vulnerability in the Apache Log4j library detected in late 2021) in production.
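As a simplified illustration of what vulnerability matching involves, the sketch below flags affected Log4j versions by version range. Real runtime detection goes further, inspecting the libraries actually loaded in production rather than just declared dependencies; the helper names here are hypothetical.

```python
# Illustrative version check for CVE-2021-44228 (Log4Shell), which affected
# log4j-core 2.x releases up to and including 2.14.1. A runtime approach
# inspects what is actually loaded in production, not just a manifest.
def parse_version(version):
    """Naive parser: '2.14.1' -> (2, 14, 1). Drops qualifiers like '-beta9'."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def is_log4shell_vulnerable(artifact, version):
    if artifact != "log4j-core":
        return False
    return (2,) <= parse_version(version) <= (2, 14, 1)
```

A manifest scan like this is a starting point; runtime analysis adds the crucial context of whether the vulnerable code path is reachable and exposed.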

Discover more about Dynatrace Application Security and its vulnerability management capabilities from these sources:

Advance DevSecOps practices with a vulnerability management strategy – Video

Adding a vulnerability management strategy to your DevSecOps practices can be key to handling threats like Log4Shell.

Why vulnerability management enhances your cloud application security strategy – Blog

Application security and managing software vulnerabilities are more important than ever as organizations use open-source software and cloud-based services. At Dynatrace Perform 2022, the Advancing DevOps and DevSecOps track explores how you can better secure applications in dynamic environments.

Dynatrace Application Security protects your applications in complex cloud environments – Blog

Utilizing cloud-native platforms, Kubernetes, and open-source technologies requires a radically different approach to application security.

What is application security? And why it needs a new approach – Blog

Dynamic IT environments have made application security more complex. Learn how your organization can create software quickly and securely.

Vulnerability assessment: Protecting applications and infrastructure – Blog

What is a vulnerability assessment? Vulnerability assessment tools are essential for protecting IT infrastructure, applications, and data.

To learn more about Dynatrace Application Security and vulnerability assessment, follow the Advancing DevOps and DevSecOps track at Dynatrace Perform 2022, and check out the Application Security resource center.

DevOps and DevSecOps orchestration

DevOps and DevSecOps adoption has exploded in response to the increasing demand for technology teams to deliver greater functionality faster and more securely. DevOps brings developers and operations teams together and enables more agile IT. DevSecOps adds application security into shift-left (pre-production) and shift-right (production) operations.

Still, while DevOps and DevSecOps practices enable development agility and speed, they can also fall victim to tool complexity and data silos. Many organizations suffer from inefficiency because they’re juggling too many DevOps tools or using tools that don’t meet their needs. Some DevOps toolchains fail to yield value because teams select tools based only on individual technology considerations rather than the business value they provide.

Successful DevOps orchestration is a constant evolution of tools, processes, and communication on a journey to speed, stability, and scale. An automatic and intelligent observability platform optimized for the DevOps and DevSecOps pipeline—from CI/CD to user experience—promotes the culture of experimentation, risk-taking, and trust that teams need to succeed.

Learn more about how the Dynatrace platform approach to DevOps and DevSecOps facilitates the software delivery life cycle from these resources:

DevOps orchestration breaks quality-speed stalemate in SDLC – Video

DevOps orchestration enables developers, site reliability engineers, and DevOps teams to develop at the pace of business without sacrificing code quality throughout the software development lifecycle.

Automating DevOps practices fuels speed and quality – Blog

DevOps practices enable business speed and innovation. But increasing toolchain complexity and the faster pace of software development can undermine DevOps benefits. At Dynatrace Perform 2022, the DevOps track will highlight how automating DevOps practices reduces DevOps workflow problems.

What is DevOps? Unpacking the rise of an IT cultural revolution – Blog

What is DevOps? Learn how development and operations teams can improve delivery and outcomes with this approach and what tools they need to succeed.

What is DevSecOps? And what you need to do it well – Blog

What is DevSecOps? DevSecOps connects three different disciplines: development, security, and operations. Learn how security improves DevOps.

Successfully scaling DevOps – Webinar

In this webinar, we talk with IT systems integrator avodaq AG about their DevOps and Kubernetes adoption journey.

To see Dynatrace in action for DevOps and DevSecOps, follow the Advancing DevOps and DevSecOps track at Dynatrace Perform 2022.

For more about how Dynatrace does DevOps, see the DevOps resource center.

For more about Dynatrace DevSecOps and Application Security, see the Application Security resource center.

AIOps solution

As organizations adopt more cloud-native technologies, their burgeoning multicloud environments offer many benefits, such as modular app design, dynamic app scalability, and faster time to market.

But these dynamic environments also pose challenges for IT teams across the organization. Apps and services depend on other services and infrastructure, but each tool and cloud platform stands alone. Dispersed environments mean teams struggle to detect and anticipate issues, optimize applications, and automate DevSecOps workflows. While digital transformation is in full swing across the industry, a fragmented IT operations strategy can slow these modernization efforts and limit their benefits.

AIOps, or artificial intelligence for IT operations, uses AI and advanced analytics to manage IT. But not all AI is created equal. An AIOps solution that uses automatic and intelligent observability and causation-based AI can unlock productivity across the organization. An AIOps platform designed for dynamic multicloud environments turns teams from reactive to proactive. This agility enables teams to optimize apps and DevSecOps workflows, and accelerates every team’s transformation.

Dynatrace Davis® is a radically different AI engine. To learn more about Dynatrace AIOps, check out these resources.

Artificial intelligence: The ultimate technology for game-changers – Max Tegmark at Perform 2022

MIT physics professor and Future of Life Institute co-founder Max Tegmark shares his big thoughts on the big possibilities of AI to change human innovation.

AIOps capabilities drive intelligent cloud observability – Video

AIOps capabilities help IT teams cope with the overwhelming complexity of multicloud and hybrid cloud environments. While AIOps that relies on correlation-based machine learning isn’t new, causation-based AIOps is a game-changer.

AIOps strategy central to proactive multicloud management – Blog

At Dynatrace Perform 2022, the AIOps track will explore how an AIOps strategy helps organizations manage dynamic, multicloud environments.

What is AIOps? AI for ITOps–and beyond – Blog

What is AIOps? It brings AI to ITOps–but a modern approach to AIOps using deterministic AI makes it so much more. Learn why.

Meet the mind behind the magic: Davis® AI explained – Interactive eBook

In this animated eBook, we dive into Davis AI and reveal how it changes the game for AIOps across the IT organization.

How an AIOps platform can shift left–and why it should – Blog

As organizations layer more technologies into their DevOps toolchains, an AIOps platform that can shift left is a good strategy.

Follow the Advancing your AIOps agenda track at Dynatrace Perform 2022. For even more on Dynatrace AIOps, check out the AIOps/AI and Automation resource center.

Digital experience monitoring and business analytics

Successful user journeys begin and end with a user’s experience of your digital touchpoints. Whether users are on a PC or accessing your services on a mobile device, digital experiences now define successful business outcomes. Lagging applications, broken functionality, or confusing experiences can easily result in lost business: Switching to a competitor’s offering is just a tap or mouse click away.

But the proliferation of tools and user data from cloud-native applications, open-source solutions, and native mobile apps makes a platform approach to digital experience monitoring (DEM) essential for analyzing user experiences.

Connecting business outcomes to IT metrics for both mobile and web apps requires automatic and intelligent observability that extends into digital experience and business analytics. An AI-driven observability platform delivers front-to-back visibility and actionable analytics in context with a multitude of apps, services, and multicloud infrastructure. It also works with third-party tools and standards, like Google’s Core Web Vitals, to continuously improve site performance and user experience.
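As a concrete example of working with such third-party standards, Google publishes "good" thresholds for its Core Web Vitals metrics, and a monitoring layer can rate each page load against them. The thresholds below are Google's published cutoffs as of 2022; the rating helper itself is a hypothetical sketch.

```python
# Google's published "good" thresholds for Core Web Vitals (as of 2022):
# Largest Contentful Paint <= 2.5 s, First Input Delay <= 100 ms,
# Cumulative Layout Shift <= 0.1. The rating helper is illustrative.
GOOD_THRESHOLDS = {"lcp_s": 2.5, "fid_ms": 100, "cls": 0.1}

def rate_page(lcp_s, fid_ms, cls):
    """Return which vitals meet Google's 'good' cutoff for one page load."""
    return {
        "lcp": lcp_s <= GOOD_THRESHOLDS["lcp_s"],
        "fid": fid_ms <= GOOD_THRESHOLDS["fid_ms"],
        "cls": cls <= GOOD_THRESHOLDS["cls"],
    }
```

Tracking these ratings over real user sessions is what turns a raw performance number into an actionable signal for site improvement.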

Observability-driven DEM enables teams to produce five-star, crash-free mobile apps and optimize every step of the user journey for both web and mobile users, ultimately improving customer experience and driving better business outcomes.

To learn more about the Dynatrace observability platform approach to digital experience monitoring, check out these resources:

What is digital experience? – Blog

DEM helps teams understand the context of what’s going on amid the interactions happening across the multitude of apps, services, and infrastructure in a multicloud environment.

What is session replay? Discover user pain points with session recordings – Blog

Session replay is a technology that creates video-like recordings of actions taken by users interacting with a website or mobile application. Analysts can then watch a user’s mouse movements to identify the user’s activity, problem spots, and what’s frustrating them or causing them to abandon their journey.

What is synthetic monitoring? How emulating user paths improves outcomes – Blog

Synthetic monitoring is an application performance monitoring practice that emulates the paths users might take when engaging with an application.
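A toy version of such a probe looks like the sketch below. The journey URLs and thresholds are made up; real synthetic monitors run scripted journeys on a schedule from many geographic locations.

```python
import time
import urllib.request

# Toy synthetic probe: replay a scripted user path and classify each step.
# The journey below is hypothetical.
USER_PATH = [
    ("home", "https://example.com/"),
    ("search", "https://example.com/?q=demo"),
]

def run_step(url, timeout=5.0):
    """Issue one request and return (status_code, elapsed_seconds)."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as response:
        response.read()
        return response.status, time.monotonic() - start

def classify(status, elapsed_s, slow_threshold_s=2.0):
    """Label a step the way a synthetic monitor's alerting would."""
    if status >= 400:
        return "failed"
    return "slow" if elapsed_s > slow_threshold_s else "ok"
```

Because the script is deterministic, a "slow" or "failed" label points at a regression even before any real user hits the page.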

Business observability and the travel and hospitality industry: a key to successful recovery – Blog

A lot has changed in the global travel industry. Understanding how customers’ digital experience impacts business outcomes is critical to recovery.

To see the value of the Dynatrace platform for digital experience and business analytics, follow the Driving better business outcomes for LOB track at Dynatrace Perform 2022.

For even more about Dynatrace and its observability platform approach to DEM, see the Digital Experience resource center.

Experience modern observability at Dynatrace Perform 2022

Explore all the ways modern observability transforms application performance monitoring, application security, DevOps/DevSecOps, AIOps, and more by joining us for our virtual event, Dynatrace Perform 2022, on February 7 – 9.

Meet Simone Biles, the world’s most decorated gymnast, Kelsey Hightower, principal engineer at Google, and AI and physics professor Max Tegmark from MIT. And learn from Dynatrace’s own technologists, including Chief Technology Officer Bernd Greifeneder, DevOps activist and Pure Performance podcast host Andreas Grabner, CEO Rick McConnell, and many more.

As a risk-free virtual event, you can attend every session and event from anywhere in the world.

The post Dynatrace Perform 2022: Themes to watch at Dynatrace’s annual conference appeared first on Dynatrace blog.

]]>
https://www.dynatrace.com/news/blog/dynatrace-perform-2022-themes-to-watch/feed/ 0