<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>The KEDA Blog on KEDA</title><link>https://deploy-preview-1758--keda.netlify.app/blog/</link><description>Recent content in The KEDA Blog on KEDA</description><generator>Hugo</generator><language>en-us</language><atom:link href="https://deploy-preview-1758--keda.netlify.app/blog/index.xml" rel="self" type="application/rss+xml"/><item><title>Google Cloud deprecations</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2025-09-15-gcp-deprecations/</link><pubDate>Mon, 15 Sep 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2025-09-15-gcp-deprecations/</guid><description>&lt;p&gt;One year ago, Google Cloud deprecated its &lt;a href="https://cloud.google.com/monitoring/mql" target="_blank"&gt;Monitoring Query Language&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt; in favor of a PromQL-based API:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Announcement: Starting on October 22, 2024, Monitoring Query Language (MQL) will no longer be a recommended query language for Cloud Monitoring. Certain usability features will be disabled, but you can still run MQL queries in Metrics Explorer, and dashboards and alerting policies that use MQL will continue to work. For more information, see the &lt;a href="https://cloud.google.com/stackdriver/docs/deprecations/mql" target="_blank"&gt;deprecation notice for MQL&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>KEDA is graduating to CNCF Graduated project 🎉</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2023-08-22-keda-cncf-graduation/</link><pubDate>Tue, 22 Aug 2023 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2023-08-22-keda-cncf-graduation/</guid><description>&lt;p&gt;In 2019, KEDA embarked on a mission to make application autoscaling on Kubernetes dead-simple. Our aim was to make sure that every Kubernetes platform can use it to scale applications without having to worry about the underlying autoscaling infrastructure.&lt;/p&gt;
&lt;p&gt;As part of that mission, we wanted to build a vendor-neutral project that is open to everyone and nicely integrates with other tools. Because of that, the KEDA maintainers decided that the Cloud Native Computing Foundation (CNCF) was a natural fit, and KEDA was accepted as a sandbox project in 2020.&lt;/p&gt;</description></item><item><title>Securing autoscaling with the newly improved certificate management in KEDA 2.10</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2023-05-02-certificate-improvements/</link><pubDate>Tue, 16 May 2023 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2023-05-02-certificate-improvements/</guid><description>&lt;p&gt;Recently, we released KEDA v2.10, which introduced key improvements for managing certificates and securing your autoscaling:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Encryption of any communication between KEDA components.&lt;/li&gt;
&lt;li&gt;Support for providing your own certificates for internal communications.&lt;/li&gt;
&lt;li&gt;Support for using custom certificate authorities (CAs).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With these new improvements, we can dramatically improve the security between KEDA components, the Kubernetes API server and scaler sources. Let&amp;rsquo;s take a closer look.&lt;/p&gt;
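&lt;p&gt;As an illustrative sketch of the custom-CA support (the flag and secret names below are assumptions; verify them against the KEDA documentation for your release), pointing the operator at your own CA can look roughly like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Sketch: mount a secret holding your CA certificate into the
# keda-operator Deployment and tell KEDA where to find it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keda-operator
  namespace: keda
spec:
  template:
    spec:
      containers:
      - name: keda-operator
        args:
        - --ca-cert-dir=/custom-ca   # directory scanned for extra CA certs (check your version)
        volumeMounts:
        - name: custom-ca
          mountPath: /custom-ca
          readOnly: true
      volumes:
      - name: custom-ca
        secret:
          secretName: my-ca-certs    # hypothetical secret containing ca.crt
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With a setup along these lines, scalers that talk to TLS-protected upstreams signed by an internal CA no longer need certificate verification disabled.&lt;/p&gt;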
&lt;h2 id="where-do-we-come-from"&gt;Where do we come from?&lt;/h2&gt;
&lt;p&gt;KEDA is a component that runs on Kubernetes, receiving requests from the Kubernetes API server (from the HPA controller) while also integrating with multiple external sources (upstreams).&lt;/p&gt;</description></item><item><title>Help shape the future of KEDA with our survey 📝</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2023-05-04-keda-survey/</link><pubDate>Thu, 04 May 2023 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2023-05-04-keda-survey/</guid><description>&lt;p&gt;As maintainers, we are always eager to learn who is using KEDA (&lt;a href="https://github.com/kedacore/keda#adopters---become-a-listed-keda-user" target="_blank"&gt;become a listed end-user!&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;) and how they are using KEDA to scale their cloud-native workloads.&lt;/p&gt;
&lt;p&gt;Our job is to make sure that you are able to scale your workloads with as little friction as possible, with production-grade security and insight into what is going on.&lt;/p&gt;
&lt;p&gt;In order to be successful, we need to learn how large end-users&amp;rsquo; KEDA deployments are, what is causing frustration, and what we can improve. This is why we have created a survey to gain more insights and make KEDA better.&lt;/p&gt;</description></item><item><title>Announcing KEDA v2.9 🎉</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2022-12-12-keda-2.9.0-release/</link><pubDate>Mon, 12 Dec 2022 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2022-12-12-keda-2.9.0-release/</guid><description>&lt;p&gt;We recently completed our latest release: 2.9.0 🎉!&lt;/p&gt;
&lt;p&gt;Here are some highlights:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Newly published Deprecations and Breaking Change policy (&lt;a href="https://github.com/kedacore/governance/blob/main/DEPRECATIONS.md" target="_blank"&gt;docs&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Introduce new CouchDB, Etcd &amp;amp; Loki scalers&lt;/li&gt;
&lt;li&gt;Introduce off-the-shelf Grafana dashboard for application autoscaling&lt;/li&gt;
&lt;li&gt;Introduce improved operational metrics in Prometheus&lt;/li&gt;
&lt;li&gt;Introduce capability to cache metric values for a scaler during the polling interval (experimental feature)&lt;/li&gt;
&lt;li&gt;Performance improvements and architecture changes on how metrics are exposed to Kubernetes&lt;/li&gt;
&lt;li&gt;Azure Key Vault authentication provider now supports pod identities for authentication&lt;/li&gt;
&lt;li&gt;A ton of new features and fixes for some of our 50+ scalers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Potential breaking changes and deprecations include:&lt;/p&gt;</description></item><item><title>HTTP add-on is looking for contributors by end of November</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2022-09-27-http-add-on-is-on-hold/</link><pubDate>Tue, 27 Sep 2022 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2022-09-27-http-add-on-is-on-hold/</guid><description>&lt;p&gt;On Nov 25, 2020, we started the HTTP add-on based on the initial POC by &lt;a href="https://github.com/arschles" target="_blank"&gt;@arschles&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;, which closed a big gap in KEDA&amp;rsquo;s story - HTTP autoscaling without a dependency on an external system, such as Prometheus.&lt;/p&gt;
&lt;p&gt;To this day, the autoscaling community has very high demand for a solution in this area that autoscales and works in the same manner as the KEDA core.&lt;/p&gt;
&lt;p&gt;With the add-on, we want to cover all traffic patterns ranging from ingress to service meshes and service-to-service communication, and make it super simple to autoscale (with scale-to-zero support).&lt;/p&gt;</description></item><item><title>Announcing KEDA v2.8 🎉</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2022-08-10-keda-2.8.0-release/</link><pubDate>Wed, 10 Aug 2022 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2022-08-10-keda-2.8.0-release/</guid><description>&lt;p&gt;We recently completed our latest release: 2.8.0 🎉!&lt;/p&gt;
&lt;p&gt;Here are some highlights:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Introduction of new AWS DynamoDB Streams &amp;amp; NATS JetStream scalers.&lt;/li&gt;
&lt;li&gt;Introduction of new Azure AD Workload Identity authentication provider.&lt;/li&gt;
&lt;li&gt;Support for specifying &lt;code&gt;minReplicaCount&lt;/code&gt; in ScaledJob.&lt;/li&gt;
&lt;li&gt;Support to customize the HPA name.&lt;/li&gt;
&lt;li&gt;Support for permission segregation when using Azure AD Pod / Workload Identity.&lt;/li&gt;
&lt;li&gt;Additional features for various scalers such as AWS SQS, Azure Pipelines, CPU, GCP Stackdriver, Kafka, Memory, and Prometheus.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here are the new deprecation(s) as of this release:&lt;/p&gt;</description></item><item><title>How Zapier uses KEDA</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2022-03-10-how-zapier-uses-keda/</link><pubDate>Thu, 10 Mar 2022 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2022-03-10-how-zapier-uses-keda/</guid><description>&lt;p&gt;&lt;a href="https://www.rabbitmq.com/" target="_blank"&gt;RabbitMQ&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt; is at the heart of Zap processing at &lt;a href="https://zapier.com" target="_blank"&gt;Zapier&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;. We enqueue messages to RabbitMQ for each step in a Zap. These messages get consumed by our backend workers, which run on &lt;a href="https://kubernetes.io" target="_blank"&gt;Kubernetes&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;. To keep up with the varying task loads in Zapier we need to scale our workers with our message backlog.&lt;/p&gt;
&lt;p&gt;For a long time, we scaled with CPU-based autoscaling using the Kubernetes-native Horizontal Pod Autoscaler (HPA), where more tasks led to more processing, increasing CPU usage, and triggering our workers&amp;rsquo; autoscaling. It seemed to work pretty well, except for certain edge cases.&lt;/p&gt;</description></item><item><title>Introducing PredictKube - an AI-based predictive autoscaler for KEDA made by Dysnix</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2022-02-09-predictkube-scaler/</link><pubDate>Mon, 14 Feb 2022 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2022-02-09-predictkube-scaler/</guid><description>&lt;h2 id="introducing-predictkubean-ai-based-predictive-autoscaler-for-keda-made-by-dysnix"&gt;Introducing PredictKube—an AI-based predictive autoscaler for KEDA made by Dysnix&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://dysnix.com/" target="_blank"&gt;Dysnix&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt; has been working with high-traffic backend systems for a long time,
and the efficient scaling demand is what their team comes across each day.
The engineers have understood that manually dealing with traffic fluctuations and preparations of infrastructure is
inefficient because you need to deploy more resources &lt;em&gt;before&lt;/em&gt; the traffic increases,
not at the moment the event happens. This strategy is problematic for two reasons: first, because it&amp;rsquo;s often too late to scale when traffic has already arrived and second, resources will be overprovisioned and idle during the times that traffic isn&amp;rsquo;t present.&lt;/p&gt;</description></item><item><title>How CAST AI uses KEDA for Kubernetes autoscaling</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2021-08-04-keda-cast-ai/</link><pubDate>Wed, 04 Aug 2021 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2021-08-04-keda-cast-ai/</guid><description>&lt;h1 id="how-cast-ai-uses-keda-for-kubernetes-autoscaling"&gt;How CAST AI uses KEDA for Kubernetes autoscaling&lt;/h1&gt;
&lt;p&gt;Kubernetes comes with several built-in &lt;a href="https://cast.ai/blog/guide-to-kubernetes-autoscaling-for-cloud-cost-optimization/" target="_blank"&gt;autoscaling mechanisms&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt; - among them the Horizontal Pod Autoscaler (HPA). Scaling is essential for the producer-consumer workflow, a common use case in the IT world today. It’s especially useful for monthly reports and transactions with a huge load where teams need to spin up many workloads to process things faster and cheaper (for example, by using spot instances).&lt;/p&gt;</description></item><item><title>Announcing KEDA HTTP Add-on v0.1.0</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2021-06-24-announcing-http-add-on/</link><pubDate>Thu, 24 Jun 2021 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2021-06-24-announcing-http-add-on/</guid><description>&lt;p&gt;Over the past few months, we’ve been adding more and more scalers to KEDA making it easier for users to scale on what they need. Today, we leverage more than 30 scalers out-of-the-box, supporting all major cloud providers &amp;amp; industry-standard tools such as Prometheus that can scale any Kubernetes resource.&lt;/p&gt;
&lt;p&gt;But, we are missing a major feature that many modern, distributed applications need - the ability to scale based on HTTP traffic.&lt;/p&gt;</description></item><item><title>Autoscaling Azure Pipelines agents with KEDA</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2021-05-27-azure-pipelines-scaler/</link><pubDate>Thu, 27 May 2021 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2021-05-27-azure-pipelines-scaler/</guid><description>&lt;p&gt;With the addition of Azure Piplines support in KEDA, it is now possible to autoscale your Azure Pipelines agents based on the agent pool queue length.&lt;/p&gt;
&lt;p&gt;Self-hosted Azure Pipelines agents are the perfect workload for this scaler. By autoscaling the agents you can create a scalable CI/CD environment.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;💡 The number of concurrent pipelines you can run is limited by your &lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents#parallel-jobs" target="_blank"&gt;parallel jobs&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;KEDA will autoscale to the maximum defined in the ScaledObject and does not limit itself to the parallel jobs count defined for the Azure DevOps organization.&lt;/p&gt;</description></item><item><title>Why Alibaba Cloud uses KEDA for application autoscaling</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2021-04-06-why-alibaba-cloud-uses-keda-for-app-autoscaling/</link><pubDate>Tue, 06 Apr 2021 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2021-04-06-why-alibaba-cloud-uses-keda-for-app-autoscaling/</guid><description>&lt;blockquote&gt;
&lt;p&gt;This blog post was initially posted on the &lt;a href="https://www.cncf.io/blog/2021/03/30/why-alibaba-cloud-uses-keda-for-application-autoscaling/" target="_blank"&gt;CNCF blog&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt; and is co-authored by Yan Xun, Senior Engineer on the Alibaba Cloud EDAS team, &amp;amp; Andy Shi, Developer Advocate at Alibaba Cloud.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;When scaling on Kubernetes, a few areas come to mind, but if you are new to Kubernetes this can be a bit overwhelming.&lt;/p&gt;
&lt;p&gt;In this blog post, we will briefly explain the areas that need to be considered, how KEDA aims to make application auto-scaling simple, and why Alibaba Cloud’s &lt;a href="https://www.alibabacloud.com/product/edas" target="_blank"&gt;Enterprise Distributed Application Service (EDAS)&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt; has fully standardized on KEDA.&lt;/p&gt;</description></item><item><title>Migrating our container images to GitHub Container Registry</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2021-03-26-migrating-to-github-container-registry/</link><pubDate>Fri, 26 Mar 2021 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2021-03-26-migrating-to-github-container-registry/</guid><description>&lt;p&gt;We provide &lt;strong&gt;various ways to &lt;a href="https://keda.sh/docs/latest/deploy/" target="_blank"&gt;deploy KEDA&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt; in your cluster&lt;/strong&gt;, including by using a &lt;a href="https://github.com/kedacore/charts" target="_blank"&gt;Helm chart&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;, &lt;a href="https://operatorhub.io/operator/keda" target="_blank"&gt;Operator Hub&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt; and raw YAML specifications.&lt;/p&gt;
&lt;p&gt;These deployment options all rely on the container images that we provide which are available on &lt;strong&gt;&lt;a href="https://hub.docker.com/u/kedacore" target="_blank"&gt;Docker Hub&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;, the industry standard for public container images&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;However, we have found that Docker Hub is no longer the best place for our container images and are migrating to GitHub Container Registry (Preview).&lt;/p&gt;</description></item><item><title>Announcing KEDA 2.0 - Taking app autoscaling to the next level</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2020-11-04-keda-2.0-release/</link><pubDate>Wed, 04 Nov 2020 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2020-11-04-keda-2.0-release/</guid><description>&lt;p&gt;A year ago, we were excited to &lt;strong&gt;announce our 1.0 release with a core set of scalers&lt;/strong&gt;, allowing the community to start autoscaling Kubernetes deployments. We were thrilled with the response and encouraged to see many users leveraging KEDA for event driven and serverless scale within any Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;With KEDA, any container can scale to zero and burst scale based directly on event source metrics.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://deploy-preview-1758--keda.netlify.app/img/logos/keda-horizontal-color.png" alt="Logo"&gt;&lt;/p&gt;</description></item><item><title>Give KEDA 2.0 (Beta) a test drive</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2020-09-11-keda-2.0-beta/</link><pubDate>Fri, 11 Sep 2020 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2020-09-11-keda-2.0-beta/</guid><description>&lt;p&gt;Today, we are happy to share that our first &lt;strong&gt;beta version of KEDA 2.0 is available&lt;/strong&gt;! 🎊&lt;/p&gt;
&lt;h1 id="highlights"&gt;Highlights&lt;/h1&gt;
&lt;p&gt;With this release, we are shipping the majority of our planned features.&lt;/p&gt;
&lt;p&gt;Here are some highlights:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Making scaling more powerful&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Introduction of &lt;code&gt;ScaledJob&lt;/code&gt; (&lt;a href="https://keda.sh/docs/2.0/concepts/scaling-jobs/" target="_blank"&gt;docs&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Introduction of Azure Log Analytics scaler (&lt;a href="https://keda.sh/docs/2.0/scalers/azure-log-analytics/" target="_blank"&gt;docs&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Support for scaling Deployments, StatefulSets and/or any Custom Resources (&lt;a href="https://keda.sh/docs/2.0/concepts/scaling-deployments/" target="_blank"&gt;docs&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Support for scaling on standard resource metrics (CPU/Memory)&lt;/li&gt;
&lt;li&gt;Support for multiple triggers in a single &lt;code&gt;ScaledObject&lt;/code&gt; (&lt;a href="https://keda.sh/docs/2.0/concepts/scaling-deployments/" target="_blank"&gt;docs&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Support for scaling to original replica count after deleting &lt;code&gt;ScaledObject&lt;/code&gt; (&lt;a href="https://keda.sh/docs/2.0/concepts/scaling-deployments/" target="_blank"&gt;docs&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Support for controlling scaling behavior of underlying HPA&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Easier to operate KEDA&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Introduction of readiness and liveness probes&lt;/li&gt;
&lt;li&gt;Introduction of Prometheus metrics for Metrics Server (&lt;a href="https://keda.sh/docs/2.0/operate/" target="_blank"&gt;docs&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Provide more information when querying KEDA resources with &lt;code&gt;kubectl&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Extensibility&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Introduction of External Push scaler (&lt;a href="https://keda.sh/docs/2.0/scalers/external-push/" target="_blank"&gt;docs&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Introduction of Metric API scaler (&lt;a href="https://keda.sh/docs/2.0/scalers/metrics-api/" target="_blank"&gt;docs&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Provide KEDA client-go library&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
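&lt;p&gt;Several of the highlights above meet in a single &lt;code&gt;ScaledObject&lt;/code&gt;. A hedged sketch combining a standard resource-metric trigger with an external event-source trigger (names, the Prometheus address, and the query are illustrative; field names may differ between 2.0 and later releases):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: multi-trigger-example
spec:
  scaleTargetRef:
    name: my-app                  # a Deployment, StatefulSet or Custom Resource
  triggers:
  - type: cpu                     # standard resource metric (CPU/Memory)
    metadata:
      type: Utilization
      value: "60"
  - type: prometheus              # external event-source metric
    metadata:
      serverAddress: http://prometheus.monitoring:9090
      query: sum(rate(http_requests_total[2m]))   # hypothetical query
      threshold: "100"
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When multiple triggers are defined, scaling follows whichever trigger currently demands the most replicas.&lt;/p&gt;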
&lt;p&gt;For a full list of changes, we highly recommend going through &lt;a href="https://github.com/kedacore/keda/blob/v2/CHANGELOG.md#v200" target="_blank"&gt;our changelog&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt;! With our stable release, we&amp;rsquo;ll provide a full overview of what&amp;rsquo;s released in a new blog post.&lt;/p&gt;</description></item><item><title>Kubernetes Event-driven Autoscaling (KEDA) is now an official CNCF Sandbox project 🎉</title><link>https://deploy-preview-1758--keda.netlify.app/blog/2020-03-31-keda-cncf-sandbox/</link><pubDate>Tue, 31 Mar 2020 00:00:00 +0000</pubDate><guid>https://deploy-preview-1758--keda.netlify.app/blog/2020-03-31-keda-cncf-sandbox/</guid><description>&lt;p&gt;Over the past year, we&amp;rsquo;ve been contributing to Kubernetes Event-Driven Autoscaling (KEDA), which makes application autoscaling on Kubernetes dead simple. If you have missed it, read about it in our &lt;a href="https://blog.tomkerkhove.be/2019/06/11/a-closer-look-at-kubernetes-based-event-driven-autoscaling-keda/" target="_blank"&gt;&amp;ldquo;Exploring Kubernetes-based event-driven autoscaling (KEDA)&amp;rdquo;&lt;sup&gt;&lt;i class="fas fa-xs fa-up-right-from-square"&gt;&lt;/i&gt;&lt;/sup&gt;&lt;/a&gt; blog post.&lt;/p&gt;
&lt;p&gt;We started the KEDA project to address an essential missing feature in the Kubernetes autoscaling story. Namely, the ability to autoscale on arbitrary metrics. Before KEDA, users were only able to autoscale based on metrics such as memory and CPU usage. While these values are essential for autoscaling, they disregard a rich world of external metrics from sources such as Azure, AWS, GCP, Redis, and Kafka (among many more).&lt;/p&gt;</description></item></channel></rss>