As the VP of Product at NGINX, I speak frequently with customers and users. Whether you’re a Platform Ops team, Kubernetes architect, application developer, CISO, CIO, or CTO – I’ve talked to someone like you. In our conversations, you gave me your honest thoughts about NGINX, including our products, pricing and licensing models, highlighting both our strengths and weaknesses.
The core thing we learned is that our “NGINX is the center of the universe” approach does not serve our users well. We had been building products that aimed to make NGINX the “platform” – the unified management plane for everything related to application deployment. We knew that some of our previous products geared towards that goal had seen only light use and adoption. You told us that NGINX is a mission-critical component of your existing platform, homegrown or otherwise, but that NGINX is not the platform. Therefore, we needed to integrate better with the rest of your components to make it easier to deploy, manage, and secure our products, with (and this is important) transparent pricing and consumption models. And to make it all possible via API, of course.
The underlying message was straightforward: make it easier for you to integrate NGINX into your workflows, existing toolchains, and processes in an unopinionated manner. We heard you. In 2024, we will be taking a simpler, more flexible, repeatable, and scalable approach to use-case configuration and management for the data plane and security.
That desire makes complete sense. Your world has changed and continues to change! You have transitioned through various stages, moving from cloud to hybrid to multi-cloud and hybrid multi-cloud setups. There have also been shifts from VMs to Kubernetes, and from APIs to microservices and serverless. Many of you have shifted left, and that has led to complexity. More teams have more tools that require more management, observability, and robust security – all powering apps that must be able to scale out in minutes, not hours, days, or weeks. And the latest accelerant, artificial intelligence (AI), puts significant pressure on legacy application and infrastructure architectures.
While the bones of NGINX products have always been rock solid, battle-tested, and performant, the way our users consume, manage, and observe NGINX hasn’t kept up with the times. We are moving quickly to remedy that with a new product launch and a slew of new capabilities. We will announce more at F5’s AppWorld 2024 conference, happening February 6 through 8. Here are the specific pain points we plan to address in upcoming product releases.
Today, CIOs and CTOs can pick from a wide variety of application deployment modalities. This is a blessing because it enables far more choice in terms of performance, capabilities, and resilience. It’s also a curse because diversity leads to complexity and sprawl. For example, managing applications running in AWS requires different configurations, tools, and tribal knowledge than managing applications in Azure Cloud.
While containers have standardized large swathes of application deployment, everything below containers (or traffic going in and out of them) remains differentiated. As the de facto container orchestration platform, Kubernetes was supposed to clean that process up. But anyone who has deployed on Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE) can tell you – they’re not at all alike.
You have told us that managing NGINX products across this huge diversity of environments requires significant operational resources and leads to waste. And, frankly, pricing models based on annual licenses collapse in dynamic environments where you might launch an app on serverless, scale it up on Kubernetes, and maintain a small internal deployment in the cloud for development purposes.
The complexity of diverse environments can make it difficult to discover and monitor where modern apps are deployed, and then to apply the right security measures. Maybe you deployed NGINX Plus as your global load balancer and NGINX Open Source for various microservices, each running in a different cloud or on top of a different type of application, and each with different requirements for privacy, data protection, and traffic management.
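To ground that scenario, here is a minimal sketch of what the global load-balancing tier might look like. Everything in it is illustrative: the upstream name, hostnames, addresses, and certificate paths are hypothetical, the snippet assumes it sits inside the http context (for example, in a file under /etc/nginx/conf.d/), and the active health check is an NGINX Plus capability.

```nginx
# Illustrative only: NGINX Plus as a global load-balancing tier in front of
# microservices that may run in different clouds.

upstream checkout_service {
    zone checkout_service 64k;      # shared-memory zone for runtime upstream state
    least_conn;                     # route to the server with the fewest active connections
    server 10.0.1.10:8080;          # hypothetical endpoint in cloud A
    server 10.0.2.10:8080;          # hypothetical endpoint in cloud B
}

server {
    listen 443 ssl;
    server_name shop.example.com;   # hypothetical hostname

    ssl_certificate     /etc/nginx/certs/shop.example.com.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/shop.example.com.key;

    location /checkout/ {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://checkout_service;

        health_check;               # active health checks (NGINX Plus only)
    }
}
```

Multiply a snippet like this across every cloud, cluster, and team that maintains its own variant, and the operational sprawl described above follows quickly.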
Each permutation adds a new security twist. There is no standard, comprehensive solution, and that injects operational complexity and the potential for configuration errors. Admittedly, we’ve added to that complexity by making it confusing which types of security can be applied to which NGINX solutions.
We understand. Customers need a single way to secure all applications that leverage NGINX. This unified security solution must cover the vast majority of use cases and deploy the same tools, dashboards, and operational processes across all cloud, on-prem, serverless, and other environments. We also recognize the importance of moving towards a more intelligent security approach, one that leverages the collective intelligence of the NGINX community and the unprecedented view of global traffic that we are fortunate to have.
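As a picture of the per-instance model that exists today, here is a minimal sketch of attaching NGINX App Protect WAF to a single NGINX instance. The hostname and backend address are hypothetical, and the policy and logging paths are the commonly documented defaults, so they may differ in your installation. Every instance in every environment carries its own copy of configuration like this, which is exactly the repetition a unified security approach should remove.

```nginx
# Illustrative only: enabling NGINX App Protect WAF on one NGINX instance.

load_module modules/ngx_http_app_protect_module.so;     # WAF module, loaded in the main context

events {}

http {
    server {
        listen 80;
        server_name app.example.com;                     # hypothetical hostname

        app_protect_enable on;                           # turn enforcement on for this server
        app_protect_policy_file "/etc/app_protect/conf/NginxDefaultPolicy.json";
        app_protect_security_log_enable on;
        app_protect_security_log "/etc/app_protect/conf/log_default.json" syslog:server=127.0.0.1:514;

        location / {
            proxy_pass http://127.0.0.1:8080;            # hypothetical backend service
        }
    }
}
```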
In a shift-left world, every organization wants to empower developers and practitioners to do their jobs better, without filing a ticket or sending a Slack message. The reality has been different. Some marginal abstraction of complexity has been achieved with Kubernetes, serverless, and other mechanisms for managing distributed applications and applications that span on-prem, cloud, and multi-cloud environments. But that progress has largely been confined inside the container and the application. It has not translated well to the layers around applications, such as networking, security, and observability, nor to CI/CD.
I have hinted at these issues in the previous pain points, but the bottom line is this: complexity carries great costs in hours and toil, compromised security, and reduced resilience. Maintaining increasingly complex systems is fundamentally challenging and resource intensive. Pricing and licensing complexity adds another unhappy layer. NGINX has never been a “true-up” company that sticks it to users when they mistakenly overconsume.
But in a world of SaaS, APIs, and microservices, you want to pay as you go, not by the year, the seat, or the site license. You want an easy-to-understand pricing model based on consumption, covering all NGINX products and services across your entire technology infrastructure and application portfolio. You also want a way to incorporate support and security for any open source modules your teams run, paying for just the bits that you want.
This will require some shifts in how NGINX packages and prices products. The ultimate solution must deliver simplicity, transparency, and pay-for-what-you-consume pricing, just like any other SaaS. We hear you. And we have something great in store that will address all three of the above pain points.
We will be talking about these exciting updates at AppWorld 2024 and will be rolling out pieces of the solution as part of our longer-term plan and roadmap over the next twelve months.
Join me on this journey and tune in to AppWorld for a full breakdown of what’s in store. Early bird pricing is available through January 21. Please check out the AppWorld 2024 registration page for further details. You’re also invited to join NGINX leaders and other members of the community on the night of February 6 at F5’s San Jose office for an evening of looking ahead to the future of NGINX, connecting with the community, and indulging in the classics: pizza and swag! See the event page for registration and details.
We hope to see you next month in San Jose!
"This blog post may reference products that are no longer available and/or no longer supported. For the most current information about available F5 NGINX products and solutions, explore our NGINX product family. NGINX is now part of F5. All previous NGINX.com links will redirect to similar NGINX content on F5.com."