July 31, 2025
Doing More With Your Existing Kafka

Apache Kafka has become the foundation for real-time data pipelines across industries. From processing financial transactions to tracking IoT sensor data, Kafka is a key building block of enterprise architecture. Despite its usefulness, organizations and developers alike still struggle to unlock the full value of their Kafka investments.

The challenge, however, isn’t Kafka itself; it’s everything around it. Custom-built proxies, restricted access, limited governance and operational complexity all create barriers that prevent real-time data from being fully leveraged across teams. For many development teams, Kafka remains powerful but inaccessible, scalable but expensive to manage.

According to IDC, 90% of the world’s largest companies will use real-time intelligence to improve services and customer experience by this year. Gartner reports that 68% of IT leaders plan to increase their use of event-driven architecture (EDA). Given these statistics, organizations can’t afford to let their Kafka pipelines sit underutilized.

The transformation into a real-time business isn’t just a technical shift; it’s a strategic one. According to MIT’s Center for Information Systems Research (CISR), companies in the top quartile of real-time business maturity report 62% higher revenue growth and 97% higher profit margins than those in the bottom quartile. These organizations use real-time data not only to power systems but to inform decisions, personalize customer experiences and streamline operations. Kafka is central to this strategy, but only when its data streams are fully accessible and actionable.

Navigating the Complexities of Kafka

Many teams struggle with exposing Kafka topics in a secure, discoverable and controlled way. Internal developers often need specialized knowledge to access or interact with Kafka, which slows development and creates bottlenecks. Meanwhile, security and compliance teams face challenges enforcing consistent authentication and authorization policies. These problems are compounded in organizations running multiple Kafka clusters or instances.

To bridge the gap, organizations often build custom proxies or integration layers to expose Kafka to external teams or partners. While functional, these DIY solutions can break easily and are hard to maintain and scale. Kafka is not a full-stack governance or API solution, and this is where the headaches begin.

Consider a company with several sales and product systems producing live usage data. Without a standardized gateway layer, each integration between these systems and their Kafka clusters requires custom engineering effort: one API for the CRM, another for the billing platform and a third for the analytics tool. Over time, this patchwork approach becomes fragile and difficult to audit or expand.
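To make the problem concrete, here is a minimal sketch of what one of these hand-rolled integration layers often looks like: a small HTTP proxy in front of a single Kafka topic, written for a single consuming system. The topic name, broker address and endpoint are hypothetical, and each downstream system typically ends up with its own slightly different copy of this code.

```python
# A minimal sketch of the DIY pattern described above: a hand-rolled
# HTTP proxy in front of one Kafka topic, for one consuming system.
# Topic names, broker addresses, and endpoints are hypothetical.
from confluent_kafka import Consumer
from flask import Flask, jsonify

app = Flask(__name__)

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",   # assumed broker address
    "group.id": "crm-proxy",             # one group per bespoke proxy
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["product.usage"])    # hypothetical topic name

@app.route("/crm/usage-events")
def usage_events():
    # Poll a small batch and return it as JSON. Auth, retries, schema
    # handling, and pagination are all left to each team to reinvent.
    batch = []
    for _ in range(100):
        msg = consumer.poll(timeout=0.1)
        if msg is None:
            break
        if msg.error():
            continue
        batch.append(msg.value().decode("utf-8"))
    return jsonify(batch)

if __name__ == "__main__":
    app.run(port=8080)
```

Multiply this by every consuming system and every topic, and the audit and maintenance burden described above follows quickly.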

Reframing Kafka as an API

Organizations are starting to think about Kafka differently, though, treating it as an extension of the broader API ecosystem. New technologies like Kong Event Gateway allow organizations to expose Kafka topics and event streams as managed APIs, bringing built-in governance, observability and security.

There are practical implications to this reframing, including:

- Kafka topics can be published in an internal or external developer portal, just like REST APIs, allowing for easier reuse and collaboration.
- Role-based access control (RBAC), OAuth2 and other policies can be applied to Kafka topics using existing API management tools (see the sketch below).
- By virtualizing topics and allowing safe cluster sharing, teams can reduce unnecessary duplication of systems while maintaining access control.
- Encryption and traffic shaping make it easier to move event workloads into cloud-based Kafka services.
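The RBAC and OAuth2 point can be made concrete with a small, gateway-agnostic sketch: before any consume call is allowed, a policy layer checks whether the caller’s OAuth2 token carries the scope required for the requested topic. The introspection endpoint, scope names and topic map below are all hypothetical; an event gateway product would enforce the equivalent policy centrally rather than in application code.

```python
# A gateway-agnostic sketch of topic-level authorization: map an OAuth2
# bearer token's scopes to Kafka topic read permissions (RFC 7662 token
# introspection). Endpoint, scopes, and topics are hypothetical.
import requests

INTROSPECTION_URL = "https://auth.example.com/oauth2/introspect"  # assumed

# Hypothetical RBAC table: which scope grants read access to which topic.
TOPIC_READ_SCOPES = {
    "payments.events": "payments:read",
    "product.usage": "usage:read",
}

def can_read_topic(bearer_token: str, topic: str) -> bool:
    """Return True if the token is active and carries the topic's read scope."""
    # Client credentials for the introspection call are omitted for brevity.
    resp = requests.post(INTROSPECTION_URL, data={"token": bearer_token})
    claims = resp.json()
    if not claims.get("active"):
        return False
    granted = set(claims.get("scope", "").split())
    required = TOPIC_READ_SCOPES.get(topic)
    return required is not None and required in granted
```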

This also gives developers a single, unified control plane across REST, event and AI-based APIs, which simplifies development and improves operational visibility.

This opens the door to a wide range of real-time business applications. For example, a telecommunications provider might use event gateways to expose streaming network telemetry to both internal tools and third-party developers building analytics apps. These APIs could be versioned, rate-limited and secured, just like any REST API, but powered by live Kafka streams. This approach enables new revenue streams without duplicating data pipelines or rebuilding core systems.
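As an illustrative sketch of that pattern (not any particular vendor’s implementation), the endpoint below streams a hypothetical network.telemetry topic over server-sent events, carries a version in its URL path, and applies a naive per-client rate limit. In production, the versioning, rate limiting and authentication would live in the gateway rather than in application code.

```python
# A sketch of a versioned, rate-limited HTTP streaming endpoint backed
# by a live Kafka topic. Topic name, version path, broker address, and
# limits are hypothetical.
import time
from confluent_kafka import Consumer
from flask import Flask, Response, abort, request

app = Flask(__name__)
REQUESTS_PER_MINUTE = 60
_hits: dict[str, list[float]] = {}

def rate_limited(client_id: str) -> bool:
    # Naive fixed-window limiter; a gateway would enforce this centrally.
    now = time.time()
    window = [t for t in _hits.get(client_id, []) if now - t < 60]
    window.append(now)
    _hits[client_id] = window
    return len(window) > REQUESTS_PER_MINUTE

@app.route("/v1/telemetry/stream")          # versioned like any REST API
def stream_telemetry():
    client = request.headers.get("X-Client-Id", "anonymous")
    if rate_limited(client):
        abort(429)
    consumer = Consumer({
        "bootstrap.servers": "kafka:9092",  # assumed broker address
        "group.id": f"telemetry-{client}",
        "auto.offset.reset": "latest",
    })
    consumer.subscribe(["network.telemetry"])  # hypothetical topic

    def events():
        try:
            while True:
                msg = consumer.poll(timeout=1.0)
                if msg is None or msg.error():
                    continue
                # Server-sent events: one Kafka record per SSE message.
                yield f"data: {msg.value().decode('utf-8')}\n\n"
        finally:
            consumer.close()

    return Response(events(), mimetype="text/event-stream")
```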

A More Strategic Role for Kafka

When event streams are discoverable, secure and easy to consume, they are more likely to become strategic assets. For example, a Kafka topic tracking payment events could be exposed as a self-service API for internal analytics teams, customer-facing dashboards or third-party partners.

This unlocks faster time to value for new applications, enables better reuse of existing data infrastructure, boosts developer productivity and helps organizations meet compliance requirements more easily.

Kafka is already doing the heavy lifting for real-time data across the enterprise. But to get full ROI, organizations must move beyond simply deploying Kafka and make it accessible, governable and aligned with the broader developer and business ecosystem.

Event gateways offer a practical and powerful way to close the gap between infrastructure and innovation. They make it possible for developers and business teams alike to build on top of real-time data securely, efficiently and at scale. As more organizations move toward AI-driven and event-based architectures, turning Kafka into an accessible and governable part of your API strategy may be one of the highest-leverage steps you can take, not just for IT, but for the entire business.

About the author: Saju Pillai is the senior vice president of engineering at Kong. A seasoned engineering executive experienced in building teams and products from the ground up at both startups and global corporations, Pillai worked as a principal engineer at Oracle Corp. on HTTP servers and Fusion Middleware technology. He then went on to build and successfully exit a startup in the RBA space. Before joining Kong, Pillai built Concur’s core platform as the company’s VP of engineering and later ran Concur’s R&D and infrastructure operations as CTO and SVP of engineering.
