Tuesday 14:10 UTC
OLTP Application Data Services with Apache Geode
Real-time transactions can be fast and furious. Think about building the next big retail market app: its users need answers as quickly as possible, and any slowdown in response times will send them somewhere else. As the application grows popular, more and more users arrive, so highly scalable and fast data services are essential.
This session will highlight and describe:
- What is OLTP?
- What are its characteristics?
- What are the data service challenges?
How Apache Geode can be used to meet needs such as:
- Strong Consistency
- NoSQL characteristics
- High Availability
- Fault Tolerance
- WAN Replication
Gregory Green is an Advisory Solution Engineer with over 25 years of diverse experience in industries such as financial services, pharmaceuticals, and telecommunications. He specializes in transforming legacy/monolith systems into microservices and cloud-native applications, with a focus on scalable, highly available, self-healing cloud-native data platforms, as part of the Modern Applications Platform Business Unit.
Tuesday 15:00 UTC
Building resilient and scalable API backends with Apache Pulsar and Spring Reactive
A reactive style of programming can simplify service composition in API backends built for massive scale. A reactive microservice should be able to use reactive APIs for the end-to-end processing of an API request, yet a reactive interface to a pub-sub messaging system is a common gap. This talk demonstrates a solution in which the pub-sub messaging system, Apache Pulsar, becomes part of a fully end-to-end reactive solution for building resilient and highly scalable API backends.
Using a sample Spring Boot API backend, the talk demonstrates how the Apache Pulsar Java Client can be adapted to Reactive Streams so that the full extent of Spring Reactive can be leveraged for error handling and improved resilience, using features such as reactive non-blocking backpressure, timeout handling, circuit breakers, retries, and rate limiting.
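The pattern the talk describes (adapting a future-based receive into demand-driven consumption with timeouts and bounded retries) can be sketched in any language. Here is a minimal asyncio illustration; `FakeConsumer` is a stand-in for a messaging client, not the Pulsar API:

```python
import asyncio

class FakeConsumer:
    """Stand-in for a future-based messaging client (not the Pulsar API)."""
    def __init__(self, messages):
        self._messages = list(messages)
        self._fail_once = True

    async def receive(self):
        if self._fail_once:                 # simulate one transient failure
            self._fail_once = False
            raise ConnectionError("transient broker hiccup")
        await asyncio.sleep(0)              # yield control, like a real async receive
        return self._messages.pop(0) if self._messages else None

async def consume(consumer, *, timeout=1.0, retries=3):
    """Pull messages one at a time (demand-driven, hence backpressure-friendly),
    with a per-receive timeout and a bounded retry on transient errors."""
    out = []
    while True:
        for attempt in range(retries):
            try:
                msg = await asyncio.wait_for(consumer.receive(), timeout)
            except ConnectionError:
                if attempt == retries - 1:  # retries exhausted: give up
                    raise
                continue
            if msg is None:                 # end of stream
                return out
            out.append(msg)
            break

messages = asyncio.run(consume(FakeConsumer(["a", "b", "c"])))
print(messages)
```

The same shape maps onto Reactor operators such as `timeout`, `retryWhen`, and request-driven demand in the Java solution the talk covers.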
Lari Hotari is an Apache Pulsar committer and Senior Software Engineer at DataStax. He has worked on the Java platform since 1997 and has contributed to open source for over 20 years.
Tuesday 15:50 UTC
How to extend Apache APISIX into a Service Mesh sidecar
This talk introduces the apisix-mesh-agent project, which extends Apache APISIX into a sidecar program for Service Mesh scenarios; it uses the xDS protocol to fetch configuration from control planes such as Istio and Kuma. The talk will then cover future plans and expectations for using Apache APISIX in Service Mesh.
Chao Zhang (GitHub id: tokers) is a PMC member of the Apache APISIX project, a contributor to OpenResty, and a lover of Open Source.
Tuesday 17:10 UTC
Microservices using Apache Sling
Dr Yash Mody
As we progress towards low-code and no-code application behaviour, Apache Sling is uniquely positioned to create microservices that Martech teams can create and control with great ease. The OAS 3-compatible OSGi module, the Apache Shiro security layer, and an API gateway using the Apache web server make it a robust platform, and the JCR node structure adapts naturally to the RESTful API philosophy. A messaging layer built on Kafka, with ZooKeeper managing high availability, ingests API statistics to feed an AI layer that allows API access to be modulated and optimised.
Yash has been an avid proponent of Open Source software. He handles the online platform for several banks and e-commerce platforms. He is currently focused on building his no-code API platform “SHAFT”, based on Apache Sling and other Apache tech stack products.
Tuesday 18:00 UTC
Introduction to AsyncAPI for Apache Kafka
Apache Kafka is a powerful and flexible way of streaming events and data, and can accommodate an unending variety of data types and structures in a variety of formats. This great feature can also be a bug when you're integrating with yet another completely different stream of data!
Enter AsyncAPI, an open standard for describing records in event-driven systems. Using AsyncAPI you can clearly describe data payloads and transform that information into developer-friendly documentation; other tools can validate AsyncAPI documents and generate code for clients or integrations. In this session you will see "behind the curtain" of how an AsyncAPI document is structured, and get a demo of some of the tools that build on AsyncAPI to produce things like documentation and example code. If you aren't using AsyncAPI already and your Kafka talks to other systems, you'll want to see this talk.
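For orientation, an AsyncAPI 2.x document has a small number of top-level sections (`asyncapi`, `info`, `servers`, `channels`). Here is a minimal sketch expressed as a Python dict for illustration; a real document would normally be YAML, and the topic and payload below are invented:

```python
# A minimal AsyncAPI 2.x document describing one Kafka topic.
asyncapi_doc = {
    "asyncapi": "2.0.0",
    "info": {"title": "Orders Service", "version": "1.0.0"},
    "servers": {
        "production": {"url": "broker.example.com:9092", "protocol": "kafka"},
    },
    "channels": {
        # The channel name maps to the Kafka topic.
        "orders.created": {
            "subscribe": {  # consumers receive messages from this channel
                "message": {
                    "name": "OrderCreated",
                    # The payload is described with JSON Schema.
                    "payload": {
                        "type": "object",
                        "properties": {
                            "orderId": {"type": "string"},
                            "amount": {"type": "number"},
                        },
                        "required": ["orderId"],
                    },
                }
            }
        }
    },
}

# Documentation and code generators walk exactly this structure.
for channel, ops in asyncapi_doc["channels"].items():
    print(channel, "->", ops["subscribe"]["message"]["name"])
```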
Lorna is based in Yorkshire, UK; she is a Developer Advocate at Aiven as well as a published author and experienced conference speaker. She brings her technical expertise on a range of topics to audiences all over the world through her writing and speaking engagements. Lorna has a strong background in open source and a passion for better developer experiences everywhere. You can find out more about Lorna on her website https://lornajane.net.
Tuesday 18:50 UTC
A high-security API management infrastructure using Apache Camel
Apache Camel is usually thought of as a tool for integrating disparate systems. In addition, it is also suitable as an API management infrastructure. However, when it is used this way there is still a security drawback, because Camel alone cannot perform access control such as Token Introspection and Scope Check. In this presentation, we take up the challenge of proposing a high-security API management infrastructure, constructed mainly with Camel and Keycloak (an Identity and Access Management tool based on OAuth 2.0/OIDC), that resolves this drawback.
Our proposal provides the following features: Token Issuance and Management based on OAuth 2.0, integration with external IdPs, Reverse Proxy, Access Control (Token Introspection, Scope Check, and OAuth MTLS), Flow Control, Metrics (working with other components such as Prometheus), API Specification Publishing, and features that support API creation, such as Protocol Conversion and Mash-up.
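To make the Access Control step concrete, a gateway holding an OAuth 2.0 Token Introspection response (RFC 7662) needs only a small decision function. This is a generic sketch of that check, not Camel- or Keycloak-specific code, and the scope names are invented:

```python
def authorize(introspection: dict, required_scope: str) -> bool:
    """Decide access from an RFC 7662 token introspection response:
    the token must be active and carry the required scope."""
    if not introspection.get("active", False):
        return False  # revoked, expired, or unknown token
    granted = introspection.get("scope", "").split()  # space-delimited per RFC
    return required_scope in granted

# An introspection endpoint returns JSON like this for a valid token:
resp = {"active": True, "scope": "orders:read orders:write", "client_id": "shop"}
print(authorize(resp, "orders:read"))               # True
print(authorize(resp, "admin"))                     # False
print(authorize({"active": False}, "orders:read"))  # False
```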
Yang Xie is a software engineer at Hitachi, Ltd., engaged in technical support for Keycloak, 3scale, and Apache Camel. He is also an open source contributor to Keycloak.
Tuesday 19:40 UTC
Building Kubernetes Microservices with Apache Thrift
This talk will take attendees through the process of building high performance microservices with Apache Thrift targeting deployment on Kubernetes. The talk begins with a look at Thrift and Thrift client and service development, calling out specific Thrift best practices. In the second part of the talk we will walk through container packaging and Kubernetes manifest construction for service deployment, covering cloud native best practices for services on Kubernetes. Performance comparisons and tips are also covered. Attendees will leave with a comprehensive understanding of cloud native microservice development with Apache Thrift.
Randy Abernethy has been in the computer industry for over 25 years and is currently a managing partner at RX-M LLC, a cloud native consulting and training firm. The author of several books, including the “Programmer’s Guide to Apache Thrift”, Randy is a PMC member of the Apache Thrift project and an active participant in the Cloud Native Computing Foundation, acting as a Cloud Native Ambassador and a TOC contributor. Over the years Randy has held various CTO roles and created institutional trading platforms processing billions of dollars in transactions daily. He is a firm believer in IDL-based interfaces, microservices, Jeeps, sailing, and old Sci-Fi movies.
Wednesday 14:10 UTC
Apache Flink StateFun: A Platform-Independent Stateful Serverless Stack
Tzu-Li (Gordon) Tai
Stateful Functions (StateFun), a project developed under the umbrella of Apache Flink, provides consistent messaging and distributed state management for stateful serverless applications. It does so in a vendor-, platform-, and language-agnostic manner: applications are composed of inter-messaging, polyglot functions that can be deployed on a mixture of your preferred FaaS platforms, as a Spring Boot application on Kubernetes, or with virtually any deployment method used in modern cloud-native architectures.
In this session, you will learn about the core concepts behind the project and the abstractions developers work with, all up to date with the latest 3.0 release. For new users, this talk will be a perfect place to get started with StateFun. For existing users, it will be a great opportunity to catch up with the latest advancements in the project, including improved ergonomics around zero-downtime upgrade capabilities of StateFun applications, the type system for messages and function state, and an extended array of new language SDKs.
Tzu-Li (Gordon) Tai is an Apache Flink PMC member and Senior Software Engineer at Ververica. He is currently working on the Stateful Functions (https://statefun.io) project in Apache Flink. In the past, he has contributed to various other parts of the Apache Flink project, including some of the more popular streaming connectors for Flink (Apache Kafka, AWS Kinesis, etc.) as well as several topics surrounding evolvability of stateful Flink streaming applications.
Wednesday 15:00 UTC
Scaling Betfair Exchange to support 160k rps using Apache Kafka
On the Betfair Exchange, our biggest customers trade at ultra-high frequencies, pouring millions of GBP on high-profile sporting events into our trading systems. As such, low latency and reliability are key to everything we build. Our customers need to be able to view the current positions on offer on a market, place their orders accordingly, and see them fulfilled reliably, all of which needs to happen in a few milliseconds. As an ever-expanding company, we also see steady growth in the number of jurisdictions we operate in, so the number of customers operating at such frequencies goes up every single day. All this means we need our exchange trading platform to be resilient, reliable, fast, and easily scalable.
On a busy Saturday afternoon with popular football on, we see in excess of 200k transactions per second across our estate, with 99.9% of them served within an SLA of 10ms. This used to be about 40k transactions per second a few years ago, but the goalposts are constantly moving for us. To get from that point to the present day, we needed to fundamentally re-engineer our backend systems to be largely event-driven, and Apache Kafka was the perfect tool to help us solve this problem.
We are also currently in the middle of rearchitecting the core of our exchange platform, which accepts new orders and fulfils them in a matter of a few milliseconds. Whilst a transactional RDBMS has performed this function reliably for us for many years, it's time to move this onto the next generation of fast, reliable, distributed, and scalable infrastructure. This presentation details some of the challenges we faced and how we overcame them in rewriting the core of the exchange using Apache Kafka, essentially pumping millions of GBP in trades every day through a central backbone built around Apache Kafka.
Manjunath Shivakumar has worked on the Betfair Exchange for nearly a decade, evolving it from a monolithic system with limited scalability to the fully event-driven, microservice-based architecture of today, able to support in excess of 200k requests per second. He started as a developer working on core areas of the exchange and is now responsible for driving its future direction as its architect. He presents regularly at company-internal meetups and has presented externally at prestigious conferences such as Kafka Summit, and would love the opportunity to present this journey to a more global audience at ApacheCon 2021.
Wednesday 15:50 UTC
Practice of Apache APISIX in Low-Code Gateway
Apache APISIX is a cloud-native, high-performance, fully dynamic API gateway. As a traffic portal, it provides rich traffic management capabilities: on the one hand, it keeps upstream services stable and reliable under high traffic; on the other hand, combined with a flexible plug-in mechanism, it enables fine-grained traffic control and integrates common business solutions. User authentication services that used to be implemented in multiple upstream services can now be unified at the gateway level through Apache APISIX, greatly reducing the duplicated development of common services. In addition, Apache APISIX is the first low-code API gateway: users can assemble plug-ins by dragging and dropping and generate DSLs for the gateway to handle, enabling plug-ins to create plug-ins! In this talk, I will share the problems encountered and the experience accumulated while implementing a low-code gateway with Apache APISIX.
Zhiyuan Ju is an Apache APISIX PMC member, ApacheCon 2020 speaker, and core member of freeCodeCamp.
Wednesday 17:10 UTC
Pulsar Beam, HTTP streaming over Apache Pulsar
To enable language- and OS-agnostic Pulsar clients, we answered the call to build a service for producing and consuming events via HTTP. It offers an HTTP endpoint for message ingestion, an HTTP SSE interface for event streaming on the consumer side, and a push-notification-based webhook interface. The design, implemented in Go, keeps Cloud Native in mind: key components are stateless and can be deployed independently for scalability.
By supporting webhooks, Pulsar Beam is able to tap into the major cloud providers. It extends the reach of Pulsar to the entire AWS, Azure, and GCP ecosystems via their function integrations, such as Lambda.
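On the consuming side, the SSE wire format itself is simple enough that a client in any language needs only a tiny parser: events are separated by blank lines, and each `data:` line contributes one line of payload. A minimal sketch (the payloads are invented, and this is not Pulsar Beam's actual endpoint format):

```python
def parse_sse(stream: str):
    """Minimal Server-Sent Events parser: blank lines separate events;
    each 'data:' line adds one line to the current event's payload."""
    events, data = [], []
    for line in stream.splitlines():
        if line.startswith("data:"):
            data.append(line[5:].lstrip())
        elif line == "" and data:          # blank line ends the event
            events.append("\n".join(data))
            data = []
    if data:                               # flush a trailing event
        events.append("\n".join(data))
    return events

raw = 'data: {"msg": "hello"}\n\ndata: {"msg": "world"}\n\n'
print(parse_sse(raw))  # two events, one JSON payload each
```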
Ming is a software engineer at DataStax. He was co-founder of Kesque, which offered Pulsar as a SaaS product. Prior to that, he built large-scale, real-time distributed software for IBM, Ericsson, and Nortel.
Wednesday 18:00 UTC
Use Java to write plugins for Apache APISIX
Apache APISIX natively uses Lua for writing plug-ins, but Lua is not a very popular development language. How can more developers be brought into Apache APISIX development? Supporting more languages for writing plug-ins is a good way, and this talk will introduce how to write Apache APISIX plug-ins in Java.
Ming Wen is the Apache APISIX PMC Chair.
Wednesday 18:50 UTC
Splitting Monolith Application to Microservices: gains and challenges from the practical experience
This talk presents a practical case of splitting a monolith e-commerce application into microservices using the Apache CXF, Karaf, and ActiveMQ technologies.
The following topics will be addressed in the presentation:
- Motivation and goals of splitting a monolith application
- Criteria and markers to start the splitting process. Is it necessary at all?
- Optimal order of extracting microservices
- How to organize the whole process in closed iterative steps?
- What can be done with common libraries and shared code?
- Options for technology and deployment of target microservices
- How to organize and motivate the teams and convince management?
Andrei is a platform architect in the Conrad Electronic group developing the e-commerce platform. His areas of interest are REST API design, microservices, cloud, resilient distributed systems, security, and agile development. Andrei is a PMC member and committer of Apache CXF and a committer of the Apache Syncope project.
Wednesday 19:40 UTC
Observability and Resiliency for your Apache Thrift service
This talk will discuss techniques to implement observability for Apache Thrift services. It will focus on logging (structured logging and correlated error logging), adding metrics, and implementing distributed tracing. We will also touch upon implementing resiliency techniques via auto-scaling and chaos engineering.
The talk will focus on patterns and be language-agnostic. However, code samples will accompany the talk and will be in Python.
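As a taste of the structured-logging pattern, here is a minimal sketch using only Python's standard library; the logger name and fields are illustrative, not the talk's actual samples:

```python
import json
import logging
import uuid

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object, so logs are machine-parseable
    and can be correlated across services by a shared request id."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # Extra fields like correlation_id ride along via `extra=`.
            "correlation_id": getattr(record, "correlation_id", None),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("thrift.service")
log.addHandler(handler)
log.setLevel(logging.INFO)

request_id = str(uuid.uuid4())  # normally propagated in from the caller
log.info("order processed", extra={"correlation_id": request_id})
log.error("downstream call failed", extra={"correlation_id": request_id})
```

Because every error line carries the same correlation id as the request that produced it, errors can be joined back to the originating request across service boundaries.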
Amit Saha is a senior site reliability engineer at Atlassian based in Sydney, Australia. He’s the author of two books, including Doing Math with Python, and several other publications.
He has spoken at various conferences and meetups, including PyCON US and PyCON Australia.
OGC APIs: A Suite of Web API Standards for Handling and Exchanging Location Data
Gobe Hobona, Scott Simmons
Many software libraries, including open source ones, include an ability to handle location data. In several cases, when standards are not followed, the software libraries mishandle location data leading to errors in positioning or loss of information. In some cases, the mishandling occurs at the API or microservice tier due to inconsistencies in how the APIs represent location data. Recognising this increasingly serious issue, the Open Geospatial Consortium (OGC) has embarked on a standardisation program to develop the OGC API suite of standards. This suite of standards defines modular API building blocks that spatially enable Web APIs in a consistent way. The proposed presentation will introduce the OGC API suite of standards and describe how various Apache Software Foundation projects could leverage the API standards.
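As a concrete example of one such building block, OGC API - Features (Part 1: Core) exposes features at a `/collections/{collectionId}/items` path with standard query parameters such as `bbox` and `limit`, returning GeoJSON. A small sketch of building such a request (the server root and collection name are hypothetical):

```python
from urllib.parse import urlencode

def items_url(api_root: str, collection: str, bbox=None, limit=None) -> str:
    """Build an OGC API - Features 'items' request (Part 1: Core).
    `bbox` and `limit` are standard query parameters; responses are GeoJSON."""
    url = f"{api_root}/collections/{collection}/items"
    params = {}
    if bbox is not None:
        # bbox is minx,miny,maxx,maxy in the CRS of the collection
        params["bbox"] = ",".join(str(c) for c in bbox)
    if limit is not None:
        params["limit"] = limit
    return url + ("?" + urlencode(params, safe=",") if params else "")

# Hypothetical server and collection:
print(items_url("https://demo.example.org/ogcapi", "buildings",
                bbox=(-1.0, 50.0, 0.0, 51.0), limit=10))
```

Because every conforming server answers the same path and parameters with GeoJSON, a client written once works against any implementation, which is exactly the consistency the OGC API suite is after.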
The OGC is an international consortium of more than 500 businesses, government agencies, research organizations, and universities driven to make geospatial (location) information and services FAIR: Findable, Accessible, Interoperable, and Reusable. The consortium provides a consensus process for the collaborative development, approval, and maintenance of open international standards. It is through this process that the consortium is developing the series of OGC API standards for handling and exchanging location data through Web APIs.
Dr. Gobe Hobona is the OGC's Director of Product Management, Standards. In this role he manages coordination between Standards Working Groups (SWGs) and sets priorities for standards development (in cooperation with SWG chairs). He also provides oversight of OGC Application Programming Interface (API) evolution and harmonization activities. He identifies opportunities for investment in SWG participation, Innovation Program initiatives, and Compliance test development. He monitors and participates in SWG activities, as well as organising Sprints to advance standards or verify their design. He is also accountable for the Compliance Program and is responsible for Knowledge Management.
He holds a PhD in Geomatics from Newcastle University. He also holds a Bachelor of Science with Honours in Geographic Information Science from Newcastle University. He is a professional member of both the Royal Institution of Chartered Surveyors (RICS) and the Association for Computing Machinery (ACM).
Other activities: Chair of the OGC Architecture Domain Working Group; Chair of the OGC Naming Authority; Member of the RICS Aerial Imagery Guidance Working Group; and OGC representative on the IST/36 committee on geographic information of the British Standards Institution (BSI).
Scott Simmons is OGC's Chief Standards Officer. In this role, he provides oversight and direction to the Consortium’s technical and program operations and deliverables. Mr. Simmons also continues to lead the OGC Standards Program, where he ensures that standards progress through the organization’s consensus process to approval and publication. Preceding his time as a member of OGC staff, Mr. Simmons was an active member of OGC, promoting best practices in 3D Information Management (3DIM) as chair of the OGC 3DIM Domain Working Group and chairing or participating in numerous OGC infrastructure, mobility, and web services working groups. His OGC-related research has focused on data lifecycle management, integration, and dissemination.
Mr. Simmons was formerly an Executive Director for CACI International, Inc. (CACI). At CACI, Mr. Simmons' responsibilities included the alignment of new business opportunities with fielded or researched capabilities in the geospatial domain. CACI acquired TechniGraphics, Inc. in 2010, where Mr. Simmons was the Chief Technology Officer. From 1993 to 2000, Mr. Simmons worked as a consulting geologist in the areas of structural geology, seismic risk, and geochemistry for GeoSyntec Consultants, where he also founded and managed the firm's geospatial operations.
From 1988 until 1993, Mr. Simmons worked as an exploration and production geologist in the oil and gas industry, with particular expertise in the redevelopment of declining oil and gas fields as well as the practical application of geospatial technology. Mr. Simmons has also served as an Adjunct Professor at the College of Wooster, on the Advisory Board of the GIS Cluster at the Rocky Mountain Innosphere, and as a subject matter expert for seminars at universities and conferences around the world. He holds a Bachelor of Science degree in Geology from the University of Texas and a Master of Science degree in Geology from Southern Methodist University.