Wednesday 14:10 UTC
Apache Camel 3: the next generation of enterprise integration
Claus Ibsen, Andrea Cosentino
Apache Camel is the leading open source integration framework, which has been around for over a decade. In the last two years, the Camel team has been working on the next generation v3.
The overall goal of this presentation is to show you what’s new in Camel 3, and how Camel is geared toward running modern cloud-native workloads.
Apache Camel is now a family of Camel projects:
- Camel 3
- Camel K
- Camel Quarkus
- Camel Kafka Connector
- Camel Spring Boot
- Camel Karaf
Camel K is the serverless integration platform, enabling low-code/no-code capabilities, where integrations can be snapped together quickly using the power of integration patterns and Camel’s extensive set of connectors.
Together, Knative, the fast runtime of Quarkus, and Camel K bring awesome serverless features such as auto-scaling, scaling to zero, and event-based communication, combined with the great integration capabilities of Apache Camel.
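As an illustration, a Camel K integration is just a small file handed to the `kamel` CLI. The sketch below uses Camel's YAML DSL (the file name and endpoints are our own, not taken from the talk) to log a message every five seconds:

```yaml
# hello.yaml — deploy with: kamel run hello.yaml
- from:
    uri: "timer:tick?period=5000"
    steps:
      - setBody:
          constant: "Hello from Camel K"
      - to: "log:info"
```

Camel K materializes this into a running pod on the cluster and, with Knative installed, can scale it down to zero when idle.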
We will show quick demos of Camel 3 and Camel K and present insights into what's coming next.
Claus Ibsen (@davsclaus) is an open-source enthusiast and software developer. He co-leads the Apache Camel project, an integration framework he has been working on full time for more than a decade. Currently Claus is working on expanding Camel into cloud-native and serverless territory with the latest innovations, Apache Camel K and Camel Quarkus.
With passion and enthusiasm Claus evangelizes about Apache Camel, Java and open source by being active on social media, writing blogs and books, speaking at conferences, etc.
Besides being a Java Champion, Claus is also a member of the Apache Software Foundation.
Prior to joining Red Hat, he has worked as a software developer, architect, and consultant for over a decade. He is based in Denmark.
Andrea Cosentino (@oscerd2) is an open-source addict and software developer. He co-leads Apache Camel and is the Project Management Committee (PMC) Chair of the project. He's currently working on expanding the Camel ecosystem through new subprojects like Camel K, Camel Quarkus, and Camel Kafka Connector (the latest project in the family).
Andrea is a Principal Software Engineer at Red Hat where he works on the Red Hat® Integration Team.
Andrea is active on multiple open-source projects, such as Apache Karaf and Apache ServiceMix, in the roles of committer and PMC member respectively, and on the Fabric8 Kubernetes client as one of the core maintainers.
Andrea is active on social media and blogs, talking about Apache Camel and open source in general.
Camel K and Serverless in Action - When to use it
As organizations move workloads to containers and the cloud, the need to utilize resources optimally is ever more important. Serverless technology allows dynamic allocation of resources as needed, ensuring no under- or over-provisioning. This is particularly useful for workloads that run periodically or see spikes. Apache Camel is a widely used Java integration framework that, in its latest major version, begins to utilize serverless technology. In this session attendees will get an introduction to serverless technology and Apache Camel. They will learn how Camel K utilizes it to run natively on Kubernetes platforms, as well as how to get started with the technologies. Attendees will also see a live demo of Camel K and serverless in action.
Mary Cochran is a Red Hat Senior AppDev Solutions Architect for the Southeast region of the U.S. Mary works with customers to learn more about Red Hat’s application services and how the offerings can be utilized to solve business problems. She has worked with a wide variety of technologies and has developed across the whole stack, from HTML to SQL, on enterprise applications. Mary is a Red Hat Certified Architect with certifications ranging from Camel Developer and Messaging Administrator to System Administrator. She is an Apache Camel contributor and writes posts for Red Hat’s developers blog. Mary has enjoyed the opportunity to speak at multiple conferences, from Red Hat Summit to the Grace Hopper Celebration of Women in Computing, as well as smaller conferences.
Wednesday 15:50 UTC
From Camel to Kamelets: new connectors for event-driven applications
Event-driven architectures are having a great moment among developers. Whether they’re using Kafka as the events backbone, or a serverless environment such as Knative, developers now need new ways to connect their applications to external systems: they need connectors.
Kamelets are a new technology in the Apache Camel ecosystem that provide a solution for this problem: they are general purpose connectors that are ready to use and can connect your platform to virtually any of the 300+ systems supported by Apache Camel, and much more than that.
Kamelets are built with ease-of-use and extensibility in mind. Developers can use them out of the box, picking their favorite one from the constantly growing open catalog of Kamelets at Apache, or easily building their own for their specific needs, or even for their enterprise systems. Contributing to the catalog is also easy for everyone.
Additionally, in case you’re not building a single application but a platform that needs connectors for external systems, there’s no better choice than using Kamelets for connectivity, given the abstract configuration interface they provide, which makes them suitable for building any kind of visual UI (we’ll see some examples).
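For instance, snapping a source Kamelet to a Kafka topic can be declared with a `KameletBinding` custom resource. The sketch below assumes the `telegram-source` Kamelet from the Apache catalog and a Strimzi-managed topic; names and the token are placeholders:

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: telegram-to-kafka
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: telegram-source
    properties:
      authorizationToken: "<bot-token>"   # placeholder
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
```

The abstract configuration interface is visible here: a binding only names a source, a sink, and their properties, which is what makes Kamelets easy to drive from a visual UI.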
In this presentation, we’re going to explore how these new connectors are made and how they work in various contexts. With a live demo, we’re also going to see them in action using the Camel K runtime on Kubernetes, solving some real world problems.
Nicola Ferraro is a Principal Software Engineer at Red Hat. He is an Apache Camel PMC member and co-creator of Camel K, the serverless integration runtime for Kubernetes. He often contributes to Knative, Syndesis, and the Fabric8 development tools for Kubernetes.
Wednesday 17:10 UTC
Testing Kamelets - Verify event sources and sinks with YAKS
Kamelets represent Camel-K route snippets that act as standardized event sources and sinks in an event-driven architecture. Kamelets provide a simple interface that hides the individual implementation details and allows users to connect to external systems by simply providing some required properties.
YAKS is a test automation tool that provides special support for Camel-K, Kamelets and other technologies such as Knative, Apache Kafka, OpenAPI 3, HTTP REST, JMS, and many more.
The presentation shows how to write automated integration tests for Kamelets with YAKS in order to run those tests natively on Kubernetes. The talk begins with a short introduction to the framework concepts and illustrates the general test automation for event sources and sinks with examples and live demos.
In the end, the audience should be able to build automated integration tests for custom Kamelets that bind to messaging technologies such as Knative, HTTP, and Apache Kafka.
Christoph is a senior software engineer at Red Hat working on Middleware application services with Apache Camel, Camel-K and Kamelets. He has worked in enterprise integration projects for over a decade and has gained a special interest in test automation. As the founder of the open-source test frameworks Citrus and YAKS, Christoph constantly shares this passion with others through conference talks and workshops.
Wednesday 18:00 UTC
Building and maintaining an army of Camels to tackle your Cloud Native Problems
Christina Lin, Rachel Yordán
Getting started with Apache Camel? This session is for you. It will not touch on HOW to write Camel routes, but on everything else around Camel, focusing on cloud-native ways: IDE tooling, testing frameworks, scaling instances, configuration, CI/CD processes, and lastly monitoring. Let’s dive into the life of a Camel application, from its birth to a working Camel, and how it can be managed throughout its lifecycle. We will look at two different ways, using Camel Quarkus and Camel K.
Christina Lin is the Technical Evangelist for Red Hat Middleware Integration. She helps to grow market awareness and establish thought leadership for Fuse, AMQ, and 3scale by creating online videos and getting-started blogs, and she has spoken at many conferences around the globe. She has worked in software integration for the finance, telecom, and manufacturing industries, mostly on architectural design and implementation. These real-life system experiences keep her practical, and by combining them with open-source technology she hopes to bring more innovative ideas to future system development.
Rachel Yordán is a Senior Software Engineer for Red Hat Middleware Integration and a core contributor to Fuse Online's upstream project, Syndesis. She is a developer advocate, frequently volunteering at local community events to mentor and teach aspiring software engineers. Originally a premedical student working in a chemistry lab, Rachel spent most of her teen and college years coding for fun. She has now spent over a decade working professionally in web development in a multitude of areas including e-commerce, AppSec, and online education.
Getting further with Apache Camel on Quarkus
Apache Camel has been the proven integration Swiss Army knife for years. In today's world of workloads moving to the cloud, the need for disparate systems to communicate remains greater than ever. This context makes a Kubernetes-oriented Java stack like Quarkus a good fit for implementing Camel routes. In this session, attendees can first expect a quick reminder of Camel Quarkus basics. Beyond that, some useful day-to-day features will be presented via concrete examples. Finally, a demo will illustrate the benefits of running Camel Quarkus on top of Knative.
Alexandre is an open-source enthusiast and member of the Apache Camel PMC. He is mainly contributing to Camel Quarkus these days, with a focus on enabling extensions in native mode.
Wednesday 19:40 UTC
Evolution from ESB to Cloud-native API Integration at The Chronicle of Higher Education
Jeff Bruns, Andre Sluczka
This is a story of how a pivotal change in technology can help solve the same business challenge with more agility and speed, while using the smallest technical footprint and overhead possible. We are talking about enterprise integration platforms moving from huge, monolithic deployment approaches to tiny, lightweight, easy-to-manage containerized microservices running anywhere needed. At first this may seem a monumental task, easier said than done; however, there is no need to re-invent the wheel and no regression is required. Instead, we will use a few tricks to inject a mature integration framework, Apache Camel, which has been tested with billions of exchanges, into our new containerized, cloud-native Kubernetes world. Join us and learn how The Chronicle of Higher Education built a real-time data processing platform for an exceptional customer experience, processing millions of transactions each day. We are excited to show everyone how to adapt to a lightweight cloud-native integration strategy based on Apache’s new Camel K framework.
Jeff Bruns, Sr. Director of Engineering, The Chronicle of Higher Education, Washington D.C.
Jeff oversees the company’s integration platform solutions for multiple digital products including Chronicle.com, Philanthropy.com and ChronicleVitae.com. His cloud-native based EAI approach has taken The Chronicle of Higher Education customer experience to the next level. Jeff is driven by the endless possibilities of modern technology. He is a coder and builder by trade, and also an enterprise architect and innovative thought leader through experience. Jeff Bruns is a generous community contributor, especially when it comes to harnessing API services to deliver new revenue streams through data-as-service utilizing the most modern and innovative frameworks.
As founder and CEO of Datagrate, Andre has helped his clients design and build scalable enterprise integration platforms for 15+ years. At Datagrate we believe that Kubernetes and Apache Camel will become the foundation for cloud-native integration solutions, especially as demand for more vendor- and cloud-agnostic approaches increases each day, and because this approach of connecting an ever-growing landscape of SaaS solutions and legacy software stacks in real time creates such an exceptional customer experience. We have worked for and with Talend, GE Transportation, Bayer, US Cellular, FIS Global, and many more.
Apache NiFi 101: Introduction and Best Practices
In this talk, we will walk step by step through Apache NiFi from the first load to first application. I will include slides, articles and examples to take away as a Quick Start to utilizing Apache NiFi in your real-time dataflows. I will help you get up and running locally on your laptop, Docker or in CDP Public Cloud.
I will cover:
- Flow Files
- Version Control
- Basic Record Processing
- System Diagnostics
- Process Groups
- Scheduling and Cron
- Bulletin Board
- Basic Cluster Architecture
- Controller Services
- Remote Ports
- Handling Errors
Tim Spann is a Developer Advocate @ StreamNative where he works with Apache NiFi, Apache Pulsar, Apache Flink, Apache MXNet, TensorFlow, Apache Spark, big data, the IoT, machine learning, and deep learning. Tim has over a decade of experience with the IoT, big data, distributed computing, streaming technologies, and Java programming. Previously, he was a Principal Field Engineer at Cloudera, a senior solutions architect at AirisData and a senior field engineer at Pivotal. He blogs for DZone, where he is the Big Data Zone leader, and runs a popular meetup in Princeton on big data, the IoT, deep learning, streaming, NiFi, the blockchain, and Spark. Tim is a frequent speaker at conferences such as IoT Fusion, Strata, ApacheCon, Data Works Summit Berlin, DataWorks Summit Sydney, and Oracle Code NYC. He holds a BS and MS in computer science.
Thursday 15:00 UTC
Apache Hop is a complete suite of data orchestration tools, including a visual designer, a server, configuration tools, and so on. As an incubating project it has made fantastic progress over the past year. First I will cover the progress we've made inside the ASF incubator program, then I'll talk about what Apache Hop is and how we're integrating with other Apache projects. You'll be shown live how to run a visually designed pipeline on Apache Spark, for example.
Matt Casters is chief solutions architect at Neo4j, co-founder of Apache Hop, founder of Kettle, and a data orchestration specialist.
Thursday 15:50 UTC
Processing Batch Transactions Using Azure Blob Integration with Apache Camel
The proposal is to create an application that processes transactions received in an Azure Storage Blob container from different vendors, both external and internal. The current challenge is to provide a consumer service that can process transactions in batches and acknowledge the response. These transactions can come in different file formats, such as XML, flat files, and zip files. The size of the files varies with the volume of the incoming batches, which can range from 10 to a few hundred million transactions.
Benefits: This application provides the flexibility of processing file-based transactions in batches.
Description: The requirement is to use an integration framework that can read events from an Azure Storage Blob request container on a periodic cycle, parse the content, transform it to JSON, make external service calls to get the needed information, accumulate the results, and store the final response in another Azure Storage Blob response container.
Approach: The approach is to create a camel-spring-boot application, as Apache Camel provides a component that integrates with Azure Storage Blob for reading files. These files can be read periodically using the Apache Camel timer component. The application is designed to robustly handle large volumes of load, and it is equally resilient in handling failures while processing files.
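A route along these lines could be sketched in Camel's YAML DSL as follows. This is illustrative only: the account, container, and service names are placeholders, and the `azure-storage-blob` endpoint options shown here are assumptions to check against the component documentation, not details from the talk.

```yaml
# Sketch: periodically list blobs in a request container, process each one,
# and upload the result to a response container. Names are placeholders.
- from:
    uri: "timer:poll?period=60000"          # periodic polling cycle
    steps:
      - to: "azure-storage-blob://myaccount/request-container?operation=listBlobs"
      - split:
          simple: "${body}"                  # one exchange per listed blob
          steps:
            - to: "azure-storage-blob://myaccount/request-container?operation=getBlob"
            # parse the content, transform to JSON, and enrich via an external call
            - to: "https://enrichment-service.example.com/lookup"
            - to: "azure-storage-blob://myaccount/response-container?operation=uploadBlockBlob"
```

The timer component drives the periodic cycle described in the abstract, while the splitter lets each blob in a batch be processed (and fail) independently.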
Srikant Mantha has 15+ years of experience in the IT industry, 9 of them in the integration space, working with TIBCO, Spring Integration, and Camel. He has been building applications with Apache Camel for the last three years.
Thursday 17:10 UTC
Apache Ignite Extensions - Modularization
Apache Ignite Extensions allow the Apache Ignite codebase to host core module capabilities while migrating third-party integrations to a separate repository.
The migration effort started with the following motivation:
- Keep Apache Ignite core modules and extension modules on separate release lifecycles.
- Deprecate the few integrations which are no longer in use.
- Help the Apache Ignite community support core and extensions separately (test, release, fix, continue development).
The following extensions are currently undergoing migration and won't be maintained by the Apache Ignite community for every core release. If the community later sees demand for an unsupported integration, it can be taken back and officially supported (testing, dev, releases, compatibility with the core) as an Ignite module.
Flink - Ignite Flink Streamer consumes messages from an Apache Flink consumer endpoint and feeds them into an Ignite cache.
Flume - IgniteSink is a Flume sink that extracts events from an associated Flume channel and injects into an Ignite cache.
Twitter - Ignite Twitter Streamer consumes messages from a Twitter Streaming API and inserts them into an Ignite cache.
ZeroMQ - Ignite ZeroMQ Streamer consumes messages from a ZeroMQ consumer endpoint and feeds them into an Ignite cache.
RocketMQ - Ignite RocketMQ Streamer consumes messages from an Apache RocketMQ consumer endpoint and feeds them into an Ignite cache.
Storm - Ignite Storm Streamer consumes messages from an Apache Storm consumer endpoint and feeds them into an Ignite cache.
MQTT - Ignite MQTT Streamer consumes messages from an MQTT topic and feeds transformed key-value pairs into an Ignite cache.
Camel - Ignite Camel streamer consumes messages from an Apache Camel consumer endpoint and feeds them into an Ignite cache.
JMS - Ignite JMS Data Streamer consumes messages from JMS brokers and inserts them into Ignite caches.
Kafka - The Apache Ignite Kafka Streamer module provides streaming from Kafka to an Ignite cache. There are two ways this can be achieved:
- importing the Kafka Streamer module in your Maven project and instantiating KafkaStreamer for data streaming
- using Kafka Connect functionality
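The Kafka Connect route can be configured with a sink-connector properties file along these lines. The class and option names below follow the Ignite Kafka module's sink connector, but treat them as a sketch to verify against your Ignite release; the paths and topic name are placeholders:

```properties
# Illustrative Kafka Connect sink configuration for streaming a topic into Ignite
name=ignite-sink
connector.class=org.apache.ignite.stream.kafka.connect.IgniteSinkConnector
tasks.max=2
topics=events
cacheName=myCache
igniteCfg=/opt/ignite/config/ignite.xml
cacheAllowOverwrite=true
```

With this in place, Kafka Connect handles offsets and scaling via `tasks.max`, while the connector writes each record into the named Ignite cache.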
An extension can be released separately from Apache Ignite core.
An extension has to be tested with existing testing tools like TeamCity and Travis.
Each extension is validated against every Apache Ignite core release, and a new version of the extension is released along with Apache Ignite core if changes are required.
Extensions can continue to have their own specific version releases, which need not be aligned with the Apache Ignite core release version.
We identified risks in the migration effort associated with modifying the existing build pipeline and testing procedures. Release policies also have to be updated to ensure that the module and core version compatibility matrix is kept up to date.
New extensions have also been contributed by the Apache Ignite community to the Ignite Extensions project:
Pub-Sub - Pub/Sub module is a streaming connector to inject Pub/Sub data into Ignite cache.
Spring Boot Autoconfigure - Apache Ignite Spring Boot Autoconfigure module provides autoconfiguration capabilities for Spring-boot based applications.
Spring Boot Thin Client Autoconfigure - Apache Ignite Client Spring Boot Autoconfigure module provides autoconfiguration capabilities for Spring-boot based applications.
Saikat Maitra is Principal Engineer at Target and Apache Ignite Committer and PMC Member. Prior to Target, he worked for Flipkart and AOL (America Online) to build retail and e-commerce systems. Saikat received his Master of Technology in Software Systems from BITS, Pilani.
Thursday 18:50 UTC
Privacy on Beam - E2E Differential Privacy Solution for Apache Beam
Mirac Vuslat Basaran
Privacy on Beam (https://github.com/google/differential-privacy/tree/main/privacy-on-beam) is an easy-to-use, end-to-end differential privacy solution for Apache Beam, a framework for scalable distributed data processing.
Differential privacy is a mathematical concept for anonymization and protecting user privacy that has been gaining more and more traction in research and in industry (for example, the US Census Bureau is using differential privacy for the 2020 census). However, it is difficult to implement in practice, with many pitfalls. There are many privacy-critical steps, such as noise addition, partition selection, and contribution bounding, and if done incorrectly they could lead to privacy risks. Privacy on Beam is an out-of-the-box differential privacy solution in the sense that it takes care of all the necessary steps for differential privacy without requiring any differential privacy expertise. It is meant to be used by developers, data scientists, differential privacy experts, and more.
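To make two of those privacy-critical steps concrete, here is a minimal Python sketch (a toy illustration, not Privacy on Beam's API) of the Laplace mechanism with contribution bounding: each contribution is clamped to a bounded range, and noise scaled to the resulting sensitivity is added to the aggregate.

```python
import math
import random

def dp_sum(values, lower, upper, epsilon, seed=None):
    """Differentially private sum using the Laplace mechanism.

    Contribution bounding: each value is clamped to [lower, upper] so no
    single record can shift the result by more than the sensitivity.
    Noise addition: Laplace noise with scale sensitivity/epsilon is added.
    """
    rng = random.Random(seed)
    clamped = [min(max(v, lower), upper) for v in values]
    sensitivity = max(abs(lower), abs(upper))
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via inverse CDF of a uniform draw in (-0.5, 0.5).
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return sum(clamped) + noise

# Example: the raw sum is 130, but the outlier 120 is clamped to 10 first,
# so the released value is a noisy version of 20.
print(dp_sum([3, 7, 120], lower=0, upper=10, epsilon=1.0, seed=42))
```

Getting either step wrong (skipping the clamp, or scaling noise to the wrong sensitivity) silently breaks the privacy guarantee, which is exactly the class of pitfall Privacy on Beam handles automatically.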
In this talk, we'll give a brief introduction into differential privacy and why it is useful and talk about Privacy on Beam. We'll also have a small tutorial/codelab (similar to https://codelabs.developers.google.com/codelabs/privacy-on-beam/) to show how to use Privacy on Beam.
Mirac is a Software Engineer in the area of anonymization and differential privacy at Google. Before joining Google, he studied Computer Engineering (and Economics) at Bilkent University. Currently, he helps build and open source infrastructure for product teams to anonymize their data. He also consults product teams on anonymization and differential privacy.
Thursday 19:40 UTC
The democratization of integration - making cloud native integration work
No-code and low-code integrations are all the rage in app development today. Over the past decade we've seen various approaches to integration and app development, including code (recently known as full code), specialized integration languages, DSLs, low code, and now no code, not to mention the developer who claims their integration is composed of complex microservices. Whilst proponents of the various technologies argue over the pros and cons of each, one aspect is clear: integration is key to development, regardless of how you get there. Add to this various users, such as citizen and ad hoc developers, along with cloud-native requirements, and this space gets very confusing very fast.
This talk is a Solution Architect's take on democratizing this complex area. Having worked with teams on over 1,500 integration projects over the past decade, we look at the various types of integration projects and the challenges teams have faced. We pick examples from startups and SMEs to large Fortune 500 enterprises and look at successful projects as well as failed ones. Maybe there isn't a one-size-fits-all solution to integration, even within a single organization. We propose an approach that combines the best of all worlds and addresses multiple personas, so that everyone can use what they like, and while at it, arrive at the true objective they were looking for: integration!
A technology executive, Mifan Careem is Vice President of Solutions Architecture at WSO2 where he heads its global Solution Architecture and Pre-sales functions whilst playing a corporate leadership role. Mifan also heads WSO2 Open Healthcare owning WSO2’s venture into the healthcare solutions space. Mifan is a global speaker and technology thought leader on API Management, ubiquitous technology and Open Source business models. Prior to WSO2, Mifan co-founded Respere - a humanitarian open source technology startup and was a board member and contributor to Sahana, the Open Source Disaster Management System. Mifan is an advocate and contributor to technology for humanitarian response focusing on the use of open source geospatial solutions in disaster management.