FAQ

Frequently Asked Questions about GlassFlow

1. How do I get started with GlassFlow?

Getting started with GlassFlow involves creating an account, installing the GlassFlow CLI, and setting up your first data pipeline. Detailed instructions can be found in the Quickstart section of our documentation.

2. What underlying tech does GlassFlow use?

GlassFlow is built with pure Python and Docker. It uses KEDA (Kubernetes Event-Driven Autoscaling) together with an integrated NATS message broker to deliver real-time events, and runs lambda-like functions on Kubernetes via the Fission serverless framework.
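To make "lambda-like functions" concrete, here is a minimal sketch of such a transformation function in pure Python. The `(data, log)` signature and the `handler` name are assumptions based on common GlassFlow examples; check the documentation for the exact handler shape your SDK version expects.

```python
import logging

def handler(data: dict, log: logging.Logger) -> dict:
    """Receive one event, enrich it, and return the transformed event."""
    log.info("processing event %s", data.get("id"))
    data["greeting"] = f"hello, {data.get('name', 'world')}"
    return data

# Local invocation for illustration; in production, GlassFlow invokes
# the function for each event arriving on the pipeline.
result = handler({"id": 1, "name": "GlassFlow"}, logging.getLogger(__name__))
```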

3. Are you using Kafka for GlassFlow?

No. We use NATS JetStream as the message broker component, but you, as a user, don't need to know how it works or interact with it directly. Our managed service takes care of setting it up, maintaining it, and adjusting it to your needs.

4. Can I use my existing message broker?

Yes, you can pull data from your existing message broker and transform your events with GlassFlow Cloud. Our documentation includes guides on pulling data from Amazon SQS and Google Pub/Sub.
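The pull-and-transform flow can be sketched as follows. This is an illustrative, self-contained example: the in-memory `source_queue` list stands in for a real broker client (e.g. boto3's `receive_message`/`delete_message` calls for SQS), and the `transform` function stands in for your GlassFlow transformation.

```python
import json

# Stand-in for messages waiting on an external broker such as Amazon SQS.
source_queue = [
    json.dumps({"user": "alice", "amount": 42}),
    json.dumps({"user": "bob", "amount": 7}),
]

def transform(event: dict) -> dict:
    """Example transformation applied to each pulled event."""
    event["amount_cents"] = event["amount"] * 100
    return event

def drain(queue: list) -> list:
    """Pull every message, decode it, and transform it."""
    results = []
    while queue:
        raw = queue.pop(0)  # stands in for receive + delete on the broker
        results.append(transform(json.loads(raw)))
    return results

processed = drain(source_queue)
```

In a real pipeline the transformed events would then be written to the output stream you configure, rather than collected in a list.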

5. Does GlassFlow support stateful event processing?

Stateful event processing is planned for the coming months.

6. Can the data stay in my cloud or on-premises?

Yes, your original data stays where it is; GlassFlow manages data only in the application layer. GlassFlow performs the computational work and either saves the result back to your original data source or sends real-time output to streams you specify.

© 2023 GlassFlow