
    I’ve used Vector in a lot of places and have been super happy with it. Compared to Filebeat, the in-process stream-processing features (like the Lua transform) have really come in handy when I’ve needed to clean data up before sending it along.
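    For context, that kind of in-process cleanup looks roughly like the sketch below. This is illustrative, not copied from a real deployment: the file paths, field names, and Lua body are made up, so check the Vector docs for the exact transform options.

    ```toml
    # Hypothetical Vector config: tail a log file, clean each event with
    # an inline Lua transform, then ship the result to Elasticsearch.
    [sources.app_logs]
      type = "file"
      include = ["/var/log/app/*.log"]   # made-up path

    [transforms.cleanup]
      type = "lua"
      inputs = ["app_logs"]
      source = """
        -- drop a noisy field and normalize a level name (illustrative)
        event["debug_blob"] = nil
        if event["level"] == "WARNING" then
          event["level"] = "warn"
        end
      """

    [sinks.es]
      type = "elasticsearch"
      inputs = ["cleanup"]
      host = "http://localhost:9200"
    ```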


      The packaging on this looks super slick. Those are some of the best docs I’ve ever seen.

      That being said, where does this fall in the stack exactly? What does it replace?

      Let’s say I had a Grafana/Loki+Promtail/Prometheus/Jaeger setup or ELK stack (maybe with fluentd instead of logstash).

      Why and where would I want this?


        Good question! We’re hoping to make use cases more clear with a new website in the coming weeks.

        We are positioning Vector as the only tool needed to collect, process, and route all observability data. Our intent is to replace Prometheus exporters, Telegraf, Fluent, Logstash, Beats, Splunk forwarders, and the like. If you zoom out, we think Vector can serve as an observability data platform that puts you in control. Use cases include:

        • Cost reduction by sampling and cleaning data.
        • Cost reduction by separating the system of record from the analysis (e.g., using S3 as your system of record and sampling the data sent to Splunk).
        • Cost reduction by simply using fewer internal resources to process observability data.
        • Security/privacy compliance through data redaction and other security features.
        • Lock-in prevention by decoupling data collection and processing from your downstream vendor(s).
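        To make the system-of-record point concrete, here’s a rough sketch of fanning one stream out to two sinks. All names and option values are illustrative assumptions (the bucket, host, and sampling rate are made up), so treat it as a shape, not an exact config:

        ```toml
        # Hypothetical: send the same stream to S3 unsampled (the full-fidelity
        # system of record) and to Splunk behind a sampler (the cheaper analysis copy).
        [sources.logs]
          type = "file"
          include = ["/var/log/**/*.log"]

        [sinks.archive]
          type = "aws_s3"
          inputs = ["logs"]          # unsampled: everything lands in S3
          bucket = "my-log-archive"  # made-up bucket name

        [transforms.sample]
          type = "sampler"
          inputs = ["logs"]
          rate = 10                  # keep roughly 1 in 10 events

        [sinks.splunk]
          type = "splunk_hec"
          inputs = ["sample"]
          host = "https://splunk.example.com"
          token = "${SPLUNK_TOKEN}"
        ```

        Because sinks and transforms each declare their own `inputs`, the archive path and the sampled path stay independent: changing the Splunk side never touches what lands in S3.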

        And finally, we hope that Vector provides a better data catalog than the tools I mentioned above, so it will play an active role in improving the insights you get from this data.

        Let me know if that helps!