1. 12

Hey folks,

I’ve built a log ingestion system in Rust called Pathivu. It allows end users to ingest logs at a high rate, and it indexes them as well. Pathivu is cost-effective and ingests logs at scale.

Would love to hear feedback from the community.

    1. 3

      The asciinema is next to useless; you spend half the time booting kubectl. I still don’t know what this software is doing.

      1. 1

        Hey, sorry for the useless asciinema.

        I actually deployed Pathivu in the cluster and used Katchi to see all the logs from Pathivu.

        The end result will look like this https://github.com/pathivu/pathivu#use-katchi-to-see-logs

        I’ll update the asciinema soon.

    2. 2


      Is there a design overview or a comparison with, say, loki?

      1. 2

        Loki doesn’t do indexing; Pathivu does. When it comes to design, it’s just an append-only log, like Kafka. The main reason to write it in Rust was memory handling; in Go, folks are already facing a lot of memory-related issues. Personally, I haven’t benchmarked Loki in a big deployment. There’s a lot of room for optimization in Pathivu; I released it in an MVP state to get feedback from the community on whether folks would love Pathivu or not. So far, I’ve heard good things about Pathivu. Got a lot of motivation to build further. <3
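        A minimal Rust sketch of that Kafka-like append-only design, to make the idea concrete. The struct and method names here are illustrative, not Pathivu’s actual API, and a real implementation would write to a file rather than a Vec:

        ```rust
        // Append-only log segment: writes only ever go to the end,
        // entries are never rewritten in place, so writes stay sequential.
        struct Segment {
            // A real system would back this with an on-disk segment file;
            // a Vec keeps the sketch self-contained.
            entries: Vec<Vec<u8>>,
        }

        impl Segment {
            fn new() -> Self {
                Segment { entries: Vec::new() }
            }

            // Appending returns the entry's offset, which readers use later.
            fn append(&mut self, line: &[u8]) -> usize {
                self.entries.push(line.to_vec());
                self.entries.len() - 1
            }

            fn read(&self, offset: usize) -> Option<&[u8]> {
                self.entries.get(offset).map(|e| e.as_slice())
            }
        }

        fn main() {
            let mut seg = Segment::new();
            let off = seg.append(b"level=error msg=timeout");
            assert_eq!(off, 0);
            assert_eq!(seg.read(off), Some(&b"level=error msg=timeout"[..]));
        }
        ```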

        1. 2

          How does it do compacting? And are there already ideas regarding HA? Does it index after simple word splitting or is it doing anything fancy?

          It is great having a different tool. Loki in HA is pretty involved, and all the Elasticsearch options are an operational nightmare. It was most unfortunate that oklog died. It’s great having something simpler, written with efficiency in mind.

          1. 2

            Right now, it doesn’t do any compaction. Regarding HA, I’m planning to make it multi-node; currently, it’s single-node.

            Indexing is simple word splitting and stop-word removal. If users demand something fancier, I’ll implement it.
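            For the curious, a hedged sketch in Rust of what “simple splitting and stop-word cleaning” could look like. The function name and the stop-word list are made up for illustration, not Pathivu’s actual code:

            ```rust
            use std::collections::HashSet;

            // Split a log line on whitespace, strip surrounding punctuation,
            // lowercase, and drop common stop words before indexing.
            fn index_terms(line: &str) -> Vec<String> {
                let stop_words: HashSet<&str> =
                    ["the", "a", "an", "is", "of", "to"].iter().copied().collect();
                line.split_whitespace()
                    .map(|w| {
                        w.trim_matches(|c: char| !c.is_alphanumeric())
                            .to_lowercase()
                    })
                    .filter(|w| !w.is_empty() && !stop_words.contains(w.as_str()))
                    .collect()
            }

            fn main() {
                let terms = index_terms("ERROR: failed to connect to the database");
                println!("{:?}", terms); // ["error", "failed", "connect", "database"]
            }
            ```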

            The main idea behind building this is to be efficient and cost-saving.

            In my previous company, there was a saying: “Reducing AWS cost is itself a business.”

            1. 1

              Thank you for answering. Have you looked at the loki-grafana protocol? It would be great to use grafana for looking at the logs. That way, one does not have to train users to use a different tool.

              1. 2

                A Grafana plugin is in the pipeline. Folks can expect it in a week.

    3. 2

      What do you mean by “high rate” and “at scale?”

      Do you have any interesting benchmarks?

      1. 1

        I’m able to do 10 MB/s, but there’s a lot of optimization yet to be made, e.g. a multi-threaded ingester…

        Once it’s in the right shape, I’ll post the numbers in the README.

        If you’re interested, I would like to share my benchmark script with you.