Threads for sch00lb0y

  1. 3

    In Swift, this would be:

    struct Test {
        let `in`: String
    }
    
    let a = Test(in: "asdf")
    

    I think the example doesn’t really show the point of r#. You could just as well change the name to _in or __in instead, and it would probably be more readable than r#in.
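
    For comparison, a minimal Rust sketch of both options (the struct names are just illustrative): renaming sidesteps the keyword, while the raw identifier keeps the name at the cost of an r# prefix at every use site.

    struct Renamed {
        _in: String, // sidestep the keyword by renaming
    }
    
    struct Raw {
        r#in: String, // keep the name `in` via a raw identifier
    }
    
    fn main() {
        let a = Raw { r#in: String::from("asdf") };
        println!("{}", a.r#in); // r# needed at every use
        let b = Renamed { _in: String::from("asdf") };
        println!("{}", b._in);
    }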

    1. 3

      As always, the RFC gives a lot of motivation. https://rust-lang.github.io/rfcs/2151-raw-identifiers.html

      (E.g. the ability to name a function like a keyword, particularly useful for FFI use)

      1. 2

        But if you always call it using r#, you have essentially renamed the function. It would be acceptable if it was only at declaration or where disambiguation was otherwise needed, but here it seems to surface at every point of use.

        1. 4

          Imagine a function:

          #[no_mangle]
          pub extern "C" fn r#match() {
          
          }
          

          because a dynamic library needs to export this symbol (with #[no_mangle], the exported name is literally match). I agree in general: r# is not to be used in interfaces intended for humans.

          1. 6

            Hm, I don’t think that’s what is happening here. For FFI purposes, we have a dedicated attribute, link_name

            https://doc.rust-lang.org/reference/items/external-blocks.html#the-link_name-attribute

            Unlike r#, it’s not restricted to valid Rust identifiers (i.e., it allows weird symbols in the name).

            My understanding is that 90% of the motivation for r# was the edition system and the desire to re-purpose existing idents as keywords. Hence, unlike Swift or Kotlin, Rust deliberately doesn’t support arbitrary strings as raw identifiers, only stuff which is lexically an ident ((_|XID_Start)XID_Continue*).
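
            For instance, a sketch (the C symbol name my_lib$init is hypothetical, chosen because no raw identifier could express it):

            extern "C" {
                // `link_name` maps the Rust-side name to an arbitrary linker
                // symbol, even one that isn't a valid Rust identifier.
                #[link_name = "my_lib$init"]
                fn init();
            }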

      2. 3

        The example uses debug serialisation (#[derive(Debug)]), which perhaps isn’t the best example of why it matters, but at least proves the point.

        The name matters in serialisation, and this could be generated code. I’ve had this exact problem in two unrelated protocol generators that happened to generate C++, and got funny build errors when I tried to define messages with fields like delete and static.
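
        To make the point concrete, here’s a minimal Rust sketch of my own (not from the article): the derived Debug output should use the field’s real name, with the r# prefix stripped.

        #[derive(Debug)]
        struct Test {
            r#in: String,
        }
        
        fn main() {
            let a = Test { r#in: String::from("asdf") };
            println!("{:?}", a); // should print: Test { in: "asdf" }
        }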

        1. 1

          OK, but that option hasn’t gone anywhere. You can still name it _in if you want. There are plenty of niche cases where it would be nice to keep the identifier, mostly when interfacing with code you don’t control.

          1. 1

            Yes, exactly. I found out about raw identifiers while checking a PR on the sqlparser crate, where the author used in as an identifier while parsing one of the statements.

        1. 7

          The author is confused about the distinction between the SASL framework for authentication and one of the authentication mechanisms available within SASL. They are describing a salted mechanism (referencing the SASL SCRAM RFC, so probably SCRAM, but I haven’t double-checked in detail) and presenting it as being SASL itself.

          1. 1

            Actually, I wrote up what I learned from reading the SASL code in the rust-postgres driver. https://github.com/sfackler/rust-postgres/blob/master/postgres-protocol/src/authentication/sasl.rs

            I’m probably wrong, though. Let me verify and correct it.

            1. 5

              Yep, you’re right.

              Love this community. I’ll correct it.

          1. 1

            I would argue that this is a problematic design. It adds a proxy step to your pg connection, and it shifts access control outside of Postgres or your application, introducing another integration point for your authZ system.

            I would probably look into something like authzed for the application, then relevant team permissions are driven into the database from that.

            1. 1

              To my understanding, authzed makes the policy decision about whether the user has access or not, but it doesn’t rewrite the query or enforce the decision.

              Let me know if my understanding is wrong. I am happy to change the design if something better exists.

              1. 1

                Bluntly: I would place the authorization in the application, not in a proxy enforcing it.

                1. 1

                  Got it.

                  I thought about enforcing it in a centralized place. There are times we may need to give people access for ad-hoc jobs, e.g. uploading a CSV; we might use this then.

                  But anyway, the problem can be solved in multiple ways.

                  Cheers

            1. 3

              Sounds cute. I’m a little curious why it’s written in a mix of Go and Rust. Is the control plane in Go and the data plane in Rust or something?

              (I suggested removing the devops tag because this doesn’t do anything devops-y, there’s no deploying or containerising or anything that resembles configuration management.)

              1. 2

                Yes, correct: the control plane is in Go and the data plane is in Rust :)

              1. 4

                Maybe I’m missing something, but RBAC and RLS in Postgres do everything this does (and more?), and they’re already included in PostgreSQL.

                1. 3

                  Yeah, you can do all the same access control as in Postgres. But since Inspektor uses OPA, you can also write policies like “give permission to the support engineer only if the engineer is on call.”

                  Inspektor is not part of Postgres; it acts as a proxy between the end user and the database.

                  You can also monitor all the operations performed on your Postgres instance. And SSO login helps data scientists get database credentials in no time by pulling all the roles from the access directory.

                  In the future, we are adding support for more databases and data warehouses for centralized access control, plus data masking so data scientists get realistic data for creating models.

                  We’re still in the early days. I hope we can do more with Inspektor than simple access control.

                  Please let us know if you have any ideas for how we can improve.
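
                  As a rough sketch of the enforcement idea (all names here are hypothetical, and in Inspektor the decision would come from an OPA policy rather than a hard-coded function):

                  struct Request<'a> {
                      role: &'a str,
                      on_call: bool,
                      query: &'a str,
                  }
                  
                  // Hypothetical stand-in for an OPA decision: a support engineer
                  // may run queries only while on call.
                  fn allowed(req: &Request) -> bool {
                      match req.role {
                          "admin" => true,
                          "support-engineer" => req.on_call,
                          _ => false,
                      }
                  }
                  
                  fn main() {
                      let req = Request { role: "support-engineer", on_call: true, query: "SELECT 1" };
                      if allowed(&req) {
                          println!("forwarding to postgres: {}", req.query);
                      } else {
                          println!("denied by policy");
                      }
                  }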

                1. 2

                  Shameless plug: all the companies (Google, Microsoft…) are telling us to trust them, but I believe we should trust ourselves instead of relying on third parties; they always change when business interests change. This is where web3 comes into play. Technologies like IPFS and the Safe Network are coming. Looking at the scaling issues, I guess web3 will take at least 5 more years, but this kind of p2p technology is already possible with a small-scale mesh: mesh networks within our devices or families.

                  From the beginning, I hated the idea of storing passwords in a third-party password manager. Later, I fell into the same trap, because managing a lot of passwords is difficult. So I’m building an open-source p2p password manager. It replicates the passwords within your devices, instead of storing everything in the vendor’s cloud. It’s halfway to a closed beta release. I would like to hear everyone’s feedback on this idea.

                  Thanks

                  1. 1

                    It replicates the passwords within your devices, instead of storing everything in the vendor’s cloud. It’s halfway to a closed beta release. I would like to hear everyone’s feedback on this idea.

                    We need more of this kind of thing! Telling people not to store their shit “in the cloud” is only half the story. We also need easy-to-use(!) alternatives we can point to when they ask “so how should I do it?”

                  1. 3

                    The asciinema is next to useless; you spend half the time booting kubectl. I still don’t know what this software is doing.

                    1. 1

                      Hey, sorry for the useless asciinema.

                      I actually created Pathivu in the cluster and used Katchi to see all the logs from Pathivu.

                      The end result will look like this: https://github.com/pathivu/pathivu#use-katchi-to-see-logs

                      I’ll update the asciinema soon.

                    1. 2

                      Great!

                      Is there a design overview or a comparison with, say, loki?

                      1. 2

                        Loki doesn’t do indexing; here we do indexing. When it comes to design, it’s just an append-only log, like Kafka. The main reason to write it in Rust was memory handling: in Go, folks are already facing a lot of memory-related issues. Personally, I haven’t benchmarked Loki in a big deployment. There is a lot of room for optimization in Pathivu; I released it in an MVP state to get feedback from the community on whether folks would love Pathivu or not. So far, I’ve heard good things about Pathivu. Got a lot of motivation to build further. <3
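
                        To illustrate the append-only design (a minimal sketch of the general idea, not Pathivu’s actual code; the length-prefix framing is my own assumption):

                        use std::fs::OpenOptions;
                        use std::io::{self, Write};
                        
                        struct Segment {
                            file: std::fs::File,
                        }
                        
                        impl Segment {
                            fn open(path: &str) -> io::Result<Segment> {
                                let file = OpenOptions::new().create(true).append(true).open(path)?;
                                Ok(Segment { file })
                            }
                        
                            // Each entry is a little-endian u32 length prefix followed by
                            // the payload, so the file can be replayed sequentially.
                            fn append(&mut self, entry: &[u8]) -> io::Result<()> {
                                self.file.write_all(&(entry.len() as u32).to_le_bytes())?;
                                self.file.write_all(entry)
                            }
                        }
                        
                        fn main() -> io::Result<()> {
                            let mut seg = Segment::open("segment.log")?;
                            seg.append(b"level=info msg=\"ready\"")?;
                            seg.file.flush()
                        }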

                        1. 2

                          How does it do compacting? And are there already ideas regarding HA? Does it index after simple word splitting or is it doing anything fancy?

                          It is great having a different tool. Loki in HA is pretty involved, and all the Elasticsearch options are an operational nightmare. It was most unfortunate that oklog died. It’s great having something simpler and written with efficiency in mind.

                          1. 2

                            Right now, it doesn’t do any compaction. Regarding HA, I’m planning to make it multi-node; currently, it’s single-node.

                            Indexing is simple word splitting and stop-word cleaning. If users demand something fancier, I will implement it.

                            The main idea behind building this is to be efficient and cost-saving.

                            In my previous company, there was a saying: “Reducing AWS cost is itself a business.”
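
                            The indexing step described above might look roughly like this (a sketch; the stop-word list and normalisation are illustrative assumptions, not Pathivu’s actual tokenizer):

                            fn tokenize(line: &str) -> Vec<String> {
                                // Tiny illustrative stop-word list; a real one would be larger.
                                const STOP_WORDS: &[&str] = &["the", "a", "an", "is", "to", "of"];
                                line.split_whitespace()
                                    .map(|w| w.trim_matches(|c: char| !c.is_alphanumeric()).to_lowercase())
                                    .filter(|w| !w.is_empty() && !STOP_WORDS.contains(&w.as_str()))
                                    .collect()
                            }
                            
                            fn main() {
                                // Prints: ["failed", "connect", "database"]
                                println!("{:?}", tokenize("Failed to connect to the database"));
                            }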

                            1. 1

                              Thank you for answering. Have you looked at the Loki-Grafana protocol? It would be great to use Grafana for looking at the logs; that way, one does not have to train users on a different tool.

                              1. 2

                                A Grafana plugin is in the pipeline. Folks can see it in a week.

                      1. 2

                        What do you mean by “high rate” and “at scale?”

                        Do you have any interesting benchmarks?

                        1. 1

                            I’m able to make 10mb/s, but a lot of optimization is yet to be made, e.g. a multi-threaded ingester…

                            Once it gets into the right shape, I will post it in the README.

                            If you’re interested, I would like to share my benchmark script with you.