The author is conflating the SASL framework for authentication with one of the authentication mechanisms available in SASL. They are describing a salted mechanism (referring to the SASL SCRAM RFC, so probably SCRAM, though I haven't double-checked in detail) and presenting it as SASL itself.
Actually, I wrote up what I learned from reading the SASL code in the Rust Postgres driver: https://github.com/sfackler/rust-postgres/blob/master/postgres-protocol/src/authentication/sasl.rs
I may well be wrong. Let me verify and correct it.
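To illustrate the distinction being discussed: SASL is the negotiation framework, while SCRAM-SHA-256 is one mechanism carried inside it. The sketch below is illustrative only (the function names are hypothetical, not from rust-postgres); the client-first-message framing follows RFC 5802.

```rust
// SASL is the framework: the server advertises mechanisms and the client
// picks one it supports. SCRAM-SHA-256 (RFC 7677) is one such mechanism.
fn pick_mechanism<'a>(advertised: &[&'a str]) -> Option<&'static str> {
    const SUPPORTED: &[&str] = &["SCRAM-SHA-256"];
    SUPPORTED.iter().copied().find(|m| advertised.contains(m))
}

// Once a mechanism is chosen, the mechanism-specific exchange begins.
// For SCRAM, the client-first-message is "n,,n=<user>,r=<nonce>"
// (RFC 5802). This framing belongs to SCRAM, not to SASL itself.
fn scram_client_first(user: &str, nonce: &str) -> String {
    format!("n,,n={},r={}", user, nonce)
}

fn main() {
    assert_eq!(
        pick_mechanism(&["SCRAM-SHA-256", "SCRAM-SHA-256-PLUS"]),
        Some("SCRAM-SHA-256")
    );
    assert_eq!(scram_client_first("alice", "abc123"), "n,,n=alice,r=abc123");
    println!("ok");
}
```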
I would argue that this is a problematic design. It adds a proxy step to your pg connection, and it shifts access control outside of Postgres and your application, introducing another integration point for your authZ system.
I would probably look into something like authzed for the application, and then have the relevant team permissions driven into the database from that.
To my understanding, authzed makes the policy decision about whether the user has access or not, but it doesn't rewrite the query or enforce the decision.
Let me know if my understanding is wrong. I'm happy to change the design if something better exists.
I thought about enforcing it in a centralized place. There are times when we may need to give people access for ad-hoc jobs, e.g. uploading a CSV. In those cases we could use this.
But anyway, the problem can be solved in multiple ways.
Sounds cute. I’m a little curious why it’s written in a mix of Go and Rust. Is the control plane in Go and the data plane in Rust or something?
(I suggested removing the devops tag because this doesn’t do anything devops-y, there’s no deploying or containerising or anything that resembles configuration management.)
Maybe I'm missing something, but RBAC and RLS in Postgres do everything this does (and more?), and they're already included in PostgreSQL.
Yeah, it can do all the same access control as Postgres. Since Inspektor uses OPA, you could also write policies like: give permission to a support engineer only if the engineer is on call.
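For a concrete sense of the on-call policy mentioned above: real Inspektor policies would be written in Rego and evaluated by OPA, but the predicate itself can be modeled in a few lines. This is a hypothetical sketch of the decision, not Inspektor's actual code.

```rust
// Model of an OPA-style decision: allow a query only when the requester
// is a support engineer who is currently on call. The roles and rules
// here are illustrative assumptions.
struct Request<'a> {
    role: &'a str,
    on_call: bool,
}

fn allow(req: &Request) -> bool {
    match req.role {
        "support-engineer" => req.on_call, // only while on call
        "admin" => true,                   // assumed always allowed
        _ => false,
    }
}

fn main() {
    assert!(allow(&Request { role: "support-engineer", on_call: true }));
    assert!(!allow(&Request { role: "support-engineer", on_call: false }));
    assert!(!allow(&Request { role: "intern", on_call: true }));
    println!("ok");
}
```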
Inspektor is not included in Postgres; it acts as a proxy between the end user and the database.
Next, you can monitor all the operations performed on your Postgres instance. SSO login helps data scientists get database credentials in no time by pulling all the roles from the access directory.
In the future, we are adding support for more databases and data warehouses, for centralized access control and for data masking that gives data scientists realistic data to create models with.
We are still in the early days. I hope we can do more with Inspektor than simple access control.
Please let us know if you have any ideas for how we can improve.
Shameless plug: all the companies (Google, Microsoft, …) are telling us to trust them. But I believe we should trust ourselves instead of relying on third parties; they always change when business interests change. This is where web3 comes into play. Technologies like IPFS and the Safe Network are coming. Looking at the scaling issues, I guess web3 will take at least 5 more years. But this kind of p2p technology is already possible with a small-scale mesh: mesh networks within our devices or families. From the beginning, I hated the idea of storing passwords in a third-party password manager. Later, I fell into the same trap, because managing a lot of passwords is difficult. So I'm building an open-source p2p password manager. It replicates the passwords across your devices, instead of storing everything in the vendor's cloud. It's halfway to the closed beta release. I would like to hear everyone's feedback on this idea.
We need more of this kind of thing! Telling people not to store their shit "in the cloud" is only half the story. We also need easy-to-use(!) alternatives we can point to when they ask "so how should I do it?"
The asciinema is next to useless; you spend half the time booting kubectl. I still don't know what this software is doing.
Hey, sorry for the useless asciinema.
I actually created Pathivu in the cluster and used Katchi to see all the logs from Pathivu.
The end result will look like this https://github.com/pathivu/pathivu#use-katchi-to-see-logs
I’ll update the asciinema soon.
Loki doesn't do indexing; here, we do indexing. When it comes to design, it's just an append-only log, like Kafka. The main reason to write it in Rust was memory handling; in Go, folks already face a lot of memory-related issues. Personally, I haven't benchmarked Loki in a big deployment. I hope there's a lot of room in Pathivu for optimization. I released it in an MVP state to get feedback from the community on whether folks would love Pathivu or not. So far, I've heard good things about Pathivu. Got a lot of motivation to build further. <3
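For readers unfamiliar with the "append-only log like Kafka" design mentioned above, here is a minimal in-memory sketch of the idea; this is not Pathivu's actual code, just an illustration of the structure.

```rust
// Minimal append-only log: entries are only ever appended, never mutated,
// and each append yields a monotonically increasing offset that readers
// can use to fetch the entry later.
struct AppendLog {
    entries: Vec<String>,
}

impl AppendLog {
    fn new() -> Self {
        AppendLog { entries: Vec::new() }
    }

    // Returns the offset of the newly appended entry.
    fn append(&mut self, line: &str) -> usize {
        self.entries.push(line.to_string());
        self.entries.len() - 1
    }

    fn read(&self, offset: usize) -> Option<&str> {
        self.entries.get(offset).map(|s| s.as_str())
    }
}

fn main() {
    let mut log = AppendLog::new();
    let off = log.append("first line");
    assert_eq!(off, 0);
    assert_eq!(log.append("second line"), 1);
    assert_eq!(log.read(0), Some("first line"));
    println!("ok");
}
```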
How does it do compacting? And are there already ideas regarding HA? Does it index after simple word splitting or is it doing anything fancy?
It is great having a different tool. Loki in HA is pretty involved, and all the Elasticsearch options are an operational nightmare. It was most unfortunate that OKLog died. It's great having something simpler and written with efficiency in mind.
For now, it doesn't do any compaction. Regarding HA, I'm planning to make it multi-node; currently, it's single-node.
Indexing is simple word splitting and stop-word cleaning. If users demand something fancy, I'll implement it.
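The indexing step described above (split into words, drop stop words) can be sketched in a few lines. Pathivu's real implementation may differ; the stop-word list here is an illustrative assumption.

```rust
// Tokenize a log line for indexing: split on non-alphanumeric characters,
// lowercase, and drop common stop words.
fn tokenize(line: &str) -> Vec<String> {
    const STOP_WORDS: &[&str] = &["the", "a", "an", "is", "to", "of"];
    line.split(|c: char| !c.is_alphanumeric())
        .filter(|w| !w.is_empty())
        .map(|w| w.to_lowercase())
        .filter(|w| !STOP_WORDS.contains(&w.as_str()))
        .collect()
}

fn main() {
    let tokens = tokenize("Failed to connect to the upstream server");
    // "to" and "the" are removed; the rest is lowercased.
    assert_eq!(tokens, vec!["failed", "connect", "upstream", "server"]);
    println!("ok");
}
```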
The main idea behind building this was efficiency and cost saving.
At my previous company, there was a saying: "Reducing AWS cost is itself a business."
Thank you for answering. Have you looked at the Loki-Grafana protocol? It would be great to use Grafana for looking at the logs; that way, one does not have to train users on a different tool.
I'm able to reach 10 MB/s, but a lot of optimization is yet to be made, e.g. a multi-threaded ingester…
Once it gets into the right shape, I'll post the numbers in the README.
If you’re interested, I would like to share my benchmark script with you.
In Swift, this would be written as `in`: Swift escapes keyword identifiers with backticks rather than a prefix.
I think the example doesn't really show the point of r#. You could just as well change the name to __in instead, and it would probably be more readable than r#in.
As always, the RFC gives a lot of motivation. https://rust-lang.github.io/rfcs/2151-raw-identifiers.html
(E.g. the ability to name a function like a keyword, particularly useful for FFI use)
But if you always call it using r#, you have essentially renamed the function. It would be acceptable if it were only at declaration, or where disambiguation was otherwise needed, but here it seems to surface at every point of use.
Because a dynamic library needs to export this symbol. I agree in general that r# is not to be used in interfaces intended for humans.
Hm, I don't think that's what is happening here. For FFI purposes, we have a dedicated attribute, #[link_name]. Unlike r#, it's not restricted to valid Rust identifiers (i.e., it allows weird symbols in the name).
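To make the link_name point concrete: the attribute maps a Rust-side name to the exported symbol, so the Rust identifier never needs r# at all. A small sketch, binding libc's abs under a different local name (linking against libc is assumed, which holds on common platforms):

```rust
// #[link_name] decouples the Rust identifier from the linked symbol name.
// Here the local name `c_abs` resolves to the C symbol "abs".
extern "C" {
    #[link_name = "abs"]
    fn c_abs(x: i32) -> i32;
}

fn main() {
    // abs(-5) is well-defined, so this call is sound.
    let v = unsafe { c_abs(-5) };
    assert_eq!(v, 5);
    println!("ok");
}
```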
My understanding is that 90% of the motivation for r# was the edition system and the desire to re-purpose existing identifiers as keywords. Hence, unlike Swift or Kotlin, Rust deliberately doesn't support arbitrary strings as raw identifiers, only things that are lexically an identifier.
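A minimal self-contained example of the syntax under discussion, showing both the r# spelling and the rename-to-underscore alternative; stable since Rust 2018:

```rust
// `in` is a keyword, so a function named `in` needs the r# prefix,
// and every call site must repeat it.
fn r#in(x: u32) -> u32 {
    x + 1
}

// The alternative discussed above: rename and lose the original name.
fn _in(x: u32) -> u32 {
    x + 1
}

fn main() {
    assert_eq!(r#in(41), 42);
    assert_eq!(_in(41), 42);

    // Locals can be raw identifiers too.
    let r#match = "keyword as a variable name";
    assert!(r#match.contains("keyword"));
    println!("ok");
}
```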
The example uses debug serialisation (#[derive(Debug)]), which perhaps isn't the best example of why it matters, but at least it proves the point. The name matters in serialisation, and this could be generated code. I've had this exact problem in two unrelated protocol generators that happened to generate C++, and got funny build errors when I tried to define messages with fields like in.
OK, but that option hasn't gone anywhere. You can still name it _in if you want. There are plenty of niche cases where it would be nice to keep the identifier, mostly when interfacing with code you don't control.
Yes, exactly. I found out about raw identifiers while reviewing a PR to the sqlparser crate, where the author used in for parsing one of the statements.