The second rule has me conflicted.
On one hand, I do agree that invariants are easier to read when they’re stated positively. In my code, I follow every invariant-checking if statement with a comment that says something along the lines of // Invariant: the array is large enough for this algorithm to make sense.
On the other hand, my code usually checks that the invariant is not held, and exits early. So my comments state the invariant positively, but the code states it negatively.
I think I could get used to negating a positively stated invariant:
if (!(array.size() > MIN_DATAPOINTS_COUNT)) {
return;
}
But that looks quite ugly to me and I immediately itch to rewrite it to <=.
Yup! This reasoning sold me on Swift’s guard statement, it’s awesome to be able to write something along the lines of
assert invariant orelse return
That’s why the advice here is to try to re-arrange the code such that the positive invariant is natural, rather than to just mechanically positivise it — you need guard / unless for the mechanical transformation to always give good results.
Usually if I find a conditional hard to read I’ll pull it out into an intermediate variable. I never really do it with single clause conditions, but for illustrative purposes your example would become something like:
hasEnoughData = array.size() > MIN_DATAPOINTS_COUNT
if !hasEnoughData {
return
}
It allows you to state the property positively, but show that the condition is about the negative case.
We check if the array hasEnoughData. The property is easy to verify.
The conditional is about the negative case. The intent is clear.
We know exactly what the property is checking because the variable name acts as a micro comment that is less likely to become out of date.
The benefits are much greater with more complex conditions.
Two folders in my home directory.
~/@ - contains notes as markdown files relevant to that computer that shouldn’t be synced anywhere else. Work stuff, private stuff, scratch paper that isn’t entirely throwaway, etc.
~/@@ - contains global notes as markdown files. Non-sensitive stuff. They are automatically synced across computers and my phone via Obsidian.
I edit these with Vim. That’s about it. Honestly, I don’t use any of Obsidian’s features except for its excellent syncing and its mobile app. I feel a little bad about that since so many people love its other capabilities… but I mostly just want text files.
I’m surprised I couldn’t find a description of Ares’ goals. Like, is it for fun, or for a specific use case, or to explore an idea, or to simplify a design? I have zero background in OS dev, so it’s possible the design makes it obvious to the initiated.
I did some searching and the end of Drew’s update blog post seems to say it was originally an attempt at proving out Hare but has become more of a fun-time hobby. For some reason I’m used to projects like this having a manifesto attached. If it is mainly for fun/exploration then that’s really awesome! The world could use more play.
Sometimes engineers are more enthusiastic about building the thing than talking about the thing. :D
Drew might have been burned by the flames that seem to erupt whenever his stuff is discussed. Keeping it quiet is understandable.
The licensing change is inconsequential for the vast majority of Terraform users. I wish the change didn’t happen, but it’s only a problem for companies that compete directly with HashiCorp’s managed SaaS offerings.
Pulumi is VC-backed, pre-exit, and its FOSS license does not prevent it from doing the same thing as HashiCorp when the vultures demand more revenue.
I’ve managed cloud infrastructure at several large tech companies and Terraform scales fine. The declarative and immutable aspects of Terraform are constraints that support scaling. If you’re doing something that feels complicated or hacky in Terraform, you’re often doing something that introduces risk into your infrastructure. It’s good to have pain receptors that warn you of danger.
but it’s only a problem for companies that compete directly with HashiCorp’s managed SaaS offerings.
I think the problem that a lot of companies are grappling with is the specifics. What counts as “competing”? I think we all understand the spirit of the text, but how could it be interpreted when new management takes over and decides they want to reap every last penny?
I think people are overcomplicating things.
a) The spirit is critical. Spirit is what a lawyer ends up arguing in court.
b) I suspect that “competing” is relatively well defined. As an example, when you create your company you have to specify its industry - so at minimum I suspect you can trivially argue “we aren’t even in the same industry” and avoid it. If you’re a SaaS/software company you’ll need another way to differentiate, but again, I don’t think that’s super complicated unless you’re very very close but (just barely) not actually competing.
I think developers think about legal stuff as if it’s code, and they get so hung up on edge cases and specific hangups, but that’s not really how legal shit works in my experience (not a lawyer, but I founded and ran a company for years and we had lawyers, I definitely had to think a lot about legal shit).
Any move that takes us further from clarity is a bad one, even if the legal system has norms that allow people to roll the dice and possibly get a good outcome. What you’re basically saying is: it’s ok that you now need to be able to retain a lawyer just in case to try and fight through the ambiguity in complying with the licence terms. That’s pretty rubbish for all the people who legitimately can’t afford one, and even for the people who can it’s a waste of resources. It’s definitely not open source, and people are right to be sceptical and to give it all a wide berth where possible.
As someone pointed out in the past, the problem is acquisitions. What happens if, say, Amazon or Microsoft buys the company? Are you doing anything that competes with any potential purchaser of the company? If so, you are exposed to legal risk.
I suspect that “competing” is relatively well defined. As an example, when you create your company you have to specify its industry - so at minimum I suspect you can trivially argue “we aren’t even in the same industry” and avoid it.
By this logic, Apple doesn’t compete in the markets for music or, you know, telephones. I’m not a lawyer, but I don’t share your faith that this is a trivial matter at all.
Why in the world would anybody not want to build a compiler?
There are lots of good reasons for building a compiler. There is also a lot of knowledge floating around about how to build compilers and so you can learn from 60 years of experience if you set out to build a compiler.
The problem is if you don’t set out to build a compiler but end up building one anyway. This is a problem because you are almost certainly going to repeat some of the mistakes that other people have learned from.
For ‘compiler’, feel free to substitute ‘scheduler’, ‘distributed system’, or ‘RPC protocol’ into the above; the rest of the text works just as well.
This is essentially what happened to the project that inspired https://lobste.rs/s/iksbf4/alien_artefacts
I think it’s more aimed at those not realizing they are building a compiler / interpreter. They think they’re just allowing some flexibility in a configuration file, which slowly morphs into its own language.
Or, as I once did, try to use string mangling to convert someone’s made-up Excel expressions (text, not formulas) into valid Ruby, as they seemed close enough that it would be simpler than writing a transpiler, only to end up with a muddled mess for which a parser + code generator would probably have been the same amount of work. The project died due to business reasons and I never got to find out, though.
Before becoming a programmer I worked in finance, and I had an automation system in a spreadsheet where each row was a command and the columns held arguments. Commands were for creating slides based on figures in spreadsheets.
It was like a byte code for slides!
i actually kinda like this idea for toy programming. turtle graphics or something, with a spreadsheet frontend
Spreadsheet programming in itself is such a passion of mine. That’s one of the things that got me hooked on FP. You know, Excel is a 2D programming environment for pure and total functional programming, with incremental parallel computation, ample support for database connectivity, and built-in chart visualization and data tables.
Your comment sparked a bit of personal “aha”.
I’ve witnessed two projects that could be described as compilers, but whose value was not “being a compiler.”
One was incredibly easy to work on.
The other had collapsed in on itself.
Only one was a compiler on purpose.
Avoiding cleverness or sinking time into “doing it right” are often stated goals. But cutting against the inevitable grain of a project to achieve it isn’t any better. “Wanting to build a compiler” when you’re building a compiler is a massive asset for building a good compiler.
Greedy macro unexpansion, which automatically compares subsets of AST nodes to refactor them into unhygienic, gensym-containing, defmacro-style macros. The result is unprecedented expressiveness, making it trivial to maintain LOBSTERS-LANG code bases.
Conceptually, this is something I actually really really want. I just can’t carve out the time to design and write it.
I write Go for work and enjoy its stripped-down nature. But the vertical space bloat and boilerplate enforced by the ecosystem (context.Context, if err != nil) kill me. It’s so hard to see the intent when it’s surrounded by so many mechanics.
I want a macro compressor, entirely separate from the language. It could literally just be a presentation view in my editor. Open a Go file, highlight a block, and then “semantically compress” or “semantically expand” whatever I want.
ctx := context.Background()
manyObjs, err := superFaillible(ctx)
if err != nil {
return err
}
var fullObjs []FullObj
for _, o := range manyObjs {
full, err := inflate(ctx, o)
if err != nil {
return err
}
fullObjs = append(fullObjs, full)
}
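becomes something closer to
;; lisp-ish filler, not a real language:
(with-ctx
  (let! many-objs (superFaillible))           ; let! = bind, or early-return the err
  (let! full-objs (map! inflate many-objs)))  ; map! threads the err check per item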
No actual thought was put into that syntax, just filler to get the idea across. Realistically it’d be a lisp with some Go specific keywords and structures. It’s not about writing code at that level, it’s about being able to visually scan a file in Go. Since it’s not for writing, the syntax is unencumbered by ergonomics. It should just be immediately clear and unambiguously map to Go.
That reminds me of a feature I’ve seen in a Java IDE, maybe around the time they added lambdas. Lambdas desugar to an inner class with one method, so the IDE would recognize an inner class with one method and display it more like a lambda.
I agree with the need, but not the conclusion. You have a need for an abstraction that allows you to think clearly about the problem. For application-level programming, the programming language is this abstraction, and if it’s not the right level of abstraction for the problem then you’re using the wrong tool. You wouldn’t use assembly language for application-level programming, and IMO you shouldn’t use Go either.
I mean…it’s just tooling? You wouldn’t say stop using Python because autocomplete is useful, or that C is inherently bad because someone uses GDB. Just because something has a pain point doesn’t mean it’s terrible.
I’m definitely not a Go fanboy, but for the work projects I’m referring to it really is the right tool for the job. It was chosen for a reason. Besides, not everyone who writes Go gets to tell their employer that they are rewriting their codebase into something they like better. I’d rather those people be equipped w/ tooling to make their day pleasant and their software better :)
All good points. I just feel like the readability of a language is pretty central, and it would be great if such a central aspect didn’t need a separate tool.
The undo keyword reverts the last mutation made by the current thread - mutations made via undo itself don’t count for this, so you can do it several times to travel back in time.
Related: the @ operator when used with a variable lets you specify a particular index in the history of that variable. Positive numbers as the second operand mean steps backwards in that variable’s history.
x = 3
x = 5
x = 2
print(x)
// => 2
print(x@0)
// => 2
print(x@1)
// => 5
print(x@2)
// => 3
Thanks for bringing this up! It’s honestly my favorite part of LOBSTERS-LANG, and doesn’t get used enough. It’s super flexible too. If you use a negative number it’ll use linear regression to look into its future.
I am curious about this problem from the other side: if you are a vendor of one of the “Theirs” things, how can you simplify local testing for your users?
I’ve worked on a team before that published a local “mock” server of our service. It wasn’t purely a mock since it did a ton of the work the main service did, but ephemerally and not the hard stuff. Keep in mind it was a very simple service, and it had very specialized users who we had direct partnerships with.
It was a really large hassle, but it worked. You’d make a request to set up a test context scoped to a particular test, then make your calls using a connection to it. Our partners appreciated it a lot, and it had the extra benefit of exposing some misunderstandings they had about our service. “Why isn’t the mock server working correctly” questions almost always ended in them not knowing what “correctly” meant.
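For flavor, the flow looked roughly like this (a sketch; endpoint and header names are invented stand-ins, not the actual API):
package mock_test

import (
	"net/http"
	"strings"
	"testing"
)

func TestInvoiceSync(t *testing.T) {
	mockURL := "http://localhost:8080" // locally running mock server

	// 1. Ask the mock server for a context scoped to this one test.
	resp, err := http.Post(mockURL+"/test-contexts", "application/json",
		strings.NewReader(`{"test": "TestInvoiceSync"}`))
	if err != nil {
		t.Fatal(err)
	}
	resp.Body.Close()
	ctxID := resp.Header.Get("X-Test-Context")

	// 2. Make normal API calls tagged with that context; the server
	// keeps each test's state isolated from every other test's.
	req, _ := http.NewRequest("GET", mockURL+"/invoices", nil)
	req.Header.Set("X-Test-Context", ctxID)
	if _, err := http.DefaultClient.Do(req); err != nil {
		t.Fatal(err)
	}
}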
I don’t think I’d ever suggest that approach or do it again. Even with a small service that didn’t change very often, it was A LOT of work and maintenance that never seemed finished. Constant conversations would come up on how much could be done by the mock service itself and how much should be provided by the user. There was also an interesting case where a user seemed to learn about our service via the testing tools and not the documentation, leading to all kinds of really weird and superstitious beliefs about our API. Not to mention requests to support SDKs for the testing tool, or asks to work around quirks in their testing environment that had nothing to do with us.
I still like the idea. But the effects of owning that were way larger than we expected.
Given the absence of a prolog tag, for readers interested in Prolog and related logic programming languages, I’ll link this list of several previous Prolog/Datalog-related submissions.
I agree a prolog or logic programming tag would be very interesting!
Greatly appreciated! I’ve been doing a deep dive on logic programming recently, specifically the actual implementations of various methods. For such a mature field it’s remarkable how little accessible information there is. (Kudos to the *Kanren folks and various SMT enthusiasts for being the exception there.)
I also recommend The Power of Prolog. This book teaches you to use the good, pure-logical parts of Prolog, which means you can think declaratively like in Datalog or miniKanren.
It’s magic! For example, Advent of Code 2021 - Day 24 gives you an assembly-like language and asks “For which inputs does this program output zero? Find the largest.” It doesn’t sound trivial: it requires some kind of static analysis. But in Prolog it is trivial! I wrote a boring interpreter, and then queried it with an unknown input.
How fast is life’s interpreter? 1 timeslice per timeslice? If I can run things faster than we experience them I’d love to have some kind of kanren-y search algorithm to search through possibilities for the series of decisions I need to achieve what I want.
If not, I could really use some peanut butter fudge right now. I feel like much day-to-day use would be simple hedonism. Just gonna Skinner Box myself into a really weird universe with increasingly modified laws of physics because it’s fun. Sorry y’all are being subjected to my weather preferences and prank gravity inversions.
I think with frontend development, most “inventions” in the last five years have been largely superficial.
Yeah, it’s unfair that frontend development keeps getting criticized for being fad oriented. I do think it’s accurate. But I don’t think it’s a skill issue or worthy of shame. The improvements are superficial and there’s a lot of churn simply because the problems haven’t been solved yet. In comparison to many backend tasks:
Browser and device/resolution compatibility is too complex to be addressed simply. Big strides have been made, but it’s hard to see tooling as anything but frustrating complexity when you haven’t struggled with the problem it solves. Most people’s personal projects don’t reach the point where they can’t ignore those problems anymore.
Human factors like accessibility, design, UX, language/internationalization, and “the-user-is-a-drunken-toddler” are omnipresent. Not that they don’t exist elsewhere too, but they aren’t an implicit constraint in everything. The elegant solution you have always falls over when there are that many competing constraints.
The realm is still largely driven by enthusiasts compared to backend web dev. This means fashions are natural as we experiment with new solutions. It also means many projects are on more equal footing. Open source tooling for backend may start non-commercial, but because large frameworks almost always solve the problems of scale or enterprises they inevitably become commercial. This means there’s a vested interest in slowing down experimentation (and therefore fashion) for stability or to keep the money flowing.
Being sandboxed into a browser, being reliant on backend apis, and having non-opt-in security means the list of “default” features you need is massive. We don’t even think to call them out anymore…they’re included in bundles or defaults in generated boilerplates. You can’t really roll your own for everything or opt out of certain things like you can when you control your system’s universe/environment. You can’t really even isolate things in the same way, though it’s worth trying.
People are largely unsatisfied with it. Open problems lead to more attempts at a solution. Not that there aren’t open problems on the backend, but the everyday humdrum widely-applicable problems at least have a solution that generally works.
That’s not to say things shouldn’t improve. The writer of The Grug Brained Developer is also the developer of HTMX which is an important attempt at improvement. But I hate that it’s always framed from the viewpoint of “we’re not like those other solutions” when they exactly are: it’s an experiment to tame complexity and make frontend dev either easier or produce actually functional software. I support this fashion because I like it, but there’s nothing more legitimate about this particular fashion.
This was quite insightful. So, with frontend programming, there isn’t much in terms of massive problems, and they have limited areas to innovate.
Are there any other fields like that in programming? I suppose business-software language developers (VBA, for example) don’t have much room to innovate because most of the development ecosystem is out of bounds for them.
It sounds pretty weird to think that frontend development is widely popular yet has odd artificial restrictions imposed by browser design and “good SEO practices”. So, quite oddly, innovation here is usually just about personal preferences and not about universal utility. Which is literally what fashion means.
“we’re not like those other solutions” when they exactly are: it’s an experiment to tame complexity and make frontend dev either easier or produce actually functional software.
That’s a bit reductive, no? htmx’s purported uniqueness is in its technical approach: instead of taking inspiration from native GUI toolkits and having client-side JavaScript build UI (with SSR added on top), htmx builds the plain “links-and-forms” model of the web and has the server drive the app by sending HTML responses that encode the possible interactions.
We’re all trying to make web dev better. What else would we do? Make it deliberately worse?
I don’t think so? But the question makes me think I was unclear. I fully support and even prefer htmx’s approach. I like the drive towards simplicity. Moving functionality to an area with fewer limitations and centralizing logic to minimize moving pieces is a great way to do that. I have 2 live projects using it and it does what it promises.
My point wasn’t that htmx isn’t valid. It’s that it’s equally valid as an experiment in how to build frontend software. Since there hasn’t been a clear pattern that has emerged as the “best” way to tackle things, I don’t like the chastising of other approaches when they are simply making different tradeoffs. Many problems haven’t been solved yet. It’s reductive to those problems that people are trying to solve. We can talk about the pros and cons of approaches and make tradeoffs without shaming the industry, and we can encourage more experimentation.
We’re all trying to make web dev better. We should be deliberately experimenting. Not writing off entire ideas as “fad based” in an attempt to frame one as separate from that social constraint. My gripe isn’t towards htmx at all, it’s at the general treatment of frontend development as broken rather than “in progress.”
I understand now, and I agree with your view. Thanks for clarifying.
If you were to extend that to the last 30 years I would agree with you. Reducing things down to, for example, desktop OS design, I will submit there have been but two (2) important innovations in this time. Both originated within Apple’s Mac OS X project.
Most of my projects are documentation, so I guess someone could use formal methods to engineer a terminator?
As Hollywood has taught us, every terminator or rogue AI has a Deus Ex Machina style weakness. Until hwayne patched that up and doomed us all.
I now want to watch the film where the Deus Ex Machina is Gödel’s incompleteness theorem.
This seems extremely tricky to implement at the design level (granted, my background is not really in design at a scale where “personas” become feasible).
A lot of the emphasis today in “forward-thinking” design seems to be accessibility, openness, one-design-for-all - that sort of thing.
Attempting to rein in requirements at the design level with “anti-”personas seems like opening yourself up to the gigantic world of what pop design seems to view as (at the very least) majorly uncool.
In my world, figuring out what a thing will and will not do gets sorted out by folks that aren’t afraid of the word requirements - very early on - and again, in my little world, if designers (I assume UI, or “front-end”) are having these discussions at the persona stage, then we’ve gone done messed up somewhere.
I believe the personas in the article are at a different level than what you talk about. The personas I’m familiar with encapsulate user stories of how a product will be used, not the UI design: Mark wants to enter transactions at the Point Of Sale Terminal, John wants to enter inventory counts as fast as possible, Jim wants to run reports, and so on.
However, I’m very interested in an expansion of this article that provides examples.
Maybe something like: Mark at the POS terminal is under pressure to scan items (customers are queuing!) and does not have the time to hunt for the right button or undo mistakes; buttons must be big, accessible and not error prone. Therefore, adding more features to the screen(s) he uses is counterproductive for him.
This rises almost to the level of a persona. Mark is our POS terminal’s intended end-user, and the decision to minimize the number of new features is tailored toward making his experience as an end-user better. I think they’re talking about something a bit more like “Mark wants to watch Netflix on the POS terminal.”
I think it pops up quite a bit in terms of taming complexity.
Tons of UIs have user configurable filters. If the filter is a single dropdown, it’s super easy to use but not very powerful. If you have a full condition builder with nestable conditions then it’s infinitely configurable. But you have to decide: do you expect your user to understand boolean logic?
Saying that you don’t expect an email marketer to understand boolean logic over event driven state machines defines a good Anti-Persona. You explicitly don’t go out of your way to empower those users because it hinders the actual Persona you’re targeting. Plenty of platforms already exist that can take on that customer you’re explicitly saying you aren’t prioritizing.
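To make that concrete, here’s a sketch in Go (all names invented): the moment the filter is nestable, it stops being a single predicate and becomes a tree, and evaluating that tree is boolean logic.
package main

import "fmt"

// A single dropdown is one fixed predicate. A nestable condition
// builder is a tree of predicates, and evaluating that tree is
// exactly the boolean logic you're asking users to understand.
type Filter struct {
	Field, Op string // leaf comparison, e.g. "opens" ">"
	Value     any
	All, Any  []Filter // nested AND / OR groups
}

func main() {
	// "plan = pro, OR (opens > 3 AND clicked = true)"
	f := Filter{Any: []Filter{
		{Field: "plan", Op: "=", Value: "pro"},
		{All: []Filter{
			{Field: "opens", Op: ">", Value: 3},
			{Field: "clicked", Op: "=", Value: true},
		}},
	}}
	fmt.Printf("%+v\n", f)
}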
I welcome insights and perspectives from individuals whose professional responsibilities primarily involve the creation and management of internal tools. Your expertise and experience in this area would be of great value and highly appreciated.
Strictly ‘cause you asked for it - not a judgment or endorsement on this project:
I hail from the mirror-verse. My $WORK responsibilities involve the creation and management of internal tools; however, the one-two punch of the “javascript” tag and npx command to bootstrap the project is a death-knell (to mix metaphors) in my world.
In the mirror-verse, if folks need database access, they get a locked down database account. They want a form to access a database - they get to build it (and maintain it) themselves - perhaps using this project!
Pardon my ignorance, what is mirror-verse, is it the game?
So your problem is it being JS? Would it help if you knew that the actual underlying code is Rust but it’s compiled to wasm and packaged as a JS/TS lib?
I can’t speak for @crunxxi , but I also feel like I come from the mirror-verse. What I mean by that is I am “the mirror” (the opposite) of the audience this tool tries to target. I maintain data and give people access to build on it.
For me the issues aren’t how quickly we can build the tool. The issues are:
Who owns the data that this relies on.
How do I give access in a minimal way that isn’t a risk. I need to worry about security, privacy, and someone messing with the data in ways that can break other things.
Can we give that access in a consistent way so when others need it we aren’t just re-inventing the wheel. (Credential handling, punching holes in VPNs, defining permissions, etc…)
How do we track who is using this and what they are doing? If we need to change something we’ve also now broken this random tool and need to tell whoever owns it. Maybe even (sigh) wait on them to update first.
How do we track the amount of load it generates? If the system is having issues we need to be able to reach out.
Do they UNDERSTAND the data? subZero will introspect columns, but the users won’t know the caveats of what is in some field somewhere. Names can be very misleading. I usually will need to give them documentation or onboard them or at least be able to field questions. If they can update that data without talking to me I’m going to be terrified.
Building the tool is NOT the hard part. It’s all the junk around it when you don’t own the database. If subZero does everything through introspection then it’s going to constantly bump up against fine-grained access issues if we don’t give it full access.
To be clear: this does not invalidate the product. It actually looks really slick. I bet a lot of people will find it useful. I’ve personally been at startups where it’d be nice. But it will be most useful to small teams where everything is open and can be grabbed without permission. Places where us “mirror-verse” people haven’t been hired yet.
Thank you for the “mirror” explanation and thank you very much for the detailed explanation of your pain points, this is very valuable to me.
I maintain data and give people access to build on it.
This feels like a failure on my part on the messaging side, since what you describe as your job should fit the customer profile. What if I describe (the core of) subzero as “extensible PostgREST with more features and support for other databases besides PostgreSQL”? Would you say that you are not on the other side of the mirror now?
Who owns the data that this relies on.
subzero does not need to connect to the database with admin privileges; you can use whatever locked-down credentials you are given (that is, if I am understanding it correctly, as in you don’t control the db, you just have some limited access to it).
How do I give access in a minimal way that isn’t a risk. I need to worry about security, privacy, and someone messing with the data in ways that can break other things.
The way it works is you can either rely on the permissions set in the db for various roles (a combination of grants + RLS) or you can set permissions in the same style but as a configuration. In either case, each user/role starts with no access to anything and you define which columns/rows they get access to.
Can we give that access in a consistent way so when others need it we aren’t just re-inventing the wheel.
This is probably the best part. The REST API shape is the one from PostgREST (so there are lots of SDKs/URL-builders for other languages), but the underlying db can be anything that speaks SQL (at this moment it’s PG/MySQL/SQLite/ClickHouse).
How do we track who is using this and what they are doing? If we need to change something we’ve also now broken this random tool and need to tell whoever owns it. Maybe even (sigh) wait on them to update first.
subZero is not a black box that you have no visibility into. At its core it’s a lib that you feed an HTTP request and it gives you back a SQL query that you run against the db, which returns the data the original HTTP request asked for, and you use this in the context of a JS/TS/Express server. So you can log whatever you feel you need to log when the HTTP request comes in and when it sends out the data.
How do we track the amount of load it generates? If the system is having issues we need to be able to reach out.
Similar to the explanation above: subzero is a lib you use inside an Express server, so there is nothing limiting you from using something like prom-client to add a Prometheus endpoint for metrics.
Do they UNDERSTAND the data? subZero will introspect columns, but the users won’t know the caveats of what is in some field somewhere. Names can be very misleading. I usually will need to give them documentation or onboard them or at least be able to field questions. If they can update that data without talking to me I’m going to be terrified.
It would be relatively easy (a feature for subzero) to generate an OpenAPI document based on the database schema (and your annotations) that can serve as automated documentation for your users. Access for your users can be as limited as you want; this is all configurable. Also, I feel like you are looking at subzero as something that gives a UI to the users; it’s not, it only generates the backend API, you build the UI on top of it.
If subZero does everything through introspection then it’s going to constantly bump up against fine-grained access issues if we don’t give it full access.
It seems like you think of introspection as “whatever subzero can see/introspect from the db will be available to the API users to read/modify”, and it’s not at all like that. What subzero sees (or needs to see) and what it allows the API users to do is quite configurable/hackable. Also, it does not need full access to the db (not even the ability to introspect).
PS: If I have not scared you off already with this wall of text and you think you can extract some value from this, I would love to connect. Let me know here and I’ll reach out over email/discord.
I have to admit I hand-rolled a build script to inject the git SHA, build ID, and build time into various projects at work. It works pretty well though!
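For the curious, a minimal sketch of the idea in Go (the real script differs; main-package string vars are an assumption here):
package main

import "fmt"

// Overwritten at build time via -ldflags; the defaults make an
// uninjected dev build easy to spot.
var (
	gitSHA    = "unknown"
	buildID   = "unknown"
	buildTime = "unknown"
)

func main() {
	// Built with something like:
	//   go build -ldflags "-X main.gitSHA=$(git rev-parse HEAD) \
	//       -X main.buildID=$CI_BUILD_ID \
	//       -X main.buildTime=$(date -u +%Y-%m-%dT%H:%M:%SZ)"
	fmt.Printf("git=%s build=%s built-at=%s\n", gitSHA, buildID, buildTime)
}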
Injecting the build time makes reproducible builds a whole lot harder (which I’m fully aware you might not care about for your use case), so I’m wondering in general: what value can people get out of including the build time?
In my use case, it’s easier for me to see if the deployer did its job and a new version of the code is running by looking at the build timestamp than by looking at the commit ID. The commit ID requires me to go and check in the repo when it was created, then to go on CI to check when it was built. SVN commit IDs, on the other hand, were useful, as I could remember from the previous instance what the number was and which one I could expect. My brain can’t remember 40 hex digits, heck not even 3…
If one has it, then yes. But for many of the services I run, and the environment they run in, having a version number is more of a nuisance than anything helpful. Adding a version number and updating it is more scaffolding than adding a build time field.
I’m with you on the larger point that “when” is a useful datapoint for all kinds of reasons, I just get all bothered by that kind of self-induced ambiguity.
Just include the CI build number; that’s what we do. Actually, what we do is make the CI build ID essentially the version number. CI builds tend to be stable, increasing numbers. We include the VCS revision hash also, in case we need it, but that’s super rare; the CI build number gets us everything we could want.
In the rare case that we have to hand-roll a release, we just invent a number, since it’s just a number with no special properties.
We do that too. Except we just leave the build number blank if there isn’t a build, so it’s not misleading. But times help on their own, so you don’t have to go look at the job and grab the timestamps if you want to see what else was happening at the time. I was just responding to why the timestamps are helpful… they certainly aren’t exhaustive.
It’s a shortcut to tell when something was built. You give up reproducible builds if you even accept that a new build will produce an artifact that has different creation/modification timestamps than a previous build artifact. If you want to make it fully reproducible you end up doing what Nix does (set the times to Unix epoch). So I don’t really lose sleep over it.
It may be Stockholm syndrome but I like the C++ syntax (aside from the struct vs class inconsistency, which is obv for C compatibility.) I appreciate not having to type “public” or “private” over and over, and when reading an interface it keeps the public API segregated.
Go’s mechanism is awful. It’s a clever way of avoiding the need for keywords, but it means that changing access control requires a renaming/refactoring, which is a big pain in an editor without language support and often makes minor changes to a lot of source files, leading to merge conflicts. (Plus, Go has no ability to make things private to a struct or to a source file, so packages easily become grope-fests where everything’s in everything’s business.)
Why yes, I have recently been doing refactoring of Go packages…
+1 on names dictating visibility being awful. It’s a pain in Python, too (although visibility is more of a suggestion there).
I think the current iteration of Rust’s visibility system is the sweet spot. Everything defaults to private. To make it public, add pub. To make it crate-public, add pub(crate). And unlike Haskell, you don’t have to duplicate the name to declare it public at the top of the module.
Rust’s visibility rules are significantly better than average, but I don’t think they are even a local optimum. There are at least two improvements to be made:
pub(crate) should be a single, short keyword.
pub should strictly mean “visible outside of the crate”; it currently also plays the dual role of “visible in parent”. That is, public-in-private should become deny-by-default.
These two issues are more-or-less backwards compatibility sacrifices due to the fact that Rust 1.0 didn’t get visibility exactly right.
I agree. I kinda hate bikeshedding on the syntax but parameterizing pub really hurts usability. It makes it harder to read, and also makes crate level visibility feel like a second class citizen. It doesn’t feel like it’s ever a first-considered option to me even though it’s arguably the most common visibility that I need.
For a new language I like the idea of there not being a raw fn equivalent. I like the cleanliness of having fn_pub, fn_mod, fn_prv keywords that are all top level. My worry is that visually scanning would be hindered, but maybe syntax highlighting is enough to alleviate that?
First, I often end up having to scroll up a moderately long way (especially in classes that contain a lot of doc comments, which good codebases do) to find the access specifier.
Second, I end up with annoying diffs if I want to change an access specifier: I either move a method from one group to another (don’t do this if it’s virtual, that’s an ABI break) or I have to write an access specifier before and after the method.
often makes minor changes to a lot of source files, leading to merge conflicts.
Assuming that private declarations are scoped to a single file, that should never be the case:
If you’re making a private name public, all existing uses of the name should be in only that file since it’s currently private.
If you’re making a public name private, all existing uses of the name must be in only that file, because otherwise you can’t make it private since it’s being used externally.
(I work on Dart, which, like Go, uses identifier naming for privacy. I refactor all the time, and while it is somewhat annoying that it means adjusting the name of every identifier, it’s never a sweeping change because of the above observation.)
The problem with C++ is that a class’s implementation may be spread between many files. The declaration must be in a single place, but method bodies can then be anywhere (in contrast to Objective-C, which puts them all in a block).
This is made worse by the C++ name resolution rules, which often mean that you need to put the definition of a method for one class after the declaration of another, so individual methods of a class may end up being defined across a mix of headers.
If you’re making a private name public, all existing uses of the name should be in only that file since it’s currently private.
Not in Go. That is exactly my complaint. “private” names are accessible within the entire package, which can be any number of source files. (The project I work on is overdue for refactoring and has a package with at least 50 source files in it, and I know a couple structs in it that have sensitive fields that shouldn’t be groped from outside that are being groped.)
changing access control requires a renaming/refactoring, which is a big pain in an editor without language support
Which editors don’t have language support for this kind of thing? (And have meaningful usage numbers.)
and often makes minor changes to a lot of source files, leading to merge conflicts
Changing identifier names should never result in merge conflicts, right?
Go has no ability to make things private to a struct or to a source file
Source files only assert scoping for their imports, which is fine. But it’s entirely possible to make a struct field “private” to the package in which it is defined, by lower-casing its name in the struct. Do you mean something else?
Even if your editor can automate this, churning names creates a really unfortunate code review burden that doesn’t need to be there.
I feel like leveraging naming conventions for this is just a bad workaround for a hesitation about the syntax overhead. I would rather find a syntax overhead that I’m confident in paying.
Which editors don’t have language support for this kind of thing?
Most of them, in 2012 when I started using Go. I agree it’s a lot less annoying today, but I think it’s a bad idea to have such a basic task be dependent on fancy editor refactoring support.
Changing identifier names should never result in merge conflicts, right?
Um, of course it can. Were you being sarcastic? I rename “foo” to “Foo”, meanwhile you change “f := foo()” to “x := foo()”. Bang.
Do you mean something else?
Why yes, I meant “make things private to a struct or to a source file” as I said. A field of a struct that can only be accessed by methods of that struct, as supported by a million other languages. Without that you can’t have safe encapsulation, not unless you make that struct the only thing in its package. Or a function/variable that’s accessible only in its source file, like “static” in C++ or “fileprivate” in Swift.
A field of a struct that can only be accessed by methods of that struct, as supported by a million other languages. Without that you can’t have safe encapsulation, not unless you make that struct the only thing in its package.
Ah, so you’re defining “private” accessibility at the type scope, whereas Go defines it at the package scope. Fair.
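A minimal sketch of the package-scope behavior being described (hypothetical wallet package, two files):
// wallet/wallet.go
package wallet

type Account struct {
	balance int // lower-cased: hidden outside the package, but...
}

func (a *Account) Deposit(n int) { a.balance += n }

// wallet/report.go (same package, different file)
package wallet

// ...any other file in the package can still reach in directly,
// bypassing whatever invariants Account's methods maintain.
func drain(a *Account) { a.balance = 0 }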
For the first time in my life I seem to have a stable routine, and I’m really enjoying it. More than younger me who prized novelty and flexibility ever thought I would. To the extent that I’m paranoid about messing it up. So I guess I’m waking up at 7, walking a 3 mile loop, working out while listening to a podcast in the background, cooking lunch (which is at like 10am now?), working for ~3-4 hours at a coffeeshop, then lunch2/dinner and proceeding to work on personal stuff or just goof off with friends.
I would rather have a “sister” site (running the same system) focused on society/culture than see such topics here. I like Lobste.rs because it is a mostly technical site / forum.
This is actually one of the reasons I go to sic.pm. It’s small, and I hope it becomes more active as people discover it, but it follows the exact opposite approach of Lobste.rs in that the topic is “everything that piques your curiosity and interest.”
To me Lobste.rs is for deep technical discussions in a community of people primed to understand them and contribute meaningfully. Sic.pm fills the need of “deep/involved things that I would find interesting if I knew about them”, which is a very different use case than most link aggregators, which mainly share flash-in-the-pan material.
I want different things at different times, and Lobste.rs focusing on doing its one thing well is an important part of its culture.
Interviewing. There’s a startup I’m actually really excited about but their process is looooong. Other than that breaking back into an old project I haven’t looked at in like a year.
Maaaybe getting a blog going, finally. I’ve been noticing a lot of simple concepts that don’t have simple resources explaining them. You have to assume some prior knowledge, but most resources don’t attempt to minimize it. Wavelet trees, for example, are very simple and useful, but articles always reference some other structure or use academic terms that exist only as a form of indirection. So maybe I’ll take a crack at that.
The second rule has me conflicted.
On one hand, I do agree that invariants are easier to read when they’re stated positively. In my code, I follow every invariant-checking
if
statement with a comment that says something along the lines of// Invariant: the array is large enough for this algorithm to make sense
.On the other hand, my code usually checks that the invariant is not held, and exits early. So my comments state the invariant positively, but the code states it negatively.
I think I could get used to negating a positively stated invariant:
But that looks quite ugly to me and I immediately itch to rewrite it to
<=
.Yup! This reasoning sold me on Swit’s
guard
statement, its awesome to be able to write something along the lines ofThat’s why the advice here is to try to re-arrange the code such that the positive invariant is natural, rather than to just mechanically postivise it — you need
guard
/unless
for the mechanical transformation to always give good results.Usually if I find a conditional hard to read I’ll pull it out into an intermediate variable. I never really do it with single clause conditions, but for illustrative purposes your example would become something like:
It allows you to state the property positively, but show that the condition is about the negative case.
hasEnoughData
. The property is easy to verify.The benefits of it are much better with more complex conditions.
Two folders in my home directory.
~/@
- contains notes as markdown files relevant to that computer and shouldn’t be synced anywhere else. Work stuff, private stuff, scratch paper that isn’t entirely throwaway, etc.~/@@
- contains global notes as markdown files. Non-sensitive stuff. They are automatically synced across computers and my phone via Obsidian.I edit these with Vim. That’s about it. Honestly, I don’t use any of Obsidians features except for it’s excellent syncing and it’s mobile app. I feel a little bad about that since so many people love it’s other capabilities…but I mostly just want text files.
I’m surprised I couldn’t find a description of Ares’ goals. Like is it for fun, or for a specific use case, or to explore an idea, or simplify a design? I have zero background in OS dev, so it’s possible the design makes it obvious to the initiated.
I did some searching and the end of Drew’s update blog post seems to say it was originally an attempt at proving out Hare but has become more of a fun-time hobby. For some reason I’m used to projects like this having a manifesto attached. If it is mainly for fun/exploration then that’s really awesome! The world could use more play.
Sometimes engineers are more enthusiastic about building the thing than talking about the thing. :D
Drew might have been burned by the flames that seem to erupt whenever his stuff is discussed. Keeping it quiet is understandable.
The licensing change is non-consequential for the vast majority of Terraform users. I wish the change didn’t happen, but it’s only a problem for companies that compete directly with HashiCorp’s managed SaaS offerings.
Pulumi is VC backed, pre-exit, and does not use a FOSS license that prevents it from doing the same thing as HashiCorp when the vultures demand more revenue.
I’ve managed cloud infrastructure at several large tech companies and Terraform scales fine. The declarative and immutable aspects of Terraform are constraints that support scaling. If you’re doing something that feels complicated or hacky in Terraform, you’re often doing something that introduces risk into your infrastructure. It’s good to have pain receptors that warn you of danger.
I think the problem that a lot of companies are grappling with is the specifics. What entails “competing”? I think we all understand the spirit of the text, but how could it be interpreted when new management takes over and decides that want to reap every last penny.
I think people are overcomplicating things.
a) The spirit is critical. Spirit is what a lawyer ends up arguing in court.
b) I suspect that “competing” is relatively well defined. As an example, when you create your company you have to specify its industry - so at minimum I suspect you can trivially argue “we aren’t even in the same industry” and avoid it. If you’re a SaaS/ software company you’ll need another way to differentiate, but again, I don’t think that’s super complicated unless you’re very very close but (just barely) not actually competing.
I think developers think about legal stuff as if it’s code, and they get so hung up on edge cases and specific hangups, but that’s not really legal shit works in my experience (not a lawyer, but I founded and ran a company for years and we had lawyers, I definitely had to think a lot about legal shit).
Any move that takes us further from clarity is a bad one, even if the legal system has norms that allow people to roll the dice and possibly get a good outcome. What you’re basically saying is: it’s ok that you now need to be able to retain a lawyer just in case to try and fight through the ambiguity in complying with the licence terms. That’s pretty rubbish for all the people who legitimately can’t afford one, and even for the people who can it’s a waste of resources. It’s definitely not open source, and people are right to be sceptical and to give it all a wide berth where possible.
As someone pointed out in the past, the problem is acquisitions. What happens if, say, Amazon or Microsoft buys the company? Are you doing anything that competes with any potential purchaser of the company? If so, you are exposed to legal risk.
By this logic, Apple doesn’t compete in the markets for music or, you know, telephones. I’m not a lawyer, but I don’t share your faith that this is a trivial matter at all.
Why in the world would anybody not want to build a compiler?
There are lots of good reasons for building a compiler. There is also a lot of knowledge floating around about how to build compilers and so you can learn from 60 years of experience if you set out to build a compiler.
The problem is if you don’t set out to build a compiler but end up building one anyway. This is a problem because you are almost certainly going to repeat some of the mistakes that other people have learned from.
For compiler, feel free to substitute ‘scheduler’, ‘distributed system’, or ‘RPC protocol’ into the above, the rest of the rest of the text works just as well.
This is essentially what happened to the project that inspired https://lobste.rs/s/iksbf4/alien_artefacts
I think it’s more aimed at those not realizing they are building a compiler / interpreter. They think they’re just allowing some flexibility in a configuration file, which slowly morphs into its own language.
Or, as I once did, try to use string mangling to convert someone’s made up Excel expressions (text, not formulas) into valid Ruby, as they seemed close enough that it would be simpler than writing a transpiler, only to end up with a muddled mess for which a parser + code generator would probably have been the same amount of work. The project died due to business reasons and I never got to find out though.
Before becoming a programmer I worked with finance, and I had an automation system in a spreadsheet where each row was a command and the columns held arguments. Commands where for creating slides based on figures in spreadsheets.
It was like a byte code for slides!
i actually kinda like this idea for toy programming. turtle graphics or something, with a spreadsheet frontend
Spreadsheet programming in itself is such a passion of mine. That’s one of the things that got me hooked in FP. You know, Excel is a 2D programming environment for pure and total functional programming with incremental parallel computation with ample support for database connectivity, and built in chart visualization and data tables.
Your comment sparked a bit of personal “aha”.
I’ve witnessed two projects that could be described as compilers, but whose value was not “being a compiler.”
One was incredibly easy to work on. The other had collapsed in on itself.
Only one was a compiler on purpose.
Avoiding cleverness or sinking time into “doing it right” are often stated goals. But cutting against the inevitable grain of a project to achieve it isn’t any better. “Wanting to build a compiler” when you’re building a compiler is a massive asset for building a good compiler.
Greedy macro unexpansion, which automatically compares subsets of AST nodes to refactor them into un-hygienic, gensym containing defmacro style macros. The result is unprecedented expressiveness making it trivial to maintain LOBSTERS-LANG code bases.
Conceptually, this is something I actually really really want. I just can’t carve out the time to design and write it.
I write Go for work and enjoy it’s stripped down nature. But the vertical space bloat and boilerplate enforced by the ecosystem (context.Context, if err != nil) kill me. It’s so hard to see the intent when it’s surrounded by so many mechanics.
I want a macro compressor, entirely separate from the language. It could literally just be a presentation view in my editor. Open a Go file, highlight a block, and then “semantically compress” or “semantically expand” whatever I want.
becomes something closer to
No actual thought was put into that syntax, just filler to get the idea across. Realistically it’d be a lisp with some Go specific keywords and structures. It’s not about writing code at that level, it’s about being able to visually scan a file in Go. Since it’s not for writing, the syntax is unencumbered by ergonomics. It should just be immediately clear and unambiguously map to Go.
That reminds me of a feature I’ve seen in a Java IDE, maybe around the time they added lambdas. Lambdas desugar to an inner class with one method, so the IDE would recognize an inner class with one method and display it more like a lambda.
Something like this:
vs
I agree with the need, but not the conclusion. You have a need for an abstraction that allows you to think clearly about the problem. For application-level programming, the programming language is this abstraction, and if it’s not the right level abstraction for the problem then you’re using the wrong tool. You wouldn’t use assembly language for application-level programming, and IMO you shouldn’t use Go either.
I mean…it’s just tooling? You wouldn’t say stop using Python because autocomplete is useful, or that C is inherently bad because someone uses GDB. Just because something has a pain point doesn’t mean it’s terrible.
I’m definitely not a Go fanboy, but for the work projects I’m referring to it really is the right tool for the job. It was chosen for a reason. Besides, not everyone who writes Go gets to tell their employer that they are rewriting their codebase into something they like better. I’d rather those people be equipped w/ tooling to make their day pleasant and their software better :)
All good points. I just feel like the readability of a language is pretty central, and it would be great if such a central aspect didn’t need a separate tool.
The
undo
keyword reverts the last mutation made by the current thread - mutations made viaundo
itself don’t count for this, so you can do it several times to travel back in time.Related: the
@
operator when used with a variable lets you specify a particular index in the history of that variable. Positive numbers as the second operand mean steps backwards in that variable’s history.Thanks for bringing this up! It’s honestly my favorite part of LOBSTERS-LANG, and doesn’t get used enough. It’s super flexible too. If you use a negative number it’ll use linear regression to look into it’s future.
I am curious about this problem form the other side: if you are a vendor of one of the “Theirs” thing, how can you simplify local testing for your users?
I’ve worked on a team before that published a local “mock” server of our service. It wasn’t purely a mock since it did a ton of the work the main service did, but ephemerally and not the hard stuff. Keep in mind it was a very simple service, and it had very specialized users who we had direct partnerships with.
It was a really large hassle, but it worked. You’d make a request to setup a test context scoped to a particular test, then make your calls using a connection to it. Our partners appreciated it alot, and it had the extra benefit of exposing some misunderstandings they had about our service. “Why isn’t the mock server working correctly” questions almost always ended in them not knowing what “correctly” meant.
I don’t think I’d ever suggest that approach or do it again. Even with a small service that didn’t change very often, it was ALOT of work and maintenance that never seemed finished. Constant conversations would come up on how much could be done by the mock service itself and how much should be provided by the user. There was also an interesting case where a user seemed to learn about our service via the testing tools and not the documentation leading to all kinds of really weird and superstitious beliefs about our API. Not to mention requests to support sdks for the testing tool or asks to workaround quirks in their testing environment that had nothing to do with us.
I still like the idea. But the effects of owning that were way larger than we expected.
Given the absence of a
prolog
tag, for readers interested in Prolog and related logic programming languages, I’ll link this list of several previous Prolog/Datalog-related submissions.I agree a
prolog
orlogic programming
tag would be very interesting!Greatly appreciated! I’ve been doing a deep dive on logic programming recently, specifically the actual implementations of various methods. For such a mature field it’s remarkable how little accessible information there is. (Kudos to the *Kanren folks and various SMT enthusiasts for being the exception there.)
I also recommend The Power of Prolog. This book teaches you to use the good, pure-logical parts of Prolog, which means you can think declaratively like in Datalog or miniKanren.
It’s magic! For example, Advent of Code 2021 - Day 24 gives you an assembly-like language and asks “For which inputs does this program output zero? Find the largest.” It doesn’t sound trivial: it requires some kind of static analysis. But in Prolog it is trivial! I wrote a boring interpreter, and then queried it with an unknown input.
How fast is life’s interpreter? 1 timeslice per timeslice? If I can run things faster than we experience them I’d love to have some kind of kanren-y search algorithm to search through possibilities for the series of decisions I need to achieve what I want.
If not, I could really use some peanut butter fudge right now. I feel like much day-to-day use would be simple hedonism. Just gonna Skinner Box myself into a really weird universe with increasingly modified laws of physics because it’s fun. Sorry y’all are being subjected to my weather preferences and prank gravity inversions.
I think with frontend development, most “inventions” in the last five years have been largely superficial.
Yeah, it’s unfair that frontend development keeps getting criticized for being fad oriented. I do think it’s accurate. But I don’t think it’s a skill issue or worthy of shame. The improvements are superficial and there’s alot of churn simply because the problems haven’t been solved yet. In comparison to many backend tasks:
That’s not to say things shouldn’t improve. The writer of The Grug Brained Developer is also the developer of HTMX which is an important attempt at improvement. But I hate that it’s always framed from the viewpoint of “we’re not like those other solutions” when they exactly are: it’s an experiment to tame complexity and make frontend dev either easier or produce actually functional software. I support this fashion because I like it, but there’s nothing more legitimate about this particular fashion.
This was quite insightful. So with frontend programming there isn’t much in the way of massive problems, and there are only limited areas in which to innovate.
Are there any other fields like that in programming? I suppose business-software programming-language developers (VBA, for example) don’t have much room to innovate because most of the development ecosystem is out of bounds for them.
It sounds pretty weird: frontend development is widely popular, yet it operates under odd artificial restrictions imposed by browser design and “good SEO practices”. So, quite oddly, innovation here is usually about personal preferences and not about universal utility. Which is literally what fashion means.
That’s a bit reductive, no? htmx’s purported uniqueness is in its technical approach: instead of taking inspiration from native GUI toolkits and having client-side JavaScript build the UI (with SSR added on top), htmx builds on the plain “links-and-forms” model of the web and has the server drive the app by sending HTML responses that encode the possible interactions.
We’re all trying to make web dev better. What else would we do? Make it deliberately worse?
I don’t think so? But the question makes me think I was unclear. I fully support and even prefer htmx’s approach. I like the drive towards simplicity. Moving functionality to an area with fewer limitations and centralizing logic to minimize moving pieces is a great way to do that. I have 2 live projects using it and it does what it promises.
My point wasn’t that htmx isn’t valid. It’s that it’s equally valid as an experiment in how to build frontend software. Since no clear pattern has emerged as the “best” way to tackle things, I don’t like the chastising of other approaches when they are simply making different tradeoffs. Many problems haven’t been solved yet, and the framing is reductive to the people trying to solve them. We can talk about the pros and cons of approaches and make tradeoffs without shaming the industry, and we can encourage more experimentation.
We’re all trying to make web dev better. We should be deliberately experimenting. Not writing off entire ideas as “fad based” in an attempt to frame one as separate from that social constraint. My gripe isn’t towards htmx at all, it’s at the general treatment of frontend development as broken rather than “in progress.”
I understand now, and I agree with your view. Thanks for clarifying.
If you were to extend that to the last 30 years I would agree with you.
Narrowing things down to, for example, desktop OS design, I will submit that there have been but two (2) important innovations in this time.
Both originated within Apple’s Mac OS X project.
Most of my projects are documentation, so I guess someone could use formal methods to engineer a terminator?
As Hollywood has taught us, every terminator or rogue AI has a Deus Ex Machina-style weakness. Until hwayne patched that up and doomed us all.
I now want to watch the film where the Deus Ex Machina is Gödel’s incompleteness theorem.
This seems extremely tricky to implement at the design level (granted, my background is not really in design at a scale where “personas” become feasible).
A lot of the emphasis today in “forward-thinking” design seems to be accessibility, openness, one-design-for-all - that sort of thing.
Attempting to rein in requirements at the design level with “anti-”personas seems like opening yourself up to the gigantic world of what pop design seems to view as (at the very least) majorly uncool.
In my world, figuring out what a thing will and will not do gets sorted out by folks that aren’t afraid of the word “requirements” - very early on - and again, in my little world, if designers (I assume UI, or “front-end”) are having these discussions at the persona stage, then we’ve gone done messed up somewhere.
I believe the personas in the article are at a different level than what you talk about. The personas I’m familiar with encapsulate user stories of how a product will be used, not the UI design: Mark wants to enter transactions at the Point Of Sale Terminal, John wants to enter inventory counts as fast as possible, Jim wants to run reports, and so on.
However, I’m very interested in an expansion of this article that provides examples.
Maybe something like: Mark at the POS terminal is under pressure to scan items (customers are queuing!) and does not have the time to hunt for the right button or undo mistakes; buttons must be big, accessible and not error prone. Therefore, adding more features to the screen(s) he uses is counterproductive for him.
This rises almost to the level of a persona. Mark is our POS terminal’s intended end-user, and the decision to minimize the number of new features is tailored toward making his experience as an end-user better. I think they’re talking about something a bit more like “Mark wants to watch Netflix on the POS terminal.”
I think it pops up quite a bit in terms of taming complexity.
Tons of UIs have user-configurable filters. If the filter is a single dropdown, it’s super easy to use but not very powerful. If you have a full condition builder with nestable conditions, then it’s infinitely configurable. But you have to decide: do you expect your user to understand boolean logic?
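For illustration (the names here are mine, not from any real product), the difference shows up in the data model. The dropdown produces a single Condition; the builder produces an arbitrarily nested tree, and evaluating that tree is exactly boolean logic:

from dataclasses import dataclass
from typing import Literal, Union

@dataclass
class Condition:                 # what the single-dropdown UI produces
    field: str
    op: Literal["eq", "gt", "lt"]
    value: object

@dataclass
class Group:                     # what the nestable condition builder produces
    combinator: Literal["and", "or"]
    children: list["Filter"]

Filter = Union[Condition, Group]

def matches(f: Filter, row: dict) -> bool:
    if isinstance(f, Condition):
        ops = {"eq": lambda a, b: a == b,
               "gt": lambda a, b: a > b,
               "lt": lambda a, b: a < b}
        return ops[f.op](row[f.field], f.value)
    results = [matches(c, row) for c in f.children]
    return all(results) if f.combinator == "and" else any(results)

The dropdown only ever builds the first shape; the moment you allow Groups, your user is writing boolean expressions whether they know it or not.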
Saying that you don’t expect an email marketer to understand boolean logic over event-driven state machines defines a good Anti-Persona. You explicitly don’t go out of your way to empower those users, because it hinders the actual Persona you’re targeting. Plenty of platforms already exist that can take on the customer you’re explicitly saying you aren’t prioritizing.
I welcome insights and perspectives from individuals whose professional responsibilities primarily involve the creation and management of internal tools. Your expertise and experience in this area would be of great value and highly appreciated.
Strictly ‘cause you asked for it - not a judgment or endorsement on this project:
I hail from the mirror-verse. My $WORK responsibilities involve the creation and management of internal tools; however, the one-two punch of the “javascript” tag and the npx command to bootstrap the project is a death-knell (to mix metaphors) in my world.
In the mirror-verse, if folks need database access, they get a locked-down database account. They want a form to access a database - they get to build it (and maintain it) themselves - perhaps using this project!
Pardon my ignorance: what is the mirror-verse - is it from the game? So your problem is with it being JS? Would it help if you knew that the actual underlying code is Rust, but it’s compiled to wasm and packaged as a JS/TS lib?
I can’t speak for @crunxxi, but I also feel like I come from the mirror-verse. What I mean by that is I am “the mirror” (the opposite) of the audience this tool tries to target. I maintain data and give people access to build on it.
For me the issues aren’t about how quickly we can build the tool. Building the tool is NOT the hard part; it’s all the junk around it when you don’t own the database - auditing/logging, metrics, documentation for the users, fine-grained access. If subZero does everything through introspection, then it’s going to constantly bump up against those access issues if we don’t give it full access.
To be clear: this does not invalidate the product. It actually looks really slick, and I bet a lot of people will find it useful. I’ve personally been at startups where it’d have been nice. But it will be most useful to small teams where everything is open and can be grabbed without permission - places where us “mirror-verse” people haven’t been hired yet.
Thank you for the “mirror” explanation, and thank you very much for the detailed explanation of your pain points - this is very valuable to me.
This feels like a failure on my part on the messaging, since what you describe as your job should fit the customer profile. What if I describe (the core of) subzero as “extensible PostgREST with more features and support for other databases besides PostgreSQL”? Would you say that you are not on the other side of the mirror now?
subzero does not need to connect to the database with admin privileges; you can use whatever locked-down credentials you are given (that is, if I am understanding correctly that you don’t control the db, you just have some limited access to it).
The way it works is you can either rely on the permissions set in the db for the various roles (a combination of grants + RLS) or you can set permissions in the same style but as configuration. In either case, each user/role starts with no access to anything and you define which columns/rows they get access to.
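Roughly, the shape of the idea is this (an illustrative pseudo-config, not the exact syntax):

PERMISSIONS = {
    "analyst": {
        "orders": {
            "select": ["id", "total", "created_at"],  # column grants
            "row_filter": "region = 'EU'",            # RLS-style predicate
        },
    },
    # any role/table pair not listed is simply not accessible
}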
This is probably the best part: the REST API shape is the one from PostgREST (so there are lots of SDKs/url-builders for other languages), but the underlying db can be anything that is using SQL (at this moment it’s PG/MySQL/SQLite/ClickHouse).
subZero is not a black box that you have no visibility into. At its core it’s a lib that you feed an HTTP request and it gives you back a SQL query; you run that against the db, which returns the data the original HTTP request asked for, and you use this in the context of a JS/TS/Express server. So you can log whatever you feel you need to log when the HTTP request comes in and when it sends out the data.
Similar to the explanation above: subzero is a lib you use inside an Express server, so there is nothing limiting you from using something like prom-client to add a Prometheus endpoint for metrics.
It would be relatively easy (a feature for subzero) to generate an OpenAPI document based on the database schema (and your annotations) that can serve as automated documentation for your users. Access for your users can be as limited as you want; this is all configurable. Also, I feel like you are looking at subzero as something that gives a UI to the users - it’s not, it only generates the backend API, and you build the UI on top of it.
It seems like you think of introspection as “whatever subzero can see/introspect from the db will be available to the api users to read/modify”, and it’s not like that at all. What subzero sees (or needs to see), and what it allows the api users to do, is quite configurable/hackable. Also, it does not need full access to the db (not even the ability to introspect).
PS: If I have not scared you off already with this wall of text and you think you can extract some value from this, I would love to connect. Let me know here and I’ll reach out over email/discord.
I have to admit I hand-rolled a build script to inject the git SHA, build ID, and build time into various projects at work. It works pretty well though!
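Something along these lines (the file name and the CI_BUILD_ID variable are just what my illustration assumes, not anything standard):

import json, os, subprocess, time

info = {
    "git_sha": subprocess.check_output(
        ["git", "rev-parse", "HEAD"], text=True).strip(),
    "build_id": os.environ.get("CI_BUILD_ID", "local"),
    "built_at": int(time.time()),  # wall-clock build time
}
with open("build_info.json", "w") as f:
    json.dump(info, f, indent=2)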
Injecting the build time makes reproducible builds a whole lot harder (which I’m fully aware you might not care about for your use case), so I’m wondering in general: what value can people get out of including the build time?
What value do you personally get out of it?
Here’s one I can imagine: knowing whether the daemon serving you content/information is the build you expected to see.
I get that for the commit ID - but the build timestamp? Ideally, each build of a commit should behave identically independent of build time.
In my use case, it’s easier for me to see whether the deployer did its job and a new version of the code is running by looking at the build timestamp than by looking at the commit ID. The commit ID requires me to go and check in the repo when it was created, then to go to CI to check when it was built. SVN commit IDs, on the other hand, are useful: I could remember the number from the previous instance and know which one to expect. My brain can’t remember 40 hex digits - heck, not even 3…
Shouldn’t it be enough to look at the version number of the software you shipped?
If one has it, then yes. But for many of the services I run, and the environment they run in, having a version number is more of a nuisance than anything helpful. Adding a version number and updating it is more scaffolding than adding a build time field.
What about teams where the norm is to reuse the same version number for small changes?
That strategy sounds like the worst of both worlds. I absolutely don’t want to think “is this change small enough to not increment a number?”
“This is a small bug fix that has to go out to all people with this version”
Not saying it’s right, saying it’s useful to be able to marginally compare software on something other than version.
I’m with you on the larger point that “when” is a useful datapoint for all kinds of reasons, I just get all bothered by that kind of self-induced ambiguity.
It’s actually been pretty helpful for me in the past for deployed internal web services.
Just include the CI build number; that’s what we do. Actually, what we do is make the CI build ID essentially the version number. CI builds tend to be stably increasing numbers. We include the VCS revision hash also, in case we need it, but that’s super rare - the CI build number gets us everything we could want.
In the rare case that we have to hand-roll a release, we just invent a number, since it’s just a number with no special properties.
We do that too, except we just leave the build number blank if there isn’t a CI build, so it’s not misleading. But times help on their own: if you want to see what else was happening at the time, you don’t have to go look at the CI job and grab the timestamps. I was just responding to why the timestamps are helpful… they certainly aren’t exhaustive.
It’s a shortcut to tell when something was built. You give up reproducible builds as soon as you accept that a new build will produce an artifact with different creation/modification timestamps than a previous build artifact. If you want to make it fully reproducible you end up doing what Nix does (set the times to the Unix epoch). So I don’t really lose sleep over it.
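And if you do want both, the reproducible-builds.org convention is to honor SOURCE_DATE_EPOCH when it’s set and fall back to the wall clock otherwise:

import os, time

# Honors the reproducible-builds.org SOURCE_DATE_EPOCH convention.
built_at = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))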
It may be Stockholm syndrome, but I like the C++ syntax (aside from the struct vs class inconsistency, which is obviously for C compatibility). I appreciate not having to type “public” or “private” over and over, and when reading an interface it keeps the public API segregated.
Go’s mechanism is awful. It’s a clever way of avoiding the need for keywords, but it means that changing access control requires a renaming/refactoring, which is a big pain in an editor without language support and often makes minor changes to a lot of source files, leading to merge conflicts. (Plus, Go has no ability to make things private to a struct or to a source file, so packages easily become grope-fests where everything’s in everything else’s business.)
Why yes, I have recently been doing refactoring of Go packages…
+1 on names dictating visibility being awful. It’s a pain in Python, too (although visibility is more of a suggestion there).
I think the current iteration of Rust’s visibility system is the sweet spot. Everything defaults to private. To make it public, add pub. To make it crate-public, add pub(crate). And unlike Haskell, you don’t have to duplicate the name to declare it public at the top of the module.
Rust’s visibility rules are significantly better than average, but I don’t think they are even a local optimum. There are at least two improvements to be made:
pub(crate) should be a single, short keyword.
pub should strictly mean “visible outside of the crate”; it currently also plays the dual role of “visible in parent”. That is, public-in-private should become deny-by-default.
These two issues are more-or-less backwards-compatibility sacrifices due to the fact that Rust 1.0 didn’t get visibility exactly right.
I agree. I kinda hate bikeshedding on the syntax, but parameterizing pub really hurts usability. It makes it harder to read, and also makes crate-level visibility feel like a second-class citizen. It doesn’t feel like it’s ever a first-considered option to me, even though it’s arguably the most common visibility that I need.
For a new language, I like the idea of there not being a raw fn equivalent. I like the cleanliness of having fn_pub, fn_mod, fn_prv keywords that are all top level. My worry is that visual scanning would be hindered, but maybe syntax highlighting is enough to alleviate that?
I dislike the C++ style for two reasons:
First, I often end up having to scroll up a moderately long way (especially in classes that contain a lot of doc comments, which good codebases do) to find the access specifier.
Second, I end up with annoying diffs if I want to change an access specifier: I either move a method from one group to another (don’t do this if it’s virtual - that’s an ABI break) or I have to write an access specifier before and after the method.
Assuming that private declarations are scoped to a single file, that should never be the case:
If you’re making a private name public, all existing uses of the name should be in only that file since it’s currently private.
If you’re making a public name private, all existing uses of the name must be in only that file, because otherwise you can’t make it private, since it’s being used externally.
(I work on Dart, which, like Go, uses identifier naming for privacy. I refactor all the time, and while it is somewhat annoying that it means adjusting the name of every identifier, it’s never a sweeping change because of the above observation.)
The problem with C++ is that a class’s implementation may be spread between many files. The declaration must be in a single place, but method bodies can then be anywhere (in contrast to Objective-C, which puts them all in a block).
This is made worse by the C++ name resolution rules, which often mean that you need to put the definition of a method for one class after the declaration of another, so individual methods of a class may end up being defined across a mix of headers.
Not in Go. That is exactly my complaint. “private” names are accessible within the entire package, which can be any number of source files. (The project I work on is overdue for refactoring and has a package with at least 50 source files in it, and I know a couple of structs in it that have sensitive fields that shouldn’t be groped from outside but are being groped.)
Which editors don’t have language support for this kind of thing? (And have meaningful usage numbers.)
Changing identifier names should never result in merge conflicts, right?
Source files only assert scoping for their imports, which is fine. But it’s entirely possible to make a struct field “private” to the package in which it is defined, by lower-casing its name in the struct. Do you mean something else?
Sublime, Vim, Emacs, Notepad++, etc. Vim and Emacs can do it with plugins, but not with the default configuration.
Even newer editors like Helix can only do it if you have the appropriate LSP server installed (and that LSP server supports renaming).
Even if your editor can automate this, churning names creates a really unfortunate code review burden that doesn’t need to be there.
I feel like leveraging naming conventions for this is just a bad workaround for a hesitation about the syntax overhead. I would rather find a syntax overhead that I’m confident in paying.
Most of them, in 2012 when I started using Go. I agree it’s a lot less annoying today, but I think it’s a bad idea to have such a basic task be dependent on fancy editor refactoring support.
Um, of course it can. Were you being sarcastic? I rename “foo” to “Foo”, meanwhile you change “f := foo()” to “x := foo()”. Bang.
Why yes, I meant “make things private to a struct or to a source file” as I said. A field of a struct that can only be accessed by methods of that struct, as supported by a million other languages. Without that you can’t have safe encapsulation, not unless you make that struct the only thing in its package. Or a function/variable that’s accessible only in its source file, like “static” in C++ or “fileprivate” in Swift.
Ah, so you’re defining “private” accessibility at the type scope, whereas Go defines it at the package scope. Fair.
For the first time in my life I seem to have a stable routine, and I’m really enjoying it. More than younger me who prized novelty and flexibility ever thought I would. To the extent that I’m paranoid about messing it up. So I guess I’m waking up at 7, walking a 3 mile loop, working out while listening to a podcast in the background, cooking lunch (which is at like 10am now?), working for ~3-4 hours at a coffeeshop, then lunch2/dinner and proceeding to work on personal stuff or just goof off with friends.
I would rather have a “sister” site (running the same system) focused on society/culture than see such topics here. I like Lobste.rs because it is a mostly technical site / forum.
This is actually one of the reasons I go to sic.pm. It’s small, and I hope it becomes more active as people discover it, but it follows the exact opposite approach of Lobste.rs in that the topic is “everything that piques your curiosity and interest.”
To me, Lobste.rs is for deep technical discussions in a community of people primed to understand them and contribute meaningfully. Sic.pm fills the need for “deep/involved things that I would find interesting if I knew about them,” which is a very different use case from most link aggregators, which mainly share flash-in-the-pan material.
I want different things at different times, and Lobste.rs focusing on doing its one thing well is an important part of its culture.
Same here! I usually go there when I want to settle in with a long, thoughtful read. Which is less often than I’d like over the past few weeks.
Interviewing. There’s a startup I’m actually really excited about, but their process is looooong. Other than that, breaking back into an old project I haven’t looked at in like a year.
Maaaybe getting a blog going, finally. I’ve been noticing a lot of simple concepts that don’t have simple resources explaining them. You have to assume some prior knowledge, but most resources don’t attempt to minimize it. Wavelet Trees, for example, are very simple and useful, but articles always reference some other structure or use academic terms that exist only as a form of indirection. So maybe I’ll take a crack at that.