There are various ORMs/mappers, but most advice you’ll find (and what I’ll say, also) is to not use them. Wrap a database connection in a struct that has methods to do what you want. Something I’ve found conditionally useful is generators that write the function to populate a struct from a database row.
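For what it’s worth, the generated scan function tends to look something like this. A minimal Go sketch with invented names (User, scanUser, fakeRow); the rowScanner interface matches the Scan method shared by *sql.Row and *sql.Rows, so the sketch runs without a real database:

```go
package main

import "fmt"

// User is an invented example type.
type User struct {
	ID   int
	Name string
}

// rowScanner matches the Scan method on *sql.Row and *sql.Rows,
// so scanUser works with either (or with a fake in tests).
type rowScanner interface {
	Scan(dest ...interface{}) error
}

// scanUser is the kind of function a generator could emit for you.
func scanUser(r rowScanner) (User, error) {
	var u User
	if err := r.Scan(&u.ID, &u.Name); err != nil {
		return User{}, err
	}
	return u, nil
}

// fakeRow stands in for *sql.Row so this sketch runs without a database.
type fakeRow struct {
	id   int
	name string
}

func (f fakeRow) Scan(dest ...interface{}) error {
	*dest[0].(*int) = f.id
	*dest[1].(*string) = f.name
	return nil
}

func main() {
	u, _ := scanUser(fakeRow{1, "ada"})
	fmt.Println(u) // {1 ada}
}
```

In real code you’d call it as `scanUser(db.QueryRow("SELECT id, name FROM users WHERE id = $1", id))`.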
The community will also say to not use web frameworks, and again I’d agree. The stdlib http package provides a stable foundation for what you want to do. You’ll have more luck looking for packages that do what specific thing you want, rather than thinking in terms of frameworks.
All that said, some coworkers like echo but I can’t for the life of me understand why. Any web-oriented package shouldn’t need to give a shit if it’s hooked up to a tty or not.
The problem is that when you do search and filtering on various conditions (like in a shop), you don’t want to resort to SQL string stitching. I wasn’t able to find anything nice when I looked at the docs. For example, in gorm:
db.Where("name = ? AND age >= ?", "jinzhu", "22") - I expect that when you have 20 conditions with different operators and data types you will end up having a bad time.
db.Where("name = ? AND age >= ?", "jinzhu", "22")
I’m at a small scale, but I just write the query entirely and/or do string concatenation. I’m having a fine time though. I just use sqlx.
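Even when concatenating, you can keep the values themselves out of the SQL text by accumulating clause fragments and placeholder args side by side. A hand-rolled sketch (the filter type, buildQuery, and the columns are all invented for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// filter pairs a clause fragment with the value bound to its placeholder.
// Only the clause text is concatenated; the value travels separately.
type filter struct {
	expr string      // e.g. "age >= ?"
	arg  interface{} // the value for the placeholder
}

// buildQuery appends a WHERE clause for whichever filters are present.
func buildQuery(base string, filters []filter) (string, []interface{}) {
	if len(filters) == 0 {
		return base, nil
	}
	clauses := make([]string, 0, len(filters))
	args := make([]interface{}, 0, len(filters))
	for _, f := range filters {
		clauses = append(clauses, f.expr)
		args = append(args, f.arg)
	}
	return base + " WHERE " + strings.Join(clauses, " AND "), args
}

func main() {
	q, args := buildQuery("SELECT id, name FROM users", []filter{
		{"name = ?", "jinzhu"},
		{"age >= ?", 22},
	})
	fmt.Println(q)    // SELECT id, name FROM users WHERE name = ? AND age >= ?
	fmt.Println(args) // [jinzhu 22]
}
```

The query string and args then go straight into db.Query, so twenty optional conditions are just twenty entries in the slice.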
https://godoc.org/github.com/jmoiron/sqlx#In is pretty useful when you need it.
I admit I don’t understand how to use that from the doc string. Could you show a simple example or elaborate?
It’s useful for queries using the IN keyword, like this:
query, params, errIn := sqlx.In("SELECT column1 FROM table WHERE something IN (?);", some_slice)
// if errIn != nil...
// query = db.Rebind(query) // needed if your driver doesn't use ? placeholders
rows, errQuery := db.Query(query, params...)
We built something in house that is very similar in spirit to sqlx but adds a bunch of helpers.
https://github.com/Masterminds/squirrel (which kallax uses) seems somewhat akin to the SQLAlchemy expression API. (And yeah, to me, that’s a great part of SQLAlchemy; I’ve hardly used its ORM in comparison.)
I went from Python + heavy use of the SQLAlchemy expression API to Go and got by OK with just stdlib, but part of that was that the work in Go had far less complicated queries most of the time. So, not the best comparison maybe.
I support the advice to not use mappers like ORMs, but I also agree with what you said. The middle ground seems to be query builders.
If you happen to use Postgres as your DBMS, I advise you to make sure that the query abstraction layer of your choice doesn’t do query parameterization by string interpolation but uses PQexecParams underneath instead.
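With database/sql and a Postgres driver like lib/pq, writing $1-style placeholders and passing the values as separate arguments keeps them out of the SQL text entirely. A small helper (invented for illustration) for building the placeholder list:

```go
package main

import (
	"fmt"
	"strings"
)

// placeholders builds the "$1, $2, ..." list Postgres expects, so values
// can be handed to db.Query/db.Exec as separate parameters instead of
// being interpolated into the SQL string.
func placeholders(n int) string {
	ps := make([]string, n)
	for i := range ps {
		ps[i] = fmt.Sprintf("$%d", i+1)
	}
	return strings.Join(ps, ", ")
}

func main() {
	ids := []interface{}{10, 20, 30}
	query := "SELECT name FROM users WHERE id IN (" + placeholders(len(ids)) + ")"
	fmt.Println(query) // SELECT name FROM users WHERE id IN ($1, $2, $3)
	// rows, err := db.Query(query, ids...) // values never touch the SQL text
}
```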
I haven’t used it but I think Dropbox built a library for that. https://godoc.org/github.com/dropbox/godropbox/database/sqlbuilder
I’m curious about NixOS. Atomic updates sound really nice. Can NixOS give me Arch but with atomic updates and rollback? Right now on Arch I run the 4.15 kernel, but a default NixOS install seems to have 4.9. Maybe I just need to use the unstable channel, but I saw that it wasn’t recommended in production.
Thanks to the Recurse Center for inviting me to speak and for making the video. I’m here if anyone has questions.
A very non-technical question: Why should Xi “only” be an editor for the next 20 years? In terms of text editors, that’s not that long. People like me use editors that are nearly twice as old as I am, and the reasons don’t seem to be tied to performance or the internal structure of the implementations, but rather to a core “philosophy” regarding how things are done, or how the programmer should relate to text. What does Xi have to offer regarding these “practical” qualities that have made, for example, Emacs or Vi(m) last so long? Does Xi see itself in such a tradition, having a certain ideal that you aspire to, or do you set your own terms? This seems important if one intends to write an editor that should be practically used, which is what I gathered from the video, as opposed to a “purely academic” experiment, which would obviously have different goals and priorities.
Do you plan on doing a Linux frontend yourself and would it matter performance-wise? I saw that some people are working on a gtk+ frontend but I was wondering if it will be as fast as the mac one.
In my ideal world, there’d be a cross-fertilization of code and ideas so the linux front-end would be just as nice and performant as the mac one, but it’s unlikely at this point I’ll take it on myself.
I just tried xi-gtk and it’s very fast. Not sure what it’s like compared to the swift one but it’s a whole lot faster than gedit.
Also, here is a cool demo of async loading of big text files – you can navigate and I think even edit while loading:
Using immer, Clojure-like immutable data structures in C++:
The editor is a demo of the library: https://github.com/arximboldi/ewig
I just watched the video. It looks really interesting, although a lot of it was over my head!
I more or less understand the process model, async architecture, and distributed data structures. I like that part – very Unix-y.
But there were a lot of rendering terms I didn’t understand. Maybe because some of it is Mac-specific. But also some of the OpenGL issues. Is there any background material on text rendering you’d recommend?
Also, I don’t understand the connection to Fuchsia. I was under the impression that Fuchsia was more consumer-facing, and Xi is more developer-facing. That is, I imagine most consumers don’t have text editors installed. There is no text editor on Android or ChromeOS.
Or is xi more general than a vi/emacs replacement – is it meant to be used as part of a browser for implementing text boxes?
Glad you enjoyed the talk!
Unfortunately, there really isn’t a lot of material on text processing, especially from a modern perspective. A lot of what I learned about rendering came from reading other code (alacritty in particular), and talking to people like Patrick Walton and my teammates on Chrome and Android.
There is an EditText widget on Android (a good chunk of my career involved working on it), but you certainly wouldn’t want to write code (or long-form text) in it. My goal with xi is to make a core lightweight and performant enough it can be used in such cases, easily embedded in apps, yet powerful enough for cases where you really do need a dedicated editor application.
I feel like it’s fairly out of my league, but I’ve been thinking about implementing a Sublime Text-like editor (multiple cursors, smart brackets/quotation marks) for arbitrary text fields in web sites. Would it be possible to use Xi as a backend for something like that? Perhaps via compilation to WASM?
Eventually it is my hope that something like that could work. There are some technical details (the current implementation uses threads), so it’s not an easy project. In the meantime, the excellent CodeMirror does multiple selections, and is very widely used embedded in websites.
Is Elasticsearch light enough to run on a cheap vps with 1 GB of ram?
For game servers I monitor logs and run commands in real-time for certains events. Can I do something similar with Elasticsearch? It would be simpler if I’m already using it.
The ELK stack is pretty heavy in my experience. If your usage is simple and you’re ready to get your hands dirty, I’d really advise you to look at https://github.com/oklog/oklog !
I like it but I hate sticky headers.
That makes sense, thanks. I’ll implement behavior that hides the header until you start scrolling up again.
For militant decentralisation, https://git.scuttlebot.io/%25n92DiQh7ietE%2BR%2BX%2FI403LQoyf2DtR3WQfCkDKlheQU%3D.sha256 seems like a cool solution. No internet required.
You can overwrite other people’s master branch with that?
Don’t know, I’ve never used it. http://scuttlebot.io/apis/community/git-ssb.html#usage says “You can only push to SSB git repos that you created, not ones created by other users”, but I’m guessing there’s more to the story :)
Only if they give you the key.
Scuttlebot is built on SSB, which is a content-addressable (by SHA) distributed database, where all entries are signed by a private key, and your local replica only copies records published by keys it follows.
No automated custom builds
No automated custom builds
It just means you can’t use their ones for free any more. You’re welcome to find a different volunteer to support your work.
Is there a way to figure out the target? It would be useful when I don’t build on the same machine.
Just use -C target-cpu=native.
Even if I’m not compiling and running on the same processor?
Well, you’ll have problems running on older processors (that don’t support some instruction set that’ll be used in the binary).
Can I use this instead of Tower?
Edit: I mean, is it basically the same as tower?
Yep. This is the code that the product “Tower” comes from.
That reminds me about that xi-editor project. I wonder if they made some progress.
Not really related but anyone knows of videos like this but about stats (like how to interpret data and when to use mean, coefficient of variation and others)?
They answer the question in the first sentence:
They do load fast, which is a terrific user experience
Eh, that really ignores the thrust of the article. Sure, they load fast, and users like that. But the question is why do publishers feel so comfortable ceding an enormous amount of control to Google, by allowing their content to be served directly by Google, under URLs on Google’s domain, that get shared, without anyone being sent to the publisher’s own servers?
Another way to look at this is that publishers get free hosting and save on bandwidth costs but still get to run their ads and don’t need to pay Google a cent for this.
Sure. But if I were a publisher, that’s a deal where I’d really be asking myself what the catch was going to be in the long term. Losing control over core parts of your business is never a good long-term play.
The fact that people are linking to and sharing my content on not-my-branded-domain would and should make me extremely concerned about the long-term benefit of this scheme for me.
Publishers are in a rough spot and mostly just trying to stay afloat rather than thinking about their long-term health. Same reason they have been falling over themselves to publish Facebook Instant Articles.
No doubt. The Facebook Instant Articles are a terrible idea too.
I’d really be asking myself what the catch was going to be in the long term.
Can you unpublish an AMP page?
Does the domain matter? Maybe for search, but the publisher’s name is right there.
If Google were to abuse this at all, antitrust regulators would have such a field day. And why would they abuse it? Google’s getting what they want out of this.
To point at but one issue with it: let’s say this goes the way of a lot of Google products and disappears in a year or five. What happens if I were making significant revenue off evergreen links that happened to use the AMP URL? I’m at the mercy of whatever Google decides to do (redirects? 404s?). That’s not something you want to leave up to a third party when it’s your business at stake.
If you do business on the web, your revenue is already at Google’s whim (to a first approximation).
And again, in the second paragraph:
I searched for “ars pixel preview.” The first search result was the AMP version of his review.
Does Google give results boosts to pages with AMP versions available?
Yes, they have a results carousel at the top that’s all AMP in some cases, even if you didn’t say “AMP” in the query.
Which is true, but is AMP the only way to make a web page that loads fast?
You can get very close by not including any third-party JS or other sources of slowness, paying top dollar for a CDN, getting your cache headers right, etc. Users already have Google DNS cached, but that’s not a huge deal.
For small assets and moderate traffic CloudFront has been really reasonable for my blog. Typically traffic runs about ten cents a month; I don’t get much, obviously, but it’s cheaper than a VPS, doesn’t require patching, and is really fast.
Just installed it. First reaction is that they changed the scratch buffer text!
Haven’t gotten a chance to play with xwidgets yet.
First reaction is that they changed the scratch buffer text!
They did? I’m using it right now and I didn’t notice. What did they change?
Mine is only two lines now instead of three:
;; This buffer is for text that is not saved, and for Lisp evaluation.
;; To create a file, visit it with <open> and enter text in its buffer.
As someone who does distributed systems, I’m a little surprised at the claim that grpc makes distributed systems “as easy as making local function calls” – while I believe that in google’s own datacenters this may be true enough to rely on, it seems pretty unlikely to be true in AWS, GCE, or even most ‘enterprise’ local deployments, given that grpc appears to rely on HTTP/2 over TCP, and TCP has, ah, nontrivial behavior in some environments.
TCP has, ah, nontrivial behavior in some environments
Can you elaborate on that?
TCP is a serialized protocol where both sides chat back and forth on every packet, so if one side or the other experiences latency or packet loss (whether due to slow or overloaded applications, full or contended packet queues in the kernel, cheap or bad or failing NICs, bad wiring, cheap or slow or bad or overloaded or distant intermediate routers, or some combination of all of those (i.e. the public internet)), then every communication on the socket will hang until the first latent or lost packet is retried or otherwise gets through.
This causes highly multiplexed protocols over TCP to pass along the lag to every channel receiver, obviously.
Applications generally expect function calls to return so quickly as to be virtually instant, and to not have to retry them on failure. So situations in which the communication is broken, packets are lost, the network fails temporarily or permanently, or even just any latency above your maximum expectation triggers an app-level timeout, will cause unexpected (and possibly fatal) backpressure, unexpected (and possibly fatal) application-level retries or crashes, or other hilarity, like exposure of completely unreasonable race conditions that would never otherwise happen. If you naively replace a native function with a distributed system, you will have a variety of bad times that look like nothing you’ve ever seen before.
In situations like Google’s, where they have obscenely expensive, gorgeous, exquisitely maintained switches in a big matrix sitting right on top of their racks (note: this is hearsay; I don’t yet work at Google), and there aren’t any routes going through an EC2 ELB or Joe’s VPS and Bait Shop in Dubai over a satphone, there are probably very few of these events, so maintaining the fiction that it’s just like a regular procedure call is worth it and a super-powerful abstraction that they can gift to their developers.
For those of us using Joe’s VPS and Bait Shop in Dubai, the window of risk is radically larger. Our scale problems are also a lot smaller, though; but it can be very difficult for us to know where our (collective) service providers' failure point is, where believing in the abstraction is dangerous and unviable. It’s unquantifiable because we don’t have enough visibility into the points of possible danger; we can only guess, based on our failure surface, about how bad the problem will be.
If pressed for recommendations, my gut tells me that the average dev on the average infrastructure should stick with the idea that network communication should be heavily decoupled (e.g. JSON over HTTP or protobuf over HTTP or whatever), rather than try to use grpc as a function call replacement mechanism directly. You don’t get Internet Scale with JSON over HTTP but you have a nice reliably bad protocol with lots of debug handles and well understood logs and stack overflow. People with serious distributed system problems should continue to use erlang ( :D ). People with gorgeous deluxe infrastructure running locally should totally jump all over grpc, especially if they integrate QUIC, but that’s for another day and another five paragraphs.
It seems to be “reactive streams” from people who don’t understand the need for back pressure, think that communication across machines can or should look like local method invocation, haven’t checked the existing state of the art in the last 15 years, and believe that the primitive onX API is all that people need.
Accusing the folks at Google of not understanding backpressure or not understanding the existing 15-year state of the art is a little far fetched. It’s just that grpc comes from a radically different environment than the ones that you and I are used to, and has different affordances and abstractions than solutions that grew out of our more common environments. As the two environments come into contact with each other, there will be friction and weirdness, but I suspect we have as much to learn from Google as Google does from us.
Or maybe they only open-source the bad stuff and keep the good stuff secret as a competitive advantage? The things they have released … e. g. GWT, Android, Go aren’t nearly as good as they want everyone to think they are in my opinion. It’s usually quite underwhelming to see the hype vs. the actual code.
It seems like Redis could have sensible defaults in less time than it took to write all that.
Even if they release a fixed version, there will still be a lot of out-of-date Redis servers out there.
This is true of every vulnerability. We still at least try to fix it.
There is no fix. It is the job of the administrator to set up the system correctly. Why is it so unreasonable to expect people to understand how to configure the software they are using?
Sure, users should configure things properly. But why do you feel it’s unreasonable to expect that software should be secure by default? At any rate, it’s in Redis' best interest, because it is what receives the bad press, not all the users getting caught out.
I mean, I sympathise – I really do. Years ago I upgraded an open-source Objective-C library of mine to use Automatic Reference Counting (ARC), and despite that being the main focus of the release, so many people complained about the library having terrible memory leaks that I had to litter every source file with this shit to stop it compiling if you didn’t compile it with ARC:
#if !__has_feature(objc_arc)
#error "This source file must be compiled with ARC enabled!"
#endif
I hated having to do it at the time, but you know what? I sucked it up and added it, and it really helped my users. And it helped me because it reduced my support burden: I didn’t have to constantly explain to users that no, there’s no memory leak, you just have to compile the library with ARC.
While I don’t agree that it is unreasonable, history has shown time and again that users simply do not configure things correctly. Being responsible means having secure defaults, even if that means not entirely “working” out of the box. Binding to localhost and requiring a password are simple, time-tested solutions to the problem these Redis instances are facing.
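Concretely, those two mitigations are a couple of lines in redis.conf (the password here is obviously a placeholder; newer Redis versions also ship protected-mode, which refuses remote connections when no password is set):

```conf
# Listen only on the loopback interface
bind 127.0.0.1

# Refuse remote connections unless auth is configured (Redis >= 3.2)
protected-mode yes

# Require AUTH before any command
requirepass replace-with-a-long-random-secret
```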
The summary text which Lobsters scraped pretty much sums up what is wrong with single page web apps:
Oops! It appears you’re using an unsupported browser. Old browsers can put your security at risk, are slow and are not compatible with the Google Fiber website. To continue browsing our site, you’ll need to update to a modern
Edit: That and the fact that the Check your address form seems to be completely broken. Apologies for the super-negative tone but this sort of thing is fast becoming the rule rather than the exception.
Very little attention is paid, it seems, to accessibility. More and more we have sites that are usable only in a months-old browser version on a whiz-bang internet connection. It has somehow become okay to require users to download the bulk of your application to their local machine and then do the rendering work themselves. Sure, it’s great if you’re running a server and don’t want to do any real work there, but it’s a bad deal for the users, who see their bandwidth eaten alive and their CPUs running hot to render a news site and all the attendant ads. And if you’re on a screen reader or using any assistive technology, good luck. Hopefully we haven’t cluelessly broken everything in a quest to make our website into an “app” (whatever the heck that means).

Seriously, whatever happened to KISS? Start with some basic HTML (using the niceties of the HTML5 spec and some ARIA roles; they’ve been around long enough now), make it pretty with some CSS (avoiding non-hardware-accelerated animations), and then add some JS for fun and flavor. It is not that hard. If things are too slow, cache smartly and set up a CDN. Unless you’re getting huge traffic, you don’t need more.
Very little attention is paid, it seems, to accessibility.
Do you mean in this case or in general?
Angular 1 has support for accessibility. Are you saying that it is not good?
Word. Particularly apposite, given the classic Yegge post currently sharing the front page.
what browser are you using?
Did not know about the preview function. Thanks!
I’m not a fan of the cake day thing. It doesn’t matter at all which day you signed up. There are so many users that every day is the cake day of thousands of people. I think that means it’s the opposite of something special and worth caring about.
Your whole comment applies to actual birthdays also :P
Birthdays are typically celebrated with at most a few dozen other people, not, y'know, all of Reddit. ;)
I got “502 Bad Gateway” right now.
Confirmed. Same here
Very strange. It just decided to quit, as far as I can tell. Briefly fixed…
Would it be worth it to use something like Kubernetes with only one node?
I don’t know if one of them support it but I would like to have zero-downtime deployment with containers and it could be nice to have the possibility to add another node someday if I need to.
I wouldn’t consider any form of cluster manager and scheduler until I was well past 10 nodes; the operational overhead of managing such a setup will far exceed any benefits you get.
For a single node you can get away with managing it entirely by hand (document your setup process), or use something like ansible so it’s more easily reproducible. If you plan on growing it’s good to make sure your system is compatible with the direction you plan on growing your infrastructure, but it’s rarely wise to do it from day one.
I wonder if it’s worth it for only one laptop.
Based on my experience with Chef - probably not, unless you’re really familiar with it or are using some kind of framework. Nothing against Chef, but it has more setup overhead.
I picked Ansible for this because it scales down well to very simple use cases. There isn’t a whole lot of setup overhead. It helps that it’s just pushing scripts over SSH, rather than using a central configuration server.
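To give a sense of how far it scales down, a minimal playbook sketch (the package list and dotfile path are invented; the inline-inventory trick with a trailing comma is real):

```yaml
# laptop.yml - run with: ansible-playbook -i localhost, -c local laptop.yml
- hosts: localhost
  connection: local
  tasks:
    - name: Install everyday development tools
      package:
        name:
          - git
          - tmux
        state: present
      become: yes

    - name: Put my gitconfig in place
      copy:
        src: dotfiles/gitconfig
        dest: ~/.gitconfig
```

No central server, no agent on the machine; Ansible just pushes each task over SSH (or, as here, runs it locally).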
I found it tremendously valuable to have my development environment config reproducible, because my laptop hard drive went bad and I needed to get up to speed again on short notice. (I also had backups, of course.)