Interesting that this was written by a PowerDNS person; they are a very smart and helpful group of people, and the PowerDNS codebase is a good read.
Also, running dig on their endpoint returns a single A record, which means they’re terminating all requests on a single server. This is probably not the most highly available setup.
They could be using anycast or managing the failover in DNS.
And then we rewrote all this code in Python and were much happier.
(Not as happy as I’ll be when all the C is rewritten in Rust, but as long as we all port a little code every day, we’ll be there before too long!)
exarkun used to give me some great advice on IRC back when I was neck deep in Twisted/pyOpenSSL/mem_bio/session resumption. pyOpenSSL changed to use cffi or something, right?
Honestly I’d cut them some slack. It’s easy to assume that a company their size just has all the money to pay for testing and qa and expertise.
But the truth is it’s probably just a handful of engineers who are passionate about playing well in the Linux ecosystem who will probably fix this and not make the mistake again.
I think if you didn’t know that forcibly removing whatever was at /bin/sh was bad, then no packaging rules can save you.
Then again, I’ve rarely seen anyone use their editor of choice well. I’ve lost count of how many times I’ve watched someone open a file in vim, realise it’s not the one they want, close vim, open another file, close it again… aaarrrgh.
I do this a lot, because I prefer browsing files in the shell. I make pretty extensive use of a lot of other vim features though. When did you become the arbiter of how “well” I’m using my computer?
Closing vim seems odd to me. Why wouldn’t one instead open the new file without closing vim? Maybe it’s a cultural thing? I don’t think anyone would do that in Emacs.
I do the thing you quoted as well, but that is because vim is often my editor of convenience on a machine rather than my editor of choice, which is true for many usages I see of vim.
Because the shell lets me change directories, list files with globs, run find, has better tab-completion (bash, anyway), etc, etc. I might not remember the exact name of the file, etc. Finding files in the shell is something I do all day, so I’m fast at it. Best tool for the job and all that.
(Yes I can do all that with ! in vi/vim/whatever, but there’s a cognitive burden since that’s not how I “normally” run those commands. Rather than do it, mess it up because I forgot ! in front or whatever, do it again, etc, I can just do it how I know it’ll work the first time.)
This is exactly why I struggle with editors like Emacs. My workflow is definitely oriented around the shell. The editor is just another tool among many. I want to use it just like I use all my other tools. I can’t get on with the Emacs workflow, where the editor is some special place that stays open. I open and close my editor many, many times every day. To my mind, keeping your editor open is the strange thing!
It’s rather simple actually: the relationship between the editor and the shell is turned on its head – from within the editor you open a shell (e.g. eshell, ansi-term, shell, …) and use it for as long as you need it, just like one would use vi from a shell.
You can compare this to someone who claims to log out of their X session every time they close a terminal or a shell in a multiplexer. That would seem weird too.
I know you can launch a shell from within your editor. I just never really understood why you would want to do that.
Obviously some people do like to do that. My point is just that different ways of using a computer make intuitive sense to different people. I don’t think you can justify calling one way wrong just because it seems odd to you.
I know you can launch a shell from within your editor. I just never really understood why you would want to do that.
I do it because it allows me to use my editor’s features to:
a) edit commands
b) manipulate the output of commands in another buffer (and/or use shell pipelines to prep the output buffer)
c) not have to context switch to a different window, shut down the editor, suspend the editor, or otherwise change how I interact with the currently focused window.
That makes a lot of sense. I guess I have been misleading in using the word “shell” when I should really have said “terminal emulator”. I often fire off shell commands from within my editor, for just the same reasons as you, but I don’t run an interactive shell. I like M-! but I don’t like eshell, does that make sense?
Having pondered this all a bit more, I think it comes down to what you consider to be a place. I’m pretty certain I read about places versus tools here on lobsters but I can’t find it now. These are probably tendencies rather than absolutes, but I think there are at least a couple of different ways of thinking about interaction with a computer. Some people think of applications as places: you start up a number of applications, they persist for the length of your computing session, and you switch between them for different tasks (maybe a text editor, a web browser and an e-mail client, or something). Alternatively, applications are just tools that you pick up and drop as you need them. For me, a terminal, i.e. an interactive shell session, is a place. It is the only long-lived application on my desktop, everything else is ephemeral: I start it to accomplish some task then immediately kill it.
I enjoy seeing articles such as this where people make their editor of choice work for whatever language they need. I wish I wasn’t so tied down and spoiled by msvs and autocomplete. I’m curious why autocomplete is pretty much either love/hate for most people too.
EDIT: Also under the “About” section I can’t reach the link for Indigo: https://chapters.indigo.ca/ - IP Address could not be found/ DNS_PROBE_FINISHED_NXDOMAIN.
Omnisharp for emacs autocomplete works very well. I’m not saying vscode isn’t better but as someone who uses emacs every day, the c# experience is more than passable.
Hmm. Does https://indigo.ca work for you?
I love a well-written readme that attempts to explain what the alternatives are and why I’d use their tool over another. It’s that kind of experience in a readme that will make me bookmark a project and come back to it when I find a good excuse to use it.
It’s funny how for many years people hated on checked exceptions as the worst mistake ever – and now they’re back as Result types.
I just wish this realization had come sooner, as now too many languages don’t have support for either.
I think checked-exceptions as implemented in Java had a number of flaws that Rust’s corrects:
- Java also has unchecked exceptions like NullPointerException, contributing to a feeling that checked exceptions don’t add a lot of value.
- UnsupportedEncodingException on "utf-8": the Java spec says UTF-8 must be available, but you have to write the handful of lines of code to catch UnsupportedEncodingException anyways! In Rust the equivalent situation is handled with .unwrap() or .expect("..."), much less verbose.
- Rust’s ? operator takes care of propagating a Result and wrapping it into the correct one. In Java the convention seems to be declaring that every function raises three different exception types, adding verbosity at every call definition.

I agree. It just saddens me that Kotlin makes all exceptions unchecked, even those coming from Java, instead of automatically wrapping the Java code in Result<T, E>.
There’s a lot of things Rust does right that no JVM language currently does well.
There’s a lot of things Rust does right that no JVM language currently does well.
Such as? Scala is very Rust-like; it doesn’t do linear typing but that wouldn’t help you much on the JVM anyway.
At least Rust has unwrap, for when you know that errors should not happen if the code is correct, or for initial rough code. Java’s checked exceptions are frustrating just because there’s no short syntax for re-raising as an unchecked exception (and preserving the stack trace; some IDEs even generate code that prints the stack trace to stderr in such unwrap-like handlers).
I hated checked exceptions right up until I tried to write some software that had to be more reliable than a http worker that got restarted every request.
Turns out that when I write to a file I really want to know exactly what can go wrong.
My only experience with checked exceptions was java, and that sucked… But inferred+merged checked exceptions could be cool. Any languages have that?
Yes, OCaml has it with polymorphic variants + the result monad. The one current downside is the error messages can be less than ideal.
A few blog posts describing it:
http://functional-orbitz.blogspot.se/2013/01/introduction-to-resultt-vs-exceptions.html
http://functional-orbitz.blogspot.se/2013/01/experiences-using-resultt-vs-exceptions.html
The difference is that results are plain old values that fit in the normal type system. You can call a higher-order function with a function that returns a result and it will just work. Checked exceptions were indeed a terrible mistake, not because they force you to handle errors, but because they were a secondary type system that didn’t interoperate properly with the primary type system.
(People who are proposing effect systems should take note)
Sure, but the solution would have been to wrap checked exceptions in a Result type for interop, not, like kotlin does today, to just swallow all of them.
the solution would have been to wrap checked exceptions in a Result type for interop
There are a couple of problems with that - performing a JVM catch at every interop boundary is inherently inefficient, and exceptions don’t quite have the nice monadic composition you’d expect from results.
I spent a lot of today playing with this and it’s very nice. It’s a very cool use of Go’s plugin infrastructure.
I love being able to connect my go snippets with graph rendering!
Related to this, I know someone who got fired because of innocent activity on LinkedIn.
I don’t remember his exact dates, but he put something like “June 2009 – November 2010” on LinkedIn and “June 2009 – February 2011” on his CV. He started a new job and got caught in “the lie”, but it wasn’t. His last day of work was in November, but his severance period continued for three months.
He did nothing wrong; but he was fired anyway, because his new company didn’t want to be stuck with someone who’d lost a job in the past.
It’s an evil world where workers have no power and surveillance of us by employerfolk is ubiquitous and easy. Unless you get into the 0.1% who becomes a YouTube celebrity or a bestselling author, you shouldn’t want any public reputation; it can only hurt you. I realize that there’s some apparent hypocrisy in me, of all people, saying this; but hear it from someone who’s suffered.
I almost lost a job offer because of something like that. I worked for a university group that didn’t really have a “home”. For a while my paycheck came from the associated research foundation, later it came from the University itself. My team/boss/work never changed. I put it as one job on a resume and was called out on it. I had to get a former boss to take a call on vacation to sort it out.
Glad I didn’t take that job. When a company gives you a peek into how they do things: believe them.
I’m curious about your opinion of _why? He took the approach of being public, without sharing any details. His success ultimately led to a deanonymization, reversing everything.
I guess the question is: how do you actively participate in conferences, and other valuable learning opportunities without building a public reputation? And, of course is there a way to go back to that? Seems highly unlikely.
I couldn’t tell you to the month when I started/stopped most of my past jobs. Some of them I couldn’t even tell you the year.
I am in this situation right now. I have a number of months of severance, and am still on the payroll, so that I have some time to pack up and leave before I’m fully terminated and my visa invalidated.
Had no idea this could be used against me, so thanks!
Best way to play this is to keep your story, whatever it is, consistent. The thing to remember about HR Mooks is that they can’t tell who’s lying and who’s not, and they assume most people are lying (because, well, a lot of people lie). So, your best bet, if you’re on severance or “gardening leave” is to consistently treat the severance as time you were employed.
Definitely agree there, HR stuff can really be eye-opening when you realise they’re there for the company rather than the human. I’m not sure how the wording/employment contracts for severance work in the US, but in the UK if you are placed on gardening leave, you are still employed and not able to start at a new place of employment. Which can be great or very stressful depending on your circumstances!
What sort of new problems does increasing Ruby’s performance 3x open up for Ruby? As a non-Rubyist, Ruby is so slow that a 3x improvement is still too slow, to me.
That’s fair, but Ruby’s performance is on par with Python’s (or better in at least some benchmarks, actually), which is plenty good enough for many, many people. So for those using Ruby, that’d still be an incredibly welcome improvement.
That said, I’m curious when you last tried Ruby. It’s definitely still an order of magnitude slower than most compiled languages, no question, but the new GC that landed around 2.2, and the bytecode-based VM that landed around 1.9 or so, have made a huge, very noticeable difference in places I’ve used Ruby. It’s been enough that I’ve kept reaching for it over e.g. Go in quite a few instances.
Ok, but that’s not really answering my question. Is this performance push just to get some benchmark numbers higher or is it opening new opportunities for Ruby? Certainly people aren’t going to be replacing their big data jobs with Ruby with a meager 3x improvement. People aren’t going to be writing video games in pure Ruby or doing high frequency trading.
Is this performance push just to get some benchmark numbers higher or is it opening new opportunities for Ruby?
Performance concerns aren’t even at the top of my list of “why I wouldn’t write new software in Ruby”; I’m more concerned with long-term maintainability due to the dynamicity making static analysis more or less a lost cause.
In the interview he talks a bit about the possibility of gathering type data from running code, but without breaking existing code I’m having a hard time imagining a solution that would be reliable.
I’m also skeptical. Ruby users, for a while, seemed to revel in meta-programming and run-time code generation, which greatly hinders static analysis.
Well, I think that a 3x improvement would at least help push the bar for “Ruby is too slow for me, I’ll rewrite it in X-compile-language” further away.
That’s what I was trying to convey also. Ruby’s strength is that it’s a very expressive language, but its speed at one point made it an effective no-go for classes of work. (Hell, I remember writing a site in Django instead of Rails for performance reasons.) The speed improvements so far have really just kept it relevant, to be honest, by getting it back to and somewhat exceeding Python; a further 3x would mean I could start looking at it over things like C# for what I might call mildly performance demanding stuff (e.g. maybe daemons with nontrivial but not excessive workloads).
The upcoming support for guilds is much more exciting in terms of addressing new problems than single-threaded execution improvements, IMHO.
A less humble opinion of mine is that guilds are too little, too late, and still too primitive to have a transformative effect on Ruby and its ecosystem. I think too many devs (like me) are now accustomed to more advanced/higher-level concurrency abstractions, and guilds feel like a slightly safer way of revisiting the mid-90’s.
I think it just helps people who have already invested in Ruby justify their continued investment. I wouldn’t expect a ton of new users at this point unless they develop another “rails”.
I don’t know the answer to your question, but with Linux perf tools you can go all the way from a node express handler through the kernel stack down to pretty low-level network stuff like an interrupt generated from a packet hitting a NIC.
There are various ORMs/mappers, but most advice you’ll find (and what I’ll say, also) is to not use them. Wrap a database connection in a struct that has methods to do what you want. Something I’ve found conditionally useful are generators to write the function to populate a struct from a database row.
The community will also say to not use web frameworks, and again I’d agree. The stdlib http package provides a stable foundation for what you want to do. You’ll have more luck looking for packages that do what specific thing you want, rather than thinking in terms of frameworks.
All that said, some coworkers like echo but I can’t for the life of me understand why. Any web-oriented package shouldn’t need to give a shit if it’s hooked up to a tty or not.
The problem is that when you do search and filtering on various conditions (like in a shop) you don’t want to resort to sql string stitching, I wasn’t able to find anything nice when I looked at the docs, for example in gorm:
db.Where("name = ? AND age >= ?", "jinzhu", "22") - I expect that when you have 20 conditions with different operators and data types you will end up having a bad time.
I’m at a small scale, but I just write the query entirely and/or do string concatenation. I’m having a fine time though. I just use sqlx.
sqlx
https://godoc.org/github.com/jmoiron/sqlx#In is pretty useful when you need it.
I admit I don’t understand how to use that from the doc string. Could you show a simple example or elaborate?
It’s useful for queries using the IN keyword, like this:
query, params, errIn := sqlx.In("SELECT column1 FROM table WHERE something IN (?);", some_slice)
// if errIn != nil...
// for drivers with numbered placeholders (e.g. Postgres), rebind first:
// query = db.Rebind(query)
rows, errQuery := db.Query(query, params...)
We built something in house that is very similar in spirit to sqlx but adds a bunch of helpers.
https://github.com/Masterminds/squirrel (which kallax uses) seems somewhat akin to the SQLAlchemy expression API. (And yeah, to me, that’s a great part of SQLAlchemy; I’ve hardly used its ORM in comparison.)
I went from Python + heavy use of the SQLAlchemy expression API to Go and got by OK with just stdlib, but part of that was that the work in Go had far less complicated queries most of the time. So, not the best comparison maybe.
I support the advice to not use mappers like ORMs, but I also agree with what you said. The middle ground seems to be query builders.
If you use Postgres as your DBMS by any chance, I advise you to make sure that the query abstraction layer of your choice doesn’t do query parameterization by string interpolation but utilizes PQexecParams underneath instead.
I haven’t used it but I think Dropbox built a library for that. https://godoc.org/github.com/dropbox/godropbox/database/sqlbuilder
Thanks for this. I’ve been doing a lot of gopherjs lately, and on the bright side I like that you get the same workflow you get from Go, combined with all of the benefits of edit-refresh when you want them. It’s one way of bringing channels and goroutines to the browser.
bad performance on low-end devices (and I suspect higher battery consumption, but can’t really prove this one)
I’d actually argue the opposite here. With a traditional web app you’re sending HTML across, and you’re doing a lot of parsing each time a page loads. Parsing HTML is an expensive operation. With SPA style apps, you load the page once and pass JSON around containing just the data that needs to be loaded. So, after initial load, you should expect to get better resource utilization.
I’m not sure that parsing HTML is as expensive as parsing (and compiling) Javascript though. Of course you’d pay a high price at each request of an e-commerce web app, but if you want to read an article on some blog, it is faster when you don’t have to load all of Medium’s JS app.
Browser vendors are trying really hard to speed up the startup time of their VMs, but the consensus is that to get to Interactive fast, you should ship less JS, or at least less JS upfront.
Obligatory pointer to Addy Osmani’s research on the topic https://medium.com/@addyosmani
Parsing XML is notoriously expensive. In fact, it’s one of the rationales behind Google’s protocol buffers. Furthermore, even if the cost of parsing XML and JSON was comparable, you’d still be sending a lot more XML if you’re sending a whole page. Then that XML has to be rendered in the DOM, which is another extremely expensive operation.
To sum up, only pulling the data you actually need, and being able to repaint just the elements that need repainting is much faster than sending the whole page over and repainting it on each update.
The problem is that incremental rendering is often paired with a CPU intensive event listener and digest loops and other crud causing massive amounts of Javascript for every click and scroll.
That’s not an inherent problem with SPAs though, that’s just a matter of having a good architecture. My team has been building complex apps using this approach for a few years now, and it results in a much smoother user experience than anything we’ve done with traditional server-side rendering.
This seems like the exact kind of thing we can empirically verify. Do you know of any good comparisons?
I haven’t seen any serious comparisons of the approaches. It does seem like you could come up with some tests to compare different operations like rendering large lists, etc.
I’m not so sure, a modern HTML parser is fairly efficient. On top of that, a lot of stuff is cached in a modern browser.
My blog usually transfers in under 3 KB if you haven’t cached the page, around 800 B otherwise (which includes 800 bytes from Isso). My website uses less than 100KB, most of which is highly compressed pictures.
Most visitors only view one page and leave, so any SPA would have to match load performance with the 3 KB of HTML + CSS, or the 4 KB of HTML + CSS plus 100 KB of images…
A similar comparison would be required for any traditional server-side rendering application; if you want to do it in an SPA, it should first match (or at least come close to) the performance of the current server for the typical end user.
SPAs are probably worth thinking about if the user spends more than a dozen pages on your website during a single visit and even then it could be argued that with proper caching and not bloating the pages, the caching would make up a lot of performance gains.
Lastly, non-SPA websites have working hyperlink behaviour.
I think that if your site primarily has static content, then server side approach makes the most sense. Serving documents is what it was designed for after all. However, if you’re making an app, something like Slack or Gmail, then you have a lot of content that will be loaded dynamically in response to user actions. Reloading the whole page to accommodate that isn’t a practical approach in my opinion.
Also, note that you can have working hyperlink behavior just fine with SPAs. The server loads the page, and then you do routing client-side.
Also, note that you can have working hyperlink behavior just fine with SPAs. The server loads the page, and then you do routing client-side.
That’s how it would work in theory; however, 9/10 SPAs I meet don’t do this. The URL of the page is always the same, reloading loses any progress, and I can’t even open links in new tabs at all; or even if I can, it just opens the app on whatever default page it has.
Even with user content being loaded dynamically, I would consider writing a server app unless there will be, as mentioned, a performance impact for the typical user.
That’s a problem with the specific apps, and not with the SPA approach in general though. Moving this logic to the server doesn’t obviate the need for setting up sane routing.
I’ve sadly seen SPA done correctly only rarely, it’s the exception rather than the rule in my experience.
So I’m not convinced it would be worth it, also again, I’m merely suggesting that if you write an SPA, it should be matching a server-side app’s performance for typical use cases.
I agree that SPAs need to be written properly, but that’s just as true for traditional apps. Perhaps what you’re seeing is that people have a lot more experience writing traditional apps, and thus better results are more common. However, there’s absolutely nothing inherent about SPAs that prevents them from being performant.
I’ve certainly found that from development perspective it’s much easier to write and maintain complex UIs using the SPA style as opposed to server-side rendering. So, I definitely think it’s worth it in the long run.
I’ve built enough apps both ways now to feel confident weighing in.
If you build a SPA, your best case first impression suffers (parsing stutters etc), but complex client side interaction becomes easy (and you can make it look fast because you know which parts of the page might change).
I no longer like that tradeoff much; I find too few sites really need the rich interactivity (simple interaction is better handled with jquery snippets), and it’s easier to make your site fast when there are fewer moving parts.
This might change as the tooling settles down; eg webpack is getting easier to configure right.
The tooling for JS is absolutely crazy in my opinion. There are many different tools you need to juggle, and they continuously change from under you. I work with ClojureScript, and it’s a breath of fresh air in that regard. You have a single tool for managing dependencies, building, testing, minifying, and packaging the app. You also get hot code loading out of the box, so any changes you make in code are reflected on the page without having to reload it. I ran a workshop on building a simple SPA-style app with ClojureScript; it illustrates the development process and the tooling.
I’m a simple person. When someone submits a link to the creator of the linux kernel discussing a critical CPU issue that has a huge effect in my planning for 2018 and the very foundations of “cloud” computing, I click on it.
I really love SQLite, and reading accounts like this is great. BUT, note this is all reads with no inserts/updates/deletes. SQLite’s Achilles heel for being really useful?
Additionally, though this test was focused on read performance, I should mention that SQLite has fantastic write performance as well. By default SQLite uses database-level locking (minimal concurrency), and there is an “out of the box” option to enable WAL mode to get fantastic read concurrency — as shown by this test.
You’d be surprised. Serializing writes on hardware with 10ms latency is pretty disastrous, giving parallel-write databases a huge advantage over SQLite on hard drives. But even consumer solid state drives are more like 30us write latency, over 300 times faster than a conventional hard drive.
Combine with batching writes in transactions and WAL logging and you’ve got a pretty fast little database. Remember, loads of people loved MongoDB’s performance, even though it had a global exclusive write lock until 2013 or something.
People really overestimate the cost of write locking. You need a surprising amount of concurrency to warrant a sophisticated parallel write datastructure. And if you don’t actually need it, the overhead of using a complex structure will probably slow your code down.
Sounds like you might like the “COST” metric…. https://lobste.rs/s/dyo11t/scalability_at_what_cost
Given that they run all of expensify.com on a single (replicated) Bedrock database, that would pass my “really useful” test, at least. :)
The project page itself warns about that. When toying with ideas, I thought about a front-end that sort of acted as a load balancer and cache that basically could feed writes to SQLite at the pace it could take with excesses held in a cache of sorts. It would also serve those from its cache directly. Reads it could just pass onto SQLite.
This may be what they do in the one or two DBs I’ve seen submitted that use SQLite as a backend. I didn’t dig deep into them, though. I just know anything aiming to be a rugged database should consider it, because the level of QA work that’s gone into SQLite is so high most projects will never match it. That’s the kind of building block I like.
Now I’ll read the article to see how they do the reads.
Devil’s advocate. If you are going to give up on easy durability guarantees, you could also try just disabling fsync and letting the kernel do the job you are describing.
I’ve been trying to make posts shorter where possible. That’s twice in days someone’s mentioned something I deleted: the original version mentioned being strongly consistent with a cluster. I deleted it thinking people would realize I wanted to keep the properties that made me select SQLite in the first place. Perhaps it’s worth being explicit there. I’ll note I’m brainstorming way out of my area of expertise: databases are black boxes I never developed myself.
After this unforgettable paper, I’d be more likely to do extra operations or whatever for durability given I don’t like unpredictability or data corruption. It’s why I like SQLite to begin with. It does help that a clean-slate front-end would let me work around such problems with it more true for memory-based… depending on implementation. Again, I’m speculating out of my element a bit since I didn’t build databases. Does your line of thinking still have something that might apply or was it just for a non-durable, front end?
PTRACE_ATTACH sends SIGSTOP to this thread.
http://man7.org/linux/man-pages/man2/ptrace.2.html
I’m not clear on what the benefits of using PTRACE_ATTACH over SIGSTOP are, but I think it’s that you actually get a tracer connection to the thread/process?
I don’t mind when a shell script grows more lines. I do however have a couple rules for when it’s time to move.
For me that still leaves room for things like backup scripts and many other types of automation. Also it does leave room for simple data stream processing.