I haven’t had ads on my blog in over a decade. I’ve been meaning to remove the Facebook Page/Twitter widgets too when I get around to my redesign, since I’m pretty much giving both companies free information with them.
There are a lot of implementations that load the widget only once the user wants to use it. They are pretty common in Germany and work by having the button “primed” with one click, which loads and activates the JS and the widget.
You might consider using these or similar social sharing buttons without javascript or tracking.
Roughly thousands of man-hours of manual testing all over the world. It requires a few months and millions of dollars. Details vary depending on the customer.
For the developers, there is continuous integration testing for each pull request, done via x86 simulation and on the target device, plus code reviews for each pull request. Full-system testing is sometimes done manually by developers, but mostly by a special integration team.
I’m working for an automotive supplier so our processes are probably not applicable to you. The contrast might be interesting though. ;)
I work with an Android integrator, and this sounds remarkably similar. I develop tools to help people test the actual Android devices, so I don’t use that specific workflow myself, but for the people who actually work on Android, it’s pretty similar.
Also, stop with the flat and “clean” design. If there’s something your users are supposed to click on, make it look like something you can click on. Make links that look like links, buttons that look like buttons, etc. Even lobsters fails at this, there’s a menu at the top of the page but it doesn’t look anything like a menu, it’s just a horizontal line of gray words.
Also, the names of the words make a user think they might be menu options. Then the user hovers over them to see the link icon appear. There is an extra investigation step there versus links that are obviously links, which is a usability loss. I don’t think the loss is significant, though, given the nature of our community. We’re technologists and explorers. Heck, the whole point of the site was coming to look for good links. :)
Still, feedback as simple as “reduce opacity or add an underline on hover” would go a long way in showing the user there’s an interaction “here”.
Submit a pull request? https://github.com/lobsters/lobsters
Didn’t know that was an option (well, I never looked into that anyways).
I’ll keep it on hand for when I find time to do so, thanks.
The takeaway here is: your job is to find solutions to problems
Nope! It’s not. It’s to make the client happy, both long- & short-term. Here we go again, reaching for the Technical Hammer when we have a People Problem.
Not happy usually, just content enough that he pays you and maybe even does business with you again.
For us, finding solutions to problems is part of making the client happy. That is why we wrote:
Like Captain Spock, we combine the world of logical thinking with the human dimension, which may seem irrational when analyzed through the cold prism of mathematical rationality but has its own logic and meaning. And we need to develop skills in both areas because, ultimately, we are humans working for other humans — code is just our tool.
Kingdomino is a really great game. It has an amazing depth for the short play time (15 min for experienced players). It is easy to learn, so it is fun for casual players as well.
My english blog with mostly technical content: http://beza1e1.tuxen.de/blog_en.html
My german blog with less technical content: http://beza1e1.tuxen.de/blog_de.html
Less than one post per month lately.
As someone who never used Rust I want to ask: does the section about crates imply that all third-party libraries are recompiled every time you rebuild the project?
Good question! They are not; dependencies are only built on the first compilation, and they are cached in subsequent compilations unless you explicitly clean the cache.
I would assume dependencies are still parsed and type checked though? Or is anything cached there in a similar way to precompiled headers in C++?
A Rust library includes the actual compiled functions like you’d expect, but it also contains a serialized copy of the compiler’s metadata about that library, giving function prototypes and data structure layouts and generics and so forth. That way, Rust can provide all the benefits of precompiled headers without the hassle of having to write things twice.
Of course, the downside is that Rust’s ABI effectively depends on accidental details of the compiler’s internal data structures and serialization system, which is why Rust is not getting a stable ABI any time soon.
Rust has a proper module system, so as far as I know it doesn’t need hacks like that. The price for this awesomeness is that the module system is a bit awkward/different when you’re starting out.
Ok, then I can’t see why the article needs to mention it. Perhaps I should try it myself rather than just read about its type system.
It made me think it suffers from the same problem as MLton.
I should’ve been more clear. Rust will not recompile third-party crates most of the time. It will if you run cargo clean, if you change compile options (e.g., activate or deactivate LTO), or if you upgrade the compiler, but during regular development, it won’t happen too much. However, there is a build for cargo check, and a build for cargo test, and yet another build for cargo build, so you might end up still compiling your project three times.
I mentioned keeping crates under control, because it takes our C.I. system at work ~20 minutes to build one of my projects. About 5 minutes is spent building the project a first time to run the unit tests, then another 10 minutes to compile the release build; the other 5 minutes is spent fetching, building, and uploading a Docker image for the application. The C.I. always starts from a clean slate, so I always pay the compilation price, and it slows me down if I test a container in a staging environment, realize there’s a bug, fix the bug, and repeat.
One way to make sure that your build doesn’t take longer than needed is to be selective in your choice of third-party crates (I have found that the quality of crates varies a lot) and to make sure that each crate pays for itself. serde and rayon are two great libraries that I’m happy to include in my project; on the other hand, env_logger brings in a few transitive libraries for coloring the log it generates. However, neither journalctl nor docker container logs shows colors, so I am paying a cost without getting any benefit.
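As a sketch of trimming such a dependency: Cargo lets you opt out of a crate’s default features, and for env_logger the coloring support is pulled in via default features (the version number here is illustrative; check the crate’s documented feature list before relying on this):

```toml
# Cargo.toml: pull in env_logger without its default features,
# dropping the terminal-coloring machinery that journalctl and
# `docker container logs` never display anyway.
[dependencies]
env_logger = { version = "0.9", default-features = false }
```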
Compiling all of the code, including dependencies, can make some types of optimizations and inlining possible, though.
Definitely; this is why MLton does it: it’s a whole-program optimizing compiler. The compilation-speed tradeoff is so severe that its users usually resort to another SML implementation for day-to-day development and debugging and only use MLton for release builds. If we can figure out how to make whole-program optimization detect which already-compiled bits can be reused between builds, that may make the idea more viable.
In the last discussion, I argued for a multi-stage process that improves developer productivity, especially keeping the mind flowing. The final result is as optimized as possible, but with no wait times: you always have something to use.
Exactly. I think developing with something like smlnj, then compiling the final result with mlton is a relatively good workflow. Testing individual functions is faster with Common Lisp and SLIME, and testing entire programs is faster with Go, though.
Interesting you mentioned that; Chris Cannam has a build setup for this workflow: https://bitbucket.org/cannam/sml-buildscripts/
KeePass has clients that work on the three operating systems in question, and I’ve had good luck using Syncthing to share the password file between computers, but the encryption of the database means that any good sync utility can work with it.
I’ve used KeePassX together with Syncthing on multiple Ubuntu and Android devices for two years now. By now I have three duplicate conflict files, which I keep around because I have no idea what the difference between them is. Once I had to retrieve a password from such a conflict file because it was missing from the main one.
Not perfect, but works.
Duclare, using ssh instead of SyncThing would certainly work since the database is just a file. I prefer SyncThing because of convenience.
Duclare, using ssh instead of SyncThing would certainly work since the database is just a file.
Ideally it’d be automated and integrated into the password manager, though. Keepass2android does support it, but it does not support passwordless login, and I don’t recall it ever showing me the server’s fingerprint and asking if that’s OK. So it’s automatically logging in with a password to a host run by who knows whom. Terribly insecure.
I had the same situation. 3 conflict files and merging is a pain. I’ve switched to Pass instead now.
I’ve been using KeePass for a few years now too. I tried other password managers in the meantime, but I was never quite satisfied, not even with pass, though that one was just straight-up annoying.
I’ve had a few conflicts over the years, but usually Nextcloud is rather good at avoiding conflicts here, and KPXC handles them very well. I think Syncthing might cause more problems, as someone else noted, since nodes might take a while to sync up.
In industry, you can optimize throughput over latency because you produce the same thing over and over again. But in software development, you usually develop something new; if the software you need already exists, you just use it. You need an agile process because you develop something new, and you cannot plan everything ahead of time. Some issues are discovered along the road. Because of this, I don’t think the latency-versus-throughput trade-off is really relevant here.
And yet, we do reinvent the wheel very often in software development. Sure, nobody writes the same program a million times but there are plenty of programmers who pump out CRUD web apps.
But those CRUD apps are customized, and the customer paying for them may change the requirements. “Develop something new” doesn’t have to mean “develop something revolutionary” or even “develop something novel”, it just means something that doesn’t already exist.
To the extent that most CRUD apps share certain design characteristics, that’s why we’ve got LEGO programming or whatever people are calling it now. But even if you use a bunch of off-the-shelf components the customer can make decisions that result in the need to put the pieces together somewhat differently.
The big question is if they actually need to be saved.
A year ago I would have said no. Feed readers may not be as popular as Facebook, Twitter, etc. but sites have feeds and I can use them.
For the most part this is still true. Luckily lots of sites are built on WordPress, Drupal, etc., and those come with feeds out of the box. Sometimes the author may not even know they provide a feed for me.
However, lately I have the feeling this is in decline. It seems a wix.com blog (yet another DIY website UI) does not provide a feed by default. Some WordPress blogs lack the auto-discovery HTML header for the feed. These are signs that supporting RSS/Atom is not that important to content producers anymore.
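For reference, the auto-discovery header in question is a single link element in the page head; feed readers use it to find the feed without the user hunting for a URL (the title and the feed path here are placeholders):

```html
<!-- In the page's <head>: advertise the feed to readers and browsers. -->
<link rel="alternate" type="application/atom+xml"
      title="Example blog feed" href="/feed.atom">
<!-- For an RSS feed, use type="application/rss+xml" instead. -->
```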
RSS was a great concept (and appropriate for its time), but it was designed by people who didn’t comprehend XML namespaces, instead forcing implementations (both generators and readers) to escape XML and/or HTML tags, which requires multiple passes for generating and parsing feeds, with an intermediate encoding/decoding step (Really Simple?). They purportedly addressed this in RSS 2.0, but if you have a look at their RSS 2.0 example, they still got it wrong, persisting a 1990’s understanding of the web. Although I still use it, I shake my head in disappointment every time I see RSS source. RSS 2.0 should really have been based on something that could be validated, such as XHTML.
At this point, it is probably way too late for a comeback, as:
You could read the above points as things that RSS should be able to overcome. If RSS were indeed to make a comeback, I would hope that in a new “RSS 3.0” incarnation it would satisfy the following criteria:
I’ll admit, I do not like JSON one bit because it is antithetical to several, if not all, of the above criteria. However, since a JSON alternative is desired, I would recommend that it be directly based on an XML/HTML version that does satisfy the above criteria. Then a simple XSL (read “standardized”) stylesheet could be employed to generate the equivalent JSON version, satisfying both worlds.
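For contrast with the escaping problem described above, here is a minimal sketch of how Atom carries markup as real, namespaced XML via `content type="xhtml"`, with no encode/decode pass (the title, id, and date are placeholders; element names per RFC 4287):

```xml
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Example post</title>
  <id>https://example.com/posts/1</id>
  <updated>2018-01-01T00:00:00Z</updated>
  <content type="xhtml">
    <!-- The payload is a single XHTML div in its own namespace. -->
    <div xmlns="http://www.w3.org/1999/xhtml">
      <p>Markup stays <em>real XML</em>; no escaping pass is needed.</p>
    </div>
  </content>
</entry>
```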
they still got it wrong, persisting a 1990’s understanding of the web. Although I still use it, I shake my head in disappointment every time I see RSS source. RSS 2.0 should really have been based on something that could be validated, such as XHTML.
Atom does fulfill your second list’s criteria, is often used today in place of RSS, and can even be validated. My article even says that if in doubt, use Atom.
Social media platforms like Twitter are commonly used as a substitute and have a large hegemony over content.
The entire point of the site is to set something against this before it is too late. Today, there still are many sites providing feeds, and I do hope that this article will sustain that. To be clear, I don’t advocate leaving social media. All I ask in that article is to provide a feed in addition to your social media presence.
Browsers have given up on RSS in favor of their own peculiar readers.
I’ve actually never used Firefox’s RSS/Atom support, and I don’t believe that browsers are the correct target for RSS/Atom feeds. There are feed reader programs that deal specifically with feeds, and they are still being maintained, so I don’t see browsers removing their feed support as problematic.
Google, Microsoft, Yandex and whatever Yahoo is now are pushing for an entirely different system
You listed yourself why it isn’t a real alternative.
Maybe my 6-year-old and I will finally get around to trying the ~50-year-old Lego train my dad himself played with.
I also have to build a nice box for my Raspberry Pi RFID player, but inspiration has been lacking so far.
I have been thinking similar thoughts since I read this article.
Why would you use WebAssembly? There are various similar technologies. For example, the JVM certainly has a more mature ecosystem.
Some have compared WebAssembly to Java applets; in some ways, they’re very right, but in some ways, they’re very wrong. Eventually I’ll write a post about the ways they’re wrong.
I’m waiting for that post.
From following the development of the wasm standard: one of its greatest strengths over existing virtual-machine bytecodes is its focus on compressibility and fast parsing, type checking, and JIT translation. These design constraints weren’t really primary or even secondary concerns in the development of those existing bytecodes, which started out more as an intermediate step inside the black box of a compiler.
There is one aspect where Ed maybe has an actual advantage: It keeps you in write mode and discourages editing. I will consider using Ed for journalling where I currently use vim.
There is one aspect where Ed maybe has an actual advantage: It keeps you in write mode and discourages editing.
cat > $filename will do that, too, but with ed I can switch back to command mode, save what I’ve done so far, and then continue by returning to append mode.
Though I could probably do the same with cat >> $filename, but I’m afraid I’d forget that I need to type > twice to append and end up overwriting the file. :)
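A minimal sketch of the difference between the two redirections (the file name is arbitrary): a single `>` truncates the file on open, while `>>` opens it for appending, so only the second form preserves earlier sessions:

```shell
# First session: '>' creates the file, or truncates it if it exists.
cat > draft.txt <<'EOF'
first session
EOF

# Later session: '>>' appends instead of overwriting.
cat >> draft.txt <<'EOF'
second session
EOF

cat draft.txt
```

Running this prints both lines; replacing the second `>>` with `>` would leave only “second session”, which is exactly the mistake being worried about above.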
This is why I prefer writing drafts in a chat with myself. Also because of the enforced pacing: the rhythm of hitting Enter when a line is done, and then leaving it as it is.
I can agree about the reliability of Java, but is there a good framework? Having used Java/Play and Python/Django, I would say that Django comes with more and better-integrated batteries. For a startup, development speed is probably more important. Later you can add microservices in Java if you like.
I’d really like to write more D. In my particular case, I couldn’t have a GC in play (self-imposed memory constraints), but there’s a lot about it that’s attractive to me. I don’t have any desire to choose Go over it; the power of the language is considerably greater, from my limited experience.
That said, Go does have a big package community behind it, like Rust.
Stick a @nogc on your main function and you have a compile-time guarantee that no GC allocations will happen in your program.
Neat - I didn’t realize this. Too late now for the current project, but good to know for the future. I’m particularly interested in its C++ FFI story. There’s a couple of specialized C++ libraries I’d like to use without having to write flat-C style wrappers just to call them sanely from Rust.
Thanks for that!
It’s always the same arguments in D discussions:
It’s at least a pattern that’s solvable. Someone just has to attempt to compile the whole standard library with the no-GC option. Then list the breakage. Then fix it in order of priority for the kinds of apps that would want a no-GC option. Then write this up on a web page. Then everyone shares it in threads where the pattern shows up. Finally, the pattern dies after 10-20 years of network effects.
People are doing that. Well, except for the “write this up into a web page” part. I guess you are thinking of web pages like http://www.arewewebyet.org/
Yeah, some way for people to know that they’re doing it, and with what level of progress. Good to know they’re doing it. That you’re the first to tell me illustrates how a page like that would be useful in these conversations. People in the D camp could just drop a link and be done with it.
I find D has a lot of packages too. Not an explosive smörgåsbord, but sufficient for my purposes.
The standard library by itself is fairly rich already.
I guess the question would be whether unsafe or smart pointers are about as easy to use in D as C or C++. If so, the GC might not be a problem. In some languages, GC is really hard to avoid.
Maybe @JordiGH, who uses D, can tell us.
I write D daily. Unsafe pointers work the same as in C or C++. I wrote a GC-less C++-like smart pointer library for D. It’s basically std::unique_ptr and std::shared_ptr, but no std::weak_ptr because 1) I haven’t needed it and 2) one can, if needed, rely on the GC to break cycles (although I don’t know how easy that would be to do currently in practice).
D is a better C++, so pointers are easier to use than in C++. As I understand it, the main problem is that the standard library used to use the GC freely, making the GC hard to avoid if you used the standard library. I understand there is an ongoing effort to clean this up, but I don’t know the current status.
It depends on which part of the standard library. These days, the parts most often used have functions that don’t allocate. In any case it’s easy to avoid by using @nogc.
Nice article. How do you feel about the size of the language? One thing that keeps me from looking at Rust seriously is the feeling that it’s more of a C++ replacement (kitchen & sink) than a C replacement.
The Option example feels a bit dropped too early: you started by showing an example that fails, then jumped to a different code snippet to show nicer compiler error messages, without ever going back and showing how the error path is handled with the Option type.
You should also add Ada to the list of your languages to explore, you will be surprised how many of the things you found nice or interesting were already done in the past (nice compiler errors, infinite loop semantics, very rich type system, high level language yet with full bare metal control).
Thank you for commenting! I agree that Rust’s standard library feels as big as C++’s, but I haven’t been too bothered by the size of either one. To quote Bjarne Stroustrup’s “Foundations of C++” paper, “C++ implementations obey the zero-overhead principle: What you don’t use, you don’t pay for [BS94]. And further: What you do use, you couldn’t hand code any better.” I haven’t personally noticed any drawbacks of having a larger standard library (aside from perhaps binary size constraints, but you would probably end up including a similar amount of code anyway, just code that you wrote yourself), and in addition to the performance of standards-grade implementations of common data structures, my take on it is that having a standardized interface to them improves readability quite a bit - when you go off to look through a codebase, the semantics of something like a hashmap shouldn’t be surprising. It’s a minor draw, but I feel like I have to learn a new hash map interface whenever I go off to grok a new C codebase.
I’ll definitely take a look at Ada, seems like a very promising language. Do you have any recommendations for books? I think my friend has a copy of John Barnes’ Programming in Ada 2012 I can borrow, but I’m wondering if there’s anything else worth reading.
Also, thank you for pointing out the issue with the Option example, I’ll make an edit to the post at some point today.
It’s funny how perspectives change; to C and JavaScript people, we have a huge standard library, but to Python, Ruby, Java, and Go people, our standard library is minuscule.
I remember when someone in the D community proposed to include a basic web server in the standard library. Paraphrased:
“Hell no, are you crazy? A web server is a huge complex thing.”
“Why not? Python has one and it is useful.”
What you don’t use, you don’t pay for [BS94]
That is true; however, you have little impact on what others use. Those features will leak into your code via libraries or teammates using features you might not want. Additionally, when speaking about kitchen & sink, I didn’t only mean the standard library; the language itself is much larger than C.
I think my friend has a copy of John Barnes’ Programming in Ada 2012 I can borrow, but I’m wondering if there’s anything else worth reading.
The last time I did anything related to Ada was around 2012. I recall the Barnes books were well regarded, but I don’t know if that has changed in any significant way.
For casual reading, the Ada Gems from AdaCore are fun and informative reads.
I’ll definitely take a look at Ada, seems like a very promising language. Do you have any recommendations for books? I think my friend has a copy of John Barnes’ Programming in Ada 2012 I can borrow, but I’m wondering if there’s anything else worth reading.
I recommend Building High Integrity Applications in SPARK. It covers enough Ada to get you into the meat of SPARK (the compile time proving part of Ada) and goes through a lot of safety features that will look familiar after looking at Rust. I wrote an article converting one of the examples to ATS in Capturing Program Invariants in ATS. You’ll probably find yourself thinking “How can I do that in Rust” as you read the book.
I made my own custom one in 241 lines of Python + 162 lines of templates of HTML and Atom.
It does not have some features most generators have, like lazy compilation, but it is still plenty fast for me.
It has some features others usually don’t, like per-page summaries, explicit inclusion of pages into blogs, TweetThis blocks, and non-JavaScript social media buttons.
The older parts of my website are still running on Ikiwiki, but I really like the simplicity of knowing and understanding all the code.