1. 12

    Mailbox.org for 1€ per month.

    The interface is not as good as Google, but they will not shut down my account if I make the wrong comment on YouTube. Even if they do, there is a support channel.

    1. 3

      I’m also on mailbox.org, with a custom domain. I barely use the web interface, which is OK but not great. Seems reliable, and they appear to know what they’re doing.

      1. 2

        I’ve also been on mailbox.org for almost 3 years now. I had to disable some of the spam filters when it turned out that mailbox blackholed a conference acceptance email(!) that I’d received. Other than that one incident, I’m happy with them.

      1. 18

        I maintained the Wasabi compiler for Fog Creek Software, which was only used to compile FogBugz (and FogBugz-adjacent projects). The purpose was not the same as yours, though.

        1. 12

          Similarly, Facebook developed HipHop just to compile their “one” PHP application.

          1. 4

            What did you guys inside Fog Creek think of all the vitriol Wasabi got online? I recall reading a lot of threads on HN and elsewhere that decried the entire exercise as misguided at best, for example.

            1. 10

              Like many things, I think most of the outrage came from people who don’t read good, and the rest from people who think that because they are reading about a decision today, it was decided today. Contrary to popular belief, Fog Creek didn’t decide one day to write a bug tracker and then put “write custom compiler” at the top of the todo list.

              I think the takeaway was never talk about solving a problem other people may not have.

              1. 4

                I like Joel’s response best:

                What’s more interesting about the reaction is that so many average programmers seem to think that writing a compiler is intrinsically “hard,” or, as Jeff wrote in 2006, “Writing your own language is absolutely beyond the pale.” To me this sounds like the typical cook at Olive Garden who cannot possibly believe that any chef in a restaurant could ever do something other than opening a bag from the freezer and dumping the contents into the deep fryer. Cooking? With ingredients? From scratch! Absolutely beyond the pale! Compilers aren’t some strange magic black art that only godlike programmers are allowed to practice. They’re something you learn how to create in one semester in college and you can knock out a simple one in a few weeks. Whatever you think about whether it’s the right thing or not, if you think that a compiler is too hard to do as a part of your day job, you may want to invest some time in professional development and learning more about your craft.

                As someone who took one semester of compilers in college, and ended up maintaining this compiler for several years, I agree. People create new web frameworks all the time. People create their own DSLs and ORMs. There’s nothing harder or weirder about compilers than making tools at these other layers of the stack, but for some reason “compiler” shuts off the part of some people’s brains that lets them think “oh, this is just a program, it takes input and creates output and is 100% understandable.”

                (I have this same belief-bug, but mine’s around encryption and security.)
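                To make the “it’s just a program” point concrete, here is a throwaway sketch (in JavaScript; purely illustrative, nothing to do with Wasabi): a complete tokenizer, recursive-descent parser, code generator, and stack-machine interpreter for arithmetic expressions.

                ```javascript
                // Toy compiler pipeline: source string -> tokens -> AST -> stack code.
                function tokenize(src) {
                  return src.match(/\d+|[-+*/()]/g) || [];
                }

                function parse(tokens) {
                  let pos = 0;
                  const peek = () => tokens[pos];
                  const next = () => tokens[pos++];
                  function expr() {                       // expr := term (('+'|'-') term)*
                    let node = term();
                    while (peek() === "+" || peek() === "-") node = { op: next(), l: node, r: term() };
                    return node;
                  }
                  function term() {                       // term := atom (('*'|'/') atom)*
                    let node = atom();
                    while (peek() === "*" || peek() === "/") node = { op: next(), l: node, r: atom() };
                    return node;
                  }
                  function atom() {                       // atom := number | '(' expr ')'
                    if (peek() === "(") { next(); const inner = expr(); next(); return inner; }
                    return { num: Number(next()) };
                  }
                  return expr();
                }

                function emit(node, code = []) {          // post-order walk -> RPN instructions
                  if ("num" in node) { code.push(["push", node.num]); return code; }
                  emit(node.l, code);
                  emit(node.r, code);
                  code.push([node.op]);
                  return code;
                }

                function run(code) {                      // the "target machine": a stack
                  const ops = { "+": (a, b) => a + b, "-": (a, b) => a - b,
                                "*": (a, b) => a * b, "/": (a, b) => a / b };
                  const stack = [];
                  for (const [op, arg] of code) {
                    if (op === "push") stack.push(arg);
                    else { const b = stack.pop(), a = stack.pop(); stack.push(ops[op](a, b)); }
                  }
                  return stack.pop();
                }

                console.log(run(emit(parse(tokenize("2 + 3 * (4 - 1)"))))); // 11
                ```

                A production compiler is bigger, but not different in kind: input in, instructions out.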

                1. 2

                  I think a lot of people assume you have to have all the optimizations, too; the work that goes into compilers and into their optimizations is often mentioned in the same breath. In many applications, one doesn’t need all those optimizations. People are worried about a problem they wouldn’t have.

                  1. 2

                    Yep! Wasabi targeted .NET, where even for C#, most of the optimization’s actually in the CLR JITter, rather than in the ahead-of-time compilation phase. We chose to write a transpiler for Wasabi 3 rather than generating bytecode directly, but even if we had done the latter, we would still certainly have done almost no optimizations ourselves. (It also helped that our previous target runtime, ASP VBScript, is notoriously slow, so switching to .NET and doing zero other optimizations was still an enormous performance win.)

              2. 3

                Googling gives a dead link to a blog about this. Do these blog entries live anywhere now?

                Is it this? http://jacob.jkrall.net/wasabi-the-parts/index.html

                1. 1

                  blog.fogcreek.com is currently undergoing some sort of maintenance due to the recent acquisition of Manuscript/FogBugz/Kiln. I’ll see if I can repost the Wasabi articles on jacob.jkrall.net.

                  “Wasabi: The ??? Parts” is, basically, the documentation for Wasabi. It was not written by me, but I re-hosted it publicly for the people on HN who asked for it.

              1. 3

                What distinguishes a “Minimalist” software engineer from a non-minimalist one?

                This manifesto seems to be generic stuff that could just as well apply to an enterprise waterfall software engineer. I assume here that Enterprise and Waterfall are buzzwords considered to be the antithesis of Minimalism, aren’t they?

                1. 1

                  Indeed, most of the manifesto is just platitudes.

                1. 5

                  Well that’s a weird trend. Two articles in two days about people writing half-assed C++ compilers. By these low standards, every graduating CS student at my university wrote a C++ compiler.

                  Don’t get me wrong, I think they’re interesting projects, I just don’t think it’s correct to call them C++ when they’re missing almost all of the features.

                  1. 5

                    Walter Bright is the only person on this planet who can claim to have written a C++ compiler: https://www.digitalmars.com/

                  1. 5

                    I made my own custom one in 241 lines of Python + 162 lines of templates of HTML and Atom.

                    It does not have some features most have, like lazy compilation, but it is still plenty fast for me.

                    It has some features others usually don’t, like per-page summaries, explicit inclusion of pages into blogs, TweetThis blocks, and non-JavaScript social media buttons.

                    The older parts of my website are still running on Ikiwiki, but I really like the simplicity of knowing and understanding all the code.

                    1. 2

                      I haven’t had ads on my blog in over a decade. I’ve been meaning to remove the Facebook Page/Twitter widgets too when I get around to my redesign, since I’m pretty much giving both companies free information with them.

                      1. 4

                        There are a lot of implementations that load the widget only once the user wants to use it. They are pretty common in Germany and pretty much work by having one click “prime” the button, which loads and activates the JS and the widget.
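                        The core of such a “primed” button is tiny. A sketch (hypothetical names; the real implementations also swap in the provider’s markup):

                        ```javascript
                        // Two-click button sketch: the page first renders an inert placeholder.
                        // Nothing is fetched from the social network until the user clicks once
                        // to "prime" the button; only then is the third-party widget loaded.
                        function twoClickButton(loadWidget) {
                          let primed = false;
                          return function onClick() {
                            if (!primed) {
                              primed = true;     // first click: explicit opt-in
                              loadWidget();      // fetch + activate the real widget JS here
                              return "activated";
                            }
                            return "share";      // from now on it behaves like the real button
                          };
                        }

                        const loaded = [];
                        const onClick = twoClickButton(() => loaded.push("widget JS"));
                        ```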

                        1. 1

                          I think that’s how most privacy extensions make them work: disabled until you click them.

                        2. 3

                          I have such buttons that work without JavaScript. Just normal links.

                          1. 1

                            You might consider using these or similar social sharing buttons without javascript or tracking.

                          1. 5

                            Roughly thousands of man-hours of manual testing all over the world. It requires a few months and millions of dollars. Details vary depending on the customer.

                            For the developers there is continuous integration testing for each pull request done via x86 simulation and on target device. Code reviews for each pull request. Sometimes manual full system testing by developers but mostly by a special integration team.

                            I’m working for an automotive supplier so our processes are probably not applicable to you. The contrast might be interesting though. ;)

                            1. 1

                              I work with an Android integrator, and it sounds remarkably similar. I develop tools to help people test the actual Android devices, so I don’t work with that specific workflow, but for the people who actually work on Android it’s pretty similar.

                            1. 5

                              Also, stop with the flat and “clean” design. If there’s something your users are supposed to click on, make it look like something you can click on. Make links that look like links, buttons that look like buttons, etc. Even Lobsters fails at this: there’s a menu at the top of the page, but it doesn’t look anything like a menu; it’s just a horizontal line of gray words.

                              1. 3

                                Um… those gray words are all just links to other pages. No hamburger menus on Lobsters!

                                1. 1

                                  Also, the names of the words make a user think they might be menu options. Then the user hovers over them to see the link icon appear. There is an investigation step there, versus links that are obviously links, which is a usability loss. I don’t think the loss is significant, though, given the nature of our community. We’re technologists and explorers. Heck, the whole point of the site was coming to look for good links. :)

                                  1. 1

                                    Still, feedback as simple as “reduce opacity or add an underline on hover” would go a long way in showing the user there’s an interaction “here”.

                                    1. 2

                                      Submit a pull request? https://github.com/lobsters/lobsters

                                      1. 1

                                        Didn’t know that was an option (well, I never looked into that anyways).

                                        I’ll keep it in mind for when I find time to do so, thanks.

                                  2. 2

                                    If it changes state on the server, make it a button. Otherwise make it a link.
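                                    A sketch of that rule in templating code (hypothetical helper, not from any particular framework): state-changing actions become a POST form with a button, read-only navigation becomes a plain anchor.

                                    ```javascript
                                    // Render a server-side action: mutations get a POST form with a
                                    // <button>; pure navigation gets an ordinary <a> link.
                                    function renderAction({ url, label, mutates }) {
                                      return mutates
                                        ? `<form method="post" action="${url}"><button>${label}</button></form>`
                                        : `<a href="${url}">${label}</a>`;
                                    }

                                    console.log(renderAction({ url: "/logout", label: "Log out", mutates: true }));
                                    console.log(renderAction({ url: "/about", label: "About", mutates: false }));
                                    ```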

                                  1. 4

                                    The takeaway here is: your job is to find solutions to problems

                                    Nope! It’s not. It’s to make the client happy, both long- & short-term. Here we go again, reaching for the Technical Hammer when we have a People Problem.

                                    1. 2

                                      Not happy usually, just content enough that they pay you and maybe even do business with you again.

                                      1. 2

                                        For us, finding solutions to problems is part of making the client happy. That is why we wrote:

                                          Like Captain Spock, we combine the world of logical thinking with the human dimension, which may seem irrational when analyzed through the cold prism of mathematical rationality but has its own logic and meaning. And we need to develop skills in both areas because, ultimately, we are humans working for other humans — code is just our tool.

                                        1. 1

                                          How do you make the client happy?

                                        1. 3

                                          Kingdomino is a really great game. It has an amazing depth for the short play time (15 min for experienced players). It is easy to learn, so it is fun for casual players as well.

                                          1. 2

                                            I agree, and writing AI agents for it is also a lot of fun!

                                          1. 4

                                              My English blog with mostly technical content: http://beza1e1.tuxen.de/blog_en.html

                                              My German blog with less technical content: http://beza1e1.tuxen.de/blog_de.html

                                            Less than one post per month lately.

                                            1. 4

                                              As someone who never used Rust I want to ask: does the section about crates imply that all third-party libraries are recompiled every time you rebuild the project?

                                              1. 6

                                                Good question! They are not; dependencies are only built on the first compilation, and they are cached in subsequent compilations unless you explicitly clean the cache.

                                                1. 2

                                                  I would assume dependencies are still parsed and type checked though? Or is anything cached there in a similar way to precompiled headers in C++?

                                                  1. 10

                                                    A Rust library includes the actual compiled functions like you’d expect, but it also contains a serialized copy of the compiler’s metadata about that library, giving function prototypes and data structure layouts and generics and so forth. That way, Rust can provide all the benefits of precompiled headers without the hassle of having to write things twice.

                                                    Of course, the downside is that Rust’s ABI effectively depends on accidental details of the compiler’s internal data structures and serialization system, which is why Rust is not getting a stable ABI any time soon.

                                                    1. 4

                                                      Rust has a proper module system, so as far as I know it doesn’t need hacks like that. The price for this awesomeness is that the module system is a bit awkward/different when you’re starting out.

                                                    2. 1

                                                      Ok, then I can’t see why the article needs to mention it. Perhaps I should try it myself rather than just read about its type system.

                                                      It made me think it suffers from the same problem as MLton.

                                                      1. 4

                                                        I should’ve been more clear. Rust will not recompile third-party crates most of the time. It will if you run cargo clean, if you change compile options (e.g., activate or deactivate LTO), or if you upgrade the compiler, but during regular development, it won’t happen too much. However, there is a build for cargo check, and a build for cargo test, and yet another build for cargo build, so you might end up still compiling your project three times.

                                                        I mentioned keeping crates under control, because it takes our C.I. system at work ~20 minutes to build one of my projects. About 5 minutes is spent building the project a first time to run the unit tests, then another 10 minutes to compile the release build; the other 5 minutes is spent fetching, building, and uploading a Docker image for the application. The C.I. always starts from a clean slate, so I always pay the compilation price, and it slows me down if I test a container in a staging environment, realize there’s a bug, fix the bug, and repeat.

                                                          One way to make sure that your build doesn’t take longer than needed is to be selective in your choice of third-party crates (I have found that the quality of crates varies a lot) and to make sure that a crate pays for itself. serde and rayon are two great libraries that I’m happy to include in my project; on the other hand, env_logger brings in a few transitive libraries for coloring the log it generates. However, neither journalctl nor docker container logs show colors, so I am paying a cost without getting any benefit.

                                                        1. 2

                                                            Compiling all of the code, including dependencies, can make some types of optimizations and inlining possible, though.

                                                          1. 4

                                                            Definitely, this is why MLton is doing it, it’s a whole program optimizing compiler. The compilation speed tradeoff is so severe that its users usually resort to using another SML implementation for actual development and debugging and only use MLton for release builds. If we can figure out how to make whole program optimization detect which already compiled bits can be reused between builds, that may make the idea more viable.

                                                            1. 2

                                                                In the last discussion, I argued for a multi-staged process that improves developer productivity, especially keeping the mind flowing. The final result is as optimized as possible, but with no wait times: you always have something to use.

                                                              1. 1

                                                                Exactly. I think developing with something like smlnj, then compiling the final result with mlton is a relatively good workflow. Testing individual functions is faster with Common Lisp and SLIME, and testing entire programs is faster with Go, though.

                                                                1. 2

                                                                  Interesting you mentioned that; Chris Cannam has a build setup for this workflow: https://bitbucket.org/cannam/sml-buildscripts/

                                                      1. 8

                                                          KeePass has clients that work on the 3 operating systems in question, and I’ve had good luck using Syncthing to share the password file between computers; the encryption of the database means that any good sync utility can work with it.

                                                        1. 4

                                                            I’ve used KeePassX together with Syncthing on multiple Ubuntus and Androids for two years now. By now I have three duplicate conflict files, which I keep around because I have no idea what the difference between the files is. Once I had to retrieve a password from such a conflict file, as it was missing in the main one.

                                                          Not perfect, but works.

                                                            Duclare, using ssh instead of Syncthing would certainly work, since the database is just a file. I prefer Syncthing because of convenience.

                                                          1. 2

                                                              Duclare, using ssh instead of Syncthing would certainly work, since the database is just a file.

                                                              Ideally it’d be automated and integrated into the password manager, though. Keepass2android does support it, but it does not support passwordless login, and I don’t recall it ever showing me the server’s fingerprint and asking if that’s OK. So it’s automatically logging in with a password to a host run by who knows whom. Terribly insecure.

                                                            1. 1

                                                                I had the same situation: 3 conflict files, and merging is a pain. I’ve switched to Pass instead now.

                                                            2. 2

                                                              I’ve used KeePass for a few years now too. I tried other password managers in the meantime, but I never got quite satisfied, not even with pass, though that one was just straight-up annoying.

                                                              I’ve had a few conflicts over the years, but usually Nextcloud is rather good at avoiding conflicts here, and KPXC handles it very well. I think Syncthing might cause more problems, as someone else noted, since nodes might take a while to sync up.

                                                            1. 3

                                                              In industry, you can optimize throughput over latency because you produce the same thing over and over again. But in software development, you usually develop something new; if the software you need already exists, you just use it. You need an agile process because you develop something new and cannot plan everything ahead of time. Some issues are discovered along the road. Because of this, I don’t think the latency-versus-throughput trade-off is really relevant here.

                                                              1. 2

                                                                And yet, we do reinvent the wheel very often in software development. Sure, nobody writes the same program a million times but there are plenty of programmers who pump out CRUD web apps.

                                                                1. 3

                                                                  But those CRUD apps are customized, and the customer paying for them may change the requirements. “Develop something new” doesn’t have to mean “develop something revolutionary” or even “develop something novel”, it just means something that doesn’t already exist.

                                                                  To the extent that most CRUD apps share certain design characteristics, that’s why we’ve got LEGO programming or whatever people are calling it now. But even if you use a bunch of off-the-shelf components the customer can make decisions that result in the need to put the pieces together somewhat differently.

                                                              1. 1

                                                                Maybe Frankenstein would have been a better name for the distro?

                                                                1. 3

                                                                  It’s from 2014, so part 2 will probably never come.

                                                                  1. 1

                                                                    The big question is if they actually need to be saved.

                                                                    A year ago I would have said no. Feed readers may not be as popular as Facebook, Twitter, etc. but sites have feeds and I can use them.

                                                                    For the most part this is still true. Luckily, lots of sites are built on WordPress, Drupal, etc., and they come with feeds out of the box. Sometimes the author may not even know he provides a feed for me.

                                                                    However, lately I have the feeling this is in decline. It seems a wix.com (yet another DIY website UI) blog does not provide a feed by default. Some WordPress blogs lack the auto-discovery HTML header for the feed. These are signs that supporting RSS/Atom is not that important for content producers anymore.

                                                                    1. 2

                                                                      RSS was a great concept (and appropriate for its time), but was designed by people who didn’t comprehend XML namespaces, instead forcing implementations (both generators and readers) to escape XML and/or HTML tags, which requires multiple passes for generating and parsing feeds, with an intermediate encoding/decoding step (Really Simple?). They purportedly addressed this in RSS 2.0, but if you have a look at their RSS 2.0 example, they still got it wrong, persisting a 1990’s understanding of the web. Although I still use it, I shake my head in disappointment every time I see RSS source. RSS 2.0 should really have been based on something that could be validated, such as XHTML.
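                                                                      The double encoding in miniature (a sketch; real generators must handle more entities and CDATA): RSS 2.0 can only carry HTML inside a description element as an escaped string, so the producer escapes once and the reader must decode a second time, whereas a namespaced design like Atom’s can embed the markup as actual XML.

                                                                      ```javascript
                                                                      // Pass 1 (producer): escape the HTML into XML-safe text for <description>.
                                                                      function escapeXml(s) {
                                                                        return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
                                                                      }
                                                                      // Pass 2 (reader): decode the text back into HTML before rendering.
                                                                      function unescapeXml(s) {
                                                                        return s.replace(/&lt;/g, "<").replace(/&gt;/g, ">").replace(/&amp;/g, "&");
                                                                      }

                                                                      const html = '<p>Hello & <a href="/x">goodbye</a></p>';
                                                                      const item = "<description>" + escapeXml(html) + "</description>";
                                                                      const recovered = unescapeXml(item.slice("<description>".length, -"</description>".length));
                                                                      // recovered === html: two passes just to round-trip the markup.
                                                                      ```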

                                                                      At this point, it is probably way too late for a comeback, as:

                                                                      1. Social media platforms like Twitter are commonly used as a substitute and have a large hegemony over content.
                                                                      2. Browsers have given up on RSS in favor of their own peculiar readers.
                                                                      3. Google, Microsoft, Yandex and whatever Yahoo is now are pushing for an entirely different system based on extracting information from HTML content via an ever-changing pseudo-ontology that lacks definitions and is inconsistently employed by every practitioner.

                                                                      You could read the above points as things that RSS should be able to overcome. If RSS were indeed to make a comeback, I would hope that in a new “RSS 3.0” incarnation it would satisfy the following criteria:

                                                                      1. Standard comes before implementation (e.g., utilize existing standards).
                                                                      2. Validatable (e.g., employ XML namespaces and utilize an XSD for document validation).
                                                                      3. Human-readable (i.e., a subset of XHTML or HTML that can be consistently rendered in any modern web browser).
                                                                      4. Strict specification (use a well-defined structure with a minimal tag set that prevents multiple interpretations of the specification).

                                                                      I’ll admit, I do not like JSON one bit, because it is antithetical to several, if not all, of the above criteria. However, since a JSON alternative is desired, I would recommend that it be directly based on an XML/HTML version that does satisfy the above criteria. Then a simple XSL (read “standardized”) stylesheet could be employed to generate the equivalent JSON version, satisfying both worlds.

                                                                      1. 9

                                                                        Doesn’t Atom fulfill your RSS 3 criteria?

                                                                        1. 2

                                                                          they still got it wrong, persisting a 1990’s understanding of the web. Although I still use it, I shake my head in disappointment every time I see RSS source. RSS 2.0 should really have been based on something that could be validated, such as XHTML.

                                                                          Atom does fulfill your second list’s criteria, is often used today in place of RSS, and can even be validated. My article even says that if in doubt, use Atom.

                                                                          Social media platforms like Twitter are commonly used as a substitute and have a large hegemony over content.

                                                                          The entire point of the site is to set something against this before it is too late. Today, there are still many sites providing feeds, and I do hope that this article will sustain that. To be clear, I don’t advocate leaving social media. All I ask in that article is to provide a feed in addition to your social media presence.

                                                                          Browsers have given up on RSS in favor of their own peculiar readers.

                                                                          I’ve actually never used Firefox’ RSS/Atom support and I don’t believe that browsers are the correct target for RSS/Atom feeds. There are feed reader programs that deal specifically with feeds and they are still being maintained, so I don’t see browsers removing their feed support as problematic.

                                                                          Google, Microsoft, Yandex and whatever Yahoo is now are pushing for an entirely different system

                                                                          You listed yourself why it isn’t a real alternative.

                                                                        1. 3

                                                                          Maybe my 6-year-old and I will finally get around to trying the ~50-year-old Lego train my dad already played with.

                                                                          I also have to build a nice box for my Raspberry Pi RFID player, but inspiration is missing so far.

                                                                          1. 8

                                                                            I have been thinking similar thoughts since I read this article.

                                                                            Why would you use WebAssembly? There are various similar technologies. For example, the JVM certainly has a more mature ecosystem.

                                                                            Some have compared WebAssembly to Java applets; in some ways, they’re very right, but in some ways, they’re very wrong. Eventually I’ll write a post about the wrong

                                                                            I’m waiting for that post.

                                                                            1. 4

                                                                              From following the development of the wasm standard, one of its greatest strengths over existing virtual-machine bytecodes would be a focus on compressibility and fast parsing + type checking + JIT transformation. These design constraints weren’t really primary or even secondary concerns in the development of those existing bytecodes, which started more as an intermediate step in the black box of a compiler.