1. 28

Yyyyyyyep.

Edit: To expand a little, now that I’ve had some coffee, let me repeat one of soc’s points a little more vigorously: C++ will not get better until they start taking things away as well as adding things. I still have vivid memories of reading through a C++ book in undergrad, back in 2004 or something, and thinking “ok, I can’t just entirely avoid pointers in favor of references… so what good are references besides making more ways of doing the same thing?”

1. 10

References serve an entirely different purpose than pointers. A pointer is a type that can be NULL, so it may represent an optional object, whereas references are (theoretically) never NULL and always valid. These are valuable safety features. However, you would find it extremely difficult to write C++ code that exclusively uses references and no pointers. So both have their place in the language, though I would replace almost all pointers with std::unique_ptr if C++11 is available.

I agree with the sentiment that the relentless growth of C++ and the committee’s reluctance to deprecate features is the source of many problems. Even though I have known and written C++ for many years, I still discover C++ features that were previously unknown to me on a daily (!) basis. In other words, for thousands of days, this language has surprised me daily. The complexity is unfathomable.

1. 5

references are (theoretically) never NULL and always valid.

They are never NULL and often invalid. Like when you saved a reference to something that either (a) went out of scope or (b) was the result of dereferencing a pointer to memory that was freed afterwards. You can also create a null reference if you try very hard, but that doesn’t happen in practice, whereas invalid references certainly do.

1. 3

You can also make a null reference if you try very hard, but that doesn’t happen in practice

That’s just not true.

void foo(int& i);
...
int* p = nullptr;
...
foo(*p);

is not at all hard, nor is it even that uncommon.

1. 3

It has never shown up in my 4 years of commercial C++ experience + 5 years non-commercial before that.

1. 1

I’ve seen it in the wild; debugged a core dump that had it. The evil thing is that the program crashes at the point where the NULL-reference is used, not where NULL is dereferenced (in compiled assembly, dereferencing a pointer to store it in a reference is a no-op). These two points may be far apart.

2. 1

This is undefined behavior.

3. 1

They shouldn’t be invalid, but it’s not entirely enforced by the compiler. The idea is that by using references, a programmer signals to the reader of the code that the value is intended to be a non-NULL valid object, and if that is not the case, the error is at whatever line of code that creates this invalid reference, not the one that uses it. In the wild, even inexperienced C++ programmers rarely create invalid references simply because it is harder to do so, whereas they routinely create invalid pointers. In addition to that, invalid references are almost always a programmer error, but invalid pointers are a regular occurrence and often part of a design that allows and accounts for such states.

1. 4

The idea is that by using references, a programmer signals to the reader of the code that the value is intended to be a non-NULL valid object, and if that is not the case, the error is at whatever line of code that creates this invalid reference, not the one that uses it.

Yes, nonetheless it happens. And it doesn’t matter that “somebody else” made the error. The error is still there and it’s your code that crashes. I feel much better using a language that doesn’t put me in such situations.

In the wild, even inexperienced C++ programmers rarely create invalid references simply because it is harder to do so

For some definition of “rarely”. I’ve often seen code that passed references to objects wrapped in std::shared_ptr (often to avoid atomic reference-counting overhead) into functions which captured those references in lambdas that were executed after the std::shared_ptr had freed the memory. Similarly, this, regardless of its type, is in practice used like a reference. There’s a lot of code in the wild that does something like PostTask([this] { /* do smth */ });.

At the end of the day it doesn’t matter that “the error is somewhere else”. If the language makes a mistake this straightforward to commit, I’m sure to have to deal with it in a shared codebase at some point, no matter how few errors I make myself.

1. 2

Yes, whatever the compiler doesn’t prohibit is likely to end up in the codebase. However, I still think that references are valuable and provide better safety than pointers. Obviously it would be better if making mistakes was impossible. I’m not whitewashing C++’s serious shortcomings here.

As for the discussion about invalid references, I’ll leave you with this short story: http://www.gotw.ca/conv/002.htm

4. 3

References serve an entirely different purpose than pointers.

The official purpose of references (per The Design and Evolution of C++) is to enable operator overloading.

1. 2
1. 10

That reminds me of the time when I realized that I made the grave mistake of creating all my tables with the utf8_unicode_ci collation and thinking they were UTF-8, not knowing that MySQL’s utf8 is a legacy, broken character set that only supports up to 3 bytes per character. How naïve of me to assume that utf8 means UTF-8. I realized it when I tried to insert some 4-byte UTF-8 characters and the database spewed back errors at me. You should always use utf8mb4 with utf8mb4_unicode_ci [1]. That was my big WTF moment with MySQL. Not surprised that there are more skeletons in the closet.

1. 7

MySQL is full of WTFs. The most recent one I ran into is that GROUP_CONCAT has a maximum result length; when it is exceeded, MySQL just silently truncates the resulting string.

Another one that sticks out in recent memory is that it considers "foo " = "foo" to be true (so trailing spaces don’t matter in comparisons). Oh, and the always-fun gotcha that DDL statements aren’t transactional. PostgreSQL was such a breath of fresh air after having dealt so often with the pain of migrations crashing halfway through (while writing them) and being left half-applied. And this kind of nonsense goes on and on and on.

1. 2

You’re not the only one. I got caught by this one when someone stuck an emoji in a search query that ended up in a URL.

1. 2

Having worked with Oracle databases before, this really explains why Oracle the company was happy to buy MySQL - like sees like.

1. 1
1. 6

Boost::Asio (async I/O, network library) calls these strands. I’ve worked extensively with Boost::Asio and I have concluded that strands are not worth it unless extreme performance requirements force you to use them. You just don’t want yet another scheduler in your pipeline. In particular, it’s difficult to keep track of what happens when and why. In C++, you have the additional problem of lifetimes (when is the object deleted?), which is alleviated by std::shared_ptr, but then suddenly your objects stay alive because some strand somewhere still holds a std::shared_ptr to them, and it’s very difficult to keep an overview of lifetimes and memory management. You trade a whole lot of convenience for a tiny bit of performance.

1. 1

Oh! I’ve done this before. I’ve had surprising luck contacting original authors; perhaps I should dig some up…

1. 1

You also reversed and reimplemented the game? That’s awesome, I’d love to read more about it. Did you reverse the game logic by inspecting the binary? Did you figure out the exact logic how the enemies move? I did everything down to pixel perfection, but I couldn’t fully figure out the AI.

1. 1

No, not this game, but similar games. I have a soft spot for that early Windows shareware aesthetic, but this game was new to me.

1. 1

Hah, here I thought that I could finally get some answers to my open questions about the game internals. I’d still be interested in your reversing of other games.

1. 1

1. 2

About a year later, the programmer of the game, Rüdiger Appel, contacted me through my website. We had a nice e-mail exchange in which he revealed that the game was originally written in Pascal and that the source code is unfortunately lost. You can find more information about the game and the source code of my clone on GitHub. Note that this was my first larger JavaScript project, so the code quality is mediocre at best.

1. 9

The most hilarious thing about MQTT is not mentioned. Look at the Quality of Service (QoS) parameter [1]. It has three levels, and the highest guarantee is “Exactly once (assured delivery)”. If someone designs a protocol for a highly distributed system and offers you this option, you know that you can safely discard the protocol and use something else (it is generally impossible to make such a guarantee). In a highly distributed system, error handling is the secret sauce. You can’t just delegate that to some magical mechanism that “guarantees” your delivery (most likely with blind retries). You can’t just live in blissful ignorance of the ugliness of network problems or failing devices. Your software on each device in the network needs to handle all these error states and respond accordingly. I personally think that MQTT brings little to the table that actually addresses the problems of a highly distributed system.

1. 8

I’m surprised bitmap fonts survived for so long. They should have been removed from everything ages ago! The presence of this ancient technology is still causing unpleasant surprises sometimes.

1. 11

As somebody who exclusively uses bitmap fonts (Misc Fixed in particular is my absolute favorite), this comment pains me.

1. 3

Can you describe the OS, desktop environment, browser, text editor, IDE, etc. you’re using? I love bitmap fonts because of their enhanced clarity and design, but most software is unable to handle them properly. I have yet to find a setup that lets me use bitmap fonts across the board, from desktop environment all the way to the applications. SerenityOS seems to do really well on this front.

1. 5
• Distro: Gentoo
• Environment: no DE (I use bspwm or openbox as my WM)
• Browser: Pale Moon
• Editor: Emacs
• Graphical Toolkits: Athena (Emacs), Gtk (everything else)

I’ve gotten almost everything to work with Misc Fixed, with the exception of Discord.

Even with BBD (Bandaged Better Discord) I still can’t get it to use the Misc Fixed font. -_-

1. 1

Try using Ripcord https://cancel.fm/ripcord/ for Discord. It supports any font you like, and additionally it’s a fully native Qt application, not Electron crap.

2. 4

Fonts, more than many other things in ordinary graphics, depend on the ‘context of use’ and on what price you must pay versus what price you can tolerate paying. It’s your cycles and all that, but the vector step is a big and pricey complexity leap. Rough incremental cost (for anything that isn’t a printer):

• bitmap ascii-set? font in cache and straight masked bitblt (< 100loc range)
• bitmap hdpi / color? multiple caches, fetch/add per pixel (< 500loc range)

(arguably hershey vector font set kind of things fit in here somewhere, but mainly embedded use?)

• bitmap to intermediate-vector->bitmap? fetch+mul per pixel, heavy format parsing+intermediate buffer+glyph cache+device dependent shaping+raster cache and complex data formats (multiple support libs and years of work)
• cursive/kerned? picking / selection operations require LTR/RTL sweep (mouse motion gets pricey)
• shaped/substitutions/ligatures? all string and layouting operations more complex, table preprocessor/lut
• gpu drawing context? atlas packing cache management, reduced feasible set of active use glyphs
• ZUI? mipmap levels and filtering..
• 3D/VR? offline (m)SDF gen, manual fixups for troublesome glyphs..

I use most of that range daily, whenever possible though, bitmap for the win (and cycles).

The thing with that discussion though is like, while bitmap fonts may have some space optimization weirdness that makes parsing a little bit of work, for what they are, it doesn’t register on the complexity scale. Heck, those fun emojis are embedded PNGs forcing both bitmap management and pixel-scaling to be there anyhow..

Given two particular names in that thread, I’d go with ‘obstinate’..

1. 15

I feel like GUI programming needs an analogous rule to the Linux Kernel’s “We do not break user space!” rule. If your decision to remove something causes distressed comments and workaround tutorials to spring up like mushrooms, you probably blundered - no matter the amount of cleanup it allows you to do in your code base. It is really frustrating when the maintainers of an ubiquitous library mark your bug report as “wontfix” and close it. On the other hand, will you, the user, step up, take on the burden of this problem and straighten it all out? My experience is that if you submit a clean bugfix and are willing to talk and work with people, they will accept your PR.

1. 8

GUI programming for Windows in C and C++ pretty much follows the “We do not break user space!” rule.
Cake, for example, is a Windows 1.0 application that I ported to Win64 as a weekend hack.
I think these are the only code changes that would be needed to get the application working on Win32.

1. 2

I saw one of the participants’ name turn up and instinctively knew that it would turn into an unpleasant outcome that would leave everyone worse off.

My approach is to stay as far away from the Gnome project and that individual as possible, and it kinda works.

1. 2

I find that it’s the keyboard shortcuts that most make or break an emulation of another system. A few years ago I got so fed up trying to switch workspaces with s-n that I switched to JWM and copied all the Windows shortcuts across. For some reason, most of the standalone window managers don’t seem to like having an action bound to the super key on its own.

(Just a quick heads up - I enjoyed the article but there were a few grammatical errors. The use of “was” instead of “were” and “it’s” instead of “its”. Nothing major though)

1. 3

I’m from the north of England, so “there was a number of distros…” sounds correct to me. You’re right, grammatically it is incorrect, but “there were a number of distros…” just sounds wrong to me. 😊

Thanks for the heads-up on “it’s”; it looks like that one slipped through the net during the proofread. Fixed now.

1. 2

The grammar of “there was a number of” actually seems to be a subtle issue:

https://www.quora.com/Is-it-correct-to-say-there-was-a-number-of-or-there-were-a-number-of

Instinctively, I’d say “there was a number of” is correct because “a number” is singular (the linked answer writes at length about it). However, it’s debatable.

When I write blog posts, I sometimes agonise over things like this for hours just to go down an ever more confusing rabbit hole of linguistics and etymology. Often, I arrive at disputes that let me conclude that both variants are valid. My advice is to just pick one and stick with it. Consistency is more important than finding the “correct” answer.

1. 1

Its correctness isn’t debatable, only preferences are. Which is correct when simplified?

“There was a number.” or “There were a number.” Adding “of distros” doesn’t change the conjugation.

1. 1

Consistency is probably the best solution here. Initially it sounded off to me, but it’s something I would say myself, and by that point I wasn’t at all sure anymore but thought I’d flag it anyway.

2. 3

Wow, what would you say if you read some of my posts. I am not a native English speaker. :(

1. 4

I don’t think their ‘heads up’ comment was meant to be offensive or mean. It’s ok to make mistakes; sometimes you don’t know you made one unless someone points it out.

1. 2

I agree, I have learned a lot from blogging. Some people were mean, some gentle, but I have learned from all of them anyway.

1. 2

Yes, I didn’t mean to sound rude. I tried to phrase it politely, but know that I appreciate those kinds of corrections, both in English and in any other language I might be speaking.

1. 6

Prefer to push fixes upstream instead of working around problems downstream

I wholeheartedly agree with this advice and the rest of the post. However, I think you should do both. If you have to solve a problem, you need a short-term and a long-term solution. Yes, the long-term solution is pushing the fix upstream, but until your PR is accepted and released in an official version, you need to have a solution, and that is why you most likely also need the workaround downstream, unless you have time to wait. Bonus points if you achieve a seamless transition between those two solutions.

1. 3

Yeah, I think this effect is particularly pronounced when the issues are in the operating system or some base infrastructure software that is generally deployed on a totally separate time scale from the upper-level software by many users. That is: if there is a bug in a publicly used library (e.g., libc) you want to fix it in the OS, but you also probably need to work around it in the consuming software experiencing the bug so that your deployed software can work immediately even on older versions of the OS that have not been updated yet and may not be for some time.

1. 11

I invested a lot of time into learning OpenGL and I was really bummed out when Apple dropped OpenGL support and more and more devices and programs started to use Vulkan. The fact that it takes 1000+ lines to render a triangle always turned me off. However, this article made the subject approachable.

This article is intended to be a concise overview of the fundamentals of Vulkan. It was written for somebody who knew Vulkan but has forgotten many of the details (i.e. future me).

I have never used Vulkan before, but the article made sense from start to finish. It sure made me feel like I knew Vulkan but forgot many of the details! I’m more likely to dive into Vulkan development now.

It’s good to see that Vulkan addresses my biggest gripe with OpenGL, the “global state hell”, by making command buffers and their corresponding synchronization primitives explicit.

1. 6

Your comment on OpenGL “global state hell” is spot on. I’ve found that once I’ve gotten past the initial boilerplate setup, developing in Vulkan is much more enjoyable and easier to debug than OpenGL.

P.S. I’m glad you enjoyed my article :)

1. 5

Hah, this reminds me of an XKCD comic: https://xkcd.com/810/

Is this a problem if the content that the AI generates is a worthwhile read?

It would be the biggest plot twist if the entire blog post was generated by GPT-3, including the last section.

1. 3

It’s a problem because the experiment didn’t happen. And unless someone fact-checks all of its output, AI-generated articles are going to lie a lot.

1. 4

I tried the C++ code with and without sorting and both versions ran in about 1.8s on my machine.

Has something changed in compilers or hardware since 2012 when this was first posted?

1. 6

I have a CPU from that era (Intel i7 2600K, still going strong and I’m very happy with it) and I could reproduce it. With sort about 6.3 seconds, without sort about 16.0 seconds. My g++ version is 7.5.0. My only guess is that yes, they must have improved something about the branch prediction (in hardware or microcode?) for this effect to disappear.

Also, wow, your machine is so much faster than mine. I didn’t know I could expect such a big single core speedup from upgrading my CPU. Maybe I should buy a new CPU after all. However, having an old CPU is really beneficial for development because if it performs well on my machine, it will perform even better on other people’s.

1. 7

Using old hardware for the benefit of your users. You, sir, are a hero. ∠(･`_´･ )

I thought maybe -O2 optimisation was doing something fancy, but no, it doesn’t seem to matter without -O2 either. I’m on an i7-7600U CPU @ 2.80GHz with gcc 8.3.0, which doesn’t seem that much newer. I wonder if I’m doing something wrong in my test.

2. 2

Your compiler is almost certainly using cmov (conditional move) or adc (add with carry), if it hasn’t gone vectorization crazy (e.g. some versions of clang, depending on the flags). These instructions have the same timing regardless of the value being compared, since there is no branch involved.

If you use objdump, you’ll probably find something along these lines in your binary:

cmp word ptr [rax], 128

In my experience, an unrolled loop using adc is hard to beat in this context.

1. 2

Here’s the disassembly of gcc 8.3.0’s output up until the clock call at the end.

00000000000011a5 <main>:
11a5:	41 55                	push   %r13
11a7:	41 54                	push   %r12
11a9:	55                   	push   %rbp
11aa:	53                   	push   %rbx
11ab:	48 81 ec 08 00 02 00 	sub    $0x20008,%rsp
11b2:	49 89 e4             	mov    %rsp,%r12
11b5:	48 8d ac 24 00 00 02 	lea    0x20000(%rsp),%rbp
11bc:	00
11bd:	4c 89 e3             	mov    %r12,%rbx
11c0:	e8 6b fe ff ff       	callq  1030 <rand@plt>
11c5:	99                   	cltd
11c6:	c1 ea 18             	shr    $0x18,%edx
11cb:	0f b6 c0             	movzbl %al,%eax
11ce:	29 d0                	sub    %edx,%eax
11d0:	89 03                	mov    %eax,(%rbx)
11d2:	48 83 c3 04          	add    $0x4,%rbx
11d6:	48 39 eb             	cmp    %rbp,%rbx
11d9:	75 e5                	jne    11c0 <main+0x1b>
11db:	e8 80 fe ff ff       	callq  1060 <clock@plt>
11e0:	49 89 c5             	mov    %rax,%r13
11e3:	be a0 86 01 00       	mov    $0x186a0,%esi
11e8:	bb 00 00 00 00       	mov    $0x0,%ebx
11ed:	eb 05                	jmp    11f4 <main+0x4f>
11ef:	83 ee 01             	sub    $0x1,%esi
11f2:	74 1d                	je     1211 <main+0x6c>
11f4:	4c 89 e0             	mov    %r12,%rax
11f7:	8b 08                	mov    (%rax),%ecx
11f9:	48 63 d1             	movslq %ecx,%rdx
11fc:	48 01 da             	add    %rbx,%rdx
11ff:	83 f9 7f             	cmp    $0x7f,%ecx
1202:	48 0f 4f da          	cmovg  %rdx,%rbx
1206:	48 83 c0 04          	add    $0x4,%rax
120a:	48 39 e8             	cmp    %rbp,%rax
120d:	75 e8                	jne    11f7 <main+0x52>
120f:	eb de                	jmp    11ef <main+0x4a>
1211:	e8 4a fe ff ff       	callq  1060 <clock@plt>

So you’re saying cmovg is what’s removing the branching?

1. 4

(Butting in)

Yes, that’s right! The inner loop runs from 11f7 to 120d. It performs the addition every time, but only moves the result into the sum register if the condition is true.

1. 2

I’ve been dabbling in video game design ever since I started programming. During my time at university, I had various small side projects. One of them was reverse-engineering a Windows 3.1 game and reimplementing it in JavaScript (you can play it and read all about it here: https://mental-reverb.com/blog.php?id=3 and the source code is available here: https://github.com/BenjaminRi/Banania). I was also working on a C++ game (using SDL, OpenGL and boost::asio), which is how I learned the bulk of my C++ knowledge. When I applied for a job (embedded software engineer), I showed them my blog along with a functional, interactive demo of my game, and I’d like to believe it increased my chances of landing the job.

Ironically, I never finished the C++ game (mostly due to difficulties with cross-platform C++ tooling, but I could write an entire blog post about why it never came to be). I consider it one of my biggest failures, but maybe I’ll reboot the project in Rust.

Fun fact: By pure chance, the creator of the Windows 3.1 game I reverse engineered stumbled upon my website and contacted me. We had a nice chat via email. Sometimes, the internet is a smaller place than you make it out to be.

1. 20

I think tone policing is very valuable. People need to learn to act politely, respectfully and productively, even if they feel negative emotions inside. This is something that is already taught to children, but becoming better at dealing with emotions is a life-long process. Negative emotions can be turned into something productive by using the energy to actively address an issue. For example, maybe it bothers you that an open source program has a certain bug. Then go on and fix it, rather than complaining about it. OP does not put forth any good arguments against tone policing.

1. 1

I agree with what you are saying. But tone policing can also be misdirected, if the “tone” detected is one’s own intuition, rather than coming from the person being policed.

• A is frustrated; writes a comment. B senses A’s frustrated tone, and indicates so.
• A is feeling neutral; writes a comment. B intuits a frustrated tone in A’s comment, and indicates so.

The former is appropriate with regard to facilitating civil discourse. But the latter is not; in fact, it only hampers civil discourse.

Intuition is only as good as flipping a coin.

1. 26

This article doesn’t bring anything new to the table, and it shrugs away security as “just use PGP”, which is not a reasonable alternative. Why doesn’t anyone encrypt mail? Because PGP tooling sucks, its UX sucks, and it isn’t as user-friendly as, for example, Signal/WhatsApp/you name it.

1. 12

I’ve been training non-tech people on practical computer security. In the past couple of years we’ve switched from introducing PGP as a viable but very difficult/brittle option to just using it as an exercise to understand the ideas behind public-key cryptography and not strictly hierarchical trust models.

Given how easy it is to accidentally downgrade the channel, using it for very sensitive information is just a bad idea. Email clients with PGP plugins etc. just aren’t a “reasonably safe by default” option.

1. 7

I agree. I find the “email is not private, so I don’t treat it as such” argument a bit moot. PGP is so easy to misuse that it nearly shouldn’t be seen as secure. Why try convincing people that the privacy story of email is okay instead of attempting to do better? :(

1. 2

Another problem with PGP and email is mobile devices. I do not want to download my whole inbox to my phone, but I do want to be able to search through it. With chats this is less of a problem, as I search chat history a lot less frequently than my email.

1. 2

That’s because the point of the article is not privacy; it’s spam and workflow management.

1. -7

PGP tooling is completely fine! You can’t just say something sucks without giving any reason for it!

1. 15

The first time I used PGP, I started by generating a key pair for myself, and the first thing the program asked me was whether I wanted to use Elliptic Curve Cryptography or RSA. Then it proceeded to ask for various details like key size and so on. At the time, I was either in the final year of my computer science degree or had already obtained it, and there were some real head-scratchers among those questions. Is Elliptic Curve Cryptography really more secure? What is a good key length? Question after question. Now, if this is what the onboarding process feels like for someone who has spent a significant amount of their life studying computers, I cannot imagine what it feels like if you’re new to computers. There is no way the masses are going to use a tool that asks you deep cryptographic questions, some of which cannot even be answered by industry experts.

PGP is fine in the sense that the software is robust and it works (though I’m really not a fan of the lack of perfect forward secrecy - I think it’s an issue that is hand-waved away far too often). But it’s not fine for people who just want to quickly connect without having to study cryptography - and that should be the target audience if you want widespread adoption.

1. 6

Lobsters, please don’t become Hackernews. People who want to talk about polarizing political subjects can head off to Twitter. Meanwhile, the rest of us can bond over our excitement about how computers work.

1. 6

I came here because I wanted a place that isn’t HN to have interesting discussions with smart people. I wish Lobsters would allow more non-programming content. On HN you’ll get banned for writing anything that makes YC, YC companies, or VCs in general look bad, because they don’t want that showing up on their own website.

1. 2

Exactly! I couldn’t agree more.

1. 1

I don’t think they have to be exclusive; it’s not like you need to participate in this thread.

Technical discussion spilling over into politics would be problematic, but that’s not what’s going on here. I don’t see how an occasional thread about tech intersecting with politics takes away anything from your preference for more technical content. It’s perfectly reasonable not to want to engage in that, in which case you can just ignore it.

1. 23

It boggles my mind that there are more and more websites that just contain text and images, but are completely broken, blank or even outright block you if you disable JavaScript. There can be great value in interactive demos and things like MathJax, but there is no excuse to ever use JavaScript for buttons, menus, text and images which should be done in HTML/CSS as mentioned in the blog post. Additionally, the website should degrade gracefully if JavaScript is missing, e.g. interactive examples revert to images or stop rendering, but the text and images remain in place.

I wonder how we can combat this “JavaScript for everything” trend. Maybe there should be a website that names and shames offending frameworks and websites (like https://plaintextoffenders.com/ but for bloat), but by now there would probably be more websites that belong on this list than websites that don’t. The web has basically become unbrowsable without JavaScript. Google CAPTCHAs make things even worse. Frankly, I doubt that the situation is even salvageable at this point.

I feel like we’re witnessing the Adobe Flash story all over again, but this time with HTML5/JS/Browser bloat and with the blessing of the major players like Apple. It’ll be interesting to see how the web evolves in the coming decades.

1. 5

Rendering math on the server or static-site build host with KaTeX is much easier than one might think: https://soap.coffee/~lthms/cleopatra/soupault.html#org97bbcd3

Of course this won’t work for interactive demos, but most pages aren’t interactive demos.

1. 9

If I am making a website, there is virtually no incentive to care about people not allowing javascript.

The fact is the web runs on javascript. The extra effort does not really give any tangible benefits.

1. 21

You just proved my point. That is precisely the mechanism by which bloat finds its way into every crevice of software. It’s all about incentives, and the incentives are often stacked against the user’s best interest, particularly if minorities are affected. It is easier to write popular software than it is to write good software.

1. 7

Every advance in computers and UI has been called bloat at one time or another.

The fact of the matter is that web browsers “ship” with javascript enabled. A very small minority actually disable it. It is not worth the effort in time or expense to cater to a group that disables stuff and expects everything to still work.

Am I using a framework?

Most of the time, yes I am. To deliver what I need to deliver it is the most economical method.

The only thing I am willing to spend extra time on is reasonable accommodation for disabilities. But most of the solutions for web accessibility (like screenreaders) have javascript enabled anyhow.

You might get some of what you want with server side rendering.

Good software is software that serves the end user’s needs. If there is interactivity, such as an app, obviously it is going to have javascript. Most things I tend to make these days are web apps. So no, Good Software doesn’t always require javascript.

1. 10

I actually block javascript to help me filter bad sites. If you are writing a blog and I land there, and it doesn’t work with noscript on, I will check what domains are being blocked. If it is just the one I am accessing I will temp unblock and read on. If it is more than a couple of domains, or if any of them are unclear as to why they need to be loaded, you just lost a reader. It is not about privacy so much as keeping things neat and tidy and simple.

People like me are probably a small enough subset that you don’t need our business.

1. 4

Ah, the No-Script Index!

How many times does one have to click “Set all this page to temporarily trusted” to get a working website? (i.e. you get the content you came for)

Anything above zero, but definitely everything above one is too much.

1. 3

The absolute worst offender is Microsoft. Not only is their average No-Script index around 3, but you also get multiple cross-site scripting attack warnings. Additionally, when it fails to load a site because of JS not working, it quite often redirects you to another page, so “set temp trusted” doesn’t even catch the one that caused the failure. Often you have to disable No-Script altogether before you can log in, and then once you are logged in you can re-enable it and set the domains to trusted for next time.

That is about 3% of my total rant about why Microsoft websites are the worst. I cbf typing up the rest.

2. 3

i do this too, and i have no regrets, only gratitude. i’ve saved myself countless hours once i realized js-only correlates heavily with low quality content.

i’ve also stopped using medium, twitter, instagram, reddit. youtube and gmaps, i still allow for now. facebook has spectacular accessibility, ages ahead of others, and i still use it, after years away.

1. 1

My guess is that a lot of people who use JS for everything, especially their personal blogs and other static projects, are either lazy or very new to web development and programming in general. You can expect such people to be less willing or less able to put the effort into making worthwhile content.

1. 2

that’s exactly how i think it works, and why i’m happy to skip the content on js-only sites.

3. 6

The only thing I am willing to spend extra time on is reasonable accommodation for disabilities.

Why do you care more about disabled people than the privacy conscious? What makes you willing to spend time for accommodations for one group, but not the other? What if privacy consciousness were a mental health issue, would you spend time on accommodations then?

1. 12

Being blind is not a choice: disabling JavaScript is. And using JavaScript doesn’t mean it’s not privacy-friendly.

1. 4

It might be a “choice” if your ability to have a normal life, avoid prison, or not be executed depends on less surveillance. Increasingly, that choice is made for them if they want to use any digital device. It also stands out in many places to not use a digital device.

1. 2

This bears no relation at all to anything that’s being discussed here. This moving of goalposts from “a bit of unnecessary JavaScript on websites” to “you will be executed by a dictatorship” is just weird.

1. 4

You framed privacy as an optional choice people might not need as compared to the need for eyesight. I’d say people need sight more than privacy in most situations. It’s more critical. However, for many people, privacy is also a need that supports them having a normal, comfortable life by avoiding others causing them harm. The harm ranges from social ostracism upon learning specific facts about them to government action against them.

So, I countered that privacy doesn’t seem like a meaningless choice for those people any more than wanting to see does. It is a necessity for their life not being miserable. In rarer cases, it’s necessary for them to even be alive. Defaulting to privacy as a baseline increases the number of people that live with less suffering.

1. 2

You framed privacy as an optional choice

No, I didn’t. Not even close. Not even remotely close. I just said “using JavaScript doesn’t mean it’s not privacy-friendly”. I don’t know what kind of assumptions you’re making here, but they’re just plain wrong.

1. 3

You also said:

“Being blind is not a choice: disabling JavaScript is.”

My impression was that you thought disabling Javascript was a meaningless choice vs accessibility instead of another type of necessity for many folks. I apologize if I misunderstood what you meant by that statement.

My replies don’t apply to you then: just any other readers that believed no JS was a personal preference instead of a necessity for a lot of people.

2. 3

The question isn’t about whether it’s privacy-friendly, though. The question is about whether you can guarantee friendliness when visiting any arbitrary site.

If JS is enabled then you can’t. Even most sites with no intention of harming users are equipped to do exactly that.

1. -1

disabling js on a slow device is not a choice, but required for functioning. you are basically saying fuck you to all the disadvantaged.

and all because you are being lazy.

1. 4

When you can get a quad core raspberry pi for $30 and similar hardware in a $50 phone, I really doubt that there are many people stuck with devices that can’t run most JS sites who couldn’t afford one that can.

What devices do you see people using which can’t run JS?

The bigger question in terms of people being disadvantaged is network speed, where some sites downloading 1MB of scripts makes them inaccessible - but that’s an entirely separate discussion.

1. 1

how is that a separate discussion? it’s just one more scenario when js reduces accessibility.

as for devices, try any device over 5 years old.

2. 2

I literally have the cheapest phone you can buy in Indonesia (~€60) and I have the almost-cheapest laptop you can buy in Indonesia (~€250). So yeah, I’d say I’m “disadvantaged”.

Turns out that many JavaScript sites work just fine. Yeah, Slack and Twitter don’t always – I don’t know how they even manage to give their inputs such input latency – but Lobsters works just fine (which uses JavaScript), my site works just fine as well (which uses JavaScript), and my product works great on low-end devices (which requires JavaScript), etc. etc. etc.

You know I actually tried very hard to make my product work 100% without JavaScript? It was a horrible experience for both JS and non-JS users and a lot more code. Guess I’m just too lazy to make it work correctly 🤷‍♂️

So yeah, please, knock it off with this attitude. This isn’t bloody Reddit.

1. 6

“I literally have the cheapest phone you can buy in Indonesia (~€60) and I have the almost-cheapest laptop you can buy in Indonesia (~€250). So yeah, I’d say I’m “disadvantaged”. Turns out that many JavaScript sites work just fine.”

I’ve met lots of people in America who live dollar to dollar having to keep slow devices for a long time until better hand-me-downs show up on Craigslist or just clearance sales. Many folks in the poor or lower classes do have capable devices because they would rather spend money on that than other things. Don’t let anyone fool you that being poor always equals bad devices.

That said, the ones taking care of their families, doing proper budgeting, not having a car for finding deals, living in rural areas, etc., often get stuck with bad devices and/or connections. I don’t have survey data on how many are in the U.S. I know poor and rural populations are huge, though. It makes sense that some people push for a baseline that includes them, since the non-inclusive alternative often isn’t actually necessary. When it is, there were lighter alternatives that went unused because of apathy. I’ve rarely seen situations where what they couldn’t easily use was actually necessary.

The real argument behind most of the sites is that they didn’t care. The ones that didn’t know often also didn’t care because they didn’t pay enough attention to people, esp low-income, to find out. If they say that, the conversations get more productive because we start out with their actual position. Then, strategies can be formed to address the issue in an environment where most suppliers don’t care. Much like we had to do in a lot of other sectors and situations where suppliers didn’t care about the human cost of their actions. We got a lot of progress by starting with the truth. The web has many dark truths to expose and address.

1. 3

thank you for writing this out. the cheapest new phone in indonesia is probably much faster than your typical “obamaphone” or 3-year-old average device.

1. 1

The Obama phones are actually Android devices that also have pre-installed government malware that can’t be removed. They have Chrome and run JS fine.

1. 2

They have Chrome, and they run JS very slowly.

1. 1

Are you going to cite any devices here? Which JS do they run slowly?

My guess is that the issue is with specific documents. I’d think that the fact that JS is so often used in ways that don’t perform well is a much larger issue than this one. Sites using JS in ways that are slow is a completely different debate to be had, in my opinion. Although giving someone a version of the page without JS seems like a solution, it ignores the entire concept of progressive web apps and the history of the web that got us to them.

E.g., would you prefer the 2008 style of having a separate m.somesite.com that works without JS but tends to be made for small devices, which tends to let corporations be okay with removing necessary functionality to simplify the “mobile experience”? Generally, that’s how we got that solution.

The fact that even JS-enabled documents like https://m.uber.com allow you to view a JS map and get a car to come pick you up with reasonable performance on even the cheapest burner phones shows just how much bad programming plays into your opinion here instead of simply whether or not JS is the problem itself.

It’s also worth noting that I am strongly interested in people doing less JS and the web being JS-less, but this isn’t the hill to die on in that battle if you ask me. Not only because you are generally going to find people who aren’t sympathetic to disadvantaged people (because most programmers tend to not give any fucks, unfortunately), but also because the devices that run JS are generally not going to be slow enough that decent JS won’t run. If we introduce some new standard that replaces HTML, it’ll likely still be read by browsers that still support HTML / JS - which means the issue still remains, because people aren’t going to prioritize a separate markup for their entire site depending on devices, which is the exact reason that most companies stopped doing m.example.com. The exception to this rule seems to be bank & travel companies in my experience.

1. 2

Here is an example device I test with regularly:

This iPad is less than 10 years old, and still works well on most sites with JS disabled. With JS enabled, even many text-based sites slow it down to the point of being unresponsive.

This version of iOS and Safari are gracious enough to include a JavaScript on/off toggle under Advanced, but no fine-grained control. This means that every time I want to toggle JS, I have to exit Safari, open Settings, scroll down to Safari, scroll down to Advanced, toggle JS, and then return to Safari.

Or are you going to tell me that my device is too old to visit your website? I’ll be on my way, then.

1. 2

It’s also worth noting that I am strongly interested in people doing less JS and the web being JS-less, but this isn’t the hill to die on in that battle if you ask me. Not only are you going to generally find people that aren’t sympathetic to disadvantaged people (because most programmers tend to not give any fucks unfortunately)

I think this is changing for the better, slowly, though faster more recently.

but also because the devices that run JS are generally not going to be slow enough that decent JS isn’t going to run. If we introduce some new standard that replaces HTML, it’ll likely still be read by browsers that still support HTML / JS - which means the issue still remains because people aren’t going to prioritize a separate markup for their entire site depending on devices which is the exact reason that most companies stopped doing m.example.com.

I think with some feature checking and progressive enhancement, you can do a lot. For example, my demo offers basic forum functionality in Mosaic, Netscape, Opera 3.x, IE 3.x, and modern browsers with and without JS. If you have JS, you get some extra features like client-side encryption and voting buttons which update in-place instead of loading a new page.

I think it’s totally doable, with a little bit of effort, to live up to the original dream of HTML which works in any browser.

The exception to this rule seems to be bank & travel companies in my experience.

2. 3

Aside from devices without a real browser, JavaScript should run fine on any device people are going to get in 2020 - even through hand-me-downs.

1. 3

I’m going to try to replace my grandmother’s laptop soon. I’ve verified it runs unbearably slowly in general, but especially on the JS-heavy sites she uses. It’s a Toshiba Satellite with a Sempron SI-42, 2GB of RAM, and Windows 7. She got it from a friend as a gift, presumably replacing her previous setup. Eventually, despite many reinstalls to clear malware, the web sites she uses became unbearably slow.

“When you can get a quad core raspberry pi for $30 and similar hardware in a $50 phone,”

She won’t use a geeky setup. She has a usable, Android phone. She leaves it in her office, occasionally checking the messages. In her case, she wants a nice-looking laptop she can set on her antique-looking desk. Big on appearances.

An inexpensive, decent-looking Windows laptop seems like the best idea if I can’t get her on a Chromebook or something. I’ll probably scour eBay eventually like I did for my current one ($240 Thinkpad T420). If that’s $240, there’s gotta be some deals out there in the sub-Core i7 range. :)

1. 3

Sure, but just to clarify - we are talking about people who may need to save money to get the $30 for something like a raspberry pi. Not someone who can drop $240 on a new laptop.

1. 3

Oh yeah. I was just giving you the device example you asked for. She’s in the category of people who would need to save money: she’s on Social Security. These people still usually won’t go with a geeky rig even if their finances justify it. Psychology in action.

I do actually have a Pi 3 I could try to give her. I’d have to get her some kind of nice monitor, keyboard, and mouse for it. I’m predicting, esp with the monitor, the sum of the components might cost the same as or more than a refurbished laptop for web browsing. I mentioned my refurbished Core i7 for $240 on eBay as an example that might imply lower-end laptops with good performance might be much cheaper. I’ll find out soon.

2. 1

But what about a device people got in 2015 or 2010? Or, dare I say, older devices, which still work fine, and may be kept around for any number of reasons like nostalgia or sentimental attachment?

Sure, you can tell all these people to also stuff it, but don’t pretend they don’t exist.

2. 12

Why do you care more about disabled people than the privacy conscious?

Oh, that one is easy: it’s the law.

Being paranoid isn’t a protected class; it might be a mental health issue, but my website has nothing to do with its treatment.

For the regular privacy, you have other extensions and cookie management you can do.

3. 3

You have some good points. One thing I didn’t see addressed is the number of people on dial-up, DSL, satellite, cheap mobile, or other bad connections. The HTML/CSS-type web pages usually load really fast on them. The JavaScript-type sites often don’t. They can act pretty broken, too. Here are some examples someone posted to HN showing the impact of JavaScript loads.

“If there is interactivity, such as an app, obviously it is going to have javascript. “

I’ll add that this isn’t obvious. One of the old models was the client sending something, server-side processing, and the server returning modified HTML. With HTML/CSS and a fast language on the server, the loop can happen so fast that the user can barely perceive a difference vs. a slow, bloated JS setup. It would also work for the vast majority of websites I use and see.

The JS becomes necessary as the UI complexity, interactivity (esp latency requirements), and/or local computations increase past a certain point. Google Maps is an obvious example.

1. 3

It is interesting to see people still using dialup. Professionally, I use typescript and angular. The bundle sizes on that are rather insane without much code. Probably unusable on dialup.

However, for my personal sites I am interested in looking at things like svelte mixed with dynamic loading. It might help to mitigate some of the issues that Angular itself has. But fundamentally, it is certainly hard to serve clients when you have apps like you mention - Google Maps. Perhaps a compromise is to try to be as thrifty as can be justified by the effort, and load most of the stuff up front, cache it as much as possible, and use smaller api requests so most of the usage of the app stays within the fast local interaction.

1. 2

<rant>

Google Maps used to have an accessibility mode which was just static pages with arrow buttons – the way most sites like MapQuest worked 15 years ago. I can only guess why they took it away, but now you just get a rather snarky message.

Not only that, but to add insult to injury, the message is cached, and doesn’t go away even when you reload with JS enabled again. Only when you Shift+reload do you get the actual maps page.

This kind of experience is what no-JS browsers have to put up with every fucking day, and it’s rather frustrating and demoralizing. Not only am I blocked from accessing the service, but I’m told that my way of accessing it is itself invalid.

Sometimes I’m redirected to rather condescending “community” sites that tell me step by step how to re-enable JavaScript in my browser, which by some random, unfortunate circumstance beyond my control must have become disabled.

All I want to say to those web devs at times like that is: Go fuck yourself, you are all lazy fucking hacks, and you should be ashamed that you participated in allowing, through action or inaction, this kind of half-baked tripe to see the light of day.

My way of accessing the Web is just as valid as someone’s with JS enabled, and if you disagree, then I’m going to do everything in my power to never visit your shoddy establishment again.

</rant>

Edit: I just want to clarify, that this rant was precipitated by other discussions I’ve been involved in, my overall Web experience, and finally, parent comment’s mention of Google Maps. This is not aimed specifically at you, @zzing.

2. 9

It shouldn’t be extra effort, is the point. If you’re just writing some paragraphs of text, or maybe a contact form, or some page navigation, etc etc you should just create those directly instead of going through all the extra effort of reinventing your own broken versions.

1. -2

Often the stuff I am making has a lot more than that. I use front end web frameworks to help with it.

Very few websites today have just text or a basic form.

1. 10

Ok, well, that wasn’t at all clear since you were replying to this:

It boggles my mind that there are more and more websites that just contain text and images, but are completely broken, blank or even outright block you if you disable JavaScript.

Many websites I see fit this description. They’re not apps, they don’t have any “behaviour” (at least none that a user can notice), but they still have so much JS that it takes over 256MB of RAM to load them up and with JS turned off they show a blank white page. That’s the topic of this thread, at least by the OP.

1. 0

Very few websites today have just text or a basic form.

Uhh… Personal websites? Blogs? Many of the users here on Lobsters maintain sites like these. No need to state falsehoods to try and prove your point; there are plenty of better arguments you could be making.

As an aside, have you seen Sourcehut? That’s an entire freakin’ suite of web apps which don’t just function without JavaScript but work beautifully. Hell, Lobsters almost makes it into this category as well.

2. 1

Some types of buttons, menus, text, and images can’t be implemented in plain HTML; these kinds still have to be built with JS. For instance, 3-state buttons. There are CSS hacks to make a button appear 3-state, but no way to define behavior for them without JS. People can hack together radio inputs to look like a single multi-state button, but that’s a wild hack that most developers aren’t going to want to tackle.

1. 1

I’m trying to learn more about accessibility, and recently came across a Twitter thread with this to say: “Until the platform improves, you need JS to properly implement keyboard navigation”, with a couple video examples.

1. 2

I think that people who want keyboard navigation will use a browser that supports it out of the box; they won’t rely on each site to implement it.

1. 2

The world needs more browsers like Qutebrowser.

1. 21

Maybe I’m just not over the hump yet, but my experience with Rust hasn’t been super happy. Last month I built both a Rust and a Nim binding to a C API [that I wrote and own.] I’m a newbie at both languages. The Nim binding was a joy to write, simple and clean. The Rust one involved tons of fighting with the borrow checker, and a couple of areas I just had to leave as ugly hacks because doing it right created bizarre errors I couldn’t understand — something about interactions between lifetimes and generics.

I don’t think I’m having the typical newbie problems understanding lifetimes. I’ve been using C since 1981 and C++ since 1990, and I understand stacks and move semantics and where data lives. I appreciate the work Rust has done to make delicate lifetime dependencies checkable by the compiler! But it seems to have made the language extremely complex. I found a blog post about “What newbies get wrong about understanding lifetimes” last week, and it made me basically give up — trying to understand the subtle distinctions between what I thought was going on and what is actually going on just made my brain hurt, in the same way it hurts when I try to understand template metaprogramming or hyperbolic geometry.

Honestly I’m not sure the complexity of the lifetime analysis in Rust is worth it, compared to the minor performance overhead of using some ref-counted objects (as in Nim, or my current C++ code.) But I’m willing to be convinced otherwise.

1. 7

I hear you! I hate to give the unsatisfying answer, but your description of your experience so far sounds an awful lot like mine before I got “over the hump”.

I would posit also that your extensive experience with C and C++ may even be hindering you, in one particular way, and that is this:

I’ve been using C since 1981 and C++ since 1990, and I understand stacks and move semantics and where data lives.

trying to understand the subtle distinctions between what I thought was going on and what is actually going on just made my brain hurt

I suspect the process you’re going through is one of relinquishing control over your understanding of where data lives, its owners, and its lifetimes, and giving it to the compiler. As someone who has been writing C and C++ for a long time, and has therefore been forced to reason about these things entirely in your own head to the point where it’s second nature now, I can imagine the process of unlearning that is uncomfortable and painful.

I wasn’t an experienced user of C or C++ when I came to Rust, though, so I’m interested to hear what you think about my perspective on this. Am I totally off base here?

1. 3

relinquishing control over your understanding of where data lives, its owners, and its lifetimes

Probably part of the problem is I’m wrapping a library that already owns and manages data, with some tricky lifetimes (interior pointers and such.) Part of the appeal of Rust is to make this API safer — in C++ you have to keep track of some of the dependencies by hand and be aware that releasing X will invalidate Y. I know I can teach the borrow checker about this, it’s what it was made to do, but it’s a more difficult task than what most people probably start out with in Rust.

1. 4

In Rust, more so than other languages, I strongly think it’s best to learn the language the standard way, by reading the official Rust book or one of the major alternative introductory books, prior to doing any sort of major project with it. In my experience, people coming from C or C++ struggle the most with Rust because they expect it to be similar, and expect their knowledge to transfer. Rust is more different than they expect, and they get frustrated.

In your case, you chose a project which particularly hits against one of the harder challenges in Rust when you haven’t learned the language well enough yet: writing safe FFI bindings for another library. Doing so in an idiomatic way in Rust requires you to have a sense of what ownership structures are easy to represent in Rust, and which ones are hard, and having that architectural sense is difficult when you’re new to the language.

I’m not blaming you, by the way, this approach to learning a new language is a common one. In my opinion, it works poorly for learning Rust. Rust has a learning curve, and there aren’t shortcuts over it. Pick an easier problem to learn the language better first, or go through the book (without working on your current project) and return when you’ve finished it. That’s my two cents.

1. 2

Yeah, it definitely sounds like you’ve jumped in at the deep end, in the place where Rust’s unique features benefit you least :)

I suppose you’re fighting not only your own brain attempting to override the suggestions the compiler is making, but also the mismatch between Rust and C++’s understanding of that same ownership information.

2. 7

Your post surprised me. I’ve been using C++ for more than a decade and the transition to Rust was largely painless. After reading half the Rust Book and 2 weeks of experimenting, I felt fairly comfortable with the language. Because I was already using C++11 features like unique_ptr and move semantics, the transition to Rust didn’t require me to change anything at all about my programming style¹: The lifetime management in C++ is largely the same, just without strict compiler checks. Because of that, I assumed that coming from modern C++ is a great shortcut to picking up Rust. I did have some trouble understanding the lifetime syntax like 'a and so on, but the language has some excellent documentation that helped me understand it and the Rust channel on Discord is full of super nice and helpful people.

At this point, I’m already about 3-5x more productive with Rust than I am with C++², even though I started looking into Rust less than a year ago. I can churn out software of a size and level that was impossible to churn out before. That is incredibly empowering. Rust made me feel like when I was a kid learning my first programming language. I think it’s a real, life-changing innovation. I haven’t felt this excited about programming since I was a kid.

¹ This programming style is such that your C++ code does not contain a single new, delete, malloc or free. Instead, you use make_unique and make_shared and clean up by letting the unique_ptr and shared_ptr go out of scope. It is pure RAII, exception-safe and prevents leaks and double deletes by default.

² For various reasons:

• The standardized build system with cargo makes cross-platform development and library usage so much simpler.
• In particular, cross-platform support is basically free: I developed on Windows for months and cargo run worked on Linux first try without a single change. This is unheard of in the C++ world for any nontrivial program.
• The Rust standard library is much better than the C/C++ standard library.
• Rust has a proper, UTF-8 encoded string type.
• The error messages that are emitted by rustc are almost always very detailed and accurate. When I was a beginner, I would literally just copy-paste what the compiler proposed, and it would make my code compile and work properly.
• Error handling is simple thanks to the powerful Result enum (sum type). Additionally, everyone uses Result, so there is a standardized way to return errors across the entire ecosystem, something that C++ lacks.
• The compiler prevented countless errors (and thus debugging) on my part.
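
As a rough sketch of the `Result` point above (the function name here is made up for the example), the `?` operator propagates any failure to the caller without explicit error plumbing:

```rust
use std::num::ParseIntError;

// Parse two integers and add them. Any parse failure is returned
// to the caller automatically via the `?` operator.
fn add_strings(a: &str, b: &str) -> Result<i64, ParseIntError> {
    let x: i64 = a.parse()?;
    let y: i64 = b.parse()?;
    Ok(x + y)
}

fn main() {
    assert_eq!(add_strings("2", "40"), Ok(42));
    assert!(add_strings("2", "forty").is_err());
}
```

Because essentially the whole ecosystem returns `Result`, the same `?` propagation works across crate boundaries.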
1. 3

Yup, everything you’re saying is what I want to get out of Rust — the goal is to port some complex, finicky, concurrent networking code that uses this API I’m wrapping, because it’s too complicated, messy and hard to maintain in C++. And I’ve really enjoyed parts of Rust like Cargo and the wonderfully informative compiler error messages, which feel like a TA helping teach me. (Until the point where I get the same error over and over no matter how I tweak the code…)

I think I need to find a forum where I can post some snippets that don’t work and ask for help. I’m not really sure where to go, though, as Rust is so big there isn’t a single obvious place (like the Nim forum, for Nim.) Do I just go directly to SO?

(And yeah, my C++ style leans heavily on unique_ptr, moves, and on homemade RefCounted and Ref<> classes. I’ve been kind of writing Rust-like code for years, especially after I read about Rust in 2014(?) and began envying it.)

1. 4

Two good and official places to ask for help are the users forum and the community Discord.

1. 1

Me, I once fell into an even more frustrating case, of a “bistable error message/hint”: the compiler showed me an error message with a helpful suggestion on how to change my code. I changed it as suggested, just to be greeted with a new error message with a helpful suggestion to change it… back to exactly the same shape of code I had at first :(

Another thing that made me put it down for the time being, until I regenerate enough mental stamina to try and attack the hump once again, was a feeling that Rust requires me to sacrifice the ideal of being able to write simple APIs (i.e. hiding complexity from their users/callers). In other words, I found there sometimes seems to be just one way I can structure an API to make it safe and correct, and it makes my API super complex, and requires the caller to know some nontrivial Rust incantations to actually be able to use it.

Really sorry I can’t provide you with a concrete example of the code in question now. I have it buried somewhere in the inaccessible past comments on lobste.rs :( I discussed them with some Rust user here on lobste.rs and they seemed to agree my cases were some rough spots and they didn’t have an idea how to help me resolve them. Although I do also sometimes think that maybe if I learn and internalize enough Rust some day, I might find a way to structure my API somehow completely differently to render the whole issue void?

2. 11

You can use ref-counted objects in Rust too, and in fact I recommend that people having trouble with the borrow checker use them. Reference counting is well supported in Rust, although it currently looks ugly. I believe making reference counting look more beautiful and natural is an important next step for Rust, since it is often the right choice.

1. 1

More ergonomic RC for Rust sounds interesting. Is there any work being done towards that goal?

2. 3

I love Rust but I had my struggles with it.

1. Usually, at some point it clicked and I had a deeper understanding of what I did before, and I made it work. But not always. Sometimes, I really hated the way out. So I can relate to the frustration.

2. “What newbies get wrong about understanding lifetimes” was a great resource to me, but I found it confusingly written at times. I used it more as an invitation for what to think about, then figured things out on my own. So don’t think that you are the only one.

3. Using Arc, Rc, clone is totally fine in most cases. Don’t try too hard to avoid them.

1. 9

I can and will retrain my hand placement habits. After all, this touchbar-keyboard-trackpad combo is forcing many people to learn to place their hands in unnatural positions to accommodate these poorly designed peripherals.

It is amazing to me what people put up with to use these devices. I generally find the issue with accidentally touching the trackpad so severe that I only use laptops with trackpoint and the first thing I do on my device is to disable the trackpad completely.

1. 15

Part of why I ended up becoming a programmer is frustration with a touchpad. It led me to keyboard-only UIs, which led me to Arch/XMonad, which led me to Haskell, which confused me but led me to Python, which… <10 years later> I have a career as a software engineer :)

1. 2

Part of why I got into programming more passionately was excitement when the Apple trackpad came out ten years ago. It led me to think about possibilities beyond keyboard-centric UIs. It led me to make zany things. While I’ve never succeeded professionally as a full-blown software engineer, it made me appreciate how hard developing great experiences for humans is.

2. 5

At work, we actually had to modify a piece of software to deliberately ignore most of the input from recent mac touchpads. The application is multi-touch capable, which on some of the hardware it runs on is really useful. However, on mac, the combination of the oversized touchpad and the fact that it doesn’t map to the screen (it’s mapped to a smaller area which follows the cursor around, so nobody really knows what they’re touching) meant that macbook users were constantly touching things with their tentacles which they didn’t mean to touch.

1. 1

macbook users were constantly touching things with their tentacles which they didn’t mean to touch.

So, uh, what exactly do you do for work?

2. 1

Honestly, I liked trackpoints for several years, but after getting a Thinkpad with both trackpoint and trackpad, I have firmly settled on preferring trackpads for this reason: I can accurately point at things faster than with the trackpoint. I do use the trackpoint on rare occasion, but only when I need fine control with something, like moving in a very small screen area, or scrolling only a tiny bit.

I acknowledge that many other people around the Internet have a problem with accidental palm touches, but, for some reason, that’s never been a problem for me. Then again, I haven’t used Windows in the last several years (only Linux and OSX), so maybe that’s the reason?

1. 2

The Thinkpad has those two big buttons at the top of the trackpad. They require force, so you won’t accidentally press them, and they’re placed about where my thumb wants to rest when I use the keyboard.