1. 9

This is a bold statement, but I do quite a bit of ssh -X work, even thousands of miles from the server. I very much wish ssh -X could forward sound somehow, but I certainly couldn’t live without X’s network transparency.

1. 6

Curious, what do you use it for? Every time I tried it, the experience was painfully slow.

1. 7

I find it okay for running things that aren’t fully interactive applications. For example I mainly run the terminal version of R on a remote server, but it’s nice that X’s network transparency means I can still do plot() and have a plot pop up.

1. 5

Have you tried SSH compression? I normally use ssh -YC.

1. 4

Compression can’t do anything about latency, and latency impacts X11 a lot since it’s an extremely chatty protocol.

1. 4

There are some attempts to stick a caching proxy in the path to reduce the chattiness, since X11 is often chatty in pretty naive ways that ought to be fixable with a sufficiently protocol-aware caching server. I’ve heard good things about NX, but last time I tried to use it, the installation was messy.

1. 1

There’s a difference between latency (what you talk about) and speed (what I replied to). X11 mainly transfers an obscene amount of bitmaps.

1. 1

Both latency and bandwidth impact perceived speed.

2. 6

Seconded. Decades later, it’s still the best “remote desktop” experience out there.

1. 3

I regularly use it when I am on a Mac and want to use some Linux-only software (primarily scientific software). Since the machines that I run it on are a few floors up or down, it works magnificently well. Of course, I could run a Linux desktop in a VM, but it is nicer having the applications directly on the Mac desktop.

Unfortunately, Apple does not seem to care at all about XQuartz anymore (can’t sell it to the animoji crowd), and XQuartz on HiDPI is just a PITA. Moreover, there is a bug in Sierra/High Sierra where the location menu (you can’t make this up) steals the focus of XQuartz all the time.

So regretfully, X11 is out for me soon.

1. 3

Seconded. I have a fibre connection at home. I’ve found X11 forwarding works great for a lot of simple GTK applications (EasyTag), file managers, etc.

Running my IntelliJ IDE or Firefox over X11/openvpn was pretty painfully slow, and IntelliJ became buggy, but that might have just been OpenVPN. Locally within the same building, X11 forwarding worked fine.

I’ve given Wayland/Weston a shot on my home theater PC with the xwayland module for backward compatibility. It works… all right. Almost all my games work (Humble/Steam), thankfully, but I have very few native Wayland applications. Kodi is still glitchy, and I know Weston is meant to be just a reference implementation, but it’s still kinda garbage. There also don’t appear to be any Wayland display managers on Void Linux, so if I want to display a login screen, it has to start X, then switch to Wayland.

I’ve seen the Wayland/X talk and I agree, X has a lot of garbage in it and we should move forward. At the same time, it’s still not ready for prime time. You can’t say, “Well you can implement RDP” or some other type of remote composition and then hand wave it away.

I’ll probably give Wayland/Sway a try when I get my new laptop to see if it works better on Gentoo.

1. 2

No hand waving necessary, Weston does implement RDP :)

1. 5

Work:

I wrote and submitted some patches to the Rust Tensorflow bindings 1-2 weeks ago to make tensors, graphs, and ops Send + Sync. In the latter two cases, this was trivial, but for tensors it required a bit of work, since tensors of types where C and Rust do not have the same representation are lazily unpacked. I didn’t want to replace the Cell used for interior mutability with an RwLock, because that pollutes the API with lock guards. So, I opted for separating the representations for types where the C/Rust types do and don’t match, so that tensors are at least Send + Sync for types where the representations are the same.

Since the patches were accepted, I am now implementing simple servers for two (Tensorflow-using) natural language processing tools (after some colleagues requested that I make them available in that way ;)).

Besides that, since it’s exam week I am writing an exam for Wednesday and there’s a lot of correction work to do after that.

Semi-work/semi-home:

I have been packaging an application (a treebank search tool) as a Flatpak. Thus far, I had been rolling Ubuntu and Arch packages, and building the Flatpak was far less work than I expected - far less than the Ubuntu packages, in particular. Also, the application seems to work well with portals, since most file opening/saving goes through QFileDialog. I guess I am also benefitting from having rewritten some sandbox-unfriendly code when sandboxing the macOS build.

1. 2

I thought some of you might never have heard of this concept. Here’s the PCI version I’m really focusing on; I thought it better to submit where it started and then add that. The problem was that the UNIX workstations, thought to be better in many ways, couldn’t run the PC software people were getting locked into. Instead of regular virtualization, Sun had the clever idea to straight-up put PC hardware in their UNIX boxes to run MS-DOS and, I think, Windows apps. Early PS3s did something similar, keeping a PS2 Emotion Engine in them for emulation. I can’t recall others off the top of my head, though.

The reason I’m posting this is that we’re currently trying to escape x86 hardware in favor of RISC-V and other stuff. My previous solution was two boxes in one chassis with a KVM switch. That might be too much for some users. I figured this submission might give people ideas about using modern card computers… which have usable specs… with FOSS workstations to run the legacy apps off the cards. The FOSS SoC, especially its IOMMU, might isolate them in shared RAM from the rest of the system. It would also run apps from the shared hard drive in a mediated way. There should also be software that can seamlessly move things into or out of the system, but with the trusted part running on the FOSS SoC. It could even be a hardware component managed by trusted software, if one wants a speed boost and a possible reduction in attack surface.

1. 3

I think you might be misreading - the 386 in the Sun386i is the only CPU - there’s no SPARC, it runs an x86 Solaris with DOS VDMs provided by V86 mode on the 386.

PC-on-cards were somewhat popular with Macs before software emulation in the late 90s got good enough.

1. 1

To be extra clear, this is what my comment is about. It’s a PCI card that runs x86 software alongside Solaris/SPARC. I found the other one while searching for it. If they’re unrelated, then my bad, thanks for the tip, and let’s explore the PCI card concept for x86 and RISC-V.

2. 1

There should also be software that can seamlessly move things into or out of the system but with trusted part running on FOSS SoC.

I know it is proprietary, but wouldn’t the Apple T1 in the MacBook Pro Retina and T2 in the iMac Pro be good examples as well?

https://en.wikipedia.org/wiki/Apple-designed_processors#Apple_T1

tl;dr: T1/T2 is a separate ARM SoC that acts as a secure enclave and is a gatekeeper for the mic and Facetime camera.

1. 5

Home: finish up a small treebank viewer in Rust and gtk-rs. It is primarily for my own use, to quickly browse through CoNLL-X dependency treebanks and dump trees in the Graphviz dot and TikZ-dependency formats for teaching, papers, etc. Since this was my first project with gtk-rs, I was surprised at how complete the gtk-rs and related bindings are. What I mildly dislike is that I had to litter Rc<RefCell>s everywhere, which is not common in my daily Rust code. Also, communicating between a worker thread and the GTK main thread is kinda ugly [1]. And since Rust does not support inheritance, defining your own widgets is a bit cumbersome (typically, you wrap a Gtk+ widget and implement the Deref trait to provide the inner widget).

Anyway, given all the constraints, I think the gtk-rs people have really done a nice job!

Work: teaching (this week’s program: remainder of parse choices for dependency parsing and the implementation of CKY). Hopefully, I will have some time to work on a sequence to sequence learning project that I have started.

1. 1

:D I filter the ‘javascript’ tag, but due to lucky circumstance I was not logged in. Thanks for the good laugh.

1. 3

Stability is often undervalued, especially when it comes to Desktop computers.

I think stability is overvalued. I use FreeBSD -CURRENT, LineageOS snapshots, Firefox Nightly, LibreOffice beta, Weston master… and nothing is “buggy as hell”. Seems like developers just don’t write that many bugs these days :)

1. 4

In that case you must have been lucky. I remember using Arch some years ago, and then it “suddenly” gave up on me. Part of the reason was of course that there were contradictions between configuration files; some of them were my own fault, but others were due to updates. And I really liked updating Arch every day, inspecting what was new, what was better. And it’s good for a while, in my experience, but if you don’t know what you’re doing, are too lazy to be minimalist, or just don’t have the time and energy to properly make sure everything is OK, it breaks. And it only gets worse the more edge cases you have, the more non-standard setups you need, and the more esoteric (or just unsupported/unpopular) hardware you have.

I have similar experiences with Fedora and Debian unstable/testing, albeit to a lesser degree. Debian stable, with a few packages from testing, was a real “shock” in comparison to that, and while it was “boring” it was, at least for me, less stressful and removed a certain tension. I would certainly have learned less about fixing broken X11 setups or configuring init systems if I had chosen it from the beginning, but eventually it is nice to be able to relax.

1. 3

I agree. My Linux desktop history from 1994 onwards was roughly Slackware -> Debian -> Ubuntu -> CentOS -> Debian -> Fedora -> Arch. I didn’t find more cutting-edge distributions such as Fedora or a rolling distribution such as Arch to be less stable than the conservative (Slackware) and stable distributions (Ubuntu LTS, CentOS, Debian).

Moreover, I found that Fedora and Arch receive bug fixes far more quickly. When you report bugs upstream, the fixes typically land in Fedora/Arch within a few days or weeks, while in some conservative distributions it can take months or years. Besides that, the hardware support is generally better. E.g., the amdgpu driver works much better on my AMD FirePro than the older radeon driver, but it might literally take years for amdgpu support for Southern/Sea Islands cards to land in stable distributions.

1. 3

On Photos:

But in terms of acting like a good Mac app, it does not.

Not just Photos. Apple also replaced iWork on the Mac with a codebase that is shared with the iOS version. Years later, it is still missing many features and is a poor imitation of its former self. I still regularly look for a feature I was sure was there, but that hasn’t survived the iOSification of iWork. For example, Pages was a great poor man’s DTP program. Unfortunately, the ‘upgrade’ removed linked text boxes, and it took them four years to bring back this functionality.

If Marzipan is really realized, I think macOS apps will definitely become an afterthought; the iOS market is many times larger, and it’s economically simply too attractive to dump an iOS copy on macOS. Electron apps have shown that many companies go for the lowest common denominator for economic benefit.

1. 11

… seriously? They told us Wayland would learn from the past and avoid the shitty hacks that made X11 “unmaintainable.” Yet it’s 2017, and GNOME under Wayland performs worse than under X, has more stuttering and tearing, sometimes randomly crashes when I connect an external monitor, and we are celebrating a new hack. This is insane.

1. 4

Interesting, what video card and driver? I dread every time I have to use X.org [1], because Wayland is so much smoother and has visibly less stuttering. I agree that there are/were too many random crashes; the initial versions of mutter/gnome-shell 3.26 would often fail assertions, etc., on monitor-related events. They patched many of the issues, and it now works fine for me with the latest mutter in Arch (3.26.2+31+gbf91e2b4c-1).

Looking forward to the day I can switch to Sway, though ;). Currently it scales up XWayland apps on HiDPI, which makes them very blurry (GNOME doesn’t).

[1] E.g. the Parallels VM doesn’t emulate a GPU with KMS support and only has an X.org driver.

1. 3

Oh yeah, blurry Xwayland apps is also an issue in Weston. I’ve discussed this with Weston devs, and it’s a hard problem. Dealing with the X11 clients on HiDPI is a massive pain. Especially in a multi-monitor world.

Thankfully, more and more apps can run natively, including complex ones like Inkscape, LibreOffice and Darktable.

Firefox though… Wayland support is being developed here and it’s finally getting upstreamed. It’s almost usable… almost. GL does not work yet (only software rendering) and on HiDPI it’s pretty screwed up (screen does not refresh correctly when you type/click).

2. 4

From what I’ve heard, GNOME’s mutter is uhhh not a very good compositor. But even mutter should NOT have any tearing or stuttering. Something is going very wrong on your machine.

I use Weston git master on FreeBSD 12-CURRENT (so much supposed “unstable” stuff, huh). It does not randomly crash and it’s incredibly fast and smooth. Heck, GTK3 and Qt5 applications have perfectly smooth resizing (which is something I’ve only seen with Cocoa apps on macOS before).

Less off-topic: this is NOT a “shitty hack”. This is a rather elegant fix for a non-trivial mistake in the reference protocol implementation library, libwayland. Asynchronous protocols are hard (but worth it).

1. 6

It’s nice to see that this vulnerability is fully mitigated in HardenedBSD with:

1. PaX ASLR
2. PaX NOEXEC
3. PIE
4. RELRO + BIND_NOW
1. 4

Asking as someone who does not actively follow FreeBSD anymore: why doesn’t FreeBSD have ASLR, or use these changes from HardenedBSD?

1. 2

That’s a tough question, but I think it boils down to different priorities among FreeBSD developers and clashing personalities. I’m sure @lattera can speak about that.

1. 2

You’ll need to ask FreeBSD that question. I cannot and do not speak on their behalf.

1. 7

Some points:

• I agree with the sentiment we can do better in shells than Unix - PowerShell is very interesting, not just because of the scripting language being typed, but the way you interact with the system as well.

• I think putting anything more than a command-line conversation in a glorified vt100 is counterproductive at best and reactionary at worst - we can do so much better with actual GUIs, with the advantages that they bring. (Even a command line shell might be better done in a real GUI - see Mathematica notebooks and Jupyter for rich REPLs, and Acme, pad, Oberon, and MPW for editor-shell hybrids.)

I agree with this article’s sentiment, even if eshell isn’t my way. It’s such a shame a slavish adherence to “the Unix philosophy” has held back UI research and experimentation with developer tools and command lines. Emacs harkens back to the Lisp machine, even if it’s just a shadow of what a real one could do.

1. 6

we can do so much better with actual GUIs, with the advantages that they bring

Maybe. A GUI is discoverable in a way a CLI isn’t and convenient when you do something infrequently, but my experience is they limit rather than extend what I do as my expertise grows. For example, the Handbrake UI is an improvement over the CLI for ripping my kid’s DVDs, both because I don’t want to spend the time fiddling with ffmpeg, etc. on a one-off, and because the feedback (adding to queue, progress, etc.) is better. On the other hand, ffmpeg at the command line is perfect for batch-adjusting the audio gain on videos or copying streams to a different container format. Two mildly interesting examples where a simple UI improves my productivity are bpython and, in Emacs, magit. There’s no question that I make fewer mistakes and have an easier time when I use those, even though I already know [more than I’d like of] the underlying tools.

1. 5

By this, I don’t mean GUIs vs. command lines; they both have a place and can complement and improve each other, as mentioned. I mean things like curses-based UIs, or the old Turbo Vision “TUIs.” These poorly emulate a GUI inside a vt100-like thing, and our attachment to them holds us back.

2. 2

I think putting anything more than a command-line conversation in a glorified vt100 is counterproductive at best and reactionary at worst - we can do so much better with actual GUIs, with the advantages that they bring.

I agree in principle that such a thing could and maybe even should exist, but for a lot of the things I do, TUIs still end up the least-bad thing currently available to me. In the cases I use them, I like three things about them: 1) ability to run remotely, 2) sessions are stored on the remote side and persistent, so a flaky local connection doesn’t kill things, and 3) there’s very low UI lag.

For years, the main “real GUI” satisfying #1 was X, but it’s not very good at #2 or #3. I’ve heard NX fixes that, but it appeared after I’d stopped using X, so I haven’t taken a look yet (maybe I should?). The modern-era alternative is webapps, but they often aren’t great with #2 and rarely satisfy #3; despite a pile of newer technology, clicking around in a webapp in Chrome or Firefox ends up usually feeling far more sluggish than an ncurses app in iTerm does. This all seems like it should be fixable, but as it is now it feels very much like a set of tradeoffs rather than a clearly superior modern solution that makes TUIs obsolete.

1. 2

Even a command line shell might be better done in a real GUI - see Mathematica notebooks and Jupyter for rich REPLs, and Acme, pad, Oberon, and MPW for editor-shell hybrids.

And Emacs! org-mode is pretty much a Jupyter-like notebook, except that it doesn’t result in a giant JSON blob. Like Jupyter, you can execute code fragments, plot graphs, display the output inline, etc. Similarly to Jupyter, I have used org-mode to train small neural networks, visualize the decision boundaries, etc.

I think for the larger population, elisp is one of the things that holds people back from Emacs. But it seems more editors are moving in this direction; e.g., Sublime Text can display HTML fragments inline, which is used by one of the most widely used LaTeX plugins to render inline equations, but also by a plugin that can connect to Jupyter kernels and visualize the results [1].

1. 2

Well, some people (i.e. me) have actually gotten used to the butterfly mechanism and prefer it over the scissor mechanism, including the scissor mechanism used in the Magic Keyboard (it feels close, but not good enough). I now highly prefer Apple’s butterfly keyboards over their scissor keyboards. I bet it will be the same for the Touch Bar for some people (I have the MBP2016 without Touch Bar).

And with that, Apple has put itself in a difficult situation. Retract and people who like the new changes will be upset, don’t retract and there will be a vocal group who will swear by the old MacBooks. However, retracting has another downside to Apple - they have to admit that they were wrong. So, I think that they will just stick to the plan.

Apropos the adapters: Apple’s USB-C adapters are really a scam, not only because of the lack of ports (which Marco indicated), but also because there are multiple versions of the VGA/HDMI adapters, and on some of them the USB-A port only does USB 2.0 (!).

The only thing that I absolutely miss in the new MBPs is MagSafe (which Marco calls ‘non-essential’), MagSafe has already saved many of my MacBooks. And I think that anyone with a kid can empathize ;).

1. -1

People use cat in the weirdest ways…

1. 8

I’m aware of useless uses of cat, but in this case I wanted to use it to ensure that wc -c wasn’t relying on the filesystem’s count of the number of bytes in the file - sending it through a pipe ensures that.

1. 3
wc -c < foo


Also, POSIX specifies that wc shall read the file.

1. 5

If you check out the GNU coreutils wc source, if it’s only counting bytes, it will try to lseek in the fd to find the length. wc -c < foo is not the same as cat foo | wc -c in this case, because the seek will succeed in the first case and not in the second.
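The difference is easy to demonstrate (a toy Python sketch; the file contents are made up): an lseek() to the end of a regular file yields the byte count without a single read(), while the same call on a pipe fails outright.

```python
import os
import tempfile

# Create a small file with a known size (12 bytes).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello world\n")
    path = f.name

# On a regular file descriptor, seeking to the end reveals the size
# without reading any data -- the shortcut available for `wc -c < foo`.
fd = os.open(path, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)
os.close(fd)
print(size)  # 12

# On a pipe (as in `cat foo | wc -c`), lseek fails with ESPIPE,
# so the reader has no choice but to count the bytes it reads.
r, w = os.pipe()
try:
    os.lseek(r, 0, os.SEEK_END)
except OSError as e:
    print("pipe is not seekable, errno", e.errno)
os.close(r)
os.close(w)
os.remove(path)
```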

1. 8

I still prefer cat |. I actually prefer cat | in almost every case, because the syntactic flow matches the semantic flow precisely. With an infile, or even having the first command open the file, there’s this weird hiccup where the first syntactic element of the pipeline isn’t the initial source of the data, but the first transformation thereof.

The main argument against it seems to be “but you’re wasting a process”, which, uh, with all due respect, I can’t see ever being a problem on a system you’d ever run a full multiprocessing Unix system on. If your system were constrained enough that that was an issue, a multiprocessing Unix would be too much overhead in and of itself, extra cats notwithstanding.

1. 2

< foo

This does not guarantee that bytes are actually being read(); redirecting a file to stdin like that lets the process call fstat() on it if it wants. A naughty implementation of wc -c could call fstat(), check st_mode to verify that stdin is a regular file rather than a pipe or something, and then return the filesystem’s reported size from the st_size field without actually reading any bytes from stdin. Having some other process like cat or dd or something read the bytes onto a pipe does prevent wc -c from being able to see the original file & hence prevents it from being able to cheat and return st_size.
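Such a cheating implementation is only a few lines (a hypothetical sketch in Python; the function name is made up):

```python
import os
import stat

def naughty_byte_count(fd):
    """A hypothetical 'cheating' wc -c: if the input is a regular file,
    report st_size from fstat() without reading a single byte."""
    st = os.fstat(fd)
    if stat.S_ISREG(st.st_mode):
        return st.st_size  # the shortcut: no read() at all
    # Pipes, sockets, ttys: st_size is meaningless, so actually count.
    total = 0
    while True:
        chunk = os.read(fd, 65536)
        if not chunk:
            break
        total += len(chunk)
    return total
```

Invoked as `naughty < foo`, the fstat() branch fires; behind `cat foo |`, fstat() reports a FIFO and the shortcut is impossible, which is exactly what piping buys you here.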

Also, POSIX specifies that wc shall read the file.

I guess this does. :)

1. 0

… and this is, indeed, how I would have done it.

2. 1

Interesting. Thank you for the great response.

1. 1

Excellent. I work in academia and have a 39.5-hour contract. I do work more, but that’s because I often have fun/interesting problems that I like to work on. But there would be no complaints if I worked from 9 to 5. There is an occasional trip to a conference (typically once or twice a year). Also, in Europe one typically has plenty of holidays; I think I have 6 weeks per year, which I sometimes use.

1. 2

However, most researchers in the know will tell you that Deep Learning is highly problematic because it requires a huge amount of data to train a good system. I have believed that because the way these systems are trained is so different from how the brain learns, they simply cannot be evidence of a scientifically correct model.

Why are they not scientifically correct? They may not be scientifically correct models of the brain, but they can be scientifically correct models of some phenomenon that you try to model. Also, what is an optimal way for tackling problems may differ between wetware and computers. From my perspective there are two types of computational models:

• Computational models that attempt to do prediction as well as possible, without aiming to simulate the human brain. You try to get the best possible model and then try to interpret the model (what does it learn?). Such models are not arbitrary, but well-founded. For instance, one of the earliest motivations for RNNs was to capture longer-distance dependencies in natural language.

• Computational models that attempt to simulate the human brain.

Despite their name, for me most deep learning models are squarely in the first category.

Also, as kghose pointed out, for many tasks the amount of data is not large. E.g. in many NLP tasks, supervised training sets are typically tens of thousands of sentences and models are often competitive with non-experts (parsing) and experts (part-of-speech tagging). The problem is more that the current models are not very robust at domain and genre shifts. In NLP, you don’t need to construct adversaries, the average Twitter feed or SMS corpus is adversarial enough ;).

1. 5

Why are you installing to /usr/local? Packages are supposed to go to /usr directly.

1. 1

It’s the filesystem location specified in the GNU Coding Standards:

Executable programs are installed in one of the following directories.

bindir: The directory for installing executable programs that users can run. This should normally be /usr/local/bin, but write it as $(exec_prefix)/bin.

1. 5

Packages should never be installed to /usr/local

https://wiki.archlinux.org/index.php/arch_packaging_standards

Arch users expect packages to install in /usr, so it makes more sense to follow the Arch packaging standards here.

1. 2

Fair enough, I can make that adjustment. Thanks for sharing that link.

2. 3

GNU expects downstream packagers (“installers”) to change the install location, which is why the prefix variable exists. /usr/local/ is an appropriate default for “from-source” installs, to avoid conflicts with packages.

1. 13

and definitely don’t need deep learning – to find them

word2vec does not use deep learning, just two matrices (word/context matrices), matrix multiplication, and softmax. Since there is a large number of classes (the vocabulary), hierarchical softmax or softmax with negative sampling is applied. There are no non-linearities, let alone multiple non-linearities, hence word2vec is not deep learning.
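The whole forward pass fits in a few lines (a toy Python sketch with made-up vocabulary and embedding sizes): a row lookup, dot products against the context matrix, and a softmax, with no non-linearity anywhere.

```python
import math
import random

V, D = 5, 3  # toy vocabulary size and embedding dimension
random.seed(0)
W = [[random.uniform(-1, 1) for _ in range(D)] for _ in range(V)]  # word matrix
C = [[random.uniform(-1, 1) for _ in range(D)] for _ in range(V)]  # context matrix

def context_distribution(word_id):
    """P(context | word): dot products followed by a softmax, nothing else."""
    scores = [sum(W[word_id][d] * C[c][d] for d in range(D)) for c in range(V)]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

probs = context_distribution(0)
```

The trained rows of W (and/or C) are the embeddings; hierarchical softmax and negative sampling only replace the expensive softmax over the full vocabulary.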

It’s a hell of a lot more intuitive & easier to count skipgrams, divide by the word counts to get how ‘associated’ two words are and SVD the result than it is to understand what even a simple neural network is doing.

I don’t know where to start:

• First of all, this is nothing new. SVD on word-word co-occurrence matrices was proposed by Schütze in 1992 ;). There have been many works since then exploring various co-occurrence measures (including PMI) in combination with SVD.
• What word2vec is doing is pretty simple to understand: the skip-gram model is a simple linear classifier that predicts the context given a word. In this classifier every word and every context word is represented as a weight vector. The word embeddings are just the trained weight vectors (and/or context vectors) of every word.
• People use word2vec over PMI+SVD because word2vec vectors tend to be better at analogy tasks (see e.g. Levy & Goldberg, 2014).
• Levy & Goldberg, 2014 have shown that word2vec (skip-gram) performs a matrix factorization of a shifted PMI matrix.
• There are newer co-occurrence based methods, such as GloVe that are more well-founded than PMI-SVD. Moreover, GloVe’s training times are typically shorter than word2vec’s.
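For comparison, the counting recipe quoted above is also only a few lines up to the SVD step (a toy Python sketch; the corpus and window size are made up, and the factorization itself is left out):

```python
import math
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()
window = 2  # symmetric context window

word_counts = Counter(corpus)
pair_counts = Counter()
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            pair_counts[(w, corpus[j])] += 1

total_pairs = sum(pair_counts.values())
n = len(corpus)

def pmi(w, c):
    """log p(w, c) / (p(w) p(c)); -inf for pairs that never co-occur."""
    if pair_counts[(w, c)] == 0:
        return float("-inf")
    p_wc = pair_counts[(w, c)] / total_pairs
    return math.log(p_wc / ((word_counts[w] / n) * (word_counts[c] / n)))

print(pmi("cat", "sat") > 0)  # True: the pair co-occurs more than chance
```

Applying a truncated SVD to the resulting PMI matrix would then give the dense vectors; with Levy & Goldberg’s result in mind, that factorization is roughly what SGNS is implicitly doing.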

The approach outlined here isn’t exactly equivalent, but it performs about the same as word2vec skip-gram with negative sampling (SGNS).

word2vec is O(n), where n is the corpus length. SVD is O(mn^2) for an m x n matrix. So, ‘it performs about the same’ is only true for particular corpus sizes/vocabularies.

So if you’re using word vectors and aren’t gunning for state of the art or a paper publication then stop using word2vec.

In the end, the author does not really give much rationale for this. I would argue that PMI-SVD is not much simpler than word2vec, but even if it were, there are good off-the-shelf implementations of word2vec (Mikolov’s, gensim, etc.) that one can use. We don’t switch to Minix in production because it’s simpler to understand than Linux or OpenBSD ;). Also, the training time of word2vec is not really a problem in practice - usually a couple of hours (depending on the corpus size), and you typically only have to do the training once.

1. 3

First, nobody forces anyone to buy a MacBook (I don’t want to rant). I can recommend Thinkpad keyboards, especially the one on my 5th-gen X1 Carbon, except that you then have to deal with HiDPI on Linux, which is no fun. Second, the website is not made for fast scrolling; it looks totally broken if I scroll to the bottom of the page, possibly because of some fancy lazy loading.

1. 7

except that you then have to deal with HiDPI on Linux which is no fun

But it is currently improving very quickly. I am running the latest stable GNOME on Wayland on Arch with 2x scaling (fractional scaling is still experimental) and most stuff seems to work now. I am using a MacBook Pro most of the day, but Linux has really gone from years behind to close enough to be usable in just a year.

(I hear that X.org is a different story altogether, no different scaling for different displays.)

Second, the website is not made for fast scrolling, it looks totally broken if I scroll to the bottom of the page, possibly because of some fancy lazy loading.

And the moving wrinkles are as annoying as the <blink> tag.

1. 3

HiDPI worked ok on Xorg at least two years ago, at least until you plugged in a low-DPI screen, because it could not run them in different modes.

And I quite dislike the 2x scaling. At 1x everything is too small, at 2x everything is way too large. I ended up setting GNOME to 1x and Firefox to 1.6x scaling, which worked OK.

1. 3

latest stable GNOME on Wayland on Arch

I’m experiencing quite the opposite, in fact I switched to Cinnamon until this scaling issue is fixed.

Everything is fine unless you use two displays with different scaling (even when you use only one of them). Say, for example, you have an external monitor with normal DPI and a HiDPI laptop display; then the window borders/icons and probably something else are scaled 2x on the external display, even when the laptop lid is closed, ignoring the scaling factor that is set.

I hear that X.org is a different story altogether, no different scaling for different displays

Yes, this feature will only be available for Wayland.

1. 2

Did Cinnamon fix that problem for you?
I use Cinnamon and still have that issue, but it’s entirely possible I am just missing something.

1. 3

Not entirely, i.e. it can’t scale each display differently but the window borders respect the scaling factor in contrast to Gnome where they follow the scaling of the highest DPI screen (even when it is turned off).

2. 2

idk about different scaling for different displays, but I have one single 1.5x scale (4K) display, and just adding Xft.dpi: 144 to ~/.Xresources made everything look pretty much perfect in Xorg.

3. 2

Just got the 5th-gen X1C (WQHD) and am very happy with it. I disabled scaling though and use i3. Some stuff still seems messed up (VLC is HUGE; I don’t know if it’s still scaling, or the scaling factor reset on reboot, or something).

I just increase the font size on firefox and in the terminal and feel fine without scaling.

1. 5

Go doesn’t have exceptions, so the common idiom is for the function to return multiple values, the last one being an error. And, of course, the caller should check that error, and react appropriately.

I don’t use Go much - can someone explain to me why this is preferred over a Rust-style Result enum? It seems like returning a separate error value makes it more likely that it will be ignored - is there a feature in the language to prevent programmers from ignoring the error?

1. 7

If a Go function returns a value and an error (e.g., (T, error)), then in order to get your T, you need to explicitly ignore the error with _, as other posters have described. However, if a Go function returns only an error, then you can call that function without acknowledging the error at all. Note that the comparison with unwrap in Rust isn’t quite the same, since an unwrap will panic if there is an error, whereas using _ to ignore an error in Go just swallows it and the program continues.

The trade-offs between Rust-style error handling and Go-style error handling basically boil down to where you stand on the type safety spectrum. More ceremony in Rust, but less likely to drop errors on the floor. There is also the “the type signature of the function more accurately describes the behavior of the function” aspect of it. On the Go side of things, error handling is enforced basically by very strong convention: if err != nil { return err }, and that’s that.

1. 4

Using nullable tuples for something that is semantically a disjoint union? Bleh. Just seems like a really awkward way to do it. :/

1. 3

Yes and no. This argument has been litigated a billion times on the Internet already. You won’t drag me into it. :-)

2. 3

You will get a compilation error if you assign the returned error to a variable and then never use it; to discard it you have to explicitly assign it to _.

You can experiment here: https://play.golang.org/p/9XSOZFGbzT
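A sketch of the compiler’s behavior, with the non-compiling variants left as comments (getX is just a stand-in function):

```go
package main

import (
	"errors"
	"fmt"
)

func getX() (int, error) {
	return 42, errors.New("boom")
}

func main() {
	// x := getX()
	//   compile error: assignment mismatch:
	//   1 variable but getX returns 2 values
	//
	// x, err := getX()
	//   compile error if err is never used afterwards:
	//   "err declared and not used"

	x, _ := getX() // compiles: the error is explicitly discarded
	fmt.Println(x)
}
```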

I don’t know Rust, and I have only been using Go at work a little bit, so I can’t really compare and contrast their respective error handling methods.

1. 6

Rust uses sum types, so you do not need a separate return value for errors. Instead, you use the Result type, which has two variants: Ok, which carries a successful computation, and Err, which communicates an error. You can then pattern match the result in the caller using a match, or use the ? operator to let errors bubble up.

2. 2

You have to explicitly ignore it, doing something like x, _ := getX()

1. 1

Cool, makes sense. I guess the part that seems weird to me is that you are able to return both an error and a value. I also suspect that it’s slightly more common to ignore the error using _ than match or unwrap, but I don’t know if that’s actually true or not. I should probably just use Go and figure out which one I actually prefer :)

I am very glad that error handling is something that language designers are thinking about now though - this is far better than in languages like C and Python.

1. 1

That all sounds fine, but there are definitely features missing (or at least not mentioned here) which I look for in a lightweight markup language. Those include:

• Footnotes/endnotes/sidenotes (I think org-mode actually supports at least one of these, though it’s not mentioned in the article)
• Embedded images/other media
• Embedded other markup - math markup is very useful (to me, at least), and I know some people have been keen on embedding graph diagrams (e.g. graphviz). This sort of feature usually translates into the ability to use plugins.

Of course, the more of those features you support, the less “lightweight” the markup ends up being. But that doesn’t make the bits I need any less necessary.

1. 2

I’m a happy user of reStructuredText.

Some may complain about backticks, but it gets everything done.

Markdown feels like a simplified version of it, and this org-mode contraption like a weird NIH/CADT take on the same.

But the world being a mountain of shit, RST requires page breaks to be embedded separately for each output type. I hope I’m wrong on this, but I don’t think I am.

1. 1

It’s easy to embed latex for math and graphviz for pictures in org-mode, along with a pile of other plugins. One cool feature is embedding your programming language of choice and having a following block show results for that code.

1. 2

It’s easy to embed latex for math

And not just LaTeX math, also LaTeX environments. And if you use GUI Emacs, you can preview the equations and LaTeX environments inline in Emacs with C-c C-x C-l. E.g., here is some inline TikZ in my research notes, where the TikZ fragment is rendered and previewed in Emacs:

https://www.dropbox.com/s/t18zqabwg14bl2n/emacs-latex-environment.png?dl=0

When exporting to LaTeX, the environment is copied as-is. For HTML exports, I have set org-mode to use dvisvgm. So, every LaTeX environment/equation is saved as SVG and embedded in the resulting HTML (you can also use MathJax, but it obviously doesn’t render any non-math LaTeX).
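For reference, the dvisvgm setting mentioned above looks something like this (variable names assume org-mode 9+; adjust for your setup):

```elisp
;; Use dvisvgm for inline LaTeX previews inside Emacs.
(setq org-preview-latex-default-process 'dvisvgm)

;; Use dvisvgm (instead of MathJax) when exporting LaTeX
;; fragments/environments to HTML, producing embedded SVGs.
(setq org-html-with-latex 'dvisvgm)
```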

One cool feature is embedding your programming language of choice and having a following block show results for that code.

And the result handling is really powerful. For example, you can let org-mode generate an org table from the program output. Or you can let the fragment generate an image and include the result directly in the org mode file. This is really convenient to generate and embed R/matplotlib/gnuplot graphs. You can then decide whether the code, the result, or both should be exported (to LaTeX/HTML/… output).
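A tiny sketch of what that looks like in an org file (the header arguments here are just one possible setup; evaluate with C-c C-c on the block):

```org
#+BEGIN_SRC python :results value table :exports both
# Each row of the returned list becomes a row of an org table.
return [[x, x * x] for x in range(4)]
#+END_SRC

#+RESULTS:
| 0 | 0 |
| 1 | 1 |
| 2 | 4 |
| 3 | 9 |
```

Here `:exports both` means both the code and the generated table end up in the LaTeX/HTML export; `:exports results` would export only the table.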