I remember when the tmux version bump broke my configuration as well. Too bad I just ranted about it and fixed it without leaving a note for others who might encounter the same issue.
This post has been really useful for me; I did not want to invest time in reading up on the new tmux configuration.
The need for incremental compilation is one of the worst things about programming in C/C++ and a constant source of WTF moments and tedium. No modern language should make the same mistake of relying on it for fast compilation.
I’m disappointed to see this for Rust, because if it’s aiming to replace C/C++ this is one of the things it should really get right. As a C++ programmer, I am not excited by the idea of switching from a language that’s a nightmare to build to another language that’s a nightmare to build. Honestly this might even be a step back - now it’s even less likely that anyone will put serious work into optimising the compiler.
(That said, this is the first time I’ve looked at ccache and it seems to get it as right as possible: hashing source files/includes/compiler args/the compiler itself is good and takes no time. I did some tests at work on this and reading + hashing the entire codebase takes less time than compiling a single file)
How are you supposed to avoid the need for incremental compilation? Modern compilers perform lots of complicated transformations to get good performance. This takes time.
I don’t know anything about anything, but it just feels like there’s no way compiling Rust is inherently as slow as rustc currently is.
The vast majority of my code has no impact on the final performance of the program, and burning CPU time optimising it is a waste. It’s also a waste burning CPU time on code that is performance sensitive, because optimising compilers are not remotely good enough and I have to do all the work to make it fast anyway.
I’d much rather have a compiler that lets me tell it what needs to be optimised, how much effort should be spent optimising it, and how it should be optimised. I should be able to tell it “this loop needs unrolling”, “this loop is super hot and must be vectorised with no peeling and you should refuse to compile if you can’t do that”, etc. (This is largely how the Intel performance tools work btw)
steve/nick: I know the rust team is working on incremental compilation, my point is that they shouldn’t. It’s a large source of sadness in the C/C++ world, and computers are fast enough that it should really not be necessary.
The Rust team is working on incremental compilation. They mention it in a lot of updates and threads on forums. It’s probably hard to add to a complex compiler while balancing which internal structure is good for it vs. what’s good for other features. Not to mention the language itself is complex.
So far, I’d say the progress for money invested is going a lot smoother than “C with Classes” did. ;)
Moreover, this commit enables it by default in recent nightlies - I’ve noticed the speedup as well :)
Glad to know it’s come that far with noticeable benefit. I’ve loved fast feedback ever since LISP gave me per-function compiles at a fraction of a second each. You just flow and flow with the work once automated parts happen at such a pace.
Yes ccache is a really nice tool. I’ve been bit by miscompiles due to stale caches once or twice but it usually works quite well.
now it’s even less likely that anyone will put serious work into optimising the compiler.
We have been putting person-years of work into getting incremental recompilation working, and regularly work on other aspects of compiler performance, and will continue to do so in the future. Our users regularly tell us that compiler performance is important, and we agree! Stuff like this is only one part of that overall picture.
Why are we even considering using a specific service for content hosting? I don’t see any benefit in using dropbox (or any similar service), especially not if it’s supposed to be the preferred option.
Why are workstations getting so seemingly rare? I sometimes get to check out offices in young startups and everyone seems to be working on laptops. I strongly prefer working at my workstation as opposed to using my laptop because of screen space, and overall hardware performance. I probably have a really skewed perspective on this as most of my close friends and colleagues share my sentiments.
So, why prefer a laptop to a proper workstation as your general workhorse? I imagine most developers work either at home or at their place of work, and not mostly mobile, so I’m not sure why so many people prefer a laptop at all times.
I’ve noticed. I think there’s a few things, order may vary:
and, way at the end of priorities:
What amuses me is that nearly everywhere I see that, people attach widescreen monitors or even travel with portable screens (like this https://www.asus.com/us/Monitors/MB16AC/).
I engage in this myself: my work computer is an MBP and, unless I’m traveling, it’s on my desk attached to a 24” LCD, a keyboard, and ethernet (though I mostly interact with it via ssh and Synergy keyboard/mouse sharing).
I’m speaking from a different perspective here, but I think this is related: If you look at students and workers at universities and colleges, most of them (at least here) spend their day on the go and have to take their computing with them - the laptop is the obvious, classical solution, and not many replace it with tablets or some hybrid device. I for one spend my entire day that way, so I’m used to the limitations, but I enjoy the benefits as well. I guess it’s a thing of habit. Most drawbacks (as usual) can be worked around, too: computationally expensive tasks can be run on a server you connect to remotely, and screen real estate can be used very efficiently and with great comfort once one finds a good keyboard interface to a window manager of one’s choice. The mobility aspect can’t be recreated another way, however.
We are a dying breed. I’m the only one in my office with a workstation. I built it myself (on the company’s dime) when I joined three years ago and it has served me well.
I’ve observed that the people with laptops like to bring them and work on them everywhere they go. They clearly like the mobility. While I also have a laptop, I hate using it, especially when I’m mobile. Not only do I dislike the limited screen real estate (my workstation has three 1920x1200 24” monitors), but I psychologically hate working when I’m mobile. It’s uncomfortable and hard for me to focus. I would much rather sit quietly with my thoughts, pay attention to the other humans that are with me or read something on my phone.
I suppose I’m talking about the high end of the spectrum, which is fair since we’re also likely comparing high-end laptops to these workstations.
Mainstream midrange (like i5, since Sandy Bridge) hardware got good enough, and gaming took over HEDT for most high end applications.
You could simplify a bit and eliminate the two calls to `jot(1)` and the two `for` loops by doing this:

```
if [ $HM -gt 720 ]; then
    S=$(( $S + $INC * (1440 - $HM) ))
else
    S=$(( $S + $INC * $HM ))
fi
```
Running once a minute with an increment of 2 is a bit much since `sct` rounds the number to a precision of 500, so the `sct` call in the while loop will only do something once every 250 runs, i.e. roughly every 4 hours.
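Making the arithmetic explicit (the precision of 500 is taken from the comment above; check it against your sct version):

```shell
# With an increment of 2 per one-minute run and a rounding precision
# of 500, the value passed to sct only crosses a rounding boundary
# every 500 / 2 = 250 runs:
INC=2
PRECISION=500
RUNS=$((PRECISION / INC))
printf '%sh %sm\n' "$((RUNS / 60))" "$((RUNS % 60))"   # prints "4h 10m"
```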
Finally, since it wasn’t mentioned yet: jung@ polished it a bit and created an OpenBSD port of sct about a year ago.
Now sct is packaged for void linux as well, thanks to @duncaen, so it should be as easy to get as the OpenBSD port.
Very small nitpick: this only works if `/bin/sh` is bash (or something that supports `$((...))`). Works like a charm after telling it to run with bash.
I missed the original sct thread, and redshift felt heavy, so this little script + sct will be of great use to me. Thanks!
Piling on the nitpick wagon, error messages should go to stderr rather than stdout:

```
- echo "Please install sct!"
+ echo >&2 "Please install sct!"
  exit 1;
```
FWIW, dash handles this properly as well:

```
% file /bin/sh
/bin/sh: symbolic link to dash
% sh
$ echo $((1440 - 720))
720
```

Same on OpenBSD’s ksh, same goes for zsh:

```
$ echo $((1440 - 720))
720
```

So I guess on the majority of systems you’d run X on, this won’t be an issue ;)
Now, my own nitpick - I’ve run shellcheck and fixed some warnings:

```
#!/bin/sh
# Copyright (c) 2017 Aaron Bieber <aaron@bolddaemon.com>
#
# Permission to use, copy, modify, and distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

S=4500
INC=2
SCT=$(which sct)

if [ ! -e "$SCT" ]; then
    echo >&2 "Please install sct!"
    exit 1
fi

# minutes since midnight; sed strips leading zeros so values like
# "08" aren't parsed as octal in the arithmetic expansion
setHM() {
    H=$(date +"%H" | sed -e 's/^0//')
    M=$(date +"%M" | sed -e 's/^0//')
    HM=$((H * 60 + M))
}

setHM

if [ "$HM" -gt 720 ]; then # t > 12:00
    for _ in $(jot $((1440 - HM))); do
        S=$((S + INC))
    done
else # t <= 12:00
    for _ in $(jot "$HM"); do
        S=$((S + INC))
    done
fi

while true; do
    setHM
    if [ "$HM" -gt 720 ]; then
        S=$((S - INC))
    else
        S=$((S + INC))
    fi
    $SCT $S
    sleep 60
done
```

Also note that on linux systems, the `jot` binary is often not available (I just discovered it today, it’s an OpenBSD utility). At least on void linux one can install it with the `outils` package.
Either way, very useful - no need to run sct by hand now :)
Odd, my dash at home barfed on the script. I now checked again, and it errors out on `function setHM { .. }`: it wants `setHM () { ... }` instead.

Not quite sure why I remember `$((...))` being a problem…
`function`, even though technically a bashism, originated in ksh.

In terms of `$((...))` being a problem, you are most likely referring to `((...))`, which is indeed a bashism. Alternatively, you might have come across `++` or `--` in `$((...))`, which are not required by POSIX.
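A quick sketch of the portable spelling, runnable in any POSIX sh:

```shell
# $((i++)) is not guaranteed by POSIX; an explicit assignment is
# portable across dash, OpenBSD ksh, and other minimal shells:
i=0
i=$((i + 1))
echo "$i"   # prints "1"
```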
`$_` holds the last argument of the previously run command. Overwriting it does no harm here and in similar places, so I occasionally make use of it (both to silence shellcheck and to signal the value won’t be used). It’s possible though some folks might object :)
Also note that on linux systems, the jot binary is often not available (I just discovered it today, it’s an OpenBSD utility).
You can use `seq` instead of `jot` on linux.
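Both `jot N` and `seq N` print the integers 1..N one per line, so the script’s loops port directly; a minimal sketch (the `HM` value is made up for the demo):

```shell
# Same accumulation pattern as in the script, with seq in place of jot:
S=4500
INC=2
HM=3   # hypothetical minutes-since-midnight value, for illustration only
for _ in $(seq "$HM"); do
    S=$((S + INC))
done
echo "$S"   # prints "4506"
```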
It’s a myth; you can check it with `type [` - both are builtins. `[[` is only available in larger shells; `[` is POSIX.
The `[` should be preferred for `#!/bin/sh` portability.
`$((…))` is very much POSIX.
Both Tor and the Internet Archive are threats because they promote an open Internet.
That’s not how it works. Carnegie Mellon University attacked Tor, probably to do FBI’s bidding: https://motherboard.vice.com/en_us/article/carnegie-mellon-university-attacked-tor-was-subpoenaed-by-feds
Are they the baddies now? Is CMU the big bad wolf supporting Le Pen and hating our way of life?
There’s too much fiction and too little research in this article. Also:
They ignore robots.txt
Oh, the irony - https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/ :
A few months ago we stopped referring to robots.txt files on U.S. government and military web sites for both crawling and displaying web pages (though we respond to removal requests sent to info@archive.org). As we have moved towards broader access it has not caused problems, which we take as a good sign. We are now looking to do this more broadly.
There’s too much fiction and too little research in this article.
Right-on.
The title of this post is ridiculous and misrepresents the issues at stake; the content is even more shady.
Perhaps a fakenews tag would be appropriate for lobsters?
Both Tor and the Internet Archive are threats because they promote an open Internet.
That’s not how it works. Carnegie Mellon University attacked Tor, probably to do FBI’s bidding: https://motherboard.vice.com/en_us/article/carnegie-mellon-university-attacked-tor-was-subpoenaed-by-feds
Are they the baddies now? Is CMU the big bad wolf supporting Le Pen and hating our way of life?
This does neither follow, nor is it implied by the article. Just because one specific motive for attacking a service is given, it doesn’t follow that all attacks on said service follow the same motive.
While one can disagree with the statement the article makes in the first place, this is still faulty reasoning and shifts the burden of proof to unrelated grounds.
Just because one specific motive for attacking a service is given, it doesn’t follow that all attacks on said service follow the same motive.
Have you read the article? It’s filled with causation jumps based on assumed motives and opportunities. It boils down to a giant “wink-wink, nudge-nudge”. I just used the same mechanism with CMU to show the absurdity of it all. What fault do you find in that?
The article points out most of it is speculation. You don’t do that, making your statement a strawman.
The article points out most of it is speculation. You don’t do that, making your statement a strawman.
It’s obviously a reductio ad absurdum and most people don’t need it spelled out for them. You need to read some more.
Rust is not a panacea; it makes many trade-offs, and being as easy to use as Python for small tasks is not one of them. I don’t see why you would bother using Rust unless you have performance requirements that python/clojure/ruby/javascript/whatever can’t handle.
Safety requirements aren’t handled that well by dynamic languages. In many such cases, using a language that eliminates errors at compile time, like Haskell, Rust, dependently-typed languages,… makes a lot of sense, and it seems this is a use case people have. I’d also distinguish between resource usage and performance: if my code does a mundane thing, I want it to spend a mundane amount of my computing resources. This can get harder than necessary with some of the options you listed (albeit Haskell is notorious for being hard when it comes to reasoning about performance).
This doesn’t contradict what I was saying. Rust is far more annoying to use than Python for me - but I wouldn’t use Python in places where Rust has other benefits that better fit the needed trade-offs.
Well, the source of inspiration for this certainly has plenty.
Well, I like it. I’m exposed to enough meanness on the internet anyway. And there’s certainly enough in the Linux kernel space.
Depends. If you want to build it from source you get to fight with their build system, which seems to be made with debian (and thus a slightly older GCC) in mind. The big benefit is IMO their payload building which might be closer to what you want than only GRUB or only SeaBIOS with less effort spent. And then there’s stuff they include that you might not want: GRUB background images, a preconfigured setup for GRUB, which might break easily depending on what you use as an OS, etc. Essentially, libreboot is more than just coreboot.
The author also mentions that said GRUB setup fails to boot his OpenBSD installation, so to use Libreboot, you have to get through most of the same steps anyway.
Also, Libreboot doesn’t work well with *BSD. E.g. you cannot use full-disk encryption without serious modifications.
I made a how-to at https://lists.gnu.org/archive/html/libreboot/2016-09/msg00010.html , but it’s easier to just install Coreboot.
I thought keybase.io might make it so that I used PGP more often but it really hasn’t. I don’t know if this says more about my lack of want for secrecy or the technology though.
For me it’s a network effects issue at the core. I got a keybase.io account, sent a couple messages and files to the two people I know who also have accounts, and then never used it again.
However, I’ve spent some time thinking about this, and I realized that I just don’t care that much about electronic privacy or identity verification. If I don’t want people to know about something, I don’t talk about it online. Plain and simple. I also don’t worry that the people with whom I’m communicating are not who they say they are.
Of course this is a form of privilege. I live in a country that (for now) isn’t going to kidnap and torture me for complaining about politicians, etc. I also don’t deal with sensitive information or anything terribly valuable, so I’m not a lucrative victim for any kind of targeted attack.
That’s the fun thing - it isn’t. It seems like the person stating their position in this manner is perfectly aware of the fact that they might have something to hide, and their choice is to not take this online (as that seems to be possible). As pointed out, that’s a privileged position given political and social circumstances, but IMHO it seems to be coherent. At that point it’s a matter of whether you want the hassle of using systems like PGP and get the benefits, or whether you avoid their usecases altogether.
EDIT: I should add that such a position obviously has a drawback: it gets increasingly difficult to maintain it comfortably while not complicating your communication with people.
In a way, maybe. But the important bit of the “nothing to hide” argument, in my mind, is that other people only need encryption if they have something to hide, and that “something to hide” necessarily means something sinister.
I am perfectly happy for others to use encryption. I signed up for keybase.io partly so that I would be able to communicate with people in the event that someone wanted to tell or send me something and keep it hidden. And I recognize that there are countless things that perfectly honest people might want to keep hidden.
I should have added that I’m totally in favor of using encryption and other means to frustrate dragnet surveillance. I minimize my social networking footprint, run TLS on my web site (though I didn’t do this until Let’s Encrypt made it dead simple), use an email provider that doesn’t scrape my messages, run HTTPS Everywhere in my web browsers, etc.
I just don’t get enough value out of PGP to warrant the annoyance.
While this provides a lot more functionality than just the bare minimum it says on the label, I’ve had great results with something like `build-something-here && notify-send "check your build"`. Sure, it’s pretty simplistic compared to the automatic shell integration and the other advanced features ntfy provides, but it serves the majority of use cases (for me at least).
I also mostly use Rust for personal projects - for lack of much other stuff I do.
That is, I have a few projects written in it, and I am contributing (small) improvements to the tooling and library ecosystem. Now onto the reasons I chose it:
As evident from the previous paragraphs, I also write Haskell, and I just pick whatever language of these two I think fits best with a given task, though the actual decisions are mostly based on quite subjective preference in most cases, as both would be suited. Other problems, again, get handled with different tools, for different reasons. Okay, the last few sentences weren’t all that helpful.
Regarding the last question: The compiler is seeing a lot of work (and I hope I’ll get around to do some of it, too) and constantly improving. Some rough edges, however, are still being worked on. I guess getting rid of those would make my life indeed easier in some cases. I also await a time when some features only available in nightly builds will hit stable as well, but that’s a mere symptom.
Hope this makes my position clear.
Even more relevant: Railroad switches do a form of routing, too.
I’m using my own window manager together with bartender and lemonbar. This allows me to have a dwm-like configuration in Rust with a slightly different approach to tagging of windows, as well as a nice bar showing info from both the wm and the rest of the system. All in all, it fits my needs, even if it’s a sick form of NIH.
Since I mostly use terminal applications, dynamic tiling as pioneered by dwm et al is a very nice solution. I have implemented some details to handle popups and similar things gracefully, as I view a lot of information through dunst.
Considering Vim 8 just came out, is it worth messing with Neovim?
I run a really light Vim config, mostly because all of Vim’s plugins that do useful things make it slow (Syntastic). Seems like I might as well wait for them to support Vim 8 at this point.
There is a nice asynchronous Syntastic alternative for neovim: neomake. That seems like what you need. IIRC vim got some form of asynchronous plugins now as well, so you might want to wait for plugins to use that functionality or see if something like the thing linked above gets ported. Since the transition is really uneventful (it was for me at least), you might as well try it out.
Investigating free monads and tangential approaches and tools to improve the code for my AST querying tool (github here, written in Haskell, contributions etc. very welcome, though it currently lacks a readme). The goal is to get rid of as much code in the `IO` monad as possible and get it ready for tests, benchmarks, and plugins for other languages.
Other than that, I will be changing some of the keybinding configuration machinery in my window manager to allow for a shorter configuration.
Then I’ll probably have some real-life stuff to get done, planning for work, a vacation and other things.
Tangentially related, but for some reason I want to share:
This reminds me of an issue I faced in my own window manager just this week:
On some user actions in some applications that were spawned as children of the WM, the whole X session froze. After attaching `gdb` to the process, I discovered that it received a `SIGTTOU`… somewhere in the polling code of `libxcb` (in case someone isn’t aware: XCB is a library binding the X protocol to C). This could be reproduced in a reliable fashion and seemed rather odd. After some pondering I simply redirected `stdout` and `stderr` “away” from the TTY and the issue went away.

It turns out that the browser (a child of the window manager’s process) generated some output that went to the TTY, as its `stdout` was inherited from its parent. This, together with the infrastructure starting the window manager (`xinit`, `startx` etc.), caused the `SIGTTOU` to be sent as it normally would be. The fact that the signal was received at the same address consistently across reruns turned out to be simple to explain once I got the right idea: the signal was triggered by an application run by the user, which meant the window manager was, at that moment, always in the same place - blocked in a syscall reading from the UNIX domain socket used to connect to the X server, waiting to be woken up by the kernel on its successful completion.
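The fix can be sketched as a one-line change in the file that starts the window manager, e.g. `~/.xinitrc` (`mywm` and the log path are placeholders for the actual setup):

```shell
# Hypothetical ~/.xinitrc fragment: point the WM's stdout/stderr (and
# thus those inherited by all its children) away from the controlling
# TTY, so background writes can no longer trigger TTY job-control signals.
exec mywm >"$HOME/.wm.log" 2>&1
```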
That’s how, with a distance of multiple decades, two very similar issues around very similar software are found and fixed.
While this is pretty handy, I must admit that I have a problem with the connotation made by the background coloring of the complexities: on the most superficial level, the ordering is a bit weird at times (`O(n+k)` is “fair”, but `O(nk)` is “good”, for instance). Moreover, the main takeaway from my algorithms and data structures course wasn’t the complexity of an algorithm or how to understand where it comes from, which are both important, but the meta-problem of understanding the implications, which don’t boil down to a simple ordering, since many other constraints, properties and - at a later stage - implementation details have to be considered. This is something a chart like this doesn’t capture; in fact, it might even be misleading.
I recently switched to light backgrounds everywhere and an extreme redshift/sct setup (2000 K on the low end). Given that I have a terrible old screen on my laptop, this is the optimal solution for situations when I have to use a strongly dimmed backlight. And another, quite unexpected, effect is that even at night, I prefer a white screen with a strong redshift and slightly dimmed backlight to any other option. This might be connected to the fact that I find most dark color schemes to be less versatile and mostly way too jarring when it comes to contrast.
This is what I do myself; agreed on all points. There have been studies on the subject of text and background color - here is one, for example.