Looks really nice, clicks the right buttons… But sadly I need a split keyboard for health reasons ;-(
I highly recommend gboards.ca for split keyboards, but I’ve also heard people say good things about the Moonlander (from the ErgoDox EZ folks).
I’m one of those Moonlander boosters. Using it right now!
But I do have a caveat: I had to relearn how to type on the ortholinear layout. My typing speed plummeted at first. I had never learned touch typing and was apparently “crossing over” for a bunch of keys like “y” and “6”. Now I’m faster than I was before and back in the realm of “think about the words” rather than “think about the fingers”.
Would love to see a direct comparison between the Moonlander and the ErgoDox EZ. I have a (self-built) ErgoDox and I’m a big fan. But I wish I could have one that is just a tad smaller, with the keys a bit closer to each other.
I saw this comparison video on YouTube (15 minutes, but indexed so you can skip to parts you are interested in) a few days ago and found it helpful. A friend of mine got the Moonlander and I have an Ergodox, so I hope to compare them for myself in the near future. I have loved my Ergodox and actually don’t find that the thumb clusters cause any trouble for me, so I doubt I will get a Moonlander anytime soon. I do envy the foldable wrist rests though. The ones for the Ergodox could pull double-duty as wheel chocks for a passenger plane and tend to wander away through a day of use.
Let’s add to the question “what is the quality of the code review process in Linux?” another one: “what is the quality of the ethical review process at universities?”
I think there should be a real world experiment to test it.
Seems quite close to Python’s type hints. Not mandatory to use at all, but if used correctly, it massively helps you find bugs.
Its proponents claimed its superiority for years… And now all the mainstream dynamic languages are trying to add at least some static types. But you can’t just bolt that on easily; it needs to be baked into the heart of the language.
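For readers who haven’t used Python’s hints, a minimal sketch (the function name is mine): the annotations change nothing at runtime, but a checker such as mypy uses them to flag misuse before the code ever runs.

```python
def mean(values: list[float]) -> float:
    """Annotated: a static checker knows `values` must be a list of floats."""
    return sum(values) / len(values)

print(mean([1.0, 2.0, 3.0]))  # 2.0

# mean("abc") would only crash deep inside sum() at runtime,
# but mypy rejects the call up front as an incompatible argument type.
```

Because the hints are optional, you can add them file by file to an existing codebase.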
Languages like C++, Java and C# have been getting welcome additions like var, auto and polymorphic lambdas. There is virtually no modern language that requires you to specify the type of your iterator and good riddance, too. Let’s say that there is a convergence.
Var, diamond, etc. mean that your code is still statically typed. You just don’t need to write the type by hand because it’s obvious to the compiler.
It can look similar to dynamic languages… But type inference is static typing to its core (it isn’t by chance that this comes from the ML family).
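A Python analogue of the point, assuming a checker like mypy: the type below is never written down, yet the check is fully static, while the interpreter happily runs the “wrong” code.

```python
xs = [1, 2, 3]   # a static checker infers list[int] here; no annotation is written
xs.append("a")   # runs fine at runtime...
                 # ...but is rejected statically: append() expects an int for this list
print(xs)        # [1, 2, 3, 'a']
```

That gap between what runs and what checks is exactly why inference is static typing, not dynamic typing in disguise.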
Moving static typing to tooling and giving hints to help that tooling is part of the convergence just like statically typed languages losing boilerplate is. There is no pedestal you can climb to say “they were wrong we were right all along”.
No. They were wrong. And that’s why they now need to add boilerplate.
You need static typing to be safe, and you need a good type system to be safe and to remove boilerplate.
Lenovo simply makes the best laptops, and now I don’t need to order one without an operating system anymore :D
One aspect that makes ThinkPads better is that they tend to be maintainable for the long term. A friend of mine has an XPS 15 that’s barely two years old with a bulging battery; Dell no longer manufactures or sells batteries for this model, and the third-party battery he bought refuses to charge because it’s “non-authentic”, leaving him with a dysfunctional two-year-old €1000 laptop, all because of a comparatively small issue like this.
Other aspects are probably a bit more subjective; personally I like the discrete trackpad buttons (I really dislike the integrated ones so many have), that I can open the screen at a large angle (my old XPS13 didn’t tilt back far enough in some conditions), that there are little “gaps” between the function keys to make them easier to use without looking, that I can disable the power LED, and some other small details. These may sound like small issues, but I really missed them when I used the company-issued XPS. Good design is all about small details like this.
I’ve also never had a ThinkPad that didn’t run Linux flawlessly out of the box without any mucking about. I think the “X1 carbon” models are a bit trickier in some cases though; I’ve only had X and T series.
They sometimes don’t run flawlessly on day 1, but as the safest choice you have a near-100% chance that everything works a few months after a model comes out.
In general I’m happier with the Lenovos I’ve used than with any other vendor’s machines, but here are some things I really like:
Hope that helps :) More than anything it is probably the robustness, which I really like. There are other laptops with better battery life or displays.
If you buy one with Linux installed, maybe you’ll have good support. If you buy one with Windows, you are on your own.
I have a Lenovo IdeaPad 320-15IKB and they are pretty complacent about:
shipping firmware updates through fwupd instead of packaging them as Windows executables, which besides are just an Inno Setup executable with an EFI file inside: https://forums.lenovo.com/t5/Lenovo-IdeaPad-1xx-3xx-5xx-7xx-Edge-LaVie-Z-Flex-Notebooks/Bios-Update-Failed-6JCN32WW-Lenovo-ideapad-320-15IKB/m-p/5042215?page=1#5154026
I hope that this Fedora/Lenovo partnership will push Lenovo into making their laptop ecosystem more Linux-friendly. But I have little hope that it will affect the laptops that are already in use.
This seems interesting, I’ll definitely give it a try. I’m currently using Regolith Linux, which seems similar to Sway as an i3 alternative. I personally enjoy it over plain i3 since it’s super convenient to use out of the box. Curious how this compares.
Regolith is just nicely configured i3 with a GNOME session (and compton).
It’s pretty good IMO; the GNOME session integration makes it much more usable out of the box than Sway. On the other hand… it’s still X, with all its problems.
The main benefit IMO:
Did all this stuff need to be a shell? I mean, it replaces ls, a command that lists items in the current directory (in my experience, sorted by name by default), with a command that needs to be piped into sort-by name, which means you need to know “name” is a field, making ls equally or more complicated than piping it into sort. And instead of a series of rows you get an unnecessary ASCII table. A lot of what this shell seems to do with get and the like is simply achieved with awk. Can someone explain this project to me? I really must not get what it’s trying to achieve; what is being done that a moreutils-style package couldn’t do?
The idea is precisely to not have to use languages like AWK to do stuff that requires structure. Traditional Unix shells only work with plain text, but some shells like PowerShell work with objects instead of plain text (plain text is just a special type of object), which are more composable on their own.
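The difference can be sketched in Python (toy helpers of my own, not Nushell’s or PowerShell’s code): each pipeline stage passes records with named fields, so no stage ever re-parses formatted text.

```python
import os

def ls(path="."):
    # Each entry is a record, not a line of text: "name" and "size"
    # are real fields, not byte offsets in a formatted string.
    return [{"name": e.name, "size": e.stat().st_size} for e in os.scandir(path)]

def sort_by(rows, field):
    # Works on any record stream; no awk-style column guessing needed.
    return sorted(rows, key=lambda r: r[field])

rows = sort_by(ls(), "name")  # current directory, ordered by a named field
```

The point isn’t the sorting itself, it’s that “name” survives as a field from producer to consumer instead of being flattened into text in between.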
I totally get structure, which is achieved by tools like relational pipes, where there is an actual structure transferred from process to process; this appears to be as simple as column names.
It’s certainly not limited to column names: it’s a stream of nestable, typed values. Take du for instance:
nushell(master)> du crates
───┬────────┬──────────┬──────────┬─────────────────
 # │ path   │ apparent │ physical │ directories
───┼────────┼──────────┼──────────┼─────────────────
 0 │ crates │ 2.0 MB   │ 2.3 MB   │ [table 23 rows]
───┴────────┴──────────┴──────────┴─────────────────
path is of type PathBuf, not just a string; apparent and physical are typed sizes; and directories is a nested table with 23 rows of its own:
du crates | get directories
────┬─────────────────────────┬──────────┬──────────┬────────────────
 #  │ path                    │ apparent │ physical │ directories
────┼─────────────────────────┼──────────┼──────────┼────────────────
 0  │ crates/nu_plugin_sys    │ 13.1 KB  │ 15.4 KB  │ [table 1 rows]
 1  │ crates/nu-plugin        │ 13.7 KB  │ 14.8 KB  │ [table 1 rows]
 2  │ crates/nu-test-support  │ 19.6 KB  │ 43.0 KB  │ [table 1 rows]
 3  │ crates/nu_plugin_str    │ 28.7 KB  │ 24.1 KB  │ [table 1 rows]
...
But what’s the difference between a “nested table” with 23 rows and a regular file with 23 lines? UNIX had relational-database-style operators like join over 40 years ago. They are still in coreutils today. I have literally no idea what this achieves.
But what’s the difference between a “nested table” with 23 rows and a regular file with 23 lines?
The nested table is an explicit data structure with fields of structured data types, including other tables, and your regular file with 23 lines is a flat ad-hoc blob of bytes that isn’t going to safely, cleanly, or sensibly encode something as trivial as a filename.
You can’t safely or easily extract directory names from

-% du
1033    foo bar
lolwtf
And doing anything with these looks like, erm, “fun”:
-% ls
foo\ bar\nlol\001wtf\n/
-% gls
'foo bar'$'\n''lol'$'\001''wtf'$'\n'
But this works just fine:
> du | get path | rm --recursive $it
deleted /home/freaky/code/nushell/x/foo bar
lolwtf
And so does this:
> ls | get name | rm --recursive $it
deleted /home/freaky/code/nushell/x/foo bar
lolwtf
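The failure mode is easy to reproduce outside any shell; a small Python demonstration (the temp directory and filename are mine): splitting a text listing on newlines miscounts the entries, while a structured listing does not.

```python
import os, tempfile

d = tempfile.mkdtemp()
# One file whose name contains a newline (legal on POSIX filesystems).
open(os.path.join(d, "foo bar\nlolwtf"), "w").close()

structured = os.listdir(d)                 # a list of real filename values
flat = "\n".join(structured).split("\n")   # what a line-oriented pipe would see

print(len(structured), len(flat))  # 1 2
```

One real file becomes two “lines”, which is exactly how a text pipeline ends up feeding rm a path that doesn’t exist.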
The best is no highlighting, because everything is actually important. If anything is wrong, it’s wrong, and will give you bad output - thus everything is important.
The syntax highlight is not for the computer, it’s for the human.
Some parts are quite trivial, and it’s enough if the IDE spotlights them when there is a typo in them.
The syntax highlight is not for the computer, it’s for the human
I did not say otherwise. And it’s fine for humans to want syntax highlighting, but to me it’s a lot of visual noise.
The best is no highlighting, because everything is actually important.
I prefer to have syntax highlighting for comments, so that if for whatever reason I’m in a large comment block I can tell, and so that I can differentiate easily between what’s a description for code and what is code itself.
I don’t like how so many colour schemes make comments really faint. If you shouldn’t notice them without looking, I see little purpose to them being there at all.
# useradd -c "& & & & & & & &" -m buffalo
# finger buffalo | awk '/Name: /'
Login: buffalo                  Name: Buffalo Buffalo Buffalo Buffalo Buffalo Buffalo Buffalo Buffalo
My favourite question so far, both as a candidate and as an interviewer: “What did you do at your last job, what did you learn, and what was hard about it?”
Once this question is answered, you know a lot about the candidate. And the interviewer can follow up with a more technical question that tests something about the first answer.
Lambda Days conference in Kraków. Workshops took place today; I chose an Elixir workshop.
I recommend having a look at the videos when they come out. John Hughes was awesome as usual.
OnePlus 7 Pro with the original software (but several apps disabled). Nova Launcher.
The hardware is nice and fast, the screen is really fast, the camera could be quicker.
The price was tolerable for the spec. I would enjoy a smaller size. Updates only every second month.
I search for all software on F-Droid first, but honestly Play is a much better place to find things in the end.
That’s not entirely true, but it’s definitely different. See: https://hardenedbsd.org/content/freebsd-and-hardenedbsd-feature-comparisons
On FreeBSD, ASLR is off by default, AFAIK.
sorry, I missed the ‘by default’ part of your comment :)
Lots of stuff is off by default in FreeBSD, because they take a very conservative stance with default settings. If the community tests it well, it will usually get turned on by default. At least that’s my experience with FreeBSD.
HardenedBSD takes a different approach: security is turned on by default, which tends to break things (and it really does break things). But when I reach for a BSD, I generally install HardenedBSD, despite the breakage.
If I had a high tolerance for using an OS where security mitigations are prioritised very highly, I’d personally use OpenBSD.
OpenBSD is pretty great security-wise too. It depends on the purpose of the machine. There are many things OpenBSD just can’t do because of their keep-everything-small perspective (which is a plus for security, but has its downsides).
ASLR makes debugging a massive pain for the benefit of a layer of security by obscurity. ASLR hasn’t stopped or lowered the frequency of security vulnerabilities on any platform that’s implemented it, as far as I’ve seen at least.
The correct process: create a merge request and put the original author as a reviewer.
Anyway, the change looks much more like a process smell than a code smell.
This should have been the moral of the story: the author rewrote someone’s code and merged it to master in the middle of the night. It’s a short-sighted and mean thing to do, and probably the reason the author was told to revert their work. This isn’t a story about clean code; it’s a story about working with teammates.
Correct-er process: privately speak to the original author and mention that you would like to rewrite and give them an overview of how & why. Sometimes they may be able to talk you out of it!
Just simple statically generated web pages. Partially converted from a previous wordpress blog.
Great reference, thank you. There is nothing new under the sun.
I think we only need the most basic of logic checkers or algorithms. The main need is for widely available and usable datasets and transparent reductions. I think we could get very far with “good enough” heuristics.
An interesting metric if you want to make a short-lived utility with low latency to output.
Pretty useless if you are interested in the throughput or latency of a long-lived process.
If you’re interested in latency, the relationship between syscalls and latency is so scattered that you’re better off just measuring the latency. Otherwise, you might conclude that JIT-compiled Java is as good for quick command-line programs as Go.
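Measuring it directly is cheap; a minimal sketch (the helper name and the command are just placeholders) that times a process from spawn to exit, so startup cost is included:

```python
import subprocess, sys, time

def best_latency(cmd, runs=5):
    # Wall-clock time from spawn to exit; take the minimum
    # across runs to damp scheduler and cache noise.
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - t0)
    return min(samples)

print(best_latency([sys.executable, "-c", "pass"]))
```

Ten lines like this settle the quick-command-line-program question more convincingly than any syscall count.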
It also provides a datapoint on unnecessary complexity and bloat.
Pity Nim is not in the article (yet).
Well, the more unnecessary syscalls, the more of a runtime there is. Those additional syscalls never go away.
If the process runs long enough, these startup syscalls become negligible.
Once again: these metrics are useful in one context and totally useless in others. The important thing is to know whether they are relevant to your situation.
Not all of these syscalls are strictly startup-related, though.
Part of Rust’s overhead is from stdout locking, and that means additional syscalls every time you print, not just at startup.
If Rust generates syscalls for an uncontested lock, that’s bananas. Every decent lock implementation uses an atomic instruction in userspace, and only falls back to the kernel when it finds the lock held by another thread. For example, pthread_mutex_lock in musl libc tries an atomic compare and swap before resorting to the syscall implementation.
I meant the language runtime. As we can see, the “slower” and more abstracted languages make more syscalls. The more control you have, the fewer syscalls are made.
A commenter on HackerNews made a test with Nim: https://news.ycombinator.com/item?id=21957476
A lot of people in this thread seem to be focusing on startup time and ignoring the point of the article, hinted at more by the amount of disk space used and, secondarily, the number of syscalls:
“These numbers are real. This is more complexity that someone has to debug, more time your users are sitting there waiting for your program, less disk space available for files which actually matter to the user.”
This was not an objective test, this is just an approximation that I hope will encourage readers to be more aware of the consequences of their abstractions, and their exponential growth as more layers are added.
Unnecessary complexity translates into cognitive load for those who want to understand what happens under the hood.
Especially when contributing to the compiler or porting it to a different architecture.
I read the entire article and understood the point.
I don’t think the point is valid, that’s all - at least not when it comes to “real-world” software development.
For example, you can’t throw a pebble on this site without hitting a comment decrying C’s lack of memory safety. “But my users will thank me when they count the low number of syscalls my code is using!” isn’t much use when your program is crashing or their box is getting rooted because you messed up memory management.
Likewise, if your code is spending most of its time waiting for data to come down the wire, or for something to be fetched from a database, why optimize for syscall count?
Umm, you do realize the database fetching the data is a program using syscalls, and the routers transmitting the data also use syscalls? If everything in the chain is slower, you will be waiting longer…
Any database system can be coded as lean and mean as possible, and still be brought down by someone mistyping a query and performing a full-table scan.
A power outage can knock out a datacenter, forcing traffic to go via slower pipes. So users will be waiting longer, despite routers being lean and mean.
More syscalls contribute to slower performance, but they’re generally dwarfed by other factors.
A month or two ago there was a spate of posts where people “beat” GNU wc using a plethora of languages. It would be interesting to see the results of a program that reads a 1 MB Unicode text file and reports the number of lines, bytes, characters, etc., and compare the languages using this metric.
And thank you very much.