Great writeup!
I recently made the jump from Postfix to OpenSMTPD and have been very happy. It was on my wishlist for years but I really needed filters to work. I loosely followed the blog post by Gilles himself: https://poolp.org/posts/2019-09-14/setting-up-a-mail-server-with-opensmtpd-dovecot-and-rspamd/ although it is quite lengthy and aimed at absolute beginners.
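(For anyone curious: on recent OpenSMTPD versions the filter hookup is only a couple of smtpd.conf lines. A sketch, assuming the filter-rspamd package is installed, with eth0 as a placeholder interface:)

filter "rspamd" proc-exec "filter-rspamd"
listen on eth0 filter "rspamd"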
nitpick:
disable_plaintext_auth = yes
That’s the default, so you could shorten the text by one explanation ;-)
Thanks! That blog post also helped me a lot, and I gave him a shoutout in the OpenSMTPD section.
I think I’ll leave that Dovecot setting as-is, because I prefer to be explicit about such an important setting… even though the text could definitely use some shortening. ;-D
Thank you very much for the writeup, I might actually make the jump now; I was very hesitant to use all those Docker all-in-one solutions.
Unlike SPF and DKIM, DMARC doesn’t really do anything
This is technically not true, as far as I know, since DMARC changes the behaviour of DKIM and SPF if you have it enabled. For example, pure SPF will only check the envelope sender (MAIL FROM) for validity but ignore the From: header, which is actually what the user gets to see. With DMARC enabled, this behaviour is extended.
https://media.ccc.de/v/36c3-10730-email_authentication_for_penetration_testers
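For reference, here’s roughly what the three records look like side by side (example.com, the selector, and the key are placeholders); the DMARC record is what ties the visible From: domain to whatever SPF or DKIM validated:

example.com.                       TXT "v=spf1 mx -all"
selector._domainkey.example.com.   TXT "v=DKIM1; k=rsa; p=<base64-public-key>"
_dmarc.example.com.                TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"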
DMARC in fact does so much that mailing lists have to change their behavior.
I still don’t fscking understand whether setting p=reject would be okay with (e.g. freebsd.org, freedesktop.org) mailing lists, or whether that would send all mail forwarded from me by the mailing lists (in case of freebsd.org at least, with the list’s added “signature” and me in From) to trash/spam everywhere.
I’ve moved my own setup from Cyrus to Dovecot a while ago, and never looked back. So much easier to set up and operate. Not to mention that the migration was prompted by a long-standing bug in Cyrus that caused message database corruption and went unfixed for over a year!
I had a similar experience with Postfix and Exim versus OpenSMTPD… the first two are clearly much more flexible, but a nightmare to set up for a poor hobbyist like me. Then I found OpenSMTPD, whose documentation fits into a couple of manpages instead of an endless series of HTML pages, and I never looked back.
Maybe I should look into it. I’m a sendmail survivor, so I don’t find Postfix all that hard to use (by comparison), but if it can be made simpler, I’m all for it.
For me, it’s quite important that I can write a config file from scratch using the available documentation; IMO just changing a few things in a template makes it hard to have a clear idea of what’s going on. I realize that’s a lot to ask from a program, but I think OpenSMTPD has nailed it whereas Postfix hasn’t… not to mention Exim.
That’s exactly why I switched. When tools have endless configuration options I always get the feeling that I’ve missed something, or I haven’t done something right. I just can’t feel confident that my setup is secure and I haven’t made some rookie mistake.
Great write up! Especially liked how you explained every nitty-gritty detail. I’m currently using docker-mailserver; it works for the most part, but Docker is so bloated. I might consider switching to your setup—I’ll just have to rework it for multiple users, across multiple domains.
Thanks! Once I finished writing this post, I was shocked at how long it had turned out… Yes, I included a lot of details, but still: email is a horrible system that needs a lot of explaining.
Adding more users and domains shouldn’t be too difficult, as long as there aren’t too many. Just™ update the user databases of OpenSMTPD and Dovecot, and add some domains in the former’s config file, and you should be good to go.
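As a rough sketch of the smtpd.conf side (the domain names and file paths below are placeholders):

table domains { "example.com", "example.net" }
table virtuals file:/etc/mail/virtuals

action "deliver_local" maildir virtual <virtuals>
match from any for domain <domains> action "deliver_local"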
Collabora does a lot of good open-source work, and that costs money. If they can get a big fistful of cash for implementing an optional feature such as HDCP then I’m not too concerned.
Is DRM a good thing? No, of course not. But as he says: it’s totally ineffective. It’s never been an obstacle for me, so it’s pretty low on my “to worry about” list.
HDCP isn’t an optional feature if its presence means the proliferation of media that’s only viewable through HDCP-protected video paths.
DRM is totally ineffective when it comes to preventing the media from being distributed illegally, but it necessarily involves limiting a user’s control over their own computer.
Very, very well written.
This excellently summarizes my own experience in the last few months, where an extremely challenging course forced me to develop a form of self-discipline that has made me a significantly happier person.
Also, this reminds me of a quote by US President Calvin Coolidge:
“Nothing in the world can take the place of persistence. Talent will not; nothing is more common than the unsuccessful man with talent. Genius will not; unrewarded genius is almost a proverb. Education will not – the world is full of educated derelicts. Persistence and determination are omnipotent.”
I found the history of that quote to be quite interesting: https://quoteinvestigator.com/2016/01/12/persist/
The TL;DR summary:
In conclusion, a family of closely related passages evolved from a text written in 1881 by Theodore Thornton Munger. The initial passage was focused on the importance of “purpose”. By 1902 Edward H. Hart employed a variant in a speech that stressed the primacy of “persistence”. By 1929 Calvin Coolidge was being credited with an instance about “persistence” that closely matched the words of Hart.
(Coolidge is one of my favorite presidents, and is depressingly under-acknowledged. But I hadn’t seen this quote before!)
Thank you!
I’ve had similar experiences where I feel much happier working towards a challenge and, similarly, unfulfilled when I am not pushing myself in some way. The quote from Coolidge is excellent as well - particularly this segment: “nothing is more common than the unsuccessful man with talent”.
Presumably to raise themselves above the law even more. Or maybe they just want to see all transactions their users make, because they definitely have the right to do that /s.
It depends. Purely looking at the programming field, then yes, I agree: in practice you will usually work on a specialized software component that communicates with components made by other teams over an agreed protocol or interface. So, the better you are at that specific thing, the better your career.
I’m studying to become an engineer, and I think it’s a bit different in that field. I confess that I haven’t actually entered the job market there yet, but from the stories I hear from people who have, it seems large amounts of time are wasted on a lack of mutual understanding between multiple disciplines. Therefore I think having at least a basic insight into other specializations would be very helpful.
Or maybe I’m just looking for excuses for the fact I’m going in a fairly broad direction myself ;-)
It pains me to see how happily people are getting herded around by GAFAM with dirty tricks like this, but I honestly don’t know what to do. I haven’t managed to convince even a single person to use “freer” software in years.
Once upon a time I would’ve tried to spread the Holy Word of open-source (or at least user-respecting) software like Firefox and Linux, but I’ve mostly given up on that. In my experience, “normal people” associate these things (yes, even Firefox) with “technical people”, so they’re too scared to try. (And actual technical people are too stubborn ;-D)
My last job was a shop of 20 or so developers. We all had the same standard-issue MacBook. I could tell mine apart because I put a Firefox sticker on it. I evangelized quite a bit with my teammates. By the end of my three-year tenure there, I had convinced nobody that at least supporting Firefox in our web applications was necessary, and nobody to use Firefox as their main browser.
I left that job a few weeks ago, and I learned from my old boss that he and two others started using Firefox because they missed me and my ridiculous evangelism.
It’s okay to give up, or take a break from, evangelizing, because it’s kind of exhausting and puts your likability with others at risk. But I think it’s hard to tell when you’re actually being effective. There’s no real feedback loop. So if you’re tired of it and don’t want to do it anymore, by all means stop evangelizing. But if you want to stop just because you don’t think you’re being effective, you may be wrong.
You can’t convince “normal” people by saying it’s “freer” and open source (unfortunately, not even that it’s more private) - they don’t care about this stuff.
I convinced my wife to try FF once Quantum came out, saying “it is much faster”. She got used to it.
I managed to convince a teammate to use FF because of the new grid inspector.
People only care about “what’s in it for me”.
Of course that’s what they care about, I know. IMHO not having a multi-billion-dollar corporation breathing down your neck is a pretty big plus, but I guess that’s just a bit too abstract for most people, as you seem to be implying.
I’m a young person starting my adult life, and I’m deeply troubled by this. Education has never been better, average intelligence has just peaked, and people of my age have never had so many opportunities. On top of that, there are some urgent problems in the world, most notably global warming, whose heat (no pun intended) my generation and our offspring will feel the most.
And what do I see my generation doing? Watching cooking videos on Facebook. And when I talk to them about cooking, they say they barely know anything, and they can’t be bothered to learn. (Ok, that’s not all they do, but most of the other stuff isn’t any more constructive.)
Mainstream technology has made us used to getting simple, pleasant stuff spoonfed to us, without any challenges or confrontation. And that’s what we’re programmed to prefer. But the world is a difficult and complex place, so I don’t feel comfortable with how many people are growing up with such a simplistic mindset.
When we discuss social media and games in this context, it’s really not all bad. Instant messaging is great, and to an extent these platforms really can bring people together. Games can be stimulating and sociable. But currently it’s working out to be more of an intellectual sedative than anything else, as this essay says.
In my teenage years and early 20s, when this was first published, I had a problem with World of Warcraft exactly the way the author describes. I also struggled with nicotine addiction. The problems he describes are very real - and they continue today.
Indeed. Several years ago I also had a gaming problem, which eventually got “fixed” by my university workload. I recently sold all my virtual items, and was shocked by how much they were worth… I dread to think how big a loss I made. They got me, they got me good. And I don’t want the same to happen to others.
Yes, pleasures are an important part of life, and a lot of tech revolves around that. But our brains aren’t programmed to moderate positive impulses.
Having known people on heroin, there are definitely parallels between that addictive behavior and how people use modern tech. They can even look similar. The heroin addicts would lie around staring at TVs, doing nothing but getting high; they had trouble interacting with people, were kind of zoned out, and disliked bright lights. Folks motionless, staring at a screen with a controller in hand, getting highs from meaningless level-ups for hours on end, often look and act kind of similar. Creative skills decline in both cases. Unless it’s a creative game, which is its own debate.
With social media, there are interactions between people by design. They seem like a degraded mode of what social interactions or writing can be, compared with what focused authors or stage debaters do. They’re optimized to quickly throw out something small (talking points) to get Internet points in terms of upvotes, likes, shares, etc. The rewards reinforce the social behavior that works in those apps. The more people do that, the more their brains get optimized for receiving and delivering it. I’d argue that combo is hugely bad for society, given the complexity of most issues we face. The corporate media was doing it first. Now we get to do it to ourselves on social media, too.
To top it off, it’s extraordinarily hard to get people to stop using these products once they start. They also go through withdrawals. I doubt they’re as severe as heroin for most people, but they’re there. The analogy fits if we keep in mind heroin is a more extreme version of the current phenomenon. The author even states that themselves.
Yes. Addiction to social media has been normalized. Nobody bats an eye at the teenagers sitting around all staring at their phones together, or the couple at a restaurant both lost in their screens.
It’s just downright weird. Everyone seems mostly okay with their addiction.
I grew up in a time when I kept my computer use to myself, taking away the lesson that a balanced life was good. Now the general public (who’d have judged me for my nerdy tinkerings) acts socially inept in public due to said technology, and I’m supposed to pretend that it isn’t profoundly strange.
Absolutely, and it’s all designed to suck people into addiction. Social media, YouTube, many online games… their business models want people to spend as much time as possible on their platform, to show more ads, collect more data, sell more virtual items, etc. So from a (greedy) economic perspective it’s the “right” thing to do.
See also this older Lobsters post: The Tech Industry’s Psychological War on Kids (yes, it’s Medium, but this one is pretty good). It’s by a child psychologist describing how his profession’s knowledge is being used in unethical ways.
Yeah, that’s one of the articles I was thinking about when replying to voronoipotato. Thanks. Relevant quote:
“This alliance pairs the consumer tech industry’s immense wealth with the most sophisticated psychological research, making it possible to develop social media, video games, and phones with drug-like power to seduce young users.
These parents have no idea that lurking behind their kids’ screens and phones are a multitude of psychologists, neuroscientists, and social science experts who use their knowledge of psychological vulnerabilities to devise products that capture kids’ attention for the sake of industry profit. What these parents and most of the world have yet to grasp is that psychology — a discipline that we associate with healing — is now being used as a weapon against children.”
Technology, or really any pleasure in life, isn’t a replacement for treatment of mental health issues. Pointing out escapes as the problem is part of the problem, imho. While you might have an opinion formed on anecdotes, making a parallel between the two is dangerous for several reasons.
It’s not an anecdote when a wide chunk of society is as absorbed, hooked, non-productive, and anti-social as folks taking heroin. That’s more like empirical data in the making. There are also my anecdotes on top of it. We’re also saying tech is causing mental health issues (addictive behavior), not that it’s a replacement for treatment.
Actually it is an anecdote and mental health professionals are rightly cautious about what they consider an addiction. It’s not things that people simply enjoy doing more than you.
I’m one of the anecdotes. I’m capable of both enjoying things and knowing I enjoy them too much. Being able to detach oneself for introspection is important. Health professionals have already defined properties and negative effects of addictive behavior. Some of what people are doing with technology matches some of them. I mean, I’d love to see a large study of it by experts to see their side of it. They might be doing it.
Until then, I have to combine existing terms and methods with the behavior of millions of people to call the trend something. Getting absorbed in meaningless activities that take up more and more of their time while diminishing their mental capacities and wallets seems like an addiction. Even many of them say they’re hooked even if they didn’t want to be. No surprise given people designing the games intend for them to be addictive. Some even hire psychologists or leverage prior work on that (esp conditioning).
If it wasn’t as addictive as they intended, why are so many people hooked on it, mainly benefiting the supplier? And giving up more of their benefits all the time, like being forced to watch ads on game platforms?
So you’re trying to claim something is habit-forming; I won’t debate that. There’s a medical definition of addiction and again it’s not strictly wanting to do a thing. I don’t want to make claims because I’m not a mental health professional. However, as I understand it, the danger of labeling it as an addiction is that it leads people to think it’s a root cause and not symptomatic of other mental health issues. This is the danger of labeling technology as heroin. You can try to claim all you want that it’s heroin, but it’s imho pretty insulting to people who have had to be there for real addicts. I have no trouble setting aside my phone for a day or a week or a month, yet I use my phone pretty regularly. I don’t know that I could say the same for actually addictive things.
“There’s a medical definition of addiction and again it’s not strictly wanting to do a thing.”
I’ve been describing how people feel like they have to do a thing whether they really want it or not. They get hooked in, it causes them problems, maybe those around them, and has negative effects on their mind and body. Let’s compare it to medical definition. Fits a-e for quite a few people.
“but it’s imho pretty insulting to people who have had to be there for real addicts”
I just ran it by one who got off heroin. They saw the comparison, as long as it’s hyperbole, given that heroin is on the extreme end. We agreed small amounts of marijuana are probably a better comparison in terms of actual effects. The author is trying to make a different point, though, one that ties into how opium was introduced into society, modified people’s behavior negatively while disguised as a positive, and eventually had to be legislated. On that end, it fits better than marijuana, given it’s about societal impact more than the strength of the actual drug.
“I have no trouble setting aside my phone for a day or a week or a month, yet I use my phone pretty regularly.”
Then you’re not addicted to it. A lot of people can’t seem to get off it. They even ignore their jobs or children to do non-achievements in virtual worlds. They might be addicted to it. Different effects for different people. Like recreational drugs.
The argument you just gave implies that there’s either no constructive use for the technology or that there is a constructive use for heroin. In this way there is likely a much closer analogy that still communicates the habit-forming nature of social media for reward-seeking individuals and the predatory nature of the corporations involved. That’s really my whole beef. Talking to an individual who survived is perhaps not as constructive as talking to a parent of one who didn’t, in this context.
The argument you just gave implies that there’s either no constructive use for the technology or that there is a constructive use for heroin.
Where do you get this strange framing? No, my argument is comparing a known property of one thing, addiction, to another thing. The broader article also talks about how there’s a high which people thought was beneficial to both. Then another connection. Those are a bit more abstract given the high and ramifications of heroin are stronger than Farmville. Your framing is arbitrary. Addictiveness of this, of that, and consequences is what I’m mostly doing. I need no other properties or arguments for their existence to compare this single property.
“Talking to an individual who survived is perhaps not as constructive as talking to a parent of one who didn’t, in this context.”
It’s interesting you bring up social media. I’d not normally think of them as a survivor as you said. The next generation after me has to be connected to friends to achieve things in life. At least, that’s what they think. They follow each other on these outlets. The reinforcement mechanism is so strong as to possibly become part of their identity. Trying to quit social media might be really, really hard for these kids with the few that achieve it or dodge it considering themselves something like survivors. There’s definitely going to be a high cost for many of them.
Still, this is an abstract comparison. The magnitude of heroin rewards and withdraws on individual is much higher than most of these other addictive things. There’s a partial, but not full, comparison.
I mean, I grew up on social media, and I quit some of them and didn’t quit others. However, totally quitting all social media is like quitting talking to your friends.
Back when I made my (now long abandoned) Linux From Scratch installation, I used s6 and execline for my init system. Overall it was a very pleasant experience; Skarnet’s software is good at what it does.
That said, the execline syntax is rather verbose, especially for loops, variable substitution and if-statements.
The foreground {} and background {} system was great though.
I more often use if { … } instead of foreground. That replaces the use of sh’s set -e flag and takes up less space. I also set symlinks of foreground to fg and background to bg.
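A tiny sketch of the difference (the paths and program names are made up, and execlineb’s location varies by distro):

#!/usr/bin/execlineb -P
# if runs the block and aborts the whole script if it fails,
# much like sh's set -e:
if { test -d /etc/mysvc }
# foreground runs the block and continues regardless of its exit code:
foreground { echo "starting mysvc" }
# the rest of the script is the program it finally execs into:
mysvc-daemon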
On the one hand, this makes sense if you’re writing code for a machine that has many gigabytes of RAM and a CPU with a clock speed of several GHz, and your code doesn’t have to touch the hardware directly.
On the other hand: if the hardware doesn’t allow for such luxuries, several of these points don’t make much sense (multi-variable return through tuples, iterators, …), so the only languages that still make a fair comparison are probably Forth and Fortran.
I’ll note some of my thoughts:
C is fairly old — 44 years, now!
HTTP turns 30 this year, and TCP/IP is more than 10 years older than HTTP. It’s a bit weird that people think that anything that has a double-digit age is necessarily bad.
Alas, the popularity of C has led to a number of programming languages’ taking significant cues from its design
Of course, stupidly copying already-existing things isn’t a good idea (and it’s especially hard to notice them if they’re the only possibilities you know of), but then again, if you can afford it, you aren’t forced to use C. (But don’t overdo it, Electron programs are very unusable on my machines.)
Textual inclusion
That’s an artifact of the hardware it was first developed on. (Although the compiler could be made to read symbol information from already-built object files, I guess.)
Optional block delimiters
Braces and semicolons
Same thing, as it makes the parser much easier to implement.
Bitwise operator precedence
Increment and decrement
!
Assignment as expression
Switch with default fallthrough
These are quirks or legacy cruft indeed. (Although, somehow, chained assignments like a = b = c = d result in better-optimized code on some platforms[citation needed].)
Leading zero for octal
That made sense on the early PDPs (like the 18-bit PDP-7 that B was developed on): an 18-bit word is exactly six octal digits.
No power operator
Integer division
Another artifact of the PDP hardware: there was no hardware instruction for pow, nor did it have an FPU, so you’d still have to have something if you wanted to divide numbers. (The majority of the hardware I wrote code for doesn’t have an FPU either. And yes, most of those were made after 2000. Then again, some of them don’t have a division — or sometimes even a multiply — instruction either.)
C-style for loops
As iterators would generate too much cruft (and LTO-style optimizations weren’t really possible), this was the most expressive construct that enabled a whole range of iteration-style operations.
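For instance, the same three-slot header covers non-numeric iteration too (the struct and names below are hypothetical):

struct node { int value; struct node *next; };

int sum(const struct node *head) {
    int total = 0;
    /* the same init/test/step slots that count integers also walk a list */
    for (const struct node *n = head; n != NULL; n = n->next)
        total += n->value;
    return total;
}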
Type first
I doubt it’s the ‘type first’-part of the syntax that causes the problems, but rather how pointer and array types are indicated.
Weak typing
Again, if you’re working close to the hardware, you want to be sure how things are actually represented in memory (esp. when working with memory-mapped IO registers, or when declaring the IDT and GDT on an x86, or …), as well as type-punning certain data.
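A typical example of the kind of code meant here, with a made-up register address and bit:

#include <stdint.h>

/* hypothetical timer peripheral: control register at a fixed bus address */
#define TIMER_CTRL (*(volatile uint32_t *)0x40021000u)

void timer_enable(void) {
    TIMER_CTRL |= 1u; /* set the (assumed) enable bit: a raw bit in memory */
}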
Bytestrings
Single return and out parameters
More instances of a mix of the need to know the in-memory representation, and legacy cruft.
Silent errors
Exceptions require a lot of complex machinery (setjmp/longjmp, interrupt handling, …), which mightn’t be feasible for a number of reasons (CPU speed, need for accurate/‘real time’ timing, …). The “monadic” style seems to be implemented with a lot of callbacks, which isn’t that useful either. (Of course, there could be a better way of implementing those.)
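For the unfamiliar, the usual way to fake exceptions in plain C is the setjmp/longjmp pattern below (the names and the error condition are invented):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf on_error;

static void parse_input(int broken) {
    if (broken)
        longjmp(on_error, 1);     /* "throw": unwind back to the setjmp */
}

int main(void) {
    if (setjmp(on_error) == 0) {  /* "try" */
        parse_input(1);
        puts("parsed fine");
    } else {                      /* "catch" */
        puts("parse failed");
    }
    return 0;
}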
Nulls
On some platforms, dereferencing a null sometimes does make sense: on AVRs, the general registers are mapped at 0x0000 to 0x001F, on the 6502, you’d access the famous zero page (although C doesn’t work that well on the 6502 to begin with), for some systems, the bootloader/… resides there (and is not readable in normal operation mode), and even on Linux, you can do this:
// needs root --->
#include <fcntl.h>    // open()
#include <sys/mman.h> // mmap(); PAGE_SIZE is e.g. 4096, see <sys/user.h>
#include <unistd.h>   // write(), close()

int fd = open("/proc/sys/vm/mmap_min_addr", O_WRONLY);
write(fd, "0\n", sizeof("0\n"));
close(fd);
// or echo 0 | sudo tee /proc/sys/vm/mmap_min_addr
// <---
// or create an ELF file whose segment headers map data to address 0.
void* map = mmap(NULL, PAGE_SIZE, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, 0, 0);
*((size_t*)map) = 0; // works!
(EDIT: re: ELF file that maps something to 0: see also, note the ORG 0.)
And that’s why it’s considered undefined behaviour.
No hyphens in identifiers
That’s another trade-off in syntax when using infix operators. Whitespace is unimportant or hyphens in identifiers, pick one: otherwise foo-bar is ambiguous between a single identifier and a subtraction. (Or use a different symbol for subtraction, but that’d very probably result in something silly.)
“this makes sense if you’re writing code for a machine that has many gigabytes of RAM and a CPU with a clock speed of several GHz”
There were people using more memory-safe, non-C-like languages in the 80’s. Modula-2 was bootstrapped on a PDP-11/45. Amigas had Amiga E. Ada and Java subsets are used in embedded systems today. People used Schemes, OCaml, and ATS with microcontrollers.
You’re provably overstating what ditching C requires by a large margin.
You’re right in that aspect, although I still have to resort to hand-coding assembly now and then (avr-gcc isn’t that great, so I doubt OCaml, let alone Scheme, would be faster). But then again, it’s not always the case that 128k of data needs to be processed within a millisecond (because of weird memory timings).
Meanwhile Forth lets you code even closer to assembly than C does without forcing you to give up on interactivity.
Hence what I wrote in the original comment:
[…], so the only languages that still make a fair comparison are probably Forth and Fortran.
I usually tend to roll my own Forth compiler (or ‘compiler’) when I need to write a lot of boilerplate code and there’s no (good enough) C compiler available.
I usually tend to roll my own Forth compiler
Well, I do admire that you straight up roll your own on the platform. The fact that it’s easy to do is one of Forth’s design strengths. I do wonder if you just do it straightforward like an interpreter does or came up with any optimizations. The Forth fans might be interested in the latter.
There aren’t many optimizations in it (because all the code that needs to run quickly is written in assembly), hence the quotation marks. It is compiled (at least to a large extent), though; there’s no bytecode interpreter.
Also, before you lose your sleep: this is definitely NOT for products that will be sold, it’s only for hobby projects (demoscene).
Oh OK. Far as sleep, all the buggy products just help justify folks spending extra on stuff I talk about. ;)
“Optional block delimiters / Braces and semicolons: Same thing, as it makes the parser much easier to implement.”
S-expression parsing is far easier to implement — the sort of thing a high-school student can do in a weekend. S-expressions always seem like such a win that it’s remarkable to me how few languages use them.
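For illustration, here’s a minimal S-expression reader in C that just dumps the nesting; a throwaway sketch, but it shows how little machinery is needed:

#include <ctype.h>
#include <stdio.h>

static const char *p; /* cursor into the input string */

static void parse(int depth) {
    while (*p) {
        if (isspace((unsigned char)*p)) {
            p++;
        } else if (*p == '(') {
            p++;
            printf("%*slist:\n", depth * 2, "");
            parse(depth + 1);
        } else if (*p == ')') {
            p++;
            return;
        } else { /* an atom: runs until whitespace or a paren */
            const char *start = p;
            while (*p && !isspace((unsigned char)*p) && *p != '(' && *p != ')')
                p++;
            printf("%*satom: %.*s\n", depth * 2, "", (int)(p - start), start);
        }
    }
}

int main(void) {
    p = "(define (square x) (* x x))";
    parse(0);
    return 0;
}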
That’s also true. (I once wrote a Lisp interpreter in TI-BASIC. In high school, indeed.)
EDIT: although, it’s still easier than whitespace-sensitive syntax, which the article was comparing it to. (I should’ve been more explicit.)
S-expressions always seem like such a win that it’s remarkable to me how few languages use them.
Yes, they’re extremely easy to parse, but there’s a reason Lisp is said to stand for Lost In Stupid Parentheses. Obviously structuring your code nicely can alleviate most of the pain, but I definitely see the appeal of a less consistent structure in favour of easy readability. IMO sacrificing some of the parser programmer’s comfort is a price worth paying to improve the user’s experience.
(For serialization, on the other hand, S-expressions are indeed very underrated.)
On some platforms, dereferencing a null sometimes does make sense: on AVRs, the general registers are mapped at 0x0000 to 0x001F, on the 6502, you’d access the famous zero page (although C doesn’t work that well on the 6502 to begin with), for some systems, the bootloader/… resides there (and is not readable in normal operation mode), and even on Linux, you can do this… And that’s why it’s considered undefined behaviour.
Dereferencing the macro NULL or (void *)0 is, as far as I know, always undefined behavior. Even if you have something at address 0x0000, the bit representation of NULL doesn’t have to be identical to that of 0x0000. According to the abstract C machine, NULL simply doesn’t point to any valid object, and doesn’t have to have any predefined address other than that expressed by (void *)0.
This. A null pointer isn’t “a pointer to address 0”, it’s “a pointer to nothing valid that can be used as an in-band marker for stuff”.
Well, technically, your response is true.
On the other hand, NULL is pretty much always defined as (void*)0, and dereferencing it pretty much always gets compiled to something like mov var, [0] or ldr var, [#0] or … The standard is only a standard :), I consider the compiler extensions etc. as part of the language when coding, for practical reasons.
On the other hand, NULL is pretty much always defined as (void*)0, and dereferencing it pretty much always gets compiled to something like mov var, [0] or ldr var, [#0] or … The standard is only a standard :), I consider the compiler extensions etc. as part of the language when coding, for practical reasons.
This is simply incorrect and you’re missing the point completely. (void *)0 is a valid definition for NULL because (void *)0 is the C language’s null pointer. If 0x0666 is the actual address of the null pointer, then it’s your compiler’s responsibility to translate each (void *)0 to 0x0666. In math you often use 0 and 1 to represent neutral elements of operations, even if the operations don’t actually happen on numbers (for example: the zero-vector (…), the identity function, etc), and especially not on the actual 0 and 1 we know.
Here’s what GCC 8.2 does on x86-64, probably the most used architecture right now.
P.S: I keep using (void *)0, but that’s equivalent to 0, which is equivalent to NULL in a context involving pointers.
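To make the abstract-machine point concrete, here’s a small sketch; on mainstream platforms the two pointers compare equal, but only the first initialization is guaranteed by the standard to produce a null pointer:

#include <stdio.h>
#include <string.h>

int main(void) {
    int *a = 0;               /* the null pointer, whatever its bits are */
    int *b;
    memset(&b, 0, sizeof b);  /* an all-bits-zero pointer object */
    printf("%d\n", a == b);   /* 1 on common platforms; not guaranteed */
    return 0;
}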
write(fd, “0\n”, sizeof(“0\n”));
sizeof("0\n") is three, not two.
re: ELF file that maps something …
45 bytes: https://www.muppetlabs.com/~breadbox/software/tiny/teensy.html
sizeof(“0\n”) is three, not two.
$ printf '0\n\0' | sudo tee /proc/sys/vm/mmap_min_addr
0
$ hexdump -C /proc/sys/vm/mmap_min_addr
00000000 30 0a |0.|
00000002
It’s not a disaster :)
45 bytes: https://www.muppetlabs.com/~breadbox/software/tiny/teensy.html
I’m very familiar with that, I’ve made some programs that misuse the knowledge presented there (a, b), and furthermore, this trick makes the program linked from here work again:
Here you can find an even smaller hello-world program, apparently written by someone named Kikuyan (菊やん). It is only 58 bytes long, […]
I’ll be there too this year, on both days, and I’m really looking forward to it. I’ll be all over the place on Saturday, while on Sunday I’ll spend some time in the LLVM and DNS devrooms. I’m also especially hyped to learn about quantum computing, eBPF, the RISC-V SBI, and full-text-search.
Yes! I’ll be attending on Saturday together with @eloy. Also interested in both the RISC-V and especially the Rust talks. Haven’t really looked at the schedule in detail yet, but I will probably be in the Rust room most of the time.
Edit: too bad, it seems that the Rust talks are on Sunday.
I’m very curious about RISC-V track and want to go the Rust, Security, Infra Management and Container talks.
Does anyone recommend a talk/speaker? I heard that some rooms are small, so you need to arrive early; if the room is full, you can’t sit on the floor because of security stuff, so basically you’ll miss the talk.
True, I remember from my 2017 visit that it can get very busy, depending on how interesting the talk description is. Most rooms follow the “nobody sits on the floor” rule, but I also attended an introduction to open-source FPGA programming where literally every square meter of the floor was filled with people. But yeah, show up early to be safe.
(I can’t help you with the speakers though)
Yes, FOSDEM is notoriously bad at predicting turnout for smaller rooms. They don’t seem to base room allocations on current demand or “hotness” of a topic, but base it on historical attendance, meaning newer or newly popular topics get a small room and are often at capacity all the time, with queues building up in the hallway.
Not trying to disrespect the organizers - I’ve attended FOSDEM since 2004, and I love it. Organizing an event like this is hard, and they only have unpaid volunteers to do it.
Some rooms are extremely crowded and some are not. If you can’t find a spot, there are live streams that you can watch. Instead of doing that, though, I prefer to just find something else that sounds interesting. The streams are online shortly after FOSDEM ends, and I prefer seeing talks live while I’m there.
The FOSDEM Companion app is nice for finding talks on a whim, and it has links to streams and a map of the site. Find it at https://f-droid.org/en/packages/be.digitalia.fosdem/
Urgh, damn it. I guess I should download Wikipedia while Europeans like me are still allowed to access all of it… It’s only 80 GB (wtf?) anyway.
That and the Internet Archive. ;)
Regarding Wikipedia, do they sell offline copies of it so we don’t have to download 80 GB? Seems like it’d be a nice fundraising and sharing strategy combined.
I second this. While I know the content might change in the near future, it would be fun to have memorabilia of a digital knowledge base. I regret throwing my Solaris 10 DVDs, which Sun sent me for free back in 2009, in the garbage. I was too dumb back then.
It’s a bit out of date, but there’s wikipediaondvd.com, and lots more options at dumps.wikimedia.org.
I wonder how much traffic setting up a local mirror would entail, might be useful. Probably the type of thing that serious preppers do.
You can help seeding too.
Actually Wikipedia is exempt from this directive, as is also mentioned in the linked article. While I agree that this directive will have a severely negative impact on the internet in Europe, we should be careful not to rely on false arguments.
To be explicit, this is not a “modern systems are bloated” thing. The English Wikipedia has an estimated 3.5 billion words. If you took out every piece of multimedia, every talk page, all metadata, and the edit history, it’d still be 30 GB of raw text uncompressed (3.5 billion words at an average of 8-9 bytes per word, including spaces and markup).
Oh that’s not what I was implying. The commenter said “It’s only 80 GB (wtf?)”
I too was surprised at how small it was, but then remembered the old encyclopedias and realized that you can put a lot of pure text data in a fairly small amount of space.
Remember that they had a very limited selection, with low-quality images, at least the ones I had. So it makes sense that there’s a big difference. I feel you, though, on how we used to get a good pile of learning in a small package.
30 GB of raw text uncompressed
That sounds like a fun text encoding challenge: try to get that 30 GB of wiki text onto a single-layer DVD (about 4.7 GB).
I bet it’s technically possible with enough work. AFAIK Claude Shannon experimentally showed that human-readable English text only carries about one bit of information per character; at one bit per 8-bit character, 30 GB would shrink to under 4 GB, which would just fit. Of course there are lots of languages, but they must each have some optimal encoding. ;)
Not even sure it’d be a lot of work. Text packs extremely well; IIRC compression ratios over 20x are not uncommon.
Huh! I think gzip usually achieves about 2:1 on ASCII text and lzma is up to roughly twice as good. At least one of those two beliefs has to be definitely incorrect, then.
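For what it’s worth, this is easy to measure; assuming you’ve downloaded the enwik8 sample (the first 10^8 bytes of a Wikipedia dump) from the benchmark mentioned elsewhere in this thread:

$ gzip -9 -k enwik8
$ xz -9 -k enwik8
$ ls -l enwik8 enwik8.gz enwik8.xz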
Okay so, make it challenging: same problem, but this time a 700 MB CD-R. :)
There is actually a well-known text compression benchmark based around Wikipedia; the best compressor manages about 8.5x while taking just under 10 days to decompress. Slightly more practical is lpaq9m at roughly 15 minutes, but with “only” 6.9x compression.
What does 6.9x compression mean? Is it just 30 GB / 6.9 = 4.3 GB compressed? That doesn’t match up with the page you linked, which (assuming it’s in bytes) is around 143 MB (much smaller than 4.3 GB).
From the page,
enwik9: compressed size of first 10^9 bytes of enwiki-20060303-pages-articles.xml.
So the input is 10^9 bytes ≈ 0.93 GiB, not the full 30 GB. lpaq9m lists 144,054,338 bytes as the compressed output size + compressor (10^9 / 144,054,338 ≈ 6.94), and 898 ns/byte decompression throughput, so (10^9 × 898) / 1e9 / 60 ≈ 15 minutes to decompress those 0.93 GiB.
OpenSMTPD had two separate remote code execution security issues in 2020. Maybe that should get a mention in the section named Security.
Thanks, I’ve added a paragraph to the end of the OpenSMTPD section mentioning that it must be kept up-to-date to fix potential vulnerabilities.
I don’t believe that the alternatives are fundamentally better though, and I still definitely prefer OpenSMTPD’s manageable configuration syntax. And, to their credit, those vulnerabilities were fixed quickly; I trust the OpenBSD project to take security seriously.
When I set up my mail server a couple of months ago, I picked Postfix. It is a very mature project, a lot more popular than OpenSMTPD (which I hope can mean more eyeballs), and it also advertises itself as having a focus on security. As far as I have been able to find, it has never had a remote code execution hole: https://www.cvedetails.com/vulnerability-list/vendor_id-8450/product_id-14794/Postfix-Postfix.html
Its configuration is not as nice as OpenSMTPD’s, and there is more documentation to be read, but I managed to set it up with no previous experience. I guess I should have written a blog post describing the process too!