That property is called being disjunctive, and, while widely believed, it’s unknown whether π is in fact a disjunctive number.
We don’t know for sure: pi is believed to be a normal number. Normal numbers have the property that all single digits are equally likely to appear, all pairs are equally likely, all triplets, and so on. If this holds, any sequence of digits of length n, represented in base b, will appear with probability 1/b^n.
An example of a normal number formed by construction is Champernowne’s constant. In base 10, it is 0.12345678910111213(...) - a concatenation of all natural numbers. As you can see, any natural number you can name is guaranteed to appear inside this constant.
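That guarantee is easy to check for small numbers; a quick sketch, truncating the constant’s digit string for illustration:

```python
# Concatenate the natural numbers 1, 2, 3, ... as Champernowne's constant does:
champernowne = "".join(str(n) for n in range(1, 100000))

# Any number below the truncation point appears verbatim by construction:
for target in ("42", "1994", "31337"):
    print(target, target in champernowne)
```

Of course this only demonstrates the “appears somewhere” property for numbers we concatenated directly; the interesting part is that each target also shows up much earlier, straddling the boundaries between concatenated numbers.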
Normal numbers are interesting because it’s been proven that almost all real numbers are normal; however, outside of numbers explicitly constructed to be normal, very few reals have been proven to be normal.
Edit: u/sinic is actually more correct than I am; all normal numbers are disjunctive, but not all disjunctive numbers are normal.
Thanks to this thread I understand today’s SMBC! https://www.smbc-comics.com/comic/normal
That’s an open mathematical question. No one has proven it, but I think the consensus guess is yes.
I only skimmed the article, but the first animation, while looking nice, doesn’t seem correct to me.
The new process (ls, in this example) is described as sending its output to the shell, but in reality the forked process sends its output directly to the inherited TTY. The shell never sees any of it, which is also the reason it can’t do anything about background jobs messing up the output of the foreground job.
It is oversimplifying, which is fair given the introductory nature and all that; it is whatever is mapped to the file descriptor slots for STDIN/STDOUT/STDERR in the new child. That can be some PTY, other files, or “in the olden days” even empty (not as in /dev/null, as in not allocated).
It was a fun trick to mess with SUID binaries this way: the sparse allocation requirement means that if you, say, call close(2) and then exec(), the next open() will also become STDERR. So if you found a suid binary where you had partial control of fprintf(stderr, “%s went wrong”, “exploit goes here”), there were some privilege escalation opportunities. For that reason, kernels and libcs these days tend to make sure /dev/null goes there unless it is otherwise set. Even non-maliciously it caused some terrible bugs (whatever you logged also corrupting the file you were working with, etc.).
The sparse allocation requirement for open() absolutely sucks.
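The rule in question is POSIX’s requirement that open() (and anything else that allocates descriptors) return the lowest-numbered unused one. A quick sketch, using pipes purely for illustration:

```python
import os

r1, w1 = os.pipe()   # allocates the two lowest free descriptors
os.close(r1)         # free the lower-numbered one again
r2, w2 = os.pipe()   # POSIX: new descriptors take the lowest free slot
print(r2 == r1)      # True: the freed slot is reused immediately
```

This is exactly why close(2) followed by an open() in a SUID binary lands the new file in the STDERR slot.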
So in reality it can be both, some, or neither. It is a chain-of-trust thing: the shell can run new jobs in whichever way it sees fit (close/dup/open other things for stdin/stdout/stderr), including nesting new terminal emulators (tmux, screen and so on).
This is really a big pain-point: the shell isn’t certain what it is running and feeding instructions to. The terminal emulator isn’t certain what the shell is running. The thing that is running isn’t certain what it is supposed to be, interactive or in a pipeline, and has few options for being both. It can only guess (isatty()-style shenanigans: testing various “benign” pty-dependent ioctls and checking for failure).
Now with this premise, try and get buffering, synchronization and signal propagation “right”…
Yea I also question this part:
The user types in text which is buffered in the PTY’s STDIN line buffer. The user presses Enter and the PTY’s STDIN line buffer is sent to the shell
How would tab completion work? I suspect some really old terminals and shells would buffer the entire line, but modern tooling likely works on a per-character basis. Another example is Fish Shell, which has native type-ahead history/search that appears as you type (I think there are extensions that can do this in Zsh too?)
The article discusses non-canonical input processing (aka “raw mode”) and how it differs from the default processing that occurs for a tty where the line discipline (usually in the kernel) provides extremely basic line editing capabilities. Indeed, for better control over the display and for advanced features like tab completion, a modern shell will have the tty in raw mode all the time and will switch it back to canonical mode when starting a process at the request of the user.
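The canonical/raw switch described above boils down to a couple of termios calls; a minimal sketch (the helper name is made up):

```python
import os, termios, tty

def read_one_key(fd):
    """Read a single byte in raw (non-canonical) mode, then restore settings."""
    saved = termios.tcgetattr(fd)      # remember current (canonical) settings
    try:
        tty.setraw(fd)                 # per-keypress delivery, no line editing
        return os.read(fd, 1)          # returns after one byte, no Enter needed
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, saved)
```

In canonical mode the same os.read() would block until the line discipline saw a newline; raw mode is what lets a shell react to Tab immediately.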
Historical question:
a user on the network we ran together found out that he could make rakaur’s IRC bot run rm -rf --no-preserve-root /, and did so.
When did --no-preserve-root get added to rm? I thought it was reasonably recent, but I couldn’t find it in the NEWS file of coreutils.
Good find. I suspected that reference to --no-preserve-root in the article was an anachronism, but it would actually fit the timeline.
A nice property of Netpbm is that you can pretty much dump raw image data into an Emacs buffer and have it rendered directly: https://nullprogram.com/blog/2012/09/14
While we don’t have any specific evidence for this, a possible explanation is that the user database of master.php.net has been leaked, although it is unclear why the attacker would need to guess usernames in that case.
To me, guessing usernames is an indication for some kind of credential stuffing.
[the man directory] contains the generated man pages. Because it’s adjacent to bin (which is on my path) the man program automatically finds the man pages as expected.
Whoa! I just tried this: my $PATH contains ~/dotfiles/bin, so I created a simple text file ~/dotfiles/man/man1/mytest.1, typed man mytest, and it worked! Finally I can easily create & install man pages, rather than implementing a response to a -h/--help flag.
I just discovered this poking around in the manual page for manpath, or whatever the man config file is. It’s pretty neat! You can also set MANPATH.
So, I actually also tried, yesterday, to find the docs for the ‘$PATH -> man path’ auto-mapping in the manpage; but unlike you, I couldn’t find it. If you have an occasion, could you perhaps post the man page / excerpt you found, so I can know what I overlooked? No need to reply if you don’t have an occasion.
I ran man 1 man 1 manpath 5 manpath and read through those manpages; and I read through the comments in /etc/manpath.config; but the closest I could find was a set of directives like MANPATH_MAP /bin /usr/share/man, which only maps specific PATH dirs to specific man dirs. I couldn’t find anything documenting the rewriting rule ‘for any directory in $PATH, look in a sibling directory called man.’ Pointers welcome!
Linux man pages do not describe this, but FreeBSD manpages do. On Linux, old man (1.6g) documented this as well; only modern man-db does not.
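The heuristic itself is simple enough to sketch; this is a guess at the behaviour described in this thread, not man’s actual code:

```python
import os

def guess_manpath(path_env):
    """For each directory on PATH, look for a sibling directory named 'man'
    (e.g. ~/dotfiles/bin -> ~/dotfiles/man). A sketch of the heuristic only."""
    hits = []
    for d in path_env.split(os.pathsep):
        sibling = os.path.join(os.path.dirname(d.rstrip(os.sep)), "man")
        if os.path.isdir(sibling) and sibling not in hits:
            hits.append(sibling)
    return hits
```

Real implementations also consult MANPATH_MAP entries and built-in defaults, so this only covers the sibling-directory part.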
I have no idea what I was thinking about with the PATH -> MANPATH conversion, lol
However, I do know that you can set $MANPATH with a leading : to have it append directories to the auto-generated MANPATH, or with a trailing : to have it prepend them. So that’s what I do.
For what it’s worth, unless you authored more than half of your submitted stories, you won’t be labelled a heavy self promoter: https://github.com/lobsters/lobsters/blob/9c0e2c03e2c14a2fba39298e76258cc460f6f003/app/models/user.rb#L478
IIUC, that also includes ask stories. Shouldn’t those be excluded from “heavy self promotion”?
Also, what is this function used for? Moderation?
Also, what is this function used for? Moderation?
As far as I can see, it’s only a flag shown to moderators, but until one of those chimes in, that ratio is the closest thing to an authoritative answer for what the administration considers acceptable.
This is likely what you want:
If you want to go even deeper:
And as others have said, the cryptopals exercises are great, and I’ve heard the Schneier book recommended elsewhere too.
Dan Boneh, one of the authors of the latter book, offers an online course I have fond memories of.
So my mental model of commands in vi is that “nC” is the same as executing “C” n times.
There are other commands that invalidate this model: 3dd, when positioned on the second line of a buffer with three lines, deletes only two of them; dddddd deletes all three.
What is odd is that 3dd does nothing at all when positioned on the last line. This appears to be a long-standing issue in Vim (present all the way back to 1991, as far as I can see) and also in Neovim, but not in Evil.
0-indexing is better because 0 is a natural number, dangit
Yeah I know lots of people find it unintuitive, but so much math works out so much better when you start counting from 0. I will fite people over this
There is no consensus on whether 0 is part of the natural numbers. There is an ISO standard, but some definitions of the natural numbers specify that they contain all positive whole numbers, and since 0 is neither positive nor negative, it’s not part of the natural numbers under those definitions.
Peano arithmetic, which I think can be considered fairly standard, is defined in terms of a Zero and a Successor function.
The existence of a zero among the naturals means they can be considered a group, as zero is the identity for addition.
Integers are generally defined as two-tuples of natural numbers where (m,n) represents the difference m-n. The existence of a zero among the naturals means that every integer has a canonical representation as (m,0), (0,n), or (0,0); the first form is simply m, the second is -n, and the third is 0 (which satisfies both, as a bonus showing clearly that 0=-0). This is clearly more elegant than the alternative, where the integral representation of natural m is (m+1,1).
Defining S_0 as the base case of a sequence means that S_n results from n applications of the inductive rule. The simplest example of this is actually Peano arithmetic itself, where the number n self-evidently results from n applications of the Successor function to the Zero.
These are just a few examples. I’m unaware of any arguments in favour of excluding zero from the naturals.
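The “n applications of Successor to Zero” reading can be made concrete in a few lines; the encoding below is one illustrative choice, not the only one:

```python
Z = "Z"                      # Zero
def S(n): return ("S", n)    # Successor

def to_int(n):
    """Count how many applications of S separate n from Z."""
    count = 0
    while n != Z:
        _, n = n             # peel off one S
        count += 1
    return count

three = S(S(S(Z)))           # exactly three applications of S to Zero
print(to_int(three))
```

Note that Z itself decodes to 0 with zero applications of S, which is the whole argument for starting the naturals there.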
The existence of a zero among the naturals means they can be considered a group, as zero is the identity for addition.
Not a group but a monoid. Without zero they’d be a semigroup, i.e. lacking an identity. (Or maybe you had some other group in mind, but with respect to addition the naturals are a monoid; still, if people think multiplication is somehow more natural, then starting at 1 also gives you a monoid, so this argument can go both ways.)
I agree with the rest of your comment.
Edit: I guess I’ll put some thoughts here.
The difference between counting the position of something (what is the first number?) and the quantity of something (what is the smallest quantity?) is the difference that matters w.r.t. this indexing question. The reason 0-indexing is more natural is that in a positional number system (base whatever, where whatever is the number of symbols in the system), the first symbol will always have the meaning 0, because when you run out of symbols, like suc(9) in decimal, you start again in the next position (hence a positional number system) and get 10.
Just to drive the point home: a list in a programming language is a base for a positional number system, and the elements in the list are the digits you are using. If you ever find yourself doing index arithmetic of the form x / (length list) combined with x % (length list), then you are working with a base-(length list) number system.
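Concretely, that index arithmetic is a single divmod:

```python
lst = ["a", "b", "c"]                # three "digits": a base-3 positional system
x = 7                                # a flat index into repeated copies of lst
copy, digit = divmod(x, len(lst))    # 7 = 2 * 3 + 1
print(copy, lst[digit])              # which copy we're in, and which element
```

With 0-indexing this works for every x, including x = 0; with 1-indexing you need off-by-one corrections on both the quotient and the remainder.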
Another way to define the natural numbers is in terms of homotopies: if you imagine \mathbb{C} \setminus \{0\}, then a path that doesn’t circle around the origin is contractible and defined as 0 (a path that has a homotopy to the point it originates from and ends at is contractible), which allows you to prove that 0 = -0. Other numbers can be obtained by looking at how many times you circle around the origin and in which direction the spiral goes (you define one direction as positive and the other as negative, though since we’re defining the naturals you’d ignore directionality; however, you’d have to start at 0, because otherwise extending to the whole numbers would make your earlier system incompatible). This may seem kinda ridiculous, but the point is that this kind of inductive number system with a base case is often bidirectional, and so it makes sense for the base case to be 0.
Peano arithmetic, which I think can be considered fairly standard, is defined in terms of a Zero and a Successor function.
While this is commonly the case today, Peano himself originally defined it for positive integers only.
In addition (pun intended) to the rest of the thread, I’ll point out that the Von Neumann ordinals give a correspondence between natural numbers and set cardinalities. The number 1 corresponds to the sets with exactly one element. Similarly, the number 0 corresponds to the empty set.
Think of the natural numbers as the numbers for counting discrete things. (This is a decategorification!) Counting sheep in a field ala Baez, we count zero sheep, one sheep, two sheep, three sheep, etc. The fact that a field can have zero sheep is still countable with natural numbers.
Think of the natural numbers as the numbers for counting discrete things. (This is a decategorification!) Counting sheep in a field ala Baez, we count zero sheep, one sheep, two sheep, three sheep, etc. The fact that a field can have zero sheep is still countable with natural numbers.
Yep.
This is what I meant (elsewhere down-thread) when I said that zero is a generalisation of magnitude. Without zero, the question ‘how many [sheep in a field, for example] are there’ has two different kinds of answers: ‘there are n’ or ‘there aren’t any’. That is, Either None PositiveInt. If zero can be a magnitude, then the answer always takes the form ‘there are n’, with the former ‘none’ case replaced by n=0.
(There are additional generalisations to be had: integers let you reason about deficits, and rationals let you reason about fractional quantities (‘how much’ instead of ‘how many’). But those generalisations come at the cost of added complexity, whereas zero is effectively free.)
Where’s the critical-severity vulnerability they announced a week ago?
duh, reading helps. Thanks!