Don’t trust any of them. Ditch your mobile, or make sure the battery is removable. Don’t have anything mobile-enabled in your PC. Just avoid wireless in general unless it’s infrared. Even then, it needs to be off by default.
Let me see if I understood this summary correctly. It’s better not to use a mobile phone (whether it’s a smart one or not). If you have to use one:
That’s a good start. Might need to physically disable… however one would do so… the chip that connects to cell towers. That’s assuming there’s one for WiFi and one for cellular in the phone you use. Also, if no baseband is needed, a secure phone is easier to accomplish with homebrew or open components like this thing:
I also got a little inspiration from those VoIP phones using USB at Walmart & the JackPair I crowdfunded. The USB phones just need a few components that there’s already open cores for plus a board, buttons, shell, etc. Might even convert a popular one into an open one by replacing the CPU or USB chips. Put mediation in it so computer can’t attack the phone. Then, JackPair style, you do all the audio & encryption on that device with computer just doing transport.
The JackPair problem was that the phone might still record your voice or image while you’re using it. So, the microphone and camera must have a physical power switch on the smartphone. If you’re VoIPing through a PC, you just have to have no microphone or camera in it. The USB phone or JackPair-type devices work fine there, outside emanation or analog risks. You’d have to be really targeted for those attacks to happen, though.
The next step I had was how to add cellular. Well, there are the baseband chips or add-on boards. The trick is putting them on a dedicated board with an optocoupler/infrared connection to the main board. The main board doesn’t trust it, in that it mediates the I/O. The interface API & data format between the two must be simple, with use of finite state machines in an isolated process. All incoming data gets input validation. This collectively blocks protocol, data, DMA, and some electrical attacks. One can add EMSEC shielding on top of that if needed, to prevent wireless from illuminating the main board to bounce its keys or plaintext out. An example of what those look like is Harris’s secure WiFi device in the upper-left pic:
https://www.cisco.com/c/dam/en_us/solutions/industries/docs/gov/SWAT1_OV_v2.pdf
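The “simple interface + finite state machines + input validation” mediation idea can be sketched in software terms. This is purely illustrative, not anything from the Harris device: the frame format (one length byte, payload, one XOR checksum byte) is invented for the example. The point is only that the trusted side accepts bytes exclusively through a tiny state machine that resets on any violation.

```python
# Sketch of mediated I/O between an untrusted radio board and the main
# board. The frame format here (1 length byte, payload, 1 XOR checksum
# byte) is made up for illustration; a real design would pick its own
# simple, fixed format. Anything that doesn't walk this state machine
# is dropped and the parser resyncs.
LENGTH, PAYLOAD, CHECKSUM = range(3)
MAX_LEN = 64  # reject oversized frames outright


def xor_checksum(data):
    c = 0
    for b in data:
        c ^= b
    return c


class FrameParser:
    def __init__(self):
        self._reset()

    def _reset(self):
        self.state = LENGTH
        self.expected = 0
        self.buf = bytearray()

    def feed(self, byte):
        """Feed one byte; return a validated payload when a frame completes."""
        if self.state == LENGTH:
            if not 1 <= byte <= MAX_LEN:
                self._reset()          # invalid length: drop and resync
                return None
            self.expected = byte
            self.state = PAYLOAD
        elif self.state == PAYLOAD:
            self.buf.append(byte)
            if len(self.buf) == self.expected:
                self.state = CHECKSUM
        else:  # CHECKSUM
            ok = byte == xor_checksum(self.buf)
            payload = bytes(self.buf) if ok else None
            self._reset()              # always reset, pass or fail
            return payload
        return None
```

Feeding it `bytes([3]) + b"abc"` plus the matching checksum byte yields `b"abc"`; a corrupted checksum, bad length, or truncated frame yields nothing and leaves the parser in its start state.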
Correct me if I’m wrong, but weren’t the crecord/record extensions and the --interactive option for hg commit originally intended to simulate Git’s staging area? The design was derived, IIRC, from darcs, but the intention was essentially to allow users to select which parts of the changes should be included in the next commit (which is what Git’s index does).
Maybe this password strength checker might help?
I have my doubts – some serious concerns, actually – about the value of that password checker.
I put in five words, randomly chosen from a dictionary, with spaces between them. It generated a very mediocre score: 56%. I was under the (possibly mistaken?) impression that randomly selecting five words from a dictionary would be an excellent password. (Corrections welcome.)
In addition, I wouldn’t want anyone to be encouraged to enter any of their real passwords on a web site like this, as it could very well use that information maliciously. I’m not saying this particular web site does this, but I don’t think putting passwords into “password checker” web sites is something we generally want to encourage people to do.
How big was your dictionary? It’s actually fairly easy to compute how many passwords a given generation scheme can produce.
For example, my /usr/share/dict/words
has 99,171 words in it. Picking five at random (without replacement) with cat /usr/share/dict/words|sort -R|head -n 5|tr $'\n' ' '
allows for (99,171 choose 5) different passwords, which is 79,927,903,812,879,014,029,704, or about 76 bits or 13 case-sensitive alphanumeric characters. (Choosing with replacement makes for a significantly easier to calculate but only marginally bigger 83 bits/14 characters.)
I generated a couple of 13-character alphanumeric passwords and got an average score between 80 and 90, and a couple of five-word passphrases, which mostly got 100, so that seems in line to me. However, it heavily penalizes passphrases that consist entirely of lowercase letters and space, and my dictionary has lots of proper nouns and possessives. Filtering those, the passphrase scores were much worse—50-60-ish. (Interestingly, a 5-word passphrase generated from this shorter dictionary—66,005 words—is still worth a 12-character password. This is why experts advise you to concentrate on length over alphabet/dictionary size.)
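For reference, the arithmetic above is easy to check directly (same assumptions: a 99,171-word dictionary, five words, and a 62-symbol case-sensitive alphanumeric alphabet):

```python
import math

DICT_SIZE = 99171  # size of the /usr/share/dict/words mentioned above

# Five words without replacement: C(99171, 5) possibilities.
without = math.comb(DICT_SIZE, 5)
bits_without = math.log2(without)       # ~76.1 bits

# With replacement: 99171 ** 5 possibilities.
bits_with = 5 * math.log2(DICT_SIZE)    # ~83.0 bits

# Equivalent length in case-sensitive alphanumerics (62 symbols).
chars = bits_without / math.log2(62)    # ~12.8, i.e. 13 characters
```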
So it’s safe to say this checker isn’t consistent with the actual amount of entropy in a given password. But it looks like it’s trying to penalize the sorts of habits that result in bad passwords, even if that results in a very skewed “good” password space. It’s much more important to it that “password1” get a bad score than that “signals constriction punchy rejoinders titanic” get a good one. I’m not sure it’s biased in the best way (“signals constriction etc.” is far more likely to be remembered and used than the otherwise-equivalent “8inHpcw47jUdD”), and I don’t think I’d recommend it for that reason, but the premise is probably sound.
Well, let’s do the math. According to a quick search there are about 171,476 words in current usage–that’s about 2^17.387647.
So, assuming that you pick each word at random and allow duplicates, the probability of guessing your password is:
P(5word_pass) = 1/171476 * 1/171476 * 1/171476 * 1/171476 * 1/171476
P(5word_pass) = ( 2^-17.387647 ) ^ 5
P(5word_pass) = ( 2^-86.938235 )
So, we’ll set up the same trick using uppercase letters (26), lowercase letters (26), digits (10), and other characters (33). So, at random, we can choose a character from those sets, and that’s a 1/95 chance of any particular character being picked.
Let’s see how many characters we need to match the 5-word password!
P(5word_pass) = ( 2^-86.938235 )
( 2^-86.938235 ) = (1/95)^N
( 2^-86.938235 ) = (2^- 6.569856)^N
( 2^-86.938235 ) = 2^(- 6.569856N)
-86.938235 = -6.569856N
N = 13.232898
And to double check:
(1/95) ^ 13.232898 = 6.7422567e-27
2^(-86.938235) = 6.7422567e-27
So, it looks like you’d have to use about 14 characters from that class defined above to get the same strength as 5 dictionary words.
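The derivation above can be sanity-checked numerically (same assumptions: 171,476 words in current use, a 95-character alphabet):

```python
import math

WORDS = 171476   # estimated English words in current use (see above)
ALPHABET = 95    # upper + lower + digits + 33 other printable characters

bits = 5 * math.log2(WORDS)      # ~86.94 bits for 5 random words
n = bits / math.log2(ALPHABET)   # ~13.23 equivalent characters
# Rounding up: 14 characters from the 95-symbol set are needed to match.
```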
Intriguing that 5 random words are roughly equivalent to a 14-character password. The space of 5-word memorable phrases that most people will choose is going to be quite small in practice, so for my family I need to reinforce the idea that they should choose random words, e.g. by flipping through a book.
Of course, the search space becomes much bigger if they also include proper nouns.
they should choose random words, e.g. by flipping through a book.
People are terrible at “choosing” randomly. They’re going to pick words they like and discard words they don’t.
That’s why it’s called “diceware” ;) The EFF recently created a list of words to use for creating passwords like this, and then you use dice to pick a word for you.
we recommend generating a six-word passphrase with this list, for a strength of 77 bits of entropy.
https://www.eff.org/deeplinks/2016/07/new-wordlists-random-passphrases
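The diceware procedure is simple enough to sketch. The real EFF large wordlist maps each of the 7776 possible five-dice rolls (11111 through 66666) to a word; here a synthetic placeholder list stands in so the example is self-contained, and a CSPRNG stands in for physical dice.

```python
import itertools
import secrets

# Placeholder wordlist: every five-roll key maps to a dummy word. A real
# run would load the EFF large wordlist, which has this same structure.
WORDLIST = {
    "".join(rolls): f"word{i}"
    for i, rolls in enumerate(itertools.product("123456", repeat=5))
}


def roll_key():
    # Five six-sided dice, simulated with a CSPRNG instead of real dice.
    return "".join(str(1 + secrets.randbelow(6)) for _ in range(5))


def passphrase(n_words=6):
    return " ".join(WORDLIST[roll_key()] for _ in range(n_words))


# Six words from a 7776-word list gives log2(7776 ** 6), just over 77 bits.
```

Physical dice remove any trust in the computer’s RNG, which is the original point of diceware; the software version is a convenience with a different trust model.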
I’ve gotten largely the same conclusion about Lua after working on extensions for the Textadept editor.
Granted, I haven’t done a lot of work involving advanced concepts like metatables, but for most purposes, I found Lua much better to deal with than Vimscript. Thankfully, I’m not yet so used to Vim that I have difficulty switching text editors when I need to.
I was thinking of suggesting Higher-Order Perl, but considering it was written with Perl 5 in mind, it may not fit.
Fossil got one very important idea right: the repo, wiki, bug tracker, and website are really all part of the same package.
Canonical tried to do this with Launchpad and bzr, and Mercurial has a serviceable built-in webserver, but no one else really decided that all of these things were part of the same deal. Nowadays I guess GitLab comes closest, although it does keep git as a separate component, sort of.
I completely agree that having a one-stop web address for everything is incredibly useful, but honestly, the main benefit is deep cross-linking—and there are lots of other ways of achieving that. E.g., Phabricator is actually my favorite in this space: it provides a one-stop shop for bugs, boards (think Trello), asset management, password management, pre- and post-commit code reviews, CI, and more—and supports Mercurial, not just Git. (I’ve been incredibly impressed by its very low maintenance burden, too—something I think is really underrated when people consider their development tooling.) GitBucket, Redmine, Trac, and others would also fit the bill.
But Fossil’s insistence on making everything distributed causes some unique problems. E.g., what’s it mean to merge a bug if I edit it and you close it? Is it closed? Still opened? Reopened? Does your answer change if I tag a commit to the case? Etc. I think this is part of why Vault failed: the user model gets too complicated.
One-stop shops are great, but I’ve never been a huge fan of Fossil-like designs.
But Fossil’s insistence on making everything distributed causes some unique problems. E.g., what’s it mean to merge a bug if I edit it and you close it? Is it closed? Still opened? Reopened? Does your answer change if I tag a commit to the case? Etc. I think this is part of why Vault failed: the user model gets too complicated.
If bugs were a tracked object like code changes, the answer would be easy: user action would be required.
Also, this brings some nice advantages: if a dependency between the patch and the closing of the bug can be expressed (making it impossible to close the bug without merging the patch), at any point, your bug tracker is in sync with your code state.
Fossil got one very important idea right: the repo, wiki, bug tracker, and website are really all part of the same package.
I agree. It is very convenient to have all those features in one binary. I learned about Fossil in 2010 listening to BSDTalk podcast #194. If I remember correctly, Richard Hipp talked about his intentions with Fossil: not to become the most popular VCS, but to serve as an example to others (maybe the popular ones) with its most innovative ideas. He created Fossil to scratch his own itch: version control for SQLite development. Curiously enough, Fossil also uses SQLite under the hood.
Here are two more recent interviews (2015):
Too bad the Bugs Everywhere project didn’t catch on. I wanted to see where it would go, especially seeing where Fossil went.
Nah, SD is a far better model. Much like Fossil, it manages distributing a database to everyone, so it “feels” centralized; the longer you’re offline, the more out of date your information and changes are.
I think the “put text files in git” model for bug tracking is a completely whacko way of tracking bugs, and produces really weird side effects. Software can definitely handle the syncing easily for information with semantics this strict.
Is SD still active? Also what are your experiences with it? The ability to sync between different existing bug tracking systems seems appealing to me.
I was curious too, SD sounds like a good idea at least on the surface. However, it looks pretty dead, their mailing list is empty since 2013, and the repo is dusty: https://github.com/bestpractical/sd/tree/master
It still works, but the “connectors” have mostly bit-rotted (except for a version of the jira one I hacked up about 6 months ago to get working). It’s a very unfortunate end, for it could have been the chosen one :(
That’s unfortunate (I was also interested to see how it worked out), but thanks for pointing SD out.
I used the b extension for Mercurial a few years ago, before eventually switching back to a regular TODO file.
My biggest gripe with be was that it just dumped its database into git, in a format that wasn’t easily usable without the be tooling. Also, it didn’t use any of the particular features of the VC systems: e.g. making a patch a prerequisite for the closing of a bug (as described in my other comment) would have been easy in Darcs.
Here’s my perspective. I think the adoption of git skyrocketed with github. I know that some hardcore devs have been using an SCM for a while, but the average web dev was largely just copying files locally and uploading files up to their servers via FTP. SVN was around and I had used it a bit, but branches were a pain, and I remember the tooling wasn’t great at the time either. When github came along and was a really slick collaboration platform that happened to use git exclusively, all of my peers just picked up git because github was so good.
I never have used mercurial, but I think regardless of the benefits of mercurial, it just didn’t have a shot at the same popularity because there wasn’t a platform that really changed the way people collaborate like github. I would guess that if a platform like github was built around mercurial at the same time, it may have had the same level of adoption.
I know that the github folks were really into the benefits of git at the time, but I believe the magic was in the platform not the SCM.
Long answer short, I don’t use mercurial because none of the companies I have worked for ever have, and on my personal stuff I just use git because I know it.
When github came along and was a really slick collaboration platform that happened to use git exclusively, all of my peers just picked up git because github was so good.
That’s one of the most interesting points about the popularity of git - it’s used by many people who would never have thought of using an SCM before, from web developers to ops people. Even stranger, to me, is that for all its power git has a worse UX than what was there before - CVS, Subversion, even RCS, are all relatively quick to learn and easy to use. git isn’t either of those (and I say that as someone who likes it, for certain values of “like”), yet it’s seen huge adoption. This is due in no small part to GitHub, for sure.
IMHO, the influence of git on software development cannot be overstated - I think it’s made SCM something every developer needs to know and master, rather than a set of niche concepts. And that’s a Good Thing.
I feel like the opposite question is equally valid.
I don’t. Mercurial is so tiny compared to git that it needs justification. You don’t need to justify git, since it’s just the default. Everyone uses git, there’s tons of things built on top of git, and git is just the way things are. There’s no reason to convince anyone to use git instead of using hg, since they are way, way more likely to already be using git instead of hg.
It’s not like we have to tell GNU/Linux users that they really should be using Windows/macOS or tell BSD users that GNU/Linux is what they really should be using. The minority users are almost certainly already aware of the advantages of the majority software but decided to stay with the minority software regardless. It’s only the other direction that really requires a defense.
You don’t need to justify git, since it’s just the default
I think you always need to justify the default. If you can’t justify the default then you’re literally just cargo culting without actually evaluating the tool or its alternatives.
But following that argument, there are almost certainly people who use mercurial because it’s the default where they work, and where they first worked, and have never felt the need to use git, or tried it and didn’t like it because it was mildly different from what their default was.
You don’t need to justify git, since it’s just the default.
Is this the default mode of thought around here? New to the site, just curious.
Sure, you have a point. In my case I use git because most other people I have worked with use git. jordiGH said a few months ago that “the userbase is dwindling, but the development, if anything, is speeding up”, so being proficient with git is a must for me since I am a freelancer. I need to know how to use tools my clients use, even if I don’t love them.
In 2006 or 2007 I moved from SVN to Mercurial and I really liked it. However, almost no client or job I could find used Mercurial, so I had to learn git. In addition to that, the boom of GitHub made me stay comfortable with git. Nowadays, being picky about tools, I am checking whether other devs with a mindset similar to mine are using Mercurial instead of git.
Just thought it would be good to show that BitKeeper is still getting significant updates after it was open sourced. We are trying hard to take care of issues so it will meet the packaging guidelines of more strict distributions.
Thanks for this. I hope someone would take up the mantle of providing viable hosting for projects using BK soon.
The “Direct source download” link on the download page is broken. It should be https://www.bitkeeper.org/downloads/7.3ce/bk-7.3ce.tar.gz (without the .src).
Although I loved MUDs, text adventures could never keep my attention. I always found the puzzles annoying, and had no desire to spend time trying to solve them. On the other hand, ZZT, TADS and Inform were my first introductions to object oriented programming (and helped me understand MOOcode as well). This article is an excellent overview of both the good and bad of interactive fiction… Although I think the “unnkuul” bronze plate puzzle may constitute some kind of crime.
I had the same experience growing up, and I was dabbling in TADS even as I started to work as an adult (when I actually started to own a computer, not just renting from nearby computer shops).
Incidentally, while I was still playing around with TADS, I had also brushed against Ruby, which was used along with a GUI framework called “Fx,” IIRC, to create an interactive fiction mapper application. Since I could never get Ruby to be “compiled” into some sort of binary, I dismissed learning it at the time. Little did I know it could be used as it was, as I had no idea what scripting languages looked like.
I had the same experience when I first saw Python: it “couldn’t even compile!” D'oh. Of course, the Z-Machine helped me understand Java when I found that…
That would seem to fit the code phrase “little toy non-UNIX systems”.
Edit: removed some content that may not have been appropriate. Apologies.
Weird to think that a commercial C shell is still available and being sold today? I guess it wasn’t that unusual in the late 80s/early 90s on DOS, OS/2 and Windows. I remember using 4DOS (and later 4OS2 and 4NT) during that timeframe.
Thanks for the edits - I’m happy to see people were mindful of a stranger’s privacy, and I do think that’s the right thing to do.
(To anyone wondering, no, I didn’t tell anyone they had to.)