It would be nice if there was an option to never establish a data link over the lightning port unless the device is unlocked. 7 days is a long window if your device is seized at customs or something.
Indeed. If that’s the bypass mechanism being used, I’d even want it to be a PIN-controlled setting. I can’t remember the last time I used data over the USB cable.
I find it hilarious that people suggested both “more marketing” and “less marketing” as ideas for improvement.
Note that both were “strong themes”! A few contradictory responses is natural, contradictory “strong themes” are not.
Are there any details of their design goals? Is POWER 8 the architecture they’re working with? Do the designers and project leads have any experience bringing a product to market? I might have missed it but I didn’t see answers to these questions here.
FAQ says NXP e6500, which is indeed POWER8 compatible (both implement Power ISA v2.07).
The motherboard comes from a company called ‘Acube Systems’ which has a list of products, most of which use powerpc SoCs from another company called ‘Applied Micro’. It looks like this latter company was recently acquired and is mostly listing ARM SoCs at the moment.
Perhaps they have stockpiled a sufficient number of SoCs? In the end, someone will have to build powerpc CPUs for this machine, and who would be doing that is indeed not obvious.
Loved seeing this at Maker Faire. It’s such a satisfying build. Working on CPUs in HDL is fun, but it’s hard to get a feel for the physical shape of them.
I’ve had Windows Update make me lose unsaved work.
This isn’t just “annoying”, it’s completely unacceptable. As a university student my “active hours” are highly variable, and an unexpected reboot could mean throwing away half a paper or corrupting a running VM. Either way, I’d have to spend that much time setting up my workspace again.
I’ve had it sitting there pending while waiting to rush out the door.
This could mean losing hours of work and damaging my grade because Windows wouldn’t let me print or upload an assignment for the deadline.
This guide has allowed me to leave Windows Update enabled with its auto-reboot functionality disabled, so I can choose to reboot when I know I have the time. I think this is the way it should work.
There’s a joke that’s probably older than you are, but it contains a useful lesson.
Ha. I do compulsively mash Ctrl+S or :w whenever I’m working on something, but still, the OS unilaterally deciding to toss my state just shouldn’t be a thing. And “saved” doesn’t always mean “safe”, for example, if VMware is in the middle of background writes to a virtual disk image. Random reboots put long-running tasks at risk, too, like running a test suite or neural network training overnight.
Yes. MacOS X is a nice middle ground for people who want commercial support and want a UNIX shell underneath (I know about the new Bash thing - it’s got a long way to go :)
Compatibility with third-party software. I do most of my development in a Linux VM, but some things just require Windows unfortunately.
Not the OP, but I’m stuck with having Windows around for
And this list is by no means comprehensive, it’s just what comes to mind right now. I’ll also mention that I regularly use machines with Linux and MacOS, and often run VMs on each of them as I find it convenient to do work on one machine versus the others. I had completely cut everything but Linux out of my life for a couple years, but since then I’ve found that using all the systems insulates me from the annoyances of any single OS.
There’s an even worse version: you’re about to catch a plane, so you turn off your computer, and suddenly: “installing update 1 of 2435, please do not turn off your computer”. But then, you’ll say I deserved it for using Windows.
Fun story about ‘active hours’: Windows claims it won’t restart if you are using the system at the time. Guess whether a copy process (5 TB of backups to a new HDD) counts as ‘using the system’… Last time I ‘checked’, it did not.
Set updates to download but not install, then confirm their installation when you’re ready. You are obviously a power user who knows that updates are good, and you’re definitely not scared of clicking the Install button like 80% of common users.
As for those users, don’t worry about them. They won’t be running VMs, and it teaches them to save their work.
One concern I’ve had with OpenBSD is that the community as a whole has a pattern of making bad (EDIT: graphic) design decisions in the presentation of information. This article is no exception.
Is there a particular reason why this seems to affect OpenBSD the most out of any given unix-like?
I think he means the garish fonts (Comic Sans MS) and colours chosen by people who associate themselves with OpenBSD thoughtlines. I believe this is intentional: it keeps people from judging the content by its appearance and pushes them to judge the content itself. It also lets you filter out the people who only cared about the appearance.
For me the only person coming to mind within the OpenBSD community who has modern web design skills is @jcs.
Hmm, I suppose I can see why they think it’s a good idea, if this is really the reasoning. Suffice to say I completely disagree, at least with the degree to which they make their content illegible.
I mean, taking this page for example, it would be trivial to make this page less of an eyestrain: remove the body and link background colors, set the page width to ~800px, and the margins to auto. Voila: an actually readable website that doesn’t rely on pretty visuals to seem credible.
I suppose I just don’t understand why filtering out people who care about appearance is so important that they shoot themselves in the foot…or the eye, as it were.
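For what it’s worth, the tweak described two comments up really is only a few lines of CSS (the selectors and values here are a generic guess, not taken from the page’s actual stylesheet):

```css
/* Hypothetical readability fix: drop the loud backgrounds
   and constrain the text to a comfortable line length. */
body {
  background: #fff;   /* remove the body background colour */
  max-width: 800px;   /* ~800px keeps lines easy to scan */
  margin: 0 auto;     /* centre the text column */
}
a {
  background: none;   /* remove the link background colours */
}
```

Nothing about the content changes; only the presentation does.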
Comic Sans is actually an accessible font choice - it’s one of the easiest for dyslexic people to read.
But I agree this website definitely is more garish than normal. Compare with the slides on pledge, which are straightforward and legible.
It’s funny to note that Simon Peyton Jones uses Comic Sans for all his presentations too. His reasoning:
This is a very funny question, why I use Comic Sans. So. All my talks use Comic Sans, and I frequently see little remarks, “Simon Peyton Jones, great talk about Haskell, but why did he use Comic Sans?” But nobody’s ever been able to tell me what’s wrong with it. I think it’s a nice, legible font, I like it. So until someone explains to me — I understand that it’s meant to be naff, but I don’t care about naff stuff, it’s meant to be able to read it. So if you’ve got some rational reasons why I should not, then I’ll listen to them. But just being unfashionable, I don’t care.
I can’t say I disagree with the viewpoint.
I’m not sure I’d agree it’s especially legible, if you include implied aspects of communication. I don’t care if it’s “bad” or “unfashionable”, but it does explicitly look like a kind of jokey, “fun” font that implies you’re doing something lighthearted. Which is why it’s named Comic! That’s not necessarily always out of place in tech content: you could use it in an xkcd-style cartoony introduction to a topic, or on the cover of a “For Dummies”-style book. But the first few times I saw it in a completely unjokey “serious” presentation, it threw me off and made the entire talk difficult for me to follow, because throughout the talk I thought the jokes were going over my head and I was distracted trying to figure out what they were.
I’ve now seen it enough times that it doesn’t really distract me anymore, but only because I pattern-match “ah ok it’s a not a ‘real’ use of Comic Sans intended to be actually comic, it’s just that hipstery style that’s using it as an anti-fashion statement”.
In this case, SPJ has been using Comic Sans so long I doubt you can call it hipstery. And none of your arguments amount to anything beyond distaste for the font.
It’s named Comic Sans because it’s intended to be reminiscent of comic-book fonts, not to be a joke. N.b. http://www.comicbookfonts.com
I’ve read some studies that say Comic Sans is better for retention and possibly better for dyslexic people. And given that some people actually like the font, I’ve also never seen a good argument against it other than “I don’t like it personally” or “it’s bad design” or “Comic Sans is a joke” or “Comic Sans use isn’t serious”.
This PowerPoint isn’t particularly hard to read, and I’m not sure why you would treat the content as a joke were you watching the presentation: www.cs.nott.ac.uk/~pszgmh/appsem-slides/peytonjones.ppt
“If you believe reader attention is a valuable resource, then tools that help you conserve that resource are likewise valuable. Typography is one of those tools. Good typography can help your reader devote less attention to the mechanics of reading and more attention to your message. Conversely, bad typography can distract your reader and undermine your message.”
source: http://practicaltypography.com/why-does-typography-matter.html
If you are interested in the subject, read the full book: http://practicaltypography.com/
Also check Why You Hate Comic Sans
“I can spend the rest of my life trying to appease other people or, I can do the things that make me happy” – Trump, probably
If others didn’t matter, why share or discuss it with them on non-project forums to begin with? Best not to half-ass an attempt to tell others about something great if it was worth attempting at all.
You will need to be more specific with your criticisms. From what I’ve seen they are pretty careful about avoiding fads and favor simplicity.
I don’t really see that many bad choices, just optimizing for different metrics.
@calvin above hit on what I’m trying to say. I mean graphic design, not technical.
Ah, haha, I think I missed the “presentation of information” part. Probably because the first sentence put me in a defensive frame of mind. Thanks for clarifying.
They use CVS because they didn’t see a point in breaking an existing workflow and git is GPL code.
Could you elaborate on this? I wasn’t aware that that was the setup.
I’m not sure what’s to elaborate. The project uses cvs this year because the project used cvs last year.
Well, GOOG and MSFT and FB and AAPL all have hundreds of people making beautiful pictures.
And their software is buggy and they are evil.
¯\_(ツ)_/¯
Your comment reminded me of this recent story: http://www.osnews.com/story/29811/Which_tech_giant_would_you_drop_
Whereas MorphOS has just a handful of people doing the whole thing. Its desktop UX and parts of its site are beautiful. Works well enough for a small OS project. Probably other companies’ spending priorities, rather than staff numbers, are causing problems. ;)
Reading the splat.sh script they used to install Arch on their Chromebook reminded me once again that Arch Linux ARM distributes its OS downloads over HTTP.
Don’t worry though, the script helpfully verifies the download by comparing the md5sum against one downloaded…over HTTP.
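A quick sketch of why that buys nothing against an active attacker: if the image and its checksum travel over the same unauthenticated channel, a man-in-the-middle can rewrite both consistently and the check still passes (the file contents here are made up for illustration):

```python
# Demonstration: a checksum fetched over the same HTTP channel as the
# payload adds no security against a MITM who controls that channel.
import hashlib

def md5_hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

# What the mirror serves, both over plain HTTP:
image = b"legit payload"
checksum = md5_hex(image)

# The attacker rewrites both in transit, consistently:
image = b"evil payload"
checksum = md5_hex(image)

# The victim's "verification" still passes:
assert md5_hex(image) == checksum
print("md5 check: OK")  # security gained: none
```

What would actually help is a signature made with a key the client already trusts (as pacman’s PGP support can provide), or at minimum serving the checksum over a channel the attacker can’t tamper with.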
Looks like the answer is yes and no. Pacman is apparently able to do PGP package checks but the default is:
SigLevel = Optional TrustedOnly
which translates to:
Optional:
If set to Optional, signatures will be checked when present, but unsigned databases and packages will also be accepted.

TrustedOnly (default):
If a signature is checked, it must be in the keyring and fully trusted; marginal trust does not meet this criteria.
So if you are installing a signed package the signature will be checked, but an unsigned package will just get accepted blindly. Guess it’s a better check for corrupted downloads than md5 checksums, but that’s not really any additional security.
[1] - https://wiki.archlinux.org/index.php/Pacman/Package_signing
[2] - https://www.archlinux.org/pacman/pacman.conf.5.html#SC
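If you want the stricter behaviour, pacman’s documented SigLevel values allow it; a hedged sketch of what that could look like in pacman.conf (whether your mirrors and keyring are set up for it is another question):

```
# /etc/pacman.conf (excerpt) -- tightening the compiled-in default.
# "Required" refuses unsigned packages outright instead of silently
# accepting them; "DatabaseOptional" still tolerates unsigned sync
# databases, so drop it for the strictest setting.
SigLevel = Required DatabaseOptional
```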
The postage stamp space reserved for content on mobile was immediately covered by a pop up asking me to bookmark the site. Unreadable.
In 2002, what really was searchable? Google existed, but wasn’t great. Directories were even still a thing then.
Yeah, I think Go should be an example of how NOT to choose a name for a language (I call it “Golang” all the time because of that).
What language would you suggest? This library targets embedded platforms. Correct me if I’m wrong, but I haven’t heard much about embedded Rust.
It’s coming, but we need LLVM targets, and LLVM isn’t currently the best toolchain for a wide array of small embedded targets.
There is usage of it, for example an operating system written for CortexM targets: https://github.com/helena-project/tock
Check out the rust-embedded GitHub org for a bunch of stuff related to embedded rust. It’s still early days but there’s tons of smart and motivated people working on it.
SPARK 2014 http://www.spark-2014.org/about
Rod Chapman at Altran/Praxis found an error in the reference implementation of Skein just by recoding it from C to SPARK. Ada and SPARK were invented for embedded systems. There are also DSLs like Galois’ CRYPTOL, Ivory, and Tower for embedded work that can generate correct C code. Finally, COGENT is a functional systems language that’s already been proven in a filesystem implementation with certified translation to imperative code. Dependently-typed languages like IDRIS and ATS are possible too, with ATS demoed in device drivers and on an 8-bit microcontroller.
Nah, it’s Ironclad C++ or SaferCPlusPlus if you’re talking about a subset that’s actually immune to all kinds of memory-related attacks with little work, plus has C++’s benefits.
There is embedded OCaml, ready to use (if you have a supported embedded platform, but this approach can certainly be extended to other platforms).
You could, but you would be silly to do so. There are tools that mitigate nearly all of the stupid of C…people just need the patience and discipline to use them.
That’s precisely the attitude that continues the status quo: “users are to blame, they need to be more disciplined”.
Those two comments are basically equivalent.
If somebody isn’t disciplined enough to follow best practices in C, they won’t be disciplined enough to follow best practices in any other language, either.
And so can C compilers, with the added benefit that there are tons of static analysis tools like Lint and Coverity, etc., and it’s portable to far more platforms than anything else, and can easily be called by every other language.
If you want to nitpick other people’s language choice, at least make concrete complaints by pointing out the bugs in their code. “C can be unsafe, so this is bad,” doesn’t help anything and is just nitpicking for the sake of nitpicking.
And so can C compilers
Actually existing C compilers play an ad-hoc game of whack-a-mole with the most commonly exploited issues, that’s all. A C compiler that offers actual safety guarantees is vapourware. (There is principled tooling for languages that are supersets of subsets of C and are represented as C files subject to particular restrictions plus additional information, but these languages lack most of the advantages of C, e.g. they don’t tend to have a wide library or developer ecosystem).
with the added benefit that there are tons of static analysis tools like Lint and Coverity, etc.
They’re not a benefit, they’re a red flag that the language proper is inadequate. And again, all they offer is ad-hoc checks for the common cases. You can’t retrofit principled language design.
“C can be unsafe, so this is bad,” doesn’t help anything and is just nitpicking for the sake of nitpicking.
It’s not a nitpick. The language really is unsuitable for the project, and the project will fail as a result. I wish this weren’t so, but pretending otherwise isn’t constructive.
The difference is that one mistake in a common construct can lead to full code injection in C, where that’s rarely the case in the safe languages. The mistakes will happen. They’re usually more severe in C when they do. It’s intrinsic to how the language was designed (or, more accurately, wasn’t designed) to handle the safety of primitive functions and structures.
That’s just scare mongering, though. Mistakes can happen in any language.
The safety issues of C are well known to anybody who’s paying any attention at all. If a person chooses it for a new project anyway it’s safe to assume they know the downsides but have other reasons for using it. If you want people to use other languages, focus on those other reasons rather than harping on how they may hypothetically make a security mistake one day.
It really isn’t scare mongering if even experts make these mistakes as regularly as they do. It means the average case will be much worse than it has to be. Putting an upper bound on the damage from mistakes can prevent that. So it’s a good idea.
I’m not patient or disciplined enough to follow best practices if not following them would silently succeed (and on the available evidence neither is anyone else, even C experts). So I use languages that enforce best practices.
This position is undermined by the fact that I know of no security-sensitive C project in widespread use that is actually free of memory safety bugs. OpenSSL’s failures are well-known, but consider OpenSSH: it’s widely held up as a high-quality security system, and yet it had the roaming vulnerability earlier this year and several others before. If these were written in a safe language, there would still be bugs and crashes, but you would never run into the scenario where an attacker could leak arbitrary data out of a process, and you would never have to deal with remote code execution vulnerabilities (barring things like web browsers, where running attacker code is considered a feature).
A common argument against not-C in libraries is interop, since one of C’s legitimate advantages is it serves as a simple way to describe interfaces. This argument doesn’t hold much water though: consider this library binding, which lets you slot in an OCaml TLS implementation for any program that links against libtls. The only difference you’ll notice is the lack of panicked key-switching when Heartbleed 2 rolls around :).
Coreboot can somehow disable the ME if you tell it to via a configuration option; it does that in the first few opcodes after enabling RAM.
Do you have any reference on that? All I can find on that is that libreboot can turn off versions of ME prior to version 6.0, but that versions after that will turn the computer off after 30 minutes.
I am using Libreboot on a Thinkpad X200.
As I understand it, the X200 comes with the ME out of the box, but it’s possible to flash your own replacement firmware by grounding GPIO pin 33, which disables the flashing protections.
Would it have been that hard to use infer.fb.com?
You’d think a tech company could get that right. What really ticks me off is how the banking and financial industry seems to find subdomains so intolerable. It seems like every bank expects you to just implicitly trust any domain with their name in it.
Or at least get the certificate right.