I have a TODO file in my home directory for generic tasks and separate TODO files in the target project directories.
I have tried surf and ran it for quite a few days, but there were a number of problems I had with it.
There were probably more points which I don’t remember anymore.
Now I’m back on Firefox and have turned on the start-search-by-typing option. This gives me the level of keyboard navigation I need – I can just type the text of a link and Firefox will select it. There is a surprising number of useful keyboard shortcuts in Firefox that are a little bit hidden (for example, by typing ‘ [single quote] with the start-search-by-typing option enabled you search only the links of a page, very useful).
The SSL woes are more than likely because Firefox caches sub-CAs it sees in the wild to handle all the badly configured webservers that do not serve the whole certificate chain when connecting.
I really wish browsers did not do this as it masks a problem that to the sysadmin running the site looks just like a temporary glitch in the matrix that they can ignore.
I hate the web.
For adblocking, you could use http://git.codemadness.nl/surf-adblock/ with surf-webkit2.
surf rejects a number of SSL websites that Firefox accepts, for no obvious reason (especially bad with Let’s Encrypt sites). In contrast to what the article says, surf does support SSL, though. Just not in the stable version, as far as I have found.
There are some issues with TLS, at least on my Fedora system (visiting the badssl.com dashboard). For example, no host matching, no check for expired certificates, etc.
I really like these kinds of writeups, both tedu’s and the post mentioned from poolp.org. I do think it’s an unfortunate trend that all these lovely things are buried away from openbsd.org or undeadly. Maybe the world needs a ‘Planet OpenBSD’ where all the developers’ blogs are syndicated?
planet.openbsd.org doesn’t appear to currently be a thing.
Argh, no full text RSS feed. Why do people persist in doing that (and making me jump through [minor] hoops to work around it)?
In my case, because it would push tons of unnecessary traffic.
I’d rather your feed had a single but fulltext entry than 10 but abbreviated ones. (At least as long as you don’t post twice within half a day or so… which I don’t remember seeing.)
Do you happen to know which readers replace content when it changes? That was my other concern, that I update something, but readers cache a frozen version.
Don’t all of them? I can’t remember seeing one that doesn’t. No doubt they do exist, but I doubt they are at all common. I can remember ones all the way out at the opposite extreme, where they version the content and offer diffs in the UI. NewsBlur has that in some capacity, and there was a desktop reader on the Mac that did this – probably old NetNewsWire.
Frozen caches really happen when items get updates after falling off the bottom of the feed. Obviously aggregators won’t see content you didn’t put in the feed… so item inclusion for the feed must be based on update date rather than creation date, if that’s a concern.
(Btw, while we’re here… could you use proper <category>s in the item, instead of putting a line with <p>tagged: at the bottom of your description and then me having to sed your feed to fix that?)
Oh? category is a thing? That seems doable. The perils of writing everything from scratch.
Yup. I recommend http://www.rssboard.org/rss-profile for reference, which is lamentably difficult to stumble upon serendipitously. It includes recommendations based on surveys of publishers and aggregators in the wild… well, from 10 years ago, but still.
Hm, if that peril is also the reason you don’t have a <guid>… that would be nice, because in absence of it, aggregators must guess how to identify an item as being the same one throughout edits. For flak you can just switch the <link> to <guid> I think (you never change those URLs, right?)… or have both if you worry about edge-case aggregators. For inks, I’ve noticed you number the blocks in the HTML, so you already have an identifier to reuse – keep the <link> and add a <guid isPermaLink="false">, probably with a tag: URL, maybe tag:www.tedunangst.com,2016:inks:37 (where only the trailing number varies; the date is just any point in time you controlled the domain, it can be constant). That would go a long way to ensuring that your updates to items do come through as updates, rather than showing up as dupes. (That’s part of the reason I sed your feed – I’d get dupes all the time when you edited your inks tags, which you do quite a bit, whereas metadata doesn’t figure into the deduping in Liferea, so now I only get dupes anymore when you actually update the item description.)
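A rough sketch of what such an item could look like (the title, link URL, and description here are made up for illustration; only the tag: URI follows the pattern above):

```xml
<item>
  <title>example inks item</title>
  <link>https://www.tedunangst.com/inks/example</link>
  <!-- stable identifier that survives edits; isPermaLink="false"
       because a tag: URI is not a fetchable URL -->
  <guid isPermaLink="false">tag:www.tedunangst.com,2016:inks:37</guid>
  <category>openbsd</category>
  <description>item text here</description>
</item>
```

With a constant guid per item, an aggregator can treat later edits as updates to the same entry instead of new entries.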
Ah, cool. My understanding of RSS readers is heavily influenced by the one I wrote, which is also odd in its own way.
Hey, thanks for all the fixes! Much appreciated.
I’m still disappointed that VAX support is no longer present, but pulling it was the right decision. I guess we’ll always have 4.3 Quasijarus!
I just hope SPARC isn’t next to go…
sparc was just removed on OpenBSD -current. sparc64 is still there.
I missed that announcement :( I had heard rumours that it was nearing the door, but didn’t realise it was going to be so soon. Guess it’s the passing of an era as it was Theo’s massive patchset for NetBSD/sparc that was key during the lead up to the fork (for those who’ve never read it, coremail is a fascinating read - lobsters story).
I still have a few 32-bit SPARC systems (not used for anything productive - I’m a huge fan of the SPARCstation 20) - I guess NetBSD is the only viable option now.
Keep them. My best recommendation for dealing with potential NSA subversion was putting the root of trust on old, especially ancient, hardware that likely predated subversion. One can put a trusted interface in front of them to force simple, mediated communication to the app. You still gotta make sure the hardware itself isn’t backdoored, but the odds are strongly against that on a SPARCstation 20 or a VAX. ;)
Got a list of them here:
Note: Another benefit is in chasing the holy grail of automated generation of correct, secure, and portable software. Need lots of ISA’s and machines to test such tooling on. A tool with 10 implementations running full coverage testing on 50 machines from mutually-suspicious countries with same, correct output for every input inspires much confidence. For me at least.
Note 2: Intel’s i960 should be on that list. It’s still available in watered-down form. The original was one of their best designs. They’re the assholes that locked up the Alphas, too. Briefly licensed to Samsung. They need to FOSS the last Alpha implementation if they still have it, given OpenPOWER and OpenSPARC. I want PALcode, damnit! :)
Is this an old text? That fact should be mentioned in the title, shouldn’t it?
I’ve amended the title to include the date.
Looks too complicated to be a useful starting point for anyone not comfortable writing their own Makefile. I think make(1) should be studied in the same way one studies sh(1), yacc(1) etc. Once the main points are understood it is fairly trivial to write a minimal Makefile that gets the job done.
Yep. The whole mess started at a time when I was not quite enjoying Makefiles. easymake gave me some sweet time. It’s not quite generic or extensible, though.
“It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.” - Edsger W. Dijkstra
QuickBASIC, of course, has almost nothing other than a common set of keywords and sigils to do with the language that Dijkstra was railing against. It’s structured in exactly the sense he supported. It still supports GOTO, but you’ll find it rarely if at all in idiomatic QuickBASIC code.
One of the hazards of quoting a bon mot without pausing to understand it.
no more, no less, just a pager
Cool. What do you use to host your git projects?
That’s stagit: http://git.2f30.org/stagit/
On OpenBSD I use ksh. On my workstation at work, I use mksh.
Just blindly installed it yesterday on an old PC (after I failed trying to reinstall 9front… three times). I was not expecting to find 5.9, but I haven’t been keeping up with the release cycles, so I thought I’d just forgotten about it.
Sadly it’s old enough to lack VT-x, amd64, and UEFI, so I don’t get to try any of the new goodness… though I guess “pledge” works?
vmm(4) is not enabled in 5.9 so you are not missing out on that.
Skipping signify verification I see. What’s the point in that?
Follow the recommended security practices. This guide is only useful to show the details of performing the installation on that particular machine without attempting to replicate information found in the OpenBSD manpages or FAQ.
The author should consider hosting his own server and using whatever markdown implementation he is comfortable with, instead of relying on GitHub.
In case you are interested in the Xen support: http://www.openbsd.org/papers/asiabsdcon2016-xen-paper.pdf
From a quick look, it appears that the authors use C++ instead of C so the tag is misleading.
The c tag is described as “C, C++, Objective C programming”.
“It’s the world’s tiniest open source violin”
So what’s the alternative to GitHub that we should be using?
Phabricator. It’s used successfully by Wikimedia, LLVM, FreeBSD, Blender, and many more communities. A bot to help bridge would be great (e.g. submit a pull request on Github, the bot creates a Phabricator review and directs the submitter there).
Side note: anyone using Phabricator know of a good Not Rocket Science testing system? I’m a little new to it still and am not sure how to make Revisions work how I want.
Gitlab. Open-source, with a hosted option if that’s the service you need, but open-source so you can run it yourself, or pay someone else to, and contribute changes if you need them.
I’ve run a small/mid-sized project on it for the past few months, and I’ve been quite happy with it. It does everything I need, except that the primary gitlab.com instance does not allow commenting over email, though this can be enabled for private installs.
IMO, BitBucket is superior to GitHub in every way except for CI/CD integration. Which I believe they are working on. It’s still possible to at least kick off jenkins jobs and what not but it’s a bit janky and there is no feedback yet. Otherwise, I find BitBucket to be very well done.
EDIT: I’m responding to the above from a feature/quality perspective. Not based on the xkcd cartoon.
Bitbucket recently got CI status integration. As an Atlassian employee I’ve seen some really cool Bitbucket and CI integration being used internally. I’m sure some of this slickness will be shown using public projects soon.
You can’t even search within repositories on Bitbucket online.
Why do you prefer it?
I use both, and find Bitbucket worse in most of the web user experience: no search, no easy way to see sources vs. forks, and the dashboard shows repos rather than the activity of people you follow as the primary thing (I use that on GitHub a lot).
The two things you mention are two things I basically never use. Most of the repositories I interact with are ones I’m using locally and have in my various tooling already and most of the programming I do is in organizations where forks aren’t really useful at all. BitBucket has robust branch permissions which I make more use of.
The Pull Request system, which is my main use for any tool like this, is significantly superior to GitHub’s for my usecases. It has Reviewers, real Approve buttons, and Tasks, all of which I use a lot. I don’t really care about the social/activity aspect that GitHub is aiming for; I mostly care about a tool around development, which I find BitBucket does a lot better. I also have to use GHE at work, which I find very aggravating to use.
Set up your own server. Use a mailing list for reviews.
I used self-hosted gogs for a bit, but ended up returning to GitHub because I missed the social/community features. Sure, they technically exist on gogs too, but who’s going to sign up for my gogs instance just to, say, post an issue, or star/watch/whatever it?
One can use cgit and use email for reviews. No need to create an account. Although the barrier to entry may be a little higher, as not many people use git format-patch/git am, this is more an issue of familiarity than something inherent to the process. I like it more than GitHub’s pull requests, as it is easier to go back and forth.
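For anyone unfamiliar with that workflow, a minimal self-contained sketch (repository names, user names, and the file are all made up for the demo; in practice the patch travels by email rather than a shared directory):

```shell
set -e
demo=$(mktemp -d)

# "Contributor" repository: make a commit worth sharing.
git init -q "$demo/contrib" && cd "$demo/contrib"
git -c user.name=alice -c user.email=alice@example.org commit -q --allow-empty -m init
echo 'hello' > f.txt && git add f.txt
git -c user.name=alice -c user.email=alice@example.org commit -q -m 'add f.txt'

# Export the latest commit as a mail-formatted patch; this file is what
# would be sent to the list (git send-email can mail it directly).
git format-patch -1 --stdout > "$demo/fix.patch"

# "Maintainer" repository: apply the emailed patch with git am,
# preserving the contributor's authorship and commit message.
git init -q "$demo/upstream" && cd "$demo/upstream"
git -c user.name=bob -c user.email=bob@example.org commit -q --allow-empty -m init
git -c user.name=bob -c user.email=bob@example.org am "$demo/fix.patch"

git log -1 --format='%an %s'   # shows: alice add f.txt
```

The back-and-forth the parent mentions is then just further emails: revised patches are re-sent and re-applied, with review comments inline in the reply.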
For open-source projects with outside contributions/contributors, dead right. For my purposes though gogs is ideal. I’ve been using it for personal projects for a few months. Works well enough that I moved all my private repos from Github onto it and saved myself cash money. Fast, simple and regularly updated, often with nice new features that so far have all seemed pretty well-tested and working. For my v low-complexity requirements, natch. YMMV.
If it’s for private repos, why not just have bare git repositories on an ssh server?
Well, sure, in terms of raw git operations, no reason - but private repos can still have multiple contributors, and even single-contributor projects can benefit from organisational tools like the issue tracker, milestones, wiki for notes, etc. Mostly though I just like the UI, the graphical, easily-click-through-able display of a range of projects at a glance, and the visual diffs are simple and easy to get at. Sure, none of this is anything Github/Bitbucket/etc doesn’t do, but it does all the bits that I need and like, well enough for me, for free, on my server.
I agree that there’s no shortage of OSS GitHub alternatives out there, and most of them work really well.
What kills me is the lack of a hosted free-software alternative to Google Groups. I have a couple projects on librelist.com, but it’s been down for almost a month now, and I haven’t gotten a response about what’s up. Hosting your own mailing list is really easy to screw up.
Well you did not host your own mailing list.
Kallithea, although it desperately needs a larger community of contributors to add features like pull requests and CI integration.
I see no one has mentioned Launchpad yet. Launchpad supports git repositories now, and they’re improving it steadily. The Launchpad blog has info on their progress.
Keep in mind that I work for Canonical, who started Launchpad and who employ everyone I know of who works on Launchpad development (I’m not really up on who’s doing what, though). There are other organizations who use LP, e.g. Openstack.
My own opinions of LP are mixed. I like it, and I used it heavily for a couple of years, but eventually moved to git, and moved off to mostly use GitHub, back before LP added git support.
LP’s bug tracking is more featureful than github’s issues. There are lots of other features that may or may not be useful, such as PPAs, translation support, blueprints, etc etc.
inetd is pretty unused these days, isn’t it? I can’t even recall the last time I used it.
Maybe it should be removed from a few base installs, and put into ports or something?
Aside: That is one thing I really like about the OpenBSD project. They actually remove crufty old things sometimes!
OpenBSD has inetd(8) in base. It is just disabled by default.
One handy trick I’ve used it for recently in production is wiring up port redirections, using nc as the server spawned by inetd.
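As a hedged sketch (the ports, host name, and timeout are made up), such a redirection line in inetd.conf(5) could look like:

```
# /etc/inetd.conf: for each connection to local port 2080, inetd spawns
# nc(1), which pipes the stream to port 80 on an internal host.
2080 stream tcp nowait nobody /usr/bin/nc nc -w 30 internal-host 80
```

inetd handles the listening socket and forking, so nc only has to shovel bytes for one connection.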
OpenBSD still includes a CGI daemon too.
Like CGI, you just wish that fork, exec, and reap were faster. :-)
I don’t want to be Debbie Downer, but is this page new? heavily updated? or was it just now discovered by someone and deemed interesting?
Regardless of which it is, I’m not complaining, just curious if I missed something.
It looks like a tutorial that appeared on BSDNow a few years ago. According to the 2 days ago commit message, it was written by the same person.
Are these pages in a CVS repo somewhere? Where did you find a commit message?
This seems to be more of a guide of what not to do.
I would be more likely to point to this with the disclaimer “You see this guy’s opinions? Do the opposite of what he says.”
Some of the compiler features he mentions are non-standard. This matters for me. I actually use a C compiler that isn’t GCC or clang on a regular basis (pcc). -march=native is often unacceptable for downstream distributors, and generally I’m annoyed when programs ignore my CFLAGS in favor of their own ridiculous optimizations. Usually I value a fast compilation far more than non-hot parts of the code being sprinkled with magic. As others have mentioned, “#pragma once” is also non-standard, and variable size arrays (i.e. alloca) can be a security risk.
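For example, the portable replacement for the non-standard `#pragma once` is the classic include guard (the header and function names here are purely illustrative):

```c
/* myheader.h: a classic include guard. Every standards-conforming C
 * compiler understands this, unlike the non-standard #pragma once. */
#ifndef MYHEADER_H
#define MYHEADER_H

static int my_function(int x) { return x + 1; }

#endif /* MYHEADER_H */
```

If the header is included twice, the second inclusion is skipped because MYHEADER_H is already defined.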
No specific comments on types (though you should certainly use char to refer to UTF-8 octets, otherwise people who have to use your libraries or read your code will be annoyed). I use “unsigned” when I want an integer that’s at least 16 bits and don’t care about specifics. That’s in line with the standard.
There are valid arguments for separating declarations from code, especially when you have resources you want to allocate and free. for loops are perhaps a case when this rule can be broken - not sure I have a strong opinion here.
“You see this guy’s opinions? Do the opposite of what he says.”
That’s exactly what I meant. My sentence was obviously ambiguous.
Ah, ok, that makes more sense. Thanks.
Isn’t that at least half of any effective programming guide? Knowing how to write a program that compiles and runs in a given language is easy. Knowing how to write a good program that minimizes errors, maximizes readability, performance, security, and refactoribility is hard.
It should be mentioned that this is a re-implementation of the Plan 9 argument parsing interface with some minor extensions. See the original manpage: http://plan9.bell-labs.com/magic/man2html/2/arg
I like to compare CPUs and GPUs to TCP and UDP. While with TCP, you get a guarantee your packet will arrive (more or less), there is none with UDP.
The situation has improved, but GPUs (especially older kinds) are not great coprocessors. And even though you can multiplex a lot of stuff with them, you just don’t get the guarantee the result is exact.
In the end, GPUs are supposed to do shader-calculations fast. A little error here and there is not that much of a problem.
If you use GPUs for financial mathematics, it’s a whole other story…
Modern GPUs are compliant with IEEE-754 (both single and double precision) so you can use these just like you do on the CPU side.