I object!
Just kidding, thanks for the hard work @alynpost
You should add Paperkast to the list of sister sites.
How is https://github.com/lobsters/lobsters/wiki not a standardized directory?
I also wish I’d discovered it sooner. However, other than nesting it under the “Wiki” link at the bottom of the page, I don’t see a solution that wouldn’t start cluttering up the site with information most people won’t need.
It’s linked from the about page.
What would you prefer it to use as the underlying storage? (I am trying to understand what people actually want.)
I was thinking of storing everything, including the comments, in a git instance, which would work independently of what git frontend you are using, but then I would have to speak the git protocol from the browser, which sucks. I may have a look at git.js
Looking at git.js documentation :(
“I’ve been asking Github to enable CORS headers to their HTTPS git servers, but they’ve refused to do it. This means that a browser can never clone from github because the browser will disallow XHR requests to the domain.”
Anything self-hosted would be viable, but everything on git would be even better, although probably more complicated. We use gerrit at work (which sucks at several levels), and pretty much anything third-party is very much disallowed. Maybe you could create an abstraction that would speak the GitHub API to GitHub and the git protocol to other servers where this would work?
The other possibility could be a sort of optional backend/proxy, so, if the git server doesn’t have CORS, you could spin up that optional server.
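A minimal sketch of what that optional proxy could look like, assuming Python as a stand-in (the upstream URL, port, and names are all made up for illustration): it just forwards GET requests to the CORS-less git server and adds the CORS headers the browser needs.

```python
# Sketch of an optional CORS proxy for a git smart-HTTP server that
# doesn't send CORS headers itself. Upstream URL and port are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAM = "https://git.example.com"  # the CORS-less git server (hypothetical)

def with_cors(headers):
    """Copy upstream response headers and add the CORS ones a browser needs."""
    out = dict(headers)
    out["Access-Control-Allow-Origin"] = "*"
    out["Access-Control-Allow-Headers"] = "Content-Type"
    return out

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward e.g. /repo.git/info/refs?service=git-upload-pack upstream,
        # then replay the response with CORS headers added.
        with urlopen(UPSTREAM + self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            for key, value in with_cors(resp.headers).items():
                self.send_header(key, value)
            self.end_headers()
            self.wfile.write(body)

def serve(port=8080):
    """Run the proxy on localhost; not called here so the sketch stays inert."""
    HTTPServer(("localhost", port), ProxyHandler).serve_forever()
```

The browser would then clone via `http://localhost:8080/repo.git` instead of talking to the git server directly.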
After thinking about it some more, there’s a lot that GitHub offers that I would have to reimplement myself. Authentication, for one thing. If it were used in a stand-alone mode in an enterprise, some kind of authentication would still be needed. People would probably want SSO. Then there are notifications. GitHub sends you an email when you are mentioned in a bug. I would have to somehow interact with the company’s mail server. And so on. This is my hobby project and I don’t really have time to go into that amount of complexity.
The only issue that I have with it is sharing my organization details. Although you could do it manually, I’m always a bit annoyed about this.
Well, the same thing as last time: porting my IRC daemon from C to Go. I’ve had some problems with motivation, though that has sorted itself out, and now I have before me the task of rewriting about 4000 lines of fairly straightforward “business logic” code. It’s mind-numbingly boring and fairly time-consuming.
Since this is part of an over-ambitious project where I replace most of the GUI/TUI applications I use, with this rewrite being a warm-up exercise for Go in a problem domain I’m comfortable with, I am considering starting a blog of sorts. I’m not sure I could keep it alive for long: one needs to remember to describe the steps one takes and put them in context for readers, which, needless to say, takes time, but as a side effect it often provides interesting insights. There’s definitely a lot to write about.
What does one use to share a stream of short updates? I don’t feel like spamming an aggregator with them would be very productive and summarizing events at fixed time intervals seems like a hassle.
I’d recommend http://jrnl.sh/ if you want to quickly do streams of updates directly from command-line.
I personally like my fork which has one additional feature: native exporting directly to HTML https://git.timetoplatypus.com/timetoplatypus/jrnl
Keeping a log/record of things you have learned, wanted to share, or ran into, in an issue tracker for the project, would probably work. Possibly just a markdown file? That makes it easy to write about the process from beginning to end at a later date.
Looks good. Definitely interested to hear if the project is open source. I like the snappiness and the minimal amount of JavaScript.
I ran across a pretty interesting talk about Zig here.
Note: I got a good laugh at the end of the talk where he said he made tabs in the source a hard compile error. Well played Andrew. ;)
I personally loved the whole “Ya I know saying Zig is faster than C is a big statement since programming language performance is measured as a fraction of C, but I’m telling you it’s an improper fraction”
Been using jrnl to keep notes of things I’ve done or ideas I intend to execute on.
Yep, version control can provide that, but I didn’t want to rely on it being present. I think from a developer standpoint, it’s convenient to know then and there who wrote a particular test (compared to perusing a changelog to find who wrote it). I used this at work and that was my experience, at least.
Thanks!
I’m certainly aiming for a minimal front-end. As I write new features, I try to hold to a quote I once heard: “if it needs a manual to work, it’s not ready for production”. Obviously there are exceptions to this, but the spirit of the saying is that features should be as intuitive as possible. So I’m comfortable with the backend code getting sophisticated (and ideally not complicated) as long as writing the tests remains straightforward. I measure straightforwardness by how easy it is to explain a new feature using an example. If it’s difficult to explain using an example, it’s not ready.
This would be a useful extension to tldr
Already posted: https://lobste.rs/s/oife5f/dots_do_matter_how_scam_gmail_user
Love finding writings like this where the author has clearly worked on something very specific, and can articulate nuances that you’d either never think about or wouldn’t think existed (for example, buildings that are numbered zero)
We need something like that for every category in one place. Plus, premade components in common languages that enforce their best practices by default with escape hatches for some stuff where it makes sense. minimax’s link is a good start.
Tarball of all the videos: https://timetoplatypus.com/static/das.tar.gz
“First, everything is free all week”
He’s encouraging people to grab his videos by giving everything away for free. All he required was a login, which may have monetary value later, value that timetoplatypus’s share negates. It’s possible, though, that he thinks people can only grab a small number of videos, with some portion of people paying for the rest after the deal expires. That’s on top of new, recurring revenue from future videos. Maybe this hurts him, at least in the gap between what he thought would be shared and what actually was. In that case, he made a gamble that may or may not pay off, versus offering a limited number of videos with a clear prohibition on sharing them.
On the ethical side, focusing on results, I don’t think there’s a huge difference between someone here sharing his videos all at once in a convenient form for free and him saying grab as many as you want after you log in for free. Given the split between freeloading users and the type and number of users who would pay him, I don’t think he’d have many losses in that scenario, if any at all. The kind of people who would pay him would probably mostly still pay him. Hopefully, no effect.
He’s encouraging people to take a free look at his work and see if they think it would be worth it for them to pay for more of it in the future. Shitty people who don’t care about anything but themselves might interpret this offer as an invitation to take advantage of someone’s work, and even actively undermine that someone’s livelihood. I think these people are at least half of what is wrong with the world, and they should all go live in a cave and never interact with anyone else ever again.
I hear you. It’s a sensible perspective. I prefer that he keeps getting paid for doing good work, too. I also agree that this should be the norm instead of pervasive parasitism.
I think you see the situation a bit radically.
On one hand, when someone publishes free software and people use it for their benefit without paying, are they shitty? When someone decides to publish something for free, the fact that some people may not pay for it must be calculated into that decision.
I believe that the ad-supported world is a bigger threat, as it makes the feeling that stuff is free the norm.
Neither of those examples apply. OP is publishing something for free for a LIMITED amount of time, with the very obvious intention of giving people a preview of his product. Free software and free content are very different propositions.
I still think that the possibility had to be factored into this offer, and it likely was. The style and language are still harsher than I think the situation justifies.
let’s be real here. the first thing i thought of when i saw this was “can i write a script to download everything before the deadline”, and i’m pretty sure 99% of people here thought something along those lines.
given the target audience of his screencasts, you kinda have to expect this.
Everybody thinks stupid thoughts, but not everyone acts on them. And since we’re a big part of Gary’s target audience, wouldn’t it be nice if it turned out he overestimated the number of dicks among us? By the way, the first thing in my head was also “Hmm, can I download it?”, but then I remembered the guy has to eat.
The swearing you demonstrate in your comments is disturbing. I hope it will not become the norm in the comments section.
I believe you could also communicate your point very well without using words like “shitty people” and “dicks”.
I came to comment on this because I remembered this tweet he posted on the matter a while ago: https://twitter.com/garybernhardt/status/870721629440983041
I’m glad it’s been taken down already; I think it’s only fair to the author’s work.
Any endpoint on my site that doesn’t exist returns HTTP 451
Edit: for example, https://timetoplatypus.com/abc
FWIW, it looks like the HTTP response is actually a 404. Is this because many clients/servers don’t respect 451 yet?
DHS also said that its NPPD is “not aware of any current DHS technical capability to detect IMSI catchers.”
“NPPD is aware of anomalous activity outside the [National Capital Region] that appears to be consistent with IMSI catchers,”
These statements contradict each other.
Why?
First statement: DHS says NPPD isn’t aware of any DHS capability to detect IMSI catchers.
Second statement: NPPD has detected activity consistent with IMSI catchers.
Trying to finish a long running project: my e-ink computer.
Amazing! Please keep us posted!
Are you documenting the project anywhere else besides sporadic tweets?
Yes, I document everything along the way. I do not like to publish about ongoing projects as I tend not to finish them when I do that :).
Both the code and the CAD designs will be open sourced once the project is finished.
I also plan to write a proper blog post about it. I still need to figure out the proper way to do partial refresh with this screen, and then it should be more or less done (the wooden case still needs some adjustments).
[Edit] Typos.
It seems to be this one, same marks on the bottom corners and the shield looks the same:
Is that a raspi it’s hooked up to? Where did you buy the screen?
There is another guy doing e-ink stuff on the internet recently. You should go search for him. He is researching how to get decent refresh rates too.
Instead of creating a laptop-like enclosure, you should make a monitor-like enclosure. It would look way better and be more reusable.
So, one of the things that annoys me about this world is how we don’t have e-ink displays for lots of purposes that nowadays get handled by a run-of-the-mill tablet. You don’t need a tablet for things like a board that shows a restaurant menu, or tracking buses in the area. So why can’t I find reasonably sized e-ink displays for such purposes?
Entirely agree with you.
I guess it can be explained by the fact that LCD screens have better brightness; they are better at catching the human eye’s attention.
E-ink technology, on the other hand, is bistable, making it highly energy efficient for such applications, where frequent updates aren’t needed.
Energy is cheap nowadays; we don’t really care about energy consumption anymore. But I guess this might change past peak oil.
I guess these techs will start developing as soon as energy becomes scarce and expensive.
Just an FYI, you can also install the bash completion script to /etc/bash_completion.d/ so that it auto-loads (and you won’t have to modify your bashrc file). On some distributions, I believe you can also install to /usr/share/bash-completion/completions/ but I’m not sure about that one.
Another aspect of code navigation that’s not often given much consideration is greppability/searchability. Basically, how powerful a tool do you need in order to statically (that is, without running the code) get a good idea of where a particular line of code dispatches to? Every time an indirection is introduced, you raise the bar for how powerful the code analysis tool must be to keep from having to guess where something is, unless you preserve the uniqueness of the name used. The two practices that seem to make this sort of analysis most difficult are interfaces and RabbitMQ-dispatched microservices.
This isn’t to say that using interfaces and microservices is a bad thing, but that they trade off easy navigability for some other quality (in the cases I’m thinking of, interfaces are used to help testability in C#, and microservices are used for, among other things, reducing IL->x86 JIT times in C# by breaking up the monolith).
On the flip side, how searchable is assembly? You can search for individual instructions but you can’t search for any higher level patterns in the code, which is what abstractions usually name.
It just seems like none of the languages at any level of abstraction lend themselves very well to analysis or exploration. I think it’s partly because of the attachment to representing programs as text, which is limiting.
Note that I said indirection, not abstraction as such. Not all abstractions represent semantic indirection. Function calls, for example, can be static jumps and are usually pretty easy to analyze, provided the types in question aren’t crazy.
A grep derivative tailored toward finding patterns specifically in assembly code would make for a really interesting project, actually…
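As a toy sketch of what such a tool could do (the sample listing, pattern syntax, and function names are all invented for illustration): normalize objdump-style disassembly down to mnemonic-plus-operands, then match a multi-instruction regex pattern, here “load into rax, then indirect call through rax”.

```python
# Toy "asmgrep": match multi-instruction patterns in disassembly output.
import re

def normalize(listing):
    """Strip addresses and raw bytes from objdump-style lines,
    keeping only the mnemonic and operands."""
    out = []
    for line in listing.splitlines():
        m = re.match(r"\s*[0-9a-f]+:\s+(?:[0-9a-f]{2}\s+)*\s*(\S.*)", line)
        if m:
            out.append(m.group(1).strip())
    return out

def find_pattern(instrs, pattern):
    """Return start indices where consecutive instructions each match
    the corresponding regex in the pattern list."""
    hits = []
    for i in range(len(instrs) - len(pattern) + 1):
        if all(re.match(p, instrs[i + j]) for j, p in enumerate(pattern)):
            hits.append(i)
    return hits

# Hypothetical objdump excerpt: a load followed by an indirect call.
listing = """\
  401000: 48 8b 05 11 22 33 44  mov    rax,QWORD PTR [rip+0x44332211]
  401007: ff d0                 call   rax
  401009: c3                    ret
"""

instrs = normalize(listing)
hits = find_pattern(instrs, [r"mov\s+rax", r"call\s+rax"])
```

A real tool would want to understand operands structurally rather than textually, but even this regex-over-normalized-lines approach already finds higher-level patterns than single-instruction grep can.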
I strongly disagree with a CVE tag. If a specific CVE is worth attention, it can be submitted under security, most likely with a nice abstract discussing it, like the Theo undeadly link or the SSH enumeration issue. Adding a CVE tag will just give a green light to turning lobste.rs into a copy of a CVE database. I see no added value in that.
I agree. I think it comes down to the community being very selective about submitting CVEs. The ones that are worth it will either have a technical deep-dive that can be submitted here, or will be so important that we won’t mind a direct link.
Although I want to filter them, I agree this could invite more of them. Actually, that’s one of @friendlysock’s own warnings in other threads. The fact that tags are simultaneously for highlighting and filtering, two contradictory purposes, might be a weakness of using them versus some alternative. It also might be a fundamentally inescapable aspect of a good design choice. I’m not sure. Probably worth some contemplation.
I completely agree with you. I enjoy reading great technical blog posts where people dissect software and explain what went wrong and how to mitigate it. I want more of that.
I don’t enjoy ratings and CVSS scores. I’d rather not encourage people by blessing it with a tag.