… or actually use a good realtime graphics tracer instead of that half-baked nonsense - recommending or using google-NIH-Perfetto over Tracy should be criminal.
I think both are useful. I’ve tried Tracy before and it is really neat, but as far as I can tell Perfetto can also do trace recordings (from kernel events and seemingly also arbitrary “producers”), whereas Tracy always needs a program with explicit support for Tracy.
I think Apple Instruments (as in the dtrace frontend) is useful. That’s not the point. We have an abundance (harmfully so) of useful tracers (ask Brendan Gregg) and that’s the point (it’s hard to make a bad one, actually; try!). Tracy has a clean, portable implementation (and the best manual in open source, dammit) that solved some really hard problems in this space and is perfectly apt for the task covered in the post. Tracy is at the point where it will only get invaluably better if the informed masses use it, so there is a fighting chance to catch the 1-in-100,000 cases. I haven’t seen a FOSS graphics tracer close to this level elsewhere, but I’m happy to be told I’m wrong.
As for explicit vs. generic tracepoints, I don’t know how heavy the loads you’ve tried to trace in this space have been, but I can’t think of (and I’ve got some scars to prove I’ve tried) a generic solution that would collect necessary and sufficient data without introducing enough observer effect that an instrumented one is warranted. In my experience with graphics performance tracing, you really, really need to be selective about what you collect and where, as any collection chews away at shared, precious resources.
This seems really cool. I have not tried it, but the vibe of this at least is really nice.
I especially liked this line:
to make life easy, and is friendly with various sizes of networks: one for your organization, one for your project, one for your social circle
I could even imagine this being used to collaborate on a project, using an internal git repository and associated services.
Unfortunately I can’t judge the security-related things, but the project certainly seems neat!
I could even imagine this being used to collaborate on a project, using an internal git repository and associated services.
This sounds like it could be pretty fantastic for workflow.
Nice! This reminds me of a thing I wrote years ago that I rediscovered this week. It is kind of a ShaderToy-but-weird/minimal: https://papill0n.org/shaders.html#docs
Which is to say… raymarching and signed distance functions are really cool! I still don’t really understand them, but that does not need to keep people from playing around with them!
I am currently writing a Tumblr proxy à la Nitter (for Tumblr instead of Twitter). It’s basically invite-only for now, because I don’t have the experience to host public-facing infrastructure and am afraid both for my small server’s life (CPU load) and livelihood (traffic and susceptibility to being hacked).
It’s pretty fun, has enough bugs to keep me on my toes when I am in the mood, and I am using it daily to read my various feeds on tumblr and other sites.
Technology-wise it is written in Go, has an optional SQLite database, and mostly generates HTML, with some JavaScript thrown in for convenience. It has been up for a few months now; the database has grown to 1.5 GB, which means it now tracks 649 feeds and 782,148 individual posts. As I said, a fun project!
And because I had this architecture in place, it has turned into a more generic feed reader, now supporting Twitter, Instagram, RSS, and even AO3, all potentially in one feed.
Oh, and for my next project I want to turn my Raspberry Pi 4 into both a pipe organ and a piano, using Aeolus and Pianoteq, because I recently bought a MIDI keyboard/controller and have had much fun with it, mostly learning to play.
It does! I tried it yesterday and it was really interesting to watch back after.
I didn’t get any instant insights from it, but it was still very interesting to see. Knowing that I was recording myself also helped me concentrate, which was a nice benefit on the side.
Concentration is a major benefit I’ve seen as well. For me, I’d also say it’s harder to go on side tangents. It has a pomodoro-ish effect; it could probably pair well with that technique, seeing how I feel a higher cognitive load during recording sessions and need the breaks.
I don’t recall the major insight I found (wish I had been taking notes!), but it came at a point where I was stuck. It’s super weird to be recording yourself while developing, and even more awkward when you’re stuck on a problem.
I do find that when I’m on camera I feel a lot of pressure to move fast. When I get stuck, I’ll sometimes pause and poke around to feel less pressure to solve things immediately, figure it out off camera, and then summarize my findings when I return to recording. It’s tempting to pause a lot during this, so I’ll try to talk out loud a bit and ponder for uncomfortably long periods of time in order to take advantage of recording my problem-solving on video for insights. It’s often cringey watching myself, and I often feel silly in retrospect, but I have found it useful from time to time.
I especially love this when working on side projects that I totally don’t have the time for. I can give it the ol’ college try and make a little progress, and when I forget about it, I can revisit the project and pick up where I left off, sometimes years later, with the proper context. I have recorded about 10 sessions, mostly on different projects.
I don’t really do in-depth analysis or annotations or anything like the author does, but I’ll scrub through, watch portions of it, take mental notes, and I have gained real insights. I think note-taking would be awesome to do. The one time I picked up a project from three years prior and scrubbed through the recording to figure out what I was thinking (I try to talk through what I can), it was priceless.
I figure if I were more disciplined, consistent, and put more effort into this process it could be a lot more useful.
I have been running this locally for a few weeks. It’s pretty neat (reroute any audio from any app to any plugin and back) and still has some bugs. The tradeoff is having to restart it sometimes when things are too broken, which is fine for me.
I wrote a little bit about my experience with it so far recently: https://papill0n.org/blog.html#id:pipewire-works
This is great!
While I completely understand the sentiment of having that one tool that everyone uses and knows about, I think it comes at the price of approaching problems in fewer ways, maybe missing out from a technological point of view. I don’t think that’s the case for Prometheus, but sometimes this also makes APIs specific to single client implementations.
Having more than one option and therefore also more than one interest group can be very beneficial, so I really appreciate this project.
That’s not to say that creating many Grafana clones is the right approach, but having some alternatives is certainly a good thing. Also this very much doesn’t seem to be a clone, as mentioned in the Readme.
Really nice. I think this could also be very useful for projects that measure things other than your server instances, or in single-instance applications, like SoCi projects, where one simply wants to visualize some time series without user management, etc. I also really like the straight-forward minimalism on the server side.
Just one nitpick, maybe it would make sense to specify the protocol as part of the source instead of enforcing HTTPS?
Thank you, I’m glad you find this useful!
I do like having more options, and I was pleasantly surprised that it is possible to implement these kinds of different frontends for complex projects with relative ease. There’s a definite tradeoff with this project (being more lightweight and quicker for writing a new board vs. being feature-complete and easier to use), but I think it’s a neat thing to exist.
In that vein, we also wrote an alternative frontend to Elasticsearch with similar tradeoffs and design ideas in mind.
Just one nitpick, maybe it would make sense to specify the protocol as part of the source instead of enforcing HTTPS?
Good point! I created an issue to at least default to the protocol the frontend is using.
relative ease
And that it’s possible to write them in modern, plain JavaScript. No need for ES6 compilers or too many insane browser hacks.
Playing more guitar, having started again last year after a ten year absence. Also trying not to get convinced that I have to buy more gear for it.
Tests, which would improve the working lives of everyone on our team, but there are always new things to write first, and lots of tasks waiting to be done.
I also find it weird (as the author) for this to be on lobsters – to me if something costs money to read, I’m not sure that it makes sense for it to be posted to a site like this, because most folks won’t be able to read it and then it’s not possible to have a discussion about the content :)
Thanks a lot for your work, I am a big fan myself and distributed a bunch of print copies of your CC-licensed work among friends and colleagues! :)
I could imagine that the fact that Gumroad seems to support only credit card payments hinders some customers who, like myself, live in a region where credit cards are not as common as they seem to be in the US. I bought a prepaid card just for your zines, but supporting at least something like PayPal would be great.
Gumroad now has PayPal support for me. (I live in Europe; maybe it’s a regional thing?) When I click through to the payment dialogue, it shows the credit card form, but also a button where I can pay with PayPal.
I tried that out last week, when I bought the Bite Size Linux zine. :)
Surely I’m not going to be the only one expecting a comparison here with Go’s GC. I’m not really well versed in GC, but this appears to mirror Go’s quite heavily.
My understanding (I can’t find a link handy) is that the Go team is on a long-term path to change their internals to allow for a compacting and generational GC. There was something about the Azul guys advising them a year+ ago, IIRC.
Edit: I’m not sure what the current status is (I haven’t been following), but see this thread from 2012 and look for Gil Tene’s comments:
https://groups.google.com/forum/#!topic/golang-dev/GvA0DaCI2BU
This presentation from this July suggests they’re averse to taking almost any regressions now, even if they get good GC throughput out of it. rlh tried freeing garbage at thread (goroutine) exit if the memory wasn’t reachable from another thread at any point, which seemed promising to me but didn’t pan out. aclements did some very clever experiments with fast cryptographic hashing of pointers to allow new tradeoffs, but even rlh seemed doubtful about the prospects of that approach in the long term.
Compacting is a yet harder sell because they don’t want a read barrier, and objects moving might make life harder for cgo users.
Does seem likely we’ll see more work on more reliably meeting folks’ current expectations, like by fixing situations where it’s hard to stop a thread in a tight loop, and we’ll probably see work on reducing garbage through escape analysis, either directly or by doing better at other stuff like inlining. I said more in my long comment, but I suspect Java and Go have gone on sufficiently different paths they might not come back that close together. I could be wrong; things are interesting that way!
Other comments get at it, but the two are very different internally. Java GCs have been generational, meaning they can collect common short-lived garbage without looking at every live pointer in the heap, and compacting, meaning they pack together live data, which helps them achieve quick allocation and locality that can help processor caches work effectively.
ZGC is trying to maintain all of that and not pause the app much. Concurrent compacting GCs are hard because you can’t normally atomically update all the pointers to an object at once. To deal with that you need a read barrier or load barrier, something that happens when the app reads a pointer to make sure that it ends up reading the object from the right place. Sometimes (like in Azul C4 I think) this is done with memory-mapping tricks; in ZGC it looks like they do it by checking a few bits in each pointer they read. Anyway, keeping an app running while you move its data out from under it, without slowing it down a lot, is no easier than it sounds. (To the side, generational collectors don’t have to be compacting, but most are. WebKit’s Riptide is an interesting example of the tradeoffs of non-compacting generational.)
In Go all collections are full collections (not generational) and no heap compaction happens. So Go’s average GC cycle will do more work than a typical Java collector’s average cycle would in an app that allocates equally heavily and has short-lived garbage. Go is by all accounts good at keeping that work in the background. While not tackling generational, they’ve reduced the GC pauses to more or less synchronization points, under 1ms if all the threads of your app can be paused promptly (and they’re interested in making it possible to pause currently-uncooperative threads).
What Go does have going for it throughput-wise is that the language and tooling make it easier to allocate less, similar to what Coda’s comment said. Java is heavy on references to heap-allocated objects, and it uses indirect calls (virtual method calls) all over the place that make cross-function escape analysis hard (though JVMs still manage to do some, because the JIT can watch the app running and notice that an indirect call’s destination is predictable). Go’s defaults are flipped from that, and existing perf-sensitive Go code is already written with the assumption that allocations are kind of expensive. The presentation ngrilly linked to from one of the Go GC people suggests at a minimum the Go team really doesn’t want to accept any regressions for low-garbage code to get generational-type throughput improvements. I suspect the languages and communities have gone down sufficiently divergent paths about memory and GC that they’re not that likely to come together now, but I could be surprised.
One question that I don’t have a good feeling for is: could Go offer something like what the JVM has, where there are several distinct garbage collectors with different performance characteristics (high throughput vs. low latency)? I know simplicity has been a selling point, but like Coda said, the abundance of options is fine if you have a really solid default.
Doubtful they’ll have the user choose; they talk pretty proudly about not offering many knobs.
One thing Rick Hudson noted in the presentation (worth reading if you’re this deep in) is that if Austin’s clever pointer-hashing-at-GC-time trick works for some programs, the runtime could choose between using it or not based on how well it’s working out on the current workload. (Which it couldn’t easily do if, like, changing GCs meant compiling in different barrier code.) He doesn’t exactly suggest that they’re going to do it, just notes they could.
There are decades of research and engineering efforts that put Go’s GC and Hotspot apart.
Go’s GC is a nice introductory project, Hotspot is the real deal.
Go’s GC designers are not newbies either and have decades of experience: https://blog.golang.org/ismmkeynote
Google seems to be the nursing home of many people that had one lucky idea 20 years ago and are content with riding on their fame til retirement, so “famous person X works on it” has not much meaning when associated with Google.
The Train GC was quite interesting at its time, but the “invention” of stack maps is just like the “invention” of UTF-8 … if it hadn’t been “invented” by random person A, it would have been invented by random person B a few weeks/months later.
Taking everything together, I’m rather unconvinced that Go’s GC will even remotely approach G1’s, ZGC’s, or Shenandoah’s level of sophistication any time soon.
For me it is kind of amusing that huge amounts of research and development went into the HotSpot GC, but on the other hand there seem to be no sensible defaults, because there is often the need to hand-tune its parameters. In Go I don’t have to jump through those hoops (and I’m not advised to), but I still get very good performance characteristics, at least comparable to (in my humble opinion, even better than) a lot of Java applications.
On the contrary, most Java applications don’t need to be tuned and the default GC ergonomics are just fine. For the G1 collector (introduced in 2009 a few months before Go and made the default a year ago), setting the JVM’s heap size is enough for pretty much all workloads except for those which have always been challenging for garbage collected languages—large, dense reference graphs.
The advantages Go has for those workloads are non-scalar value types and excellent tooling for optimizing memory allocation, not a magic garbage collector.
(Also, to clarify — HotSpot is generally used to refer to Oracle’s JIT VM, not its garbage collection architecture.)
I had the same impression while reading the article, although I also don’t know that much about GC.
Wrote a small tool yesterday to track how much time I spend at the computer/at work. (It writes to the same file every day, and then counts the time since then. Very simple, has known bugs, but also covers all I need after maybe an hour of work, plus some experimentation before and after.)
Looked into recipes for making (vegan) phở. Have one that is simple enough to hopefully make this weekend, and of course I also found lots of other neat recipes to try.
Will also clean my flat a bit and maybe help with a move.
Edit: Oh, and maybe I’ll continue with the overthewire.org games. Got to level 4 of Krypton, which was pretty fun. (Also got through Bandit and Leviathan in the past two weeks.)
Being at home and tired, but also playing http://overthewire.org/wargames/, which are surprisingly fun! (I got through the bandit levels yesterday, and now I’m trying leviathan and natas, both of which turn out to be quite a bit trickier. It’s pretty fun so far, though.)
This looks pretty exciting to me:
Not knowing anything about this topic, I found this interesting. However, the article is from 2014, have there been any recent developments? (I found a few recent news stories, but nothing that stood out to me.)
It seems the commercial applications of this research are still a bit behind Wi-Fi in terms of bandwidth, thus confined to a niche market of non-RF environments. The list of publications from Haas’s group suggests that massive MIMO will be the way forward – every lightbulb an access point.
Would be nice if pressing “next” displayed a warning if you haven’t saved your input yet. I filled in all the fields, always pressing next, and then I got an empty resumé. (The “next” button was much more prominent to me; the other button to the left of it almost didn’t register in my mind. I knew it was there, but I didn’t look at it until I tried figuring out why my resumé was empty.)
I’ll poke one bit in particular that I disagree with: zero-tolerance policies are bullshit.
Especially given the vague and ever-changing scope of harassment, I cannot fathom a world in which it is fair to take a bunch of young people, encourage them to spend over half their waking hours in some mythical zone of neopuritanism, and then terminate them on a whim when they run afoul of somebody that is higher-functioning than they are.
I very much disagree with that.
I cannot fathom a world in which it is “fair” to take a bunch of often already vulnerable people, make it difficult for them to enter an industry in the first place, and then expect them to stay silent (and possibly quit) in case some kind of harassment happens.
I disagree with two things in particular:
mythical zone of neopuritanism
To me this sounds like “boys will be boys”, which relieves people in privileged positions of the responsibility to deal with the consequences of their own actions. And also, not harassing people is just professionalism.
If you want to date, do it elsewhere, with people who consent to being dated.
somebody that is higher-functioning than they are
This sounds like another get-out-of-jail-free card to me. Typically harassers are in positions of power and have a lot of privilege, but now they are “terminate[d] on a whim” because they’re young?
Additionally, this makes harassers sound like victims, which I find very problematic.
(I actually hope that I have somehow misinterpreted the comment.)
I’m pretty sure you and I are mostly in agreement. The problem arises when you look at the source material I was reacting to. For reference:
Harassment, discrimination, and retaliation are illegal under state and federal law. There are never any excuses for covering up, for supporting, or for ignoring unlawful behavior. If a company has properly trained all employees about harassment (see #4 above), there are never any reasons to give multiple “chances” to employees who harass, discriminate, or otherwise treat their coworkers unlawfully and/or inappropriately. If, after a careful investigation, the company has determined that the employee acted unlawfully or inappropriately, the outcome should be a swift termination.
and:
What companies can do: institute a zero-tolerance policy to protect both the company and its employees from unlawful and/or inappropriate behavior.
The suggested chain of events here is:
Now, there’s a lot of problems with this, right?
First, policies such as these are kinda open-ended. You’ll notice in particular bits about “unwanted advances”…read literally and taken in concert with a zero-tolerance policy, that suggests that asking somebody out on a date either results in a date or–if people are following the letter of the rules–termination of employment. Because zero-tolerance, right? No second chances.
Second, the author makes this appeal to the company to do the investigation. That’s no significant improvement over the existing system; had the answer been “take this shit to court or a commission or labor board”, I could get behind it…but the suggested policy is not really any different from the status quo other than saying “zero-tolerance all the time”. And you can bet your hat that the people already in positions of power doing bad things are probably not going to be found to have transgressed, the victims terminated for unrelated cause shortly thereafter, and so forth.
Third, zero-tolerance policies as a rule hurt more than they help. They have failed with the war on drugs, they’ve failed in schools, they’ve failed in policing. One might even go so far as to observe that being able to dial retribution to match the transgression (instead of instant exile) has always been a feature, not a bug, of competent lawmaking.
Fourth, this works well when you have people that are “professional” (whatever that means) but lacks the flexibility (because zero-tolerance means no flexibility) to handle the weirdness of people. Consider example cases:
Now, what do you think the reasonable action is in each of these cases? If you didn’t answer “fire them immediately” to all of them, you cannot support a zero-tolerance policy as the author suggests.
~
Let me poke at your assertion here a bit:
If you want to date, do it elsewhere, with people who consent to being dated.
I’m not saying that it is okay to be hounded at work by people looking for a date or a spouse. That’s absurd–you’re both (all?) there to work. But do consider:
At least in the US, we’ve lost a lot of progress in tech on the fair work week and on socialization. It is not fair to tell single folks “hey, if you do this thing that comes naturally to you re: social interaction, you will be immediately fired”. There is something deeply, deeply dehumanizing about assuming that everyone in a company is some sexless human resource that needs no romantic companionship.
Perhaps we should require that people be either ace or married and monogamous before allowing them to work for our companies, yes?
Or, maybe, we should just recognize that people are gonna people, and maybe we shouldn’t fire them the second they express anything other than an economic interest in a coworker.
I’ll poke one bit in particular that I disagree with: zero-tolerance policies are bullshit.
Zero tolerance of what?
I agree that firing someone because he says “bitch” is bullshit. It’s making a show at the expense of some powerless fool over a nonexistent issue (a word).
Zero tolerance of actual sexual harassment– not dirty jokes, but persistent unwanted advances coming from a person in power– I would support.
Especially given the vague and ever-changing scope of harassment
Sexual harassment is separate from political correctness (PC). About PC, I imagine that we agree. PC is the fake feminism of the CEO who does nothing to support women but says, “I fired someone last Thursday for making a dirty joke, so I’m Okay With God On The Women Thing”.
terminate them on a whim when they run afoul of somebody that is higher-functioning than they are.
That’s PC culture. Sexual harassment is a great deal more than “run[ning] afoul of somebody”, and it’s a huge problem in the corporate world. Abuses of power, in general, are an issue in the corporate world and for most part companies don’t do anything about it. Nor would they do anything about sexual harassment if there weren’t significant legal and financial incentives (namely, the threat of a major, embarrassing lawsuit) forcing them to do so.
I don’t know what to make of this, but it looks like a fairly well-known developer got their computer rendered unusable by malware from a USB stick.
For future reference: in a later post, Steve responds to two critiques/thoughts on the first post: ‘Rust is mostly safety’ by Graydon Hoare and ‘Safety is Rust’s Fireflower’ by Dave Herman.
(All of these are on lobsters right now, but I’ve wanted to make it easier to find them later.)
Company: Picnic
Company site: https://picnic.chat
Position(s): Senior Software Engineer
Location: London, UK - onsite or remote (±5h of GMT+1) - we mostly work remote, but also meet in the office for strategy/planning/workshops that make more sense in person.
Description: Our friends matter to us more than we think. The quantity and quality of our friendships has a bigger influence on health, happiness and even mortality than anything except smoking. At Picnic, we believe the platforms that house these relationships today fall seriously short. Nowhere is this more true than group chats. We’re building a next-gen social app, powered by group chat, to give friends the kind of online space they deserve.
We’re still pre-launch, so check out our about us page for more details about the product, team & funders.
Our engineering team (led by me) currently consists of 3 people, so you’ll have major impact from day one.
Tech stack:
More details and reasoning behind these choices here.
Contact: Our hiring process is described in detail on the job spec, but feel free to reach out to me directly on here or harry@picnic.ventures :)
I was interested in the various links, but the ones to notion (e.g. this one) all 404 for me. Do I need a notion account to view them?
Ah, my apologies! Here are the public links: