While I think a website like this would make sense in a few years, right now I think GDPR is complicated, confusing, and scary enough to a lot of companies that they are going to make mistakes. I’d rather help them do it better than mock them.
As one of the thousands of engineers who had to figure out how to shoehorn a six-month compliance project into a lean team’s packed roadmap, I concur. This wasn’t easy, not even at a company that respects user data to begin with. Lots of the jokes I’ve seen about GDPR right now just lessen my opinion of the teller.
On the other hand, we’ve all had literally more than 2 years to work on said six-month compliance project, and the fact that so many companies wait until the very end to start working on it is the actual problem here IMO.
Not from my point of view – who cares if companies just woke up to GDPR two weeks ago, if I don’t use them for data processing? None of my actual pain came from that. But I definitely spent a lot of time working on GDPR when I’d rather have been building product, other deadlines slipped, things moved from To-Do to Backlog to Icebox because of this. We’re ready for GDPR, but that stung.
I was essentially trying to put “People like you don’t get to complain about it being hard to fit something into a certain time period when they had literally 4 times that amount of time to do it.” ^__^
Well, if people like you (who didn’t even do the work) get to complain, then so do I! If someone tells me they’re gonna punch me in the face, then they punch me in the face, I still got punched in the face.
I did our GDPR planning and work, and I’m so glad to see it in effect. The industry is finally gaining some standards. If you complain about having to give up a “rather have been building product” attitude, sometimes it’s time to own up that you care more about your own bottom line than doing the right thing.
Sometimes if you don’t build a product, GDPR compliance becomes irrelevant because you never get a company off the ground. As a one-person platform team until last September, I don’t regret how I prioritized it.
Well, if people like you (who didn’t even do the work) get to complain, then so do I!
I actually did do the work. But either way, complaining about it being a pain overall is just fine, because it is. On the other hand, explicitly complaining that because you had to do it in 6 months you had issues fitting it in, had other deadlines slip, and had to essentially kill other to-do’s is a very different thing. If you’d used the extra 18 months, I bet you’d have had far fewer issues with other deadlines.
If someone tells me they’re gonna punch me in the face, then they punch me in the face, I still got punched in the face.
This analogy doesn’t even make sense in context…
If you’d used the extra 18 months, I bet you’d have had far fewer issues with other deadlines.
I’ll totally remember this for next time.
Well, I agree in general, but this article specifically highlights some cases of just plain being mean to your users. I’m okay with mocking those.
I disagree. GDPR is expensive to get wrong so the companies aren’t sure what to expect. They are likely being conservative to protect themselves.
They were not conservative in tracking users, and spending for tracking and spying on users was not expensive?
As a user I don’t care about the woes of companies. They forced the lawmakers to create these laws, as they were operating surveillance capitalism. They deserve the pain, the costs, and the fear.
and spending for tracking and spying on users was not expensive?
Tracking users is very cheap, that’s why everyone can and does do it. It’s just bits.
As a user I don’t care about the woes of companies.
Feel free not to use them, then. What I am saying is that GDPR is a new, large and expansive, law with a lot of unknowns. Even the regulators don’t really know what the ramifications will be. I’m not saying to let companies not adhere to the law, I’m just saying on the first day the world would probably benefit more from helping the companies comply rather than mocking them.
EDIT:
To be specific, I think companies like FB, Google, Amazon, etc should be expected to entirely comply with the law on day one. It’s smaller companies that are living on thinner margins that can’t necessarily afford the legal help those can that I’d want to support rather than mock.
It’s not like the GDPR was announced yesterday. It goes live tomorrow after a two year onboarding period.
If they haven’t got their act in order after two years, it’s reasonable to name and shame.
Can anyone ELI5 how this works? Does the Fediverse get a new fork in Tor-space? Or will onion-only users still be followable by those of us in the DNS-driven fediverse?
Also is each Onion instance a Tor hidden service with all the implied security challenges that brings?
Reading through the toots, it looks like HTTPS requests are being proxied into Tor and responses are being proxied out. The Pleroma author says they’ll have a blog post about this shortly.
Ah interesting! So it’s definitely not an island. I’ll look forward to that post!
I admire the Pleroma folks; that project’s existence is a sound refutation of folks’ dismissal of Mastodon just because it’s a Rails project.
(Which, really, I mean Rails has its problems, and security issues are among them, but doesn’t every other web framework in existence?)
It’s not worth it. I gave Matrix/Riot 2 years to become usable: fix performance, fix resource usage, behave like modern tech they are claiming to replace. It was not worth the effort.
10 years of IRC logs from irssi: 500MB of disk space
2 years of moderate Matrix/Riot usage (with IRC bridges which I ran myself): 12GB Postgres database
Insane. This tech is dead on arrival in my opinion.
At least when XMPP works, it works well; provided you aren’t getting screwed over by server/client inconsistency in support. When Matrix works, it’s slow as a dog, client and server. (Not to mention New Vector seems a bit…. fucky when it comes to competition in homeservers.)
Yeah, XMPP’s weaknesses are the XEPs and the inconsistent implementations. It should have all been one consolidated protocol, but then it might not have had adoption due to complexity. sigh
I’ll be honest, I looked into contributing to Dendrite (the Golang successor) but found the codebase a mess (and it uses gb, which is not the way the community as a whole has been moving for years, but that’s more of a personal preference I guess). Maybe they’ll get their act together but for now I’m going to pass.
That’s a very odd thing to have an issue with. 12GB is fairly minor in today’s terms. If you take a look at the source for a message in Matrix you will see they each contain a whole lot more info than an IRC message, such as the origin server, message type, event ID, room ID and a whole lot more. Also Riot supports inline media, which on its own would take up 12GB with some moderate usage.
Matrix doesn’t aim to be a 1:1 copy of IRC; it supports a whole lot more features that users expect of modern software, and that necessarily means more resource usage.
The media is not stored in the Postgres database.
The software is slow. It should never have been written in Python, which is constrained by the GIL. The database is poorly optimized and has lots of issues that require manual intervention, like this: https://github.com/matrix-org/synapse/issues/1760
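For readers unfamiliar with the GIL point: CPython’s global interpreter lock means CPU-bound threads in a single process don’t run in parallel, so one Synapse worker can’t spread heavy request processing across cores. A minimal, generic illustration (ordinary Python, not Synapse code):

```python
# Minimal GIL demonstration (not Synapse code): two CPU-bound threads
# take roughly as long as doing the same work twice sequentially,
# because CPython's GIL lets only one thread execute bytecode at a time.
import threading
import time

def burn(n=2_000_000):
    """Pure CPU-bound busywork."""
    total = 0
    for i in range(n):
        total += i
    return total

start = time.perf_counter()
burn()
burn()
sequential = time.perf_counter() - start

start = time.perf_counter()
t1 = threading.Thread(target=burn)
t2 = threading.Thread(target=burn)
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

# On stock CPython, `threaded` is NOT ~2x faster than `sequential`.
print(sequential, threaded)
```

This is why a single Synapse process tends to pin one core; scaling it means running multiple worker processes rather than threads.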
The best summary I can provide is this quote, “[The problem with Matrix ] has everything to do with synapse, bad signature design (canonicalized json vs http sigs) and an overall terrible agent model.”
A 12GB Postgres database means poor performance unless you have good hardware. Try running it on an RPi or a Scaleway C1. You’re not going to have a usable experience. Even a Digital Ocean $5/mo droplet won’t be usable.
Not everyone has a Dual Xeon with 64GB of RAM colocated. I do. It was even awful on that.
I previously ran every application I made on crappy hardware to make sure it wasn’t overbloated. If it worked there, it’d probably be great on newer boxes. Seeing the $5 droplets mentioned all the time makes me think they might be a nice, new baseline. What do you think, since you mentioned them?
Quassel manages to store all the same data, also in a PostgreSQL database, in much less than 12GB. If you add fulltext search, it still won’t be even close.
The problem is that Matrix as a project just has a lot of things left to fix, my current favorite is their “database” backend on Android
Matrix could be great, if they actually drop HTTP Longpolling, actually finish a native socket implementation, actually finish their Rust server implementation, replace their serialization format with a more efficient one, and so on, and so on.
In a few years Matrix may become great – for today, it isn’t there yet.
Disclaimer: I’m personally involved with IRC, and develop Quasseldroid, a client for the Quassel bouncer.
finish their Rust server implementation
You mean in Go.
I am backing the project on Patreon. Right now, I have completely replaced both XMPP and Messenger and I surely hope that it will improve over time.
Oh, it ended up being go? Last I heard about it, someone was rewriting the server in Rust. Was that abandoned?
Thanks for your feedback. I am yet to use it extensively so I cannot comment on the performance issues as of now.
The author doesn’t mention the popular GUI library that’s the best fit for his use case – TK. (I can’t blame him – TK has poor PR, since it’s marginally less consistent than larger and more unwieldy toolkits like GTK and Qt, while having many of the drawbacks of a plain X implementation.)
That said, the fact that TK is the easiest way to go from zero to a simple GUI is frankly pretty embarrassing. There’s no technical reason GUI toolkits can’t be structured better – only social reasons (like “nobody who knows how to do it cares enough”).
The problem is that TK still has terrible looking widgets. Just because UI fashion has moved away from consistent native look and feel doesn’t mean TK is passable.
TTK mostly takes care of this, by creating a Look and Feel that matches up with the platform in question.
TK ships with TTK, which provides native widget styles for every major platform. It has shipped that way for nine years.
I was not aware of TTK, thank you! I tried out TK a few times and seeing how awful it looked made me leave it really quickly for other technologies.
TTK has been around for a long time, and built into TK for a long time too. It’s a mystery to me why they don’t enable it by default. I discovered it six years after it got bundled!
I tried to look into it a little bit today, but it looks like there is pretty much only one getting-started guide for it, written in Python. Do you know of any guides for it in other languages?
Not really. It provides native-styled clones of existing widgets, so if it’s wrapped by your target language, all you should need to do is import it and either overwrite the definitions of your base widget-set or reference the ttk version instead (e.g., by running ‘s/tk./ttk./g’ on your codebase).
When he put out the JSON protocol, Tcl/Tk came right to mind. This is exactly how people do UI with Python and tkinter.
The distribution of programming talent is likely normal, but what about their output?
The ‘10X programmer’ is relatively common, maybe 1 standard deviation from the median? And you don’t have to get very far to the left of the curve to find people who are 0.1X or -1.0X programmers.
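One toy model (my own assumption, not from any study) that squares a normal talent distribution with these multipliers: if talent is normally distributed but output grows exponentially with talent, then output is lognormal, and ±1 SD lands exactly on the 10X and 0.1X figures.

```python
# Toy model (an assumption for illustration, not data):
# talent ~ Normal(0, 1), output = 10 ** talent.
# Output is then lognormally distributed, median 1.0.

def output(talent_sd):
    """Output multiple relative to the median programmer."""
    return 10.0 ** talent_sd

print(output(0))    # median programmer -> 1.0
print(output(1))    # one SD above      -> 10.0
print(output(-1))   # one SD below      -> 0.1
```

The base 10 is chosen to match the claim; a smaller base gives milder multipliers but the same skewed shape.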
Still a good article! I think this confusion is the smallest part of what he’s trying to say.
That’s an interesting backdoor you tried to open to sneak the 10x programmer back into not being a myth.
They exist, though. So, more like the model that excludes them is broken front and center. The accurate position is that most people aren’t 10x’ers, or even need to be, as far as I can tell. Team players with consistency are more valuable in the long run. That should be the majority, with some strong technical talent sprinkled in.
Is there evidence to support that? As you know, measuring programmer productivity is notoriously difficult, and I haven’t seen any studies to confirm the 10x difference. I agree with @SeanTAllen, it’s more like an instance of the hero myth.
EDIT: here are some interesting comments by a guy who researched the literature on the subject: https://medium.com/make-better-software/the-10x-programmer-and-other-myths-61f3b314ad39
Just think back to school or college where people got the same training. Some seemed natural at the stuff, running circles around others for whatever reason, right? And some people score way higher than others on parts of math, CompSci, or IQ tests while seemingly not even trying, compared to those who put in much effort only to underperform.
People who are super-high performers from the start exist. If they and the others study equally, the gap might shrink or widen, but it should widen if you want strong generalists, since they’re better at foundational skills or thinking style. I don’t know if the 10 applies (probably not). But gifted folks making easy work of problems most others struggle with is something I’ve seen a ton of in real life.
Why would they not exist in programming when they exist in everything else would be the more accurate question.
There’s no question that there is difference in intellectual ability. However, I think that it’s highly questionable that it translates into 10x (or whatever-x) differences in productivity.
Partly it’s because only a small portion of programming is about raw intellectual power. A lot of it is just grinding through documentation and integration issues.
Partly it’s because there are complex interactions with other people that constrain a person. Simple example: at one of my jobs people complained a lot about C++ templates because they couldn’t understand them.
Finally, it’s also because the domain a person applies themselves to places other constraints. Can’t get too clever if you have to stay within the confines of a web framework, for example.
I guess there are specific contexts where high productivity could be realised: one person creating something from scratch, or a group of highly talented people who work well together. But those would be exceptional situations, while under the vast majority of circumstances it’s counterproductive to expect or hope for 10x productivity from anyone.
I agree with all of that. I think the multipliers kick in on particular tasks which may or may not produce a net benefit overall given conflicting requirements. Your example of one person being too clever with some code for others to read is an example of that.
I think the 10x is often realized by just understanding the requirements better. For example, maybe the 2 week long solution isn’t really necessary because the 40 lines you can write in the afternoon are all the requirement really required.
There’s no question that there is difference in intellectual ability. However, I think that it’s highly questionable that it translates into 10x (or whatever-x) differences in productivity.
It does not simply depend on how you measure; it depends on what you measure.
And it may be more than “raw intellectual power”. For me it’s usually experience.
As a passionate programmer, I’ve faced more problems and more bugs than my colleagues.
So it often happens that I solve in minutes problems that they have struggled with for hours (or even days).
This has two side effects:
Both of these force me to face more problems and bugs… and so on.
Also, such experience makes me well versed in the architectural design of large applications: I’m usually able to avoid issues and predict with high precision the time required for a task.
However measuring overall productivity is another thing:
So when it’s a matter of solving problems by programming, I approach the 10x productivity of the myth despite not being particularly intelligent, but overall it really depends on the environment.
This is a good exposition of what a 10x-er might be and jibes with my thoughts. Some developers can “do the hard stuff” with little or no guidance. Some developers just can’t, no matter how much coaching and guidance are provided.
For illustration, I base this on one tenure I had as a team lead, where the team worked on some “algorithmically complex” tasks. I had on my team people who were hired on and excelled at the work. I had other developers who struggled. Most got up to an adequate level eventually (6 months or so). One in particular never did. I worked with this person for a year, teaching and guiding, and they just didn’t get it. This particular developer was good at other things though like trouble shooting and interfacing with customers in more of a support role. But the ones who flew kept on flying. They owned it, knew it inside and out.
It’s odd to me that anyone disputes the fact that there are more capable developers out there. Sure, “productivity” is one measure, and not a good proxy for ability. I personally don’t equate 10x with being productive; that clearly makes no sense. Also, I think Fred Brooks’ The Mythical Man-Month is the authoritative source on this. I never see it cited in these discussions.
There may not be any 10x developers, but I’m increasingly convinced that there are many 0x (or maybe epsilon-x) developers.
I used to think that, but I’m no longer sure. I’ve seen multiple instances of what I considered absolutely horrible programmers taking the helm, and I fully expected those businesses to fold in a short period of time as a result - but they didn’t! From my point of view, it’s horrible -10x code, but for the business owner, it’s just fine because the business keeps going and features get added. So how do we even measure success or failure, let alone assign quantifiers like 0x?
Oh, I don’t mean code quality, I mean productivity. I know some devs who can work on the same simple task for weeks, miss the deadline, and move on to a different task that they also don’t finish.
Even if the code they wrote was amazing, they don’t ship enough progress to be of much help.
That’s interesting. I’ve encountered developers who were slow but not ones who would produce nothing at all.
I’ve encountered it, though it was unrelated to their skill. Depressive episodes, for example, can really block someone. So can burnout, or outside stresses.
Perhaps there are devs who cannot ship code at all, but I’ve only encountered unshipping devs that were in a bad state.
You’re defining programming ability by if a business succeeds though. There are plenty of other instances where programming is not done for the sake of business, though.
That’s true. But my point is that it makes no sense to assign quantifiers to programmer output without actually being able to measure it. In business, you could at least use financials as a proxy measure (obviously not a great one).
Anecdotally, I’m routinely stunned by how productive maintainers of open source frameworks can be. They’re certainly many times more productive than I am. (Maybe that just means I’m a 0.1x programmer, though!)
I’m sure that’s the case sometimes. But are they productive because they have more sense of agency? Because they don’t have to deal with office politics? Because they just really enjoy working on it (as opposed to a day job)? There are so many possible reasons. Makes it hard to establish how and what to measure to determine productivity.
I don’t get why people feel the need to pretend talent is a myth or that 10x programmers are a myth. It’s way more than 10x. I don’t get why so many obviously talented people need to pretend they’re mediocre.
edit: does anyone do this in any other field? Do people deny Einstein, Mozart, Michelangelo, Shakespeare, or Newton? LeBron James?
Deny what exactly? That LeBron James exists? What is LeBron James a 10x of? An athlete? A basketball player? What is the scale here?
A 10x programmer. I’ve never met one. I know people who are very productive within their area of expertise. I’ve never met someone who I can drop into any area and they are boom 10x more productive and if you say “10x programmer” that’s what you are saying.
This of course presumes that we can manage to define what the scale is. We can’t as an industry define what productive is. Is it lines of code? Story points completed? Features shipped?
Context is a huge factor in productivity. It’s not fair to subtract it out.
I bet you’re a lot more than 10X better than I am at working on Pony… any metric you want. I haven’t written much C since college; I bet you’re more than 10X better than me in any C project.
You were coding before I was born, and as far as I can tell you’re near the top of your field. I’ve been coding most of my life, and I’m good at it, but the difference is there. I know enough to be able to read your code and tell that you’re significantly more skilled than I am. I bet you’re only a factor of 2 or 3 better at general programming than I am. (Here I am boasting.)
In my areas of expertise, I could win some of that back and probably (but I’m not so sure) outperform you. I’ve only been learning strategies for handling concurrency for 4 years. Every program (certainly every program with a user interface) has to deal with concurrency; your skill in that sub-domain alone could outweigh my familiarity in any environment.
There are tons of programmers out there who cannot deal with any amount of concurrency at all, even in their most familiar environment. There are bugs they will encounter that they cannot possibly fix until they remedy that deficiency, and that’s one piece of a larger puzzle. I know that the right support structure of more experienced engineers (and tooling) can solve this; I don’t think that kind of support is the norm in the industry.
If we could test our programming aptitudes as we popped out of the womb, all bets are off. This makes me think that “10X programmer” is ill-defined? Maybe we’re not talking about the same thing at all.
No I agree with you. Context is important. As is having a scale. All the conversations I see are “10x exists” and then no accounting for context or defining a scale.
While I’m not very familiar with composers, I can tell you that basketball players (LeBron) can and do have measurements. Newton created fundamental laws and integral theories, Shakespeare’s works continue to be read.
We do acknowledge the groundbreaking work of folks like Dennis Ritchie, Ken Iverson, Alan Kay, and other computing pioneers, but I doubt “Alice 10xer” at a tech startup will have her work influence software engineers hundreds of years later. So, barring that sort of influence, there are not enough metrics or studies to show that an engineer is 10x more than another in anything.
The ‘10X programmer’ is relatively common, maybe 1 standard deviation from the median? And you don’t have to get very far to the left of the curve to find people who are 0.1X or -1.0X programmers.
So, it’s fairly complicated because people who will be 10X in one context are 1X or even -1X in others. This is why programming has so many tech wars, e.g. about programming languages and methodologies. Everyone’s trying to change the context to one where they are the top performers.
There are also feedback loops in this game. Become known as a high performer, and you get new-code projects where you can achieve 200 LoC per day. Be seen as a “regular” programmer, and you do thankless maintenance where one ticket takes three days.
I’ve been a 10X programmer, and I’ve been less-than-10X. I didn’t regress; the context changed out of my favor. Developers scale badly and most multi-developer projects have a trailblazer and N-1 followers. Even if the talent levels are equal, a power-law distribution of contributions (or perceived contributions) will emerge.
I’m glad you acknowledge that there’s room for a 10X or more-than-10X gap in productivity. It surprises me how many people claim that there is no difference in productivity among developers. (Why bother practicing and reading blog posts? It won’t make you better!)
I’m more interested in exactly what it takes to turn a median (1X by definition) developer into an exceptional developer.
I don’t buy the trailblazer and N-1 followers argument, because I’ve witnessed massive success (by any metric) cleaning up the non-functioning, non-requirements-meeting (but potentially marketable!) untested messes that an unskilled ‘trailblazer’ leaves in their (slowly moving) wake. Do you think it’s all context, or are there other forces at work?
These are probably the weakest arguments against Bitcoin I’ve seen. But the coolest bit about Bitcoin is that it is completely voluntary, so you do your thing, and we’ll do ours.
Real arguments against Bitcoin are:
And I’m sure there are others but literally none of the ones presented here are valid.
These are probably the weakest arguments against Bitcoin I’ve seen.
As it says, this is in response to one of the weakest arguments for Bitcoin I’ve seen. But one that keeps coming up.
But the coolest bit about Bitcoin is that it is completely voluntary, so you do your thing, and we’ll do ours.
When you’re using literally more electricity than entire countries, that’s a significant externality that is in fact everyone else’s business.
I would also like to be able to upgrade my gaming PC’s GPU without spending what the entire machine cost.
This is getting better though.
For what it’s worth, Bitcoin mining doesn’t use GPUs and hasn’t for several years. GPUs are being used to mine Ethereum, Monero, etc., but not Bitcoin or Bitcoin Cash.
When you’re using literally more electricity than entire countries, that’s a significant externality that is in fact everyone else’s business
And yet, still less electricity than… Christmas lights in the US or gold mining.
https://coinaccess.com/blog/bitcoin-power-consumption-put-into-perspective/
When you reach for “Tu quoque” as your response to a criticism, then you’ve definitely run out of decent arguments.
Bitcoin (and all blockchain based technology) is doomed to die as the price of energy goes up.
It also accelerates the exhaustion of many energy sources, pushing energy prices up faster for every other use.
All blockchain based cryptocurrencies are scams, both as currencies and as long term investments.
They are distributed, energy-wasting Ponzi schemes.
wouldn’t an increase in the cost of energy just make mining difficulty go down? then the network would just use less energy?
No, because if you reduce the mining difficulty, you decrease the chain’s safety.
Indeed, the fact that the energy cost is higher than the average bitcoin revenue does not mean that a determined pool can’t pay for the difference by double spending.
If energy cost doubles, a mix of two things will happen, as they do when the block reward halves:
Either way, the mining will happen at a price point where the mining cost (energy+capital) meets the block reward value. This cost is what secures the blockchain by making attacks costly.
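That equilibrium can be sketched with a toy calculation (all numbers below are made up for illustration): if miners enter until the cost per block equals the block reward’s value, then doubling the energy price halves the sustainable hashrate while total mining spend stays pinned to the reward.

```python
# Toy mining-equilibrium model (illustrative numbers, not network data):
# miners add hashrate until the cost of producing a block equals the
# value of the block reward.

def equilibrium_hashrate(reward_usd, cost_usd_per_hashunit_per_block):
    """Hashrate at which total mining cost per block equals the reward."""
    return reward_usd / cost_usd_per_hashunit_per_block

reward = 75_000.0    # assumed USD value of one block reward
energy_cost = 0.005  # assumed USD per hash-unit per block interval

h1 = equilibrium_hashrate(reward, energy_cost)
h2 = equilibrium_hashrate(reward, 2 * energy_cost)  # energy price doubles

print(h1, h2)  # h2 is half of h1: spend stays equal to the reward
```

The same mechanism applies when the block reward halves: the equilibrium hashrate falls, but the cost of attacking the chain remains tied to the reward value, which is the parent’s point.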
Either way, the mining will happen at a price point where the mining cost (energy+capital) meets the block reward value.
You forgot one word: average.
Much of the brains in the cryptocurrency scene appear to be in consensus that PoW is fundamentally flawed and this has been the case for years.
PoS has no such energy requirements. Peercoin (2012) was one of the first, Blackcoin, Decred, and many more serve as examples. Ethereum, #2 in “market cap”, is moving to PoS.
So to say “ [all blockchain based technology] is doomed to die as the price of energy goes up” is silly.
Much of the brains in the cryptocurrency scene appear to be in consensus that PoW is fundamentally flawed and this has been the case for years.
Hum… are you saying that Bitcoin miners have no brain? :-D
I know that PoS, in theory, is more efficient.
The fun fact is that all the implementations I’ve seen in the past were based on stakes in PoW-based cryptocurrencies. Has that changed?
As for Ethereum, I will be happy to see how they implement PoS… when they do.
Blackcoin had a tiny PoW bootstrap phase, maybe weeks worth and only a handful of computers. Since then, for years, it has been purely PoS. Ethereum’s goal is to follow Blackcoin’s example, an ICO, then PoW, and finally a PoS phase.
The single problem PoW once reasonably solved better than PoS was egalitarian issuance. With miner consolidation this is far from being the case.
IMHO, fair issuance is the single biggest problem facing cryptocurrency. It is the unsolved problem at large. Solving this issue would immediately change the entire industry.
Well, proof of stake assumes that people care about the system.
It sees the cryptocurrency in isolation.
An economist would object that a stakeholder might gain a lot by breaking the currency itself, despite the in-currency loss.
There are many ways to gain value from a failure: e.g. buying surrogate goods for cheap and selling them after the competitor’s failure has increased their relative value.
Or by predicting the failure and then causing it, and selling consulting and books.
Or a stakeholder might have a political reason to damage the people with a stake in the currency.
I’m afraid that proof of stake is a naive solution to a misunderstood economic problem. But I’m not sure: I will surely take a look at Ethereum when it is PoS based.
doomed to die as the price of energy goes up.
Even the ones based on proof-of-share consensus mechanisms? How does that relate?
Can you point to a working implementation so that I can take a look?
Last time I checked, proof-of-share did not even work as a proof of concept… but I’m happy to be corrected.
Blackcoin is Proof of Stake. (I’ve not heard of “Proof of Share”).
Google returns 617,000 results for “pure pos coin”.
Instructions to get on the Casper Testnet (in alpha) are here: https://hackmd.io/s/Hk6UiFU7z# . No need to bold your words to emphasize your beliefs.
The emphasis was on the key requirement.
I’ve seen so many cryptocurrencies die a few days after their ICO that I raised the bar for taking a new one seriously: if it doesn’t have a stable user base exchanging real goods with it, it’s just another waste of time.
Also, note that I’m not against alternative coins. I’d really like to see a working and well designed alt coin.
And I like related experiments like GNU Taler.
I’m just against scams and people trying to fool other people.
For example, the Casper Testnet is a PoS built on top of a PoW (as Ethereum currently is).
So, let’s try again: do you have a working implementation of a proof of stake to suggest?
It’s not live or open-source, so I’d understand if you’re still skeptical, but Algorand has simulated 500,000 users.
Again I don’t seem to understand your anger. We’re on a tech site discussing tech issues. You seem to be getting emotional about something that’s orthogonal to this discussion. I don’t think that emotional exhorting is particularly conducive to discussion, especially for an informed audience.
And I don’t understand what you mean by working implementation. It seems like a testnet does not suffice. If your requirements are: widely popular, commonly traded coin with PoS, then congratulations you have built a set of requirements that are right now impossible to satisfy. If this is your requirement then you’re just invoking the trick question fallacy.
Nano is a fairly prominent example of Delegated Proof of Stake and follows a fundamentally very different model than Bitcoin with its UTXOs.
No anger, just a bit of irony. :-)
By a working implementation of a software currency I mean not just code and a few beta testers, but a stable user base that uses the currency for real-world trades.
Actually, that’s probably the minimal definition of “working implementation” for any currency, not just software ones.
I could get a little lengthy about vaporware, marketing, and scams if I had to explain why an unused piece of software is broken by definition.
I develop an OS myself tha literally nobody use, and I would never sell it as a working implementation of anything.
I will look into Nano and delegated proof of stake (and I welcome any direct links to papers and code… really).
But frankly, the sarcasm is due to a little disgust I feel for proponents of PoW/blockchain cryptocurrencies (to date, the only ones I know of that actually work, despite being broken as actual long-term currencies): I can understand non-programmers who sell what they buy from programmers, but any competent programmer should just say “guys, Bitcoin was an experiment, but it’s pretty evident that it has been turned into a big Ponzi scheme. Keep out of cryptocurrencies! Or you are going to lose your real money for nothing.”
To me, programmers who don’t explain this are either incompetent enough to talk about something they do not understand, or are trying to profit from those other people by selling them their token (directly or indirectly).
This does not mean in any way that I don’t think a software currency can be built and work.
But as a hacker, my ethics prevent me from using people’s ignorance against them, as do those who sell them “the blockchain revolution”.
The problem is that in the blockchain space, hypotheticals are pretty much worthless.
Casper I do respect, they’re putting a lot of work in! But, as I note literally in this article, they’re discovering yet more problems all the time. (The latest: the security flaws.)
PoS has been implemented in a ton of tiny altcoins nobody much cares about. Ethereum is a great big coin with hundreds of millions of dollars swilling around in it - this is a different enough use case that I think it needs to be regarded as a completely different thing.
The Ethereum PoS FAQ is a string of things they’ve tried that haven’t quite been good enough for this huge use case. I’ll continue to say that I’ll call it definitely achievable when it’s definitely achieved.
Covert AsicBoost was fixed with SegWit; overt AsicBoost is being used: https://mobile.twitter.com/slush_pool/status/977499667985518592
This is an interesting article. I’m not sure what I think about the points right now since my background is the sub-field between the development chobeat describes and something akin to traditional engineering. I remember it was a LISP book that described to me that software was something organic that grows. That also fits its development style and environment well, though. One could probably stretch this metaphor further to cover business developments by having a shared garden controlled by a hierarchy of people with different goals for what to grow or levels of skill in growing it. I agree with adsouza that the cooking analogy is a lot better, esp since more will have experienced that.
Now the bad. The styles like Cleanroom, Design-by-Contract, or straight-up formal methods all precisely say what you want the thing to do, with feedback in the form of proofs or runtime checks on whether you’re doing it. The program is a series of boxes with things they expect, things they do, and things they output. The part where the plants or whatever lead their own lives isn’t necessary in this light: the programmers not using these methods simply couldn’t predict what their software would do in what circumstances because they never specified it! Sometimes, it’s because they didn’t understand something right, too. The overall practice still contradicts the metaphor because the software process is mostly mechanical at that point. The creativity goes into devising the specifications or the algorithms that meet them. The latter, esp experimental prototypes, might still fit the metaphor but still diverges from the author’s version: it just informs the design, which produces an implementation with more predictability.
The one thing that probably shouldn’t be in there is the guarantee that you won’t develop something similar to what you developed before. There are whole enterprise markets dedicated to that, esp CRUD apps in 4GLs or Excel-Driven Development.
Re metaphor for problems. The part on weather, bugs, and disease is another that seems inaccurate. The point is how factors outside the control of the gardener disrupt the work in the metaphor. In programming, it’s usually the programmer or manager doing something avoidable to disrupt it. A realistic metaphor would have the farm owner, main gardener, and other gardeners all doing stuff that hurts the crops even as the gardeners try to grow more. Maybe planting things that use up too many nutrients, pesticides that are too harmful, one director pivoting into livestock with goats that eat the crops, or Japanese consultants helping them with erosion. At this point, the metaphor is straining understanding instead of helping.
What I’ve found just about everyone understands are illustrations of bosses or customers all demanding different, vague things from the same employee. The employee vainly tries to get consensus, asking questions that result in more confusion or proposing compromises that get partly or fully shot down for questionable reasons. Then, combine that with illustrations of people on teams of different skills, styles, levels of organization, etc., working on the same problem, with the effects that brings. Then say or illustrate that programming problems in the real world often come from a combination of both. Maybe also illustrate positive behaviors in individual programmers, programming teams, and interactions with bosses/customers. Although people don’t always understand tech, they do usually understand the troubles of working with other people.
I think the big difference between software and other engineering fields, or even artisan fields, is that we allow requirements to change. Because there are few fixed costs in software, it’s completely acceptable to change the scope of software we build to capture more customers, satisfy more feature requests, or make your users just a bit happier. Also because software is a recent endeavor, it’s heavily infused with our capitalist (don’t necessarily mean this as a pejorative) idea that the user/customer is always right.
As an aside: The reason why I think we see so many people even today tinker with wood and metal is that it’s not considered the responsibility of the builder to satisfy every need of the customer, the way the idea of a feature request is built into the idea of writing software. I think a significant portion of computing pioneers like Kay and Chuck Moore saw software being about parts that users customize, like tables, chairs, and sofas, rather than being ever-changing globs of functionality.
The title is a little dramatic.
Despite the self-deprecation, it does sound like they were fixated on this one language and refused to consider others seriously. I’m sure the author could have picked up the other languages, and I’m sure they would have found trade-offs and things to like and not like when compared to LISP and been very good at programming in them too. It’s just that they did not want to.
It should more accurately be “I refused to re-tool when companies changed their tech stack”.
He was fixated but did relent eventually. In his own words:
What’s more, once I got knocked off my high horse (they had to knock me more than once – if anyone from Google is reading this, I’m sorry) and actually bothered to really study some of these other languages I found myself suddenly becoming more productive in other languages than I was in Lisp. For example, my language of choice for doing Web development now is Python
I agree about the title being dramatic, but Kenny Tilton’s proposal makes for a better headline ^_^
“How Lisp Made Me Look Good So I Could Get Hired By A Booming Company and Get Rich Off Options”?
http://coding.derkeiler.com/Archive/Lisp/comp.lang.lisp/2006-04/msg01691.html
J/king aside your point makes sense in the abstract, but can you make it more concrete? Like what trade-offs would they have found in the late 90’s?
Hi, I think it’s there in the article. They found that C++ and Python had caught up, so I assume they took a look but did not commit, being too enamored of one language. I accept that different languages have trade-offs, but refusing to use a language others are using productively is not a virtue.
They found that C++ and Python had caught up
That is not how I read it, they saw people being productive on them in spite of it (in their impression).
Because I don’t have context/information about the state of Lisp implementations vs. Java or Python back then, I can’t really weigh in on the truth of those statements (from what I’ve heard, the GC of free implementations was really slow, for example). Even today I struggle to see anything that Python, the language, does better than Lisp. There are other factors that may make it a more sensible option for an organization or even for an individual developer (wealth of libraries), but nothing about the language itself that seems like a trade-off to me. It is less extensible, slower (CPython vs SBCL), and less interactive.
Seems like he tried, but knowing Lisp by heart makes you more aware of all that PLT stuff (I think), and he considered every other language inferior.
You clearly don’t want to work with the tools/languages/concepts which seem to be bad for you, even when companies weren’t enlightened by Holy Lisp paradigms and wanted to use Java instead.
This seems like the attitude he had, but it also seems wrong.
Practical use has a very different set of goals from exploration & learning, and the two have values that conflict. So, one should go about them in different ways. (Of course, when you’re learning for work, this is difficult. I don’t recommend it, unless you can convince someone to pay you to learn things correctly!)
When you’re learning a new technology, you should focus on doing things the hard way – reinventing square wheels, etc. You should burn the bad parts into your memory. (That way, you know how to avoid them, and you know how to make the best of them when you need to.) The easiest way to do this is to specifically seek out the wrong job for the tool.
Once you’ve learned the technology, when you’re on the exploit side of the explore/exploit division (i.e., when somebody has given you a task and a budget), you should be using the right tool for the job.
General purpose languages are good-enough tools for many jobs, and lisp was clearly a slightly better tool for all the jobs that his peers were doing in other general purpose languages. But, it’s not like lisp doesn’t have areas where it falls down and requires awkward code. He could have honed his skills even in lisp by focusing on those areas (and then he’d have better tolerance for languages like C, where many more things are awkward.)
One should know what to avoid, but seeking out bad tools is an unnecessary extra step for an experienced dev. I don’t need to see 50 ways to experience buffer overflows to know that checking buffers by default should be in the language.
Likewise, I know a few features, like easy metaprogramming and incremental compilation, that dramatically boost productivity. Using a language without something similar isn’t going to teach me anything other than that the language author didn’t add those features. Now, it might be worthwhile to try one that claims similar benefits with different techniques. That’s not what most job languages are offering, though.
One still doesn’t have to learn them grudgingly all the way. They will have useful tools and libraries worth adding to better languages. Redoing them with those languages’ benefits can be a fun exercise.
The way I see it, staying on the happy path while learning leaves you blissfully ignorant of unpublicized downsides and awkward corners. (It’s how you get people using hadoop and a large cluster for problems that are 80x faster with a shell one-liner on one core – they don’t really know what their favorite technology is bad at.) The fact that most professional devs don’t know the ugly corners of their preferred technologies makes learning those corners even more valuable: when backed against a wall & locked into a poor-fitting tech, the person who has put in the effort will know how to get the job done in the cleanest & least effortful way, when all possibilities look equally ugly and painful to the casuals.
This doesn’t mean learning these ugly corners has to be grudging. A good programmer is necessarily a masochist – the salary isn’t worth the work, otherwise. Exploring the worst parts of a language while challenging yourself to learn to be clever with them is lots of fun – when you don’t have a deadline!
Facing the worst parts of a technology head-on also encourages you to figure out ways to fix them. That’s nice for the people who might follow you.
I don’t need to see 50 ways to experience buffer overflows to know that checking buffers by default should be in the language.
I concede this point, but I think it’s irrelevant. I don’t suggest we dive into uninteresting language flaws (or bother diving deep into languages that are shallow re-skins of ones we already know). But, writing a web server or graphics library in Prolog is a real learning experience; writing a befunge interpreter in brainfuck likewise.
I like to learn very different languages so I have a new viewpoint. In Haskell space is function application, in Joy it’s “apply this to the stack”. Unification in Prolog was mind expanding, as well as definite clause grammars. When I find something that’s easy and powerful in one language, and not in another, I try to understand what underlying design and implementation decisions were involved. I’m still fighting with APL, but I’m starting to see the good sides.
I enjoy solving programming katas or advent of code problems in these out of the way languages, there’s so much understanding to gain!
good programmer is necessarily a masochist – the salary isn’t worth the work, otherwise.
I think there’s a delicate balance to be struck between knowing the crufty bits of your language (like JSON un/marshaling in Haskell) and spending so much time banging your head against these parts that you become a specialist in the language/environment and refuse to leave its confines. While I’ve definitely met professionals who choose their language based on flashy tours that reduce cruft and constantly reach for different languages without learning the details of one, I also think you shouldn’t outright reject new and different paradigms by getting turtled into a single paradigm.
Definitely. If your familiarity with a language prevents you from using a better match, you’ve failed. I think it’s important to be deeply familiar with the ugly bits of a wide variety of very different languages.
Building an IRC <-> Slack gateway with some friends after Slack’s announcement that they won’t be supporting the gateway going forward.
I like being aware of this. Sometimes people try to improve things that are perfectly OK already, and when you examine why, it’s usually because they feel that they need to do something. It happens outside of software and engineering too.
Isn’t it better though to have an ELF binary which just returns 0, rather than having to start a shell interpreter every time you want to invoke /bin/true? Also, when every other part of a collection of tools (in this case GNU coreutils) follows a convention (i.e that --version prints version information and --help prints usage information), is it really better that /bin/true is the one binary which doesn’t follow that convention?
This seems like a classic case of making the world a little bit better.
Isn’t it better though to have an ELF binary which just returns 0, rather than having to start a shell interpreter every time you want to invoke /bin/true?
I can see an alternate viewpoint where it just seems like bloat. It’s yet another chunk of source to carry around to a distribution, yet another binary to build when bootstrapping a system, and other holdovers.
Also the GNU coreutils implementation of true is embarrassing. https://github.com/coreutils/coreutils/blob/master/src/true.c is 65 lines of C code and accepts 2 command-line arguments, which means that the binary has to be locale aware in those situations.
Yep, I’d call depending on sh to deal with /bin/true bloat, if only due to the overall extra time to exec() sh. Times are with a warm cache, not cold. Yes, this is golfing to a degree, but this kind of stuff adds up. A minimal binary, in my opinion, is not worse; it’s seeing the forest over a single sh tree.
$ time sh -c ''
real 0m0.004s
user 0m0.000s
sys 0m0.003s
$ time /bin/true
real 0m0.001s
user 0m0.000s
sys 0m0.002s
Even that, though, is GNU true; compared to a return-0 binary it’s also slow due to the locale etc. stuff you mention:
$ time /tmp/true
real 0m0.001s
user 0m0.001s
sys 0m0.000s
I’m pretty sure it’s just a correction to the glut of entry level hires produced in the last 4 years. When I was in college and grad school, software jobs were rare and not really seen as lucrative (well, not any more so than more respectable/traditional engineering jobs such as hardware/silicon). With the boom in tech companies’ hiring, colleges and boot camps responded by flooding the market with entry level talent. Now the rate at which juniors are being produced is more than there are spots in the industry. Eventually “word” will spread about how hard it is to land an entry level programming job, and fewer students will go into software.
This article provides a good path forward when the organization also disincentivizes overwork, but I’ve been at companies that don’t. One of them specifically pushed back against adding more testing, tooling, or process for reliability. This company didn’t want to “waste time on non-features” and basically expected engineers to stay late and deliver code rather than working on tooling.
I’ve been trying this recently but it’s extremely difficult! I’ve always been a really slow-starter when it comes to programming - I take forever to adjust to codebases and digest a problem. I need to spend a while reading through and thinking about things to map out a solution in my head before I start coding. I also take a while to adjust to new workflows and build up a groove for getting fixes out quickly. And sometimes it’s those later hours that give me the room to avoid distractions.
Maybe that’s just a sign that we need better tooling and I need to improve my ability to focus though. I mean, I am on Lobsters right now.
I’m like you and used to grind pretty long hours in the beginning (when encountering a new codebase, new paradigms, etc.), but I tried to do something similar to what this article suggests (for me it was motivated more by carving out personal time for side projects) and developed some skills to help me get into a new problem. Delivering on time while keeping a healthy work-life balance is its own skill, and it’s a shame the industry doesn’t try to develop it more.
This is a bold statement, I do quite a bit of ssh -X work, even thousands of miles distant from the server. I do very much wish ssh -X could forward sound somehow, but I certainly couldn’t live without X’s network transparency.
I find it okay for running things that aren’t fully interactive applications. For example I mainly run the terminal version of R on a remote server, but it’s nice that X’s network transparency means I can still do plot() and have a plot pop up.
Compression can’t do anything about latency, and latency impacts X11 a lot since it’s an extremely chatty protocol.
There are some attempts to stick a caching proxy in the path to reduce the chattiness, since X11 is often chatty in pretty naive ways that ought to be fixable with a sufficiently protocol-aware caching server. I’ve heard good things about NX, but last time I tried to use it, the installation was messy.
There’s a difference between latency (what you talk about) and speed (what I replied to). X11 mainly transfers an obscene amount of bitmaps.
I regularly use it when I am on a Mac and want to use some Linux-only software (primarily scientific software). Since the machines that I run it on are a few floors up or down, it works magnificently well. Of course, I could run a Linux desktop in a VM, but it is nicer having the applications directly on the Mac desktop.
Unfortunately, Apple does not seem to care at all about XQuartz anymore (can’t sell it to the animoji crowd) and XQuartz on HiDPI is just a PITA. Moreover, there is a bug in Sierra/High Sierra where the location menu (you can’t make this up) steals the focus of XQuartz all the time:
https://discussions.apple.com/thread/7964085
So regretfully, X11 is out for me soon.
Second. I have a fibre connection at home. I’ve found X11 forwarding works great for a lot of simple GTK applications (EasyTag), file managers, etc.
Running my IntelliJ IDE or Firefox over X11/openvpn was pretty painfully slow, and IntelliJ became buggy, but that might have just been OpenVPN. Locally within the same building, X11 forwarding worked fine.
I’ve given Wayland/Weston a shot on my home theater PC with the xwayland module for backward compatibility. It works… all right. Almost all my games work (Humble/Steam), thankfully, but I have very few native Wayland applications. Kodi is still glitchy, and I know Weston is meant to be just a reference implementation, but it’s still kinda garbage. There also don’t appear to be any Wayland display managers on Void Linux, so if I want to display a login screen, it has to start X, then switch to Wayland.
I’ve seen the Wayland/X talk and I agree, X has a lot of garbage in it and we should move forward. At the same time, it’s still not ready for prime time. You can’t say, “Well you can implement RDP” or some other type of remote composition and then hand wave it away.
I’ll probably give Wayland/Sway a try when I get my new laptop to see if it works better on Gentoo.
I love the world Perkeep (née Camlistore) is trying to create, but we’re still too far out, and I don’t see why businesses would want to adopt it.
agreed. What drove me nuts with this project is that I am a technically savvy user, and many of the services I evaluated are designed for typical consumers – yet the solution was still non-trivial. For many folks, when asked about these problems I just say “use Backblaze (the app), Dropbox and Google Photos.”
Alternative 1a is to use Alternative 1 with https://github.com/BurntSushi/go-sumtype /plug
go-sumtype requires the interface to be sealed (which you’re already doing) and one small annotation:
//go-sumtype:decl TheInterfaceName
Then you just run go-sumtype
$ go-sumtype $(go list ./... | grep -v vendor)
and it will do exhaustiveness checks in any type switch in which TheInterfaceName participates. This will prevent the “For example, during a refactor a handler might be removed but a type that implements the interface is not.” failure mode mentioned in the article.
Yes, Patchwork, built on top of Secure Scuttlebutt (SSB), is the best I’ve seen so far.
Mastodon / GNU Social are so-so in terms of privacy, not very good, but better than Twitter.
Patchwork still has to solve the problem of multi-device accounts and it would be nice to have it work inside the browser instead of requiring electron, but it’s definitely the coolest social network around.
The multi-device stuff is being worked on and browser support might be coming soon, thanks to incoming firefox 59 ssb support in web extensions.
There are other SSB clients that don’t require Electron, but none of them are really as polished as Patchwork is. Check out Minbay and Patchbay, among others.
I’m also very happy with the (relative) ease of use of OpenBSD.
I missed the existence of Void. Is there any real advantage over Debian besides no-systemd?
To each their own poison. But I like Void because
$ fortune -o void
The tools for package cross-compiling and image building are pretty awesome too.
While there are more packages for the glibc variant than the musl variant, I would not characterise this as “not many packages”. Musl is quite well supported and it’s really only a relatively small number of things which are missing.
Void has good support for ZFS, which I appreciate (unlike say Arch where there’s only unofficial support and where the integration is far from ideal). Void also has an option to use musl libc rather than glibc.
Void has a great build system. It builds packages using user namespaces (or chroot on older kernels), so builds are isolated and can run without higher privileges. The build system is also quite hackable, and I heard that it’s easy to add new packages.
Never tried adding a package, but modifying a package in my local build repository was painless. (specifically dwm and st)
Things I find enjoyable about Void:
The fish shell package uses Python for a few things but does not have an explicit Python dependency. The system doesn’t even come with a crond (which is fine; the few scripts I have running that need one I just put in a script with a sleep).
That said, my go-to is FreeBSD (haven’t gotten a chance to try OpenBSD yet, but it’s high on my list).
I’d use Void, but I much prefer rc.d; it’s why I like FreeBSD. It’s so great to use daemon_option= settings to do things like run a firewall for clients only, easily run multiple uwsgi applications, run multiple instances of tor with different options (for relays; it doesn’t really make sense for a client), use dnscrypt_proxy_resolver to set the resolver, set general flags, etc.
For so many services all one needs to do is to set a couple of basic options and it’s just nice to have that in a central point where it makes sense. It’s so much easier to see how configuration relates if it’s at one single point. I know it doesn’t make sense for all things, but when I have a server, running a few services working together it’s perfect. Also somehow for the desktop it feels nicer, because it can be used a bit like how GUI system management tools are used.
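To illustrate, that central point looks something like this in /etc/rc.conf. This is a sketch only: the service instances and flag values are made-up examples, and while the variable names follow the usual name_enable/name_flags rc.conf conventions, treat the specific knobs as assumptions rather than a real setup.

```shell
# /etc/rc.conf -- all service configuration in one place (illustrative values)
pf_enable="YES"                      # firewall on
pf_rules="/etc/pf.client.conf"       # hypothetical client-only ruleset
uwsgi_enable="YES"
uwsgi_profiles="blog wiki"           # two uwsgi application instances
uwsgi_blog_flags="--ini /usr/local/etc/uwsgi/blog.ini"
uwsgi_wiki_flags="--ini /usr/local/etc/uwsgi/wiki.ini"
dnscrypt_proxy_enable="YES"
dnscrypt_proxy_resolver="example-resolver"   # placeholder resolver name
```

Each rc.d script reads its own variables out of this one file, which is what makes it easy to see at a glance how the services relate.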
In Linux land one has Alpine, but I am not sure how well it works on a desktop. Void and Alpine have a lot in common, even though Alpine seems more targeted at servers and is used a lot for containers.
For advantages: If you like runit, LibreSSL and simplicity you might like it more than Debian.
However I am using FreeBSD these days, because I’d consider it closer to Linux in other areas, than OpenBSD. These days there is nothing that prevents me from switching to OpenBSD or DragonFly though. So it’s about choosing which advantages/disadvantages you choose. OpenBSD is simpler, DragonFly is faster and has recent Intel drivers, etc.
For security: On the desktop, I think that other than me doing something stupid, by far the biggest attack vector is a bug in the browser or another desktop client application, and I think neither OS will save me from that on its own. Now that’s not to say it’s meaningless, or that mitigations don’t work, or that it’s the same on servers; it’s more that this is my threat model for the system and use case.
I think I mostly agree with the premise here… I tried FreeBSD but had a hard time being happy with it compared to simply using a systemd-less Linux like Void or Alpine.
OpenBSD on the other hand fascinates me, mostly because of the security focus and overall simplicity, I think part of that idea of focused goals is the same reason I’ve been starting to keep up with DragonFlyBSD development, the drive to do something different than the mainstream can be a strong motivator of interest.
But realistically, I don’t see something like FreeNAS dying anytime soon; some of my IT friends swear by it exclusively.
I love running FreeBSD. I run Void whenever I have to run Linux, but honestly running FreeBSD is so much fun. The system makes so much sense, there are so few running processes. Configs are kept in the right places, packages that are installed just work, upgrades almost never broke anything, and in general there was a lot less fiddliness. I want to run Void from time to time to get the new and shiny (without having to build it for a custom platform), but in both Debian and Void (the systems I run), packages are of varying quality, and upgrades are always stressful (though Void’s rolling-release nature makes it less so). FreeBSD’s consistency also makes me feel a lot less scared about opening it up and fiddling with the insides (such as trying my hand at creating my own rc unit runner or something), whereas with Linux I often feel like I’m peering at the edge of a Rube Goldberg machine.
Oh and don’t get me started on the FreeBSD Handbook and manpages. Talk about documentation done right.
“Rube Goldberg machine” is a great description for much of the Linux world. Especially Debian-style packages with their incredibly complex configuration hooks and menus and stuff.
My favorite feature of pkgng is that packages do not add post-install actions to other packages :)
I still can’t get over the fact that installing a deb for a service on a Debian-based distribution starts the service automatically. Why was that ever considered a good design decision?
I personally run Gentoo and Void. I had FreeBSD running really well on an older X1 carbon about two years back, but the hardware failed on the X1. I do use FreeBSD on my VPS for my openvpn server, but it seems like FreeBSD is the only one supported on major VPSes (Digital Ocean, Vultr). I wish there was better VPS support for at least OpenBSD.
Don’t get me wrong, I like FreeBSD; I’ve just never felt the same fascination towards it that I do with OpenBSD, DragonFlyBSD, Haiku, ReactOS, or Harvey. But perhaps that’s a good thing?
I guess the main thing is I’ve never been in a situation where I didn’t need to use Linux/Windows and couldn’t use OpenBSD.
FreeBSD seems to do less in-house experimental stuff that gets press. Dragonfly has the single-system image clustering long-term vision, OpenBSD is much more aggressive about ripping out and/or rewriting parts of the core system, etc.
I do feel most comfortable with the medium-term organizational future of FreeBSD, though. It seems to have the highest bus factor and strongest institutional backing. Dragonfly’s bus factor is pretty clearly 1: Matthew Dillon does the vast majority of development. OpenBSD’s is slightly higher, but I’m not entirely confident it would survive Theo leaving the project, while I don’t think any single person leaving FreeBSD would be fatal.
I’m not entirely confident it would survive Theo leaving the project
There is no reason to worry about that: http://marc.info/?l=openbsd-misc&m=137609553004700&w=2
FreeBSD seems to do less in-house experimental stuff that gets press
The problem is with the press here. CloudABI is the most amazing innovation I’ve seen in the Unix world, and everyone is sleeping on it ;(
I tried FreeBSD but had a hard time being happy with it compared to simply using a systemd-less Linux like Void or Alpine.
The Linux distro that’s closest to the *BSD world is Gentoo - they even named their package management system “Portage” because it’s inspired by *BSD ports.
As a long time OpenBSD & Gentoo user (they were my introduction to BSD & Linux respectively and I’ve run both on servers & desktops for years), I strongly disagree. If I wanted to experience BSD on Linux, Gentoo would be the last thing I’d look at.
If I wanted to experience BSD on Linux, Gentoo would be the last thing I’d look at.
Then you are way off the mark, because the closest thing to *BSD ports in the Linux world is Gentoo’s Portage and OpenRC is the natural evolution of FreeBSD’s init scripts.
Over the past decade, I’ve used ports once or twice. Currently I don’t have a copy of the ports tree. In this day and age, ports & package management are among the least interesting properties of an operating system (if only because they all do it well enough, and they all still suck). OpenRC might be OK, but the flavor of init scripts doesn’t exactly define the system either.
My idea of BSD does not entail spending hours fucking with configs and compiling third party packages to make a usable system. Maybe FreeBSD is like that? If so, I’m quite disappointed.
Play Minecraft (and maybe some minetest) and Terraria with my five year old, fill up some more moving boxes, and try to get linux on my odd smartphone.
What Minetest mods do you play with?
Dunno, let’s see:
Wow, it’s been longer than I thought since I touched this. The loyall-minetest-mod is locally developed. It implements a “stereo” item that can be placed, which plays music. You can toggle the music on or off by ‘hitting’ the block.
Doing this helped my son understand what a programmer does, I think. He immediately asked me to implement vehicles and other non-trivial things. :)
Since I’m talking about it, I might as well publish it. Note, there’s nothing in here that wasn’t cribbed from other existing mods. Except for the stereo/record player artwork, which was derived from photos of actual equipment in a way that I believe is fair use. You’ll note that the stereo is actually two distinct blocks placed next to one another, one of which is inert… :) It’s really just a proof-of-concept.
Hah, also have big plans for Terraria this weekend ;D
So… we (my friend and I) have conquered almost all the bosses ;D
We had a setback this weekend… we lost our flying piggy bank due to a save error and/or a cloud sync error. Steam on GNU is not really stable…
oh :( do you play via standalone server? looks like it’s more stable
I start Terraria on two computers with this invocation:
steam steam://rungameid/105600, and then on one I select the ‘Host & Play’ option, and I set the ‘Steam Multiplayer’ option to ‘disabled’. Then on the other computer, I select ‘Join by IP’, and I provide a hostname (not an FQDN, but it exists in /etc/hosts) and a port (the default).

Hm, some games have ‘headless’ servers. Is that what you’re referring to? Is that an option?
I do have the ‘save to cloud’ option selected for our players and worlds.
I may be mis-remembering the specific strings used to identify each menu option, sorry.
I think I caused the inventory items to be lost when I terminated a process… because it was just sitting there taking up CPU after we were done playing. My machines do literally nothing when they are idle, not even update an on-screen clock, so you can always tell when something is still running, because the fans become audible. So, I killed the Terraria and Steam processes…
yep, I’m referring to headless server. See Dedicated server at the bottom of the page: https://terraria.org/
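For anyone curious, the Linux dedicated server is a standalone binary that reads a plain-text config file rather than going through Steam at all, which sidesteps the cloud-sync problem. A minimal sketch (the world path, world name, and password here are placeholders, not anything from the official defaults):

```
# serverconfig.txt -- minimal sketch, values are placeholders
world=/home/user/.local/share/Terraria/Worlds/ourworld.wld
autocreate=2          # if the world file doesn't exist, create a medium one
worldname=ourworld
port=7777             # Terraria's default port
maxplayers=8
password=changeme
```

You then launch it with something like `./TerrariaServer.bin.x86_64 -config serverconfig.txt` and connect from the game via ‘Join by IP’ as before. Note the server only holds the world; each player’s character (and inventory, including things like the piggy bank contents) still lives in their local player file, so those are worth backing up separately.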