I believe we are beginning to see the downfall of YouTube as we know it. They are really going above and beyond to ruin their own platform and reputation.
That has been happening for a couple of years now. All the content that made YouTube popular is nowadays shunned and banned by the recommendation algorithms. In short, if it cannot be monetized by US linear-TV standards, it cannot be found in search or recommendations. So unless you already have several hundred thousand subscribers (with ads enabled), your content is family friendly, and you have used thousands of dollars' worth of equipment, there are no new viewers.
This hit people filming motorcycle-related videos pretty hard, as apparently that is very unsexy media content in the US. That describes most of my YouTube subscriptions, most of which I watch every video from. Meanwhile, my YouTube “home”/“recommended” section is full of things that are not related in any way to what I actually watch most.
Yes. This is the straw that breaks the camel's back. The blocking of help videos for a 3D modeller is going to be the downfall of YouTube. Unable to learn how to use their 3D modelling software, the masses will wander off to different venues in droves.
/s
(Without snark: nobody outside of our little circle here cares about this. Not the advertisers, not YouTube, not the general audience, not the press. This is entirely inconsequential to YouTube's future.)
You might compare it to gentrification. You cater to the middle ground, the cool stuff around the edges is pushed out, the really creative people abandon the platform, you’re left with the most generic content. Blender is just the latest victim of a broad trend.
Most people may not “care” about Blender specifically, but they should care about an opaque platform that caters to the IP needs of multinationals in overly broad ways and incentivizes some really messed up behavior.
It will be awesome to see what the video hosting landscape will be like when PeerTube reaches its height of popularity!
I was checking out PeerTube yesterday and it's a huge change from the YouTube user experience. A lot more involved, and a lot less intuitive. I have a hard time imagining mass adoption with what I saw. Are there any good beginner-friendly tutorials/intros to PeerTube out there?
Take a look at https://d.tube/ too. It's much closer to the YouTube experience.
You can always check out this, I guess: https://joinpeertube.org/en/#how-it-works
Standups should not
Try to resolve an issue live
Don't try to troubleshoot something or hash out details in a standup. Use the standup to report it, then grab someone after the meeting. Otherwise you are wasting everyone's time. This one drives me nuts, as it seems to happen in my standups all the time.
Include a measure of how productive you are.
If you need that measure, there are burndowns. It's also generally a good idea not to include any unnecessary management, as that tends to devolve into "make yesterday sound productive", as @pab said.
Be longer than 1 minute per person.
You want to report your current position and where you are going. If it takes you more than 60 seconds to report this, you are probably running into the prior two bullet points.
Be longer than 10 minutes in total.
You don't want to give devs time to glaze over. If it's taking longer, either brevity is suffering at the hands of loquacious individuals, or the team may be getting too big.
Happen prior to caffeination, adjacent to lunch (unless the team eats together), or near EOD.
Standups should facilitate follow-up communication between members. You want people to be alert enough to help, and not at a time when they are trying to wrap things up because they have something else to do.
Yup, I think you hit the nail on the head. I did standups years ago as a project lead. First 5-10 minutes of work: 1) what are you doing, 2) any foreseeable pain points; then connect with the peers your tasks require you to coordinate with.
Done. Everyone thought they were extremely productive (to my knowledge).
Some tools that we use to achieve these goals:
It is always OK, and preferable, for someone to say "can we/you continue this discussion after the daily?". Not everything that is currently interesting and relevant to you is so for everyone. It is very easy to forget this when you get excited!
Keep everything short: it is OK if nothing peculiar is happening or you don't need input/help!
Always briefly go through every task that has been worked on since the last daily, from the end of the process pipeline to the beginning (in our current case: tasks moved to production -> tasks moved to ready -> tasks moved to review -> tasks moved to in progress -> stories taken into in progress -> stories groomed into the backlog). There are a couple of reasons for this "backwards" order. First, it gives a personal productivity measure (I deployed/did stuff that needs review/etc.). Secondly, it creates a natural pull for people to review and take new tasks/stories to work on, which removes an insane amount of that fruitless "is this the next thing, or this, or this" kind of conversation. Having a physical kanban wall makes this really easy, by the way.
The first couple of items at the tip of the backlog are in strict priority order, so when the previous story is done, one just takes the next one to work on. No need to discuss this during the daily.
Pick a time when the daily starts. Be very strict about this. Making others wait is rude; no one is that important.
Currently our team is 16 people, and our dailies take 5 to 15 minutes, depending on how much churn there has been.
FYI, the first known recorded standup, from the highly productive Borland Quattro Pro team, was an hour long. Standups should be mini planning meetings, not status meetings.
Nah.
Libraries have bugs, silicon has bugs, external systems have bugs, but no matter whose fault it is, for a mission-critical device I have to follow the bug to wherever it is and fix it.
And no, I can’t wait for upstream to make a new release.
If anything the industry is going the other way.
Yes, we pull in gigabytes of source code from OpenEmbedded. That just means I have gigabytes of source code to go hunting and fixing and enhancing.
I don’t feel in the least bit extinct.
The difference is in the amount of functionality I can offer the customer: literally orders of magnitude more than in the good old bad old days of hand-crafting everything ourselves.
And still, almost all embedded houses I have been contracted to were in the "we do everything ourselves, we will need something of our own down the road" mentality. There is so much untapped potential to be unearthed with open source that I have a hard time seeing embedded developers die out soon.
I bet if you were in the group
"browsers": [ ">0.25%", "not ie 11", "not op_mini all" ]
it would really look awesome. Time to upgrade! ☺
I’ve used C for a while (and I now work on a C/C++ compiler) and I see it this way: you should be very reluctant to start a project in C, but there’s no question you should learn it. It mainly boils down to point 3 in the article: “C helps you think like a computer.”
But really, it helps you think like most computers you’re going to use. This is why most operating systems are written in C: they are reasonably “close to the metal” on current architectures. It’s not so much that this affords you the opportunity for speed (it does, since the OS or even the CPU is your runtime library), but because you’re not that far removed from the machine as an API. Need to place values in a specific memory location? That’s easy in C. Need to mix in some assembly? Also pretty easy. Need to explicitly manage memory? Also not hard (to do it well is another matter). Sure, it’s possible in other languages, but it’s almost natural in C. (And yes, not all I’ve mentioned is strict C, but it’s supported in nearly all compilers.)
All this doesn't mean I like it, but that's the reality. I'd rather see more variety in computer architectures such that something safer than C were the default. I'm always looking for the kind of machine that rejects the C model so thoroughly that C would actually be awful to use. Unfortunately, those things tend to not have hardware.
I found that learning C was not very helpful in this regard (though I have no doubt this is partly because I was badly taught in university). What finally made it click was learning Forth. C’s attempt at a type system makes it easy to imagine that things other than bytes have reified form at runtime, whereas Forth gives you no such illusion; all that exists is numbers.
When I came back to C afterwards, everything made so much more sense. Things that used to be insane quirks became obvious implications of the “thin layer of semantic sugar over a great big array of bytes” model.
I had this same problem, but for me the thing that made everything click together was using assembler. Pointers (and other C types, to some extent) are a really wonderful abstraction, even though they are "a bit" thin. And that power of abstraction hides all the machine details if one does not yet know how to look past it.
The reasons you mention are really why I still use it at all. It comes with me to almost any device I feel like programming, and I do think it sometimes makes sense.
For example, programming the GBA is quite easy in C, and it doesn’t really matter if someone breaks my game by entering a really long name or whatever (in fact, I love things like that.)
I hope Rust will one day be my trusty companion but it’s not quite there yet.
Well Rust is getting there. I sometimes think it would be fun to program such systems but when I think about using C for that I always cringe, so Rust might be a viable option in the future.
It appears that this release will run arbitrary JVM bytecode using the system JDK, if included in a Blu-Ray ISO: git.videolan.org As far as I can tell, it uses a SecurityManager to attempt to sandbox. Here’s a summary of the efficacy of this approach: https://tersesystems.com/blog/2015/12/29/sandbox-experiment/
I don’t see any information on even a cursory security audit of this component. Is this alarming to anyone else?
Is this alarming to anyone else?
They put Java into some standard for Blu-ray. A lot of places and things were using it thanks to the big marketing push back in the day. Then, to use that or part of it, you need to run Java. As usual with Java, using it is a security risk. This kind of thing happening in tech or standards designed by big companies for reasons other than security is so common that it doesn't even alarm me anymore. I just assume some shit will happen if it involves codecs or interactive applications.
Old best practice is to run Internet apps and untrustworthy apps on a dedicated box. Netbooks got pretty good. Substitute a VM if trading safety for cost/convenience. Mandatory access control is next, with low assurance. After that, you're totally toast.
Like @nickpsecurity said, it is used in a lot of places, so not including it is the same as raising a middle finger to your users (they cannot watch their expensive discs), and they will go use some other application, which is probably even less secure. We really cannot make ordinary users stop wanting to use their goods just because we now know those goods are insecure. Having a secure system does not matter if no one uses it.
I haven't used VLC for ages, and even back then I preferred MPC-HC (on Windows) or mplayer2 (on Linux). I've been a happy mpv user (on Windows and Linux) since its very beginning.
Are there any users who use both mpv and VLC often, and could shed some light on what VLC has that mpv cannot provide them?
Opening videos straight from the browser's "open with" dialog. Using subtitles with VLC is more convenient for me. A few years ago I used VLC more, when mpv had problems with some Matroska containers; I don't know if that is still a problem.
No GUI front-ends. I'm a mostly keyboard-oriented user, but mpv's built-in OSC is actually good too, so sometimes I operate with the mouse in an mpv window.
This is actually a really good article. To summarize one of its key points: your mental intuition for how signal congestion works under wifi is wrong. A stronger signal won’t “drown out” noise from other networks since wifi enforces collision avoidance (i.e. “waiting your turn to speak”) even between different networks. The better idea is to have a mesh of smaller, lower-powered networks and to keep as many devices wired as you can.
Yes, this article is quite good. But it (necessarily) omits some details, such as the effects of dynamic rate adaptation and the differences in degree of propagation between data rates. This is very pronounced in the 2.4 GHz band. Low-data-rate frames use a much less "fragile" kind of modulation than fast data frames do. So you may see beacons (sent at the lowest rate) from other networks, but issues only arise if you are close enough to also see many of the data frames (unless the other networks protect all frames with RTS, which "reserves" the medium for a while; that's yet another detail the article skipped).
All those details fill entire books, which is a core problem of wifi: It is very complex and can’t easily be described in marketing terms without telling lies.
It is very complex and can’t easily be described in marketing terms without telling lies.
Definitely, but one of the problems is that everyone lists the unicorn maximum speeds. It would be better to have speeds measured in a couple of realistic, controlled, standardized setups. Of course, it is hard to standardize this across the industry when there is no urge to.
I was a QA manager for one of the enterprise wireless startups back in 2002. I can tell you that there certainly was an urge to do that across the industry at the time. It didn't happen because, as the author of the article said, it is extremely difficult to come up with a wifi performance-testing methodology that is realistic, standardized, and controlled all at the same time. Forget about the cost or the political reasons: it is technically difficult to do.
Here’s how we tried to cover our realistic, controlled, and standardized testing:
Realistic – we held a monthly lease on a large, unoccupied office space and installed our system in it. We also ‘ate our own dogfood’ at the office.
Controlled – We ran 802.11 over 802.3 for repeatable protocol and scaling tests. (Yes, we ran the WiFi protocol over the Ethernet phy, and a lot of our wireless testing was done on wired Ethernet.) We had an anechoic chamber for doing phy layer tests.
Standardized – Once the test vendors started catching up, we had access points stuffed in little isolation chambers, antennas removed, radios cabled directly to 802.11 test gear provided by the likes of Spirent and Ixia.
As a test guy, working on wifi was super fun. It is a complex and bodged-together protocol, and it has such a wide failure curve to play around on and explore… it's not like any of the wired networking protocols that came before it. And that's why the marketing around it is so jacked up. Marketing needs to be simple, and wifi simply isn't.
it is extremely difficult to come up with a <insert your technology here> performance testing methodology that is realistic, standardized, and controlled all at the same time.
Measuring in the software world always seems to be insanely hard, more so when hardware is involved. Sure, it is always easy to find something to measure, but when one thinks a bit further about how useful those metrics are… they usually aren't.
Yes, as soon as one vendor starts providing inflated numbers, the rest follow :(
Boiling product performance down to a single number is the best possible situation as far as the marketing department is concerned. It just sucks for everyone else.
Kinda bizarre that they aren’t advocating Promises to make life simpler, but yeah this is an interesting read.
I’ve found that the crowd Yoshua codes with (mafintosh, Julian Gruber, Max Ogden, Dominic Tarr) in general doesn’t use Promises. Not sure why but that explains their absence.
I guess because you need to polyfill them in the browser, and also the standard Node libraries don’t use them.
That is why libraries like Bluebird have promisify- and promisifyAll-type helpers, which wrap the normal Node callback-style API and give a promise-based API. It kind of reminds me of C++ libraries exposing a C API so people outside of the C++ ecosystem can use them.
Yes indeed, and along with that you add another dependency (Bluebird) and extra "stuff", like promisify calls, to your codebase. Perhaps it's worth it, perhaps not. In either case it's a step away from the minimalist philosophy advocated by this document, which is why I think promises are not included.
Eh, a minimalist life philosophy may mean that buying and keeping soap is extraneous, but that doesn't make such an approach hygienic!
Promises are a feature that, in Node, has for a while now been reliably available in core. And they're a hell of a lot neater than callbacks.
The more I look at it, the more places I disagree. LevelDB instead of PG? Some weird shell script (installed via npm no less!) for handling Nginx?
Not convinced, but hey, that’s just like, my opinion, man. Other good stuff in there too though.
If you’re writing a web server or something that isn’t going to be a dependency I can see a case for using promises. But if you’re writing an npm library for public consumption, I think there are a few reasons not to use promises:
Exposing promises makes your library inconsistent with the core libraries (like fs).
Along these lines, if a consumer of your library prefers callbacks to promises, it's difficult for them to opt out. However, if you use callbacks in your library, there exist tools like promisify to let a consumer easily wrap your library with promises. For this reason I think putting promises into a library makes it overly opinionated.
A library that uses promises requires consumers to polyfill if they are using browserify and slightly older browsers. (This is even worse if the library uses promises internally but doesn’t expose them as part of its API because the polyfill requirement isn’t obvious.)
Besides requiring more work from end users, polyfills also increase byte size.
Finally, regarding polyfills: if callback-based library X depends on one's promise-based library P, consumers of library X might not realize they have to polyfill promises because of the indirect dependency on P, which can lead to an unpleasant surprise.
Why the fuck do you need block chain for everything ?
1. Everyone should register their Public Key with a central server.
2. Everyone can vote with Candidates Public Key and sign with their Private key.
3. A vote is a bloody byte at best. The costs of the server are trivial.
4. Let them fucking vote at their leisure after reading the manifesto or something.
5. Once the vote is done open up the damn data to let everyone verify the final counts with their signatures.
I agree with what I take to be your overall point - blockchains solve a very specific category of problem, and it’s frustrating how many people talk about applying them to things where they add enormous overhead for no benefit. I think, in particular, people think they’re magic anonymity sauce, although they don’t actually provide anonymity at all.
I do think that the non-tampering properties offered by blockchains are worth thinking about here, but @zeebo’s suggested properties elsewhere in the thread make even more sense to me, and I don’t see that a blockchain would be useful for those.
This is more-or-less how it’s done in Estonia, except:
Features:
It works because you can be quite sure that your government won't make your life miserable if you vote "wrong". I wouldn't say Estonia's system would work nicely in Russia, for example :)
Voting has two very important, totally conflicting requirements:
I don't see a way to perform voting electronically without either compromising one of the requirements or making vote tampering even easier than it is now.
Currently, election voting fulfills both of those goals by limiting the number of votes per person to one and by physically separating the strong validation of a person's identity (plus the check of their vote-available status) from the actual candidate selection. The person's voter ID is consumed, double voting is prevented, and the voter ID cannot be combined with the actual vote.
Sure, one cannot be sure that one's vote really has been counted for the selected candidate, but that is why there should be an independent body supervising the election (from the UN, etc.). The upshot is that if you cannot check whom you voted for, no one else can either, not even your own government. In the case of oppressive governments this requirement is absolutely needed, or otherwise people won't be able to vote freely at all. Also, even when a government systematically interferes with the voting process, it will be caught (for example, the statistical proofs of vote tampering in Russia).
With electronic voting, preventing double voting or voting on behalf of other people requires strong verification of the person plus vote counting, which means everyone either has a public key (Estonia's model?) or some kind of generated vote-voucher ID. In the first case it is obvious how easy it is to tie the vote to the person who cast it. In the second case one can, behind the scenes, keep track of who gets which ID and use that to tie people to their votes.
Even if there were a voting implementation that fulfilled both rules, how does the voter know that that implementation is what's actually running? Physical separation is easy to check, and you can just walk away if necessary, which gives another layer of protection.
Who makes the final decision to adopt this plan? Doesn't WoSign just immediately file a lawsuit and tie this up in court forever?
Tortious interference or promissory estoppel or detrimental reliance or something. I mean, Mozilla did set themselves up as gatekeepers, and they’re obviously aware their actions will have a detrimental effect on business…
So WoSign would sue Mozilla for no longer trusting companies (WoSign, Ernst & Young) which have clearly neglected and breached the terms of the service they were supposed to provide? WoSign literally is the gatekeeper; Firefox/Chrome/me/you are just users of the gatekeeper's service, and we all have the freedom not to trust them anymore. I don't see how any fair legal system could conclude that Mozilla has harmed WoSign: they operate in a trust-based business and threw it all away themselves through their short-sighted greed.
The better question is: lawsuit where? WoSign is Chinese, StartCom is Israeli, and Mozilla is US-based. There's no venue where any of them can sue each other.
qfind & qselect: a fast, flag-free find . -iname foo on steroids, plus a fuzzy finder/filter to find exactly what I need interactively.
Looks neat. Doesn't it bother you that you'll only see the list of files when the search is complete?
Not really; qfind is a lot faster (or it feels so, at least!) than find. On my first-generation SSDs it gives sub-second filters for the Linux kernel and mozilla-central repos, so it hasn't been a bother. It also feels very intuitive when combined with qselect for extra filtering, especially when using it inside other programs, for example in my open-file command in vim.
Hmm, weird. It isn't as fast on my MacBook Air, so I created a pull request that prints each file as soon as it's found.
Do you mind trying my temporary fork?
Did they? I don’t remember hot-swapping being a significant issue 20 years ago, before USB and Bluetooth. Outputs like monitors used to be unidirectional dependencies, now they’re a complex bidirectional negotiation. Systems are bigger, with many more services - keeping your tweaks to a dozen init scripts on a few servers updated with the new OS every 18 months or so was tractable; now there’s a few hundred and Arch pushes a few updates per week. These three concerns feed positively into one another. It wasn’t too bad to add a sleep 2 for that one slow hard drive to come up before mounting /usr, but getting drivers loaded in the right order while the user adds and removes bargain-basement peripherals that require or notify daemons interfacing with this week’s self-updated browser or Steam client is really hard to make Just Work, to say nothing of the problems that appear when the server room you used to rack boxen in is now a set of virtual machines that are live and die by the auto-scaler.
Slow hard drives are a problem other operating systems deal with too. The usual solution is for the kernel not to announce the presence of the device until the platters are spinning and the data is readable. It doesn’t require a multi daemon message bus in user land to determine when mount can run.
I know that’s just a made up example, but it’s a fine exemplar of the big picture. Solving a problem at the wrong layer means you have another problem which needs a solution at a different wrong layer and now you need another layer to fix that.
Systems are bigger, with many more services
In what sense do you mean this? The "cloud revolution" has made the systems (that most consumers interact with) smaller. Many companies run one thing per "server". Compare this to the era in which SMF was created, where there was one big honkin' machine that had to do everything. And even in the SMF world, SMF wasn't an amorphous blob that consumed everything in its way; it had clear boundaries and a clear definition of done. I'm not saying SMF is the solution we should have, or that it was designed for desktops (it wasn't), but rather questioning your claim of systems becoming bigger.
Tandems, high-end Suns, Motorola Unix, Apollo, and anything with EIDE or SCSI involved hot swap more than a decade before Bluetooth.
I will grant that the average linux contributor is more likely to paste ‘sleep 2’ in an init shell script than to try to understand shell, or init, but that’s a problem of the linux contributors, not of technology, or shell, or init, in my opinion.
They worked, albeit not well. It's not like systemd was the first sysvinit replacement, nor was Linux even the first UNIX-like to replace init scripts. I thought that honor went to Solaris 10, which introduced SMF 10 years ago (not the case; see ChadSki's post below mine). SMF solves most of the same problems listed in this Reddit post, and predates systemd by quite some time.
init/rc scripts don't handle dependencies well. Debian had some hacks for that, which relied on special comments in init scripts. OpenBSD and FreeBSD order the base system's startup for you, and rely on you to deal with ordering anything from packages or ports. SMF (and later systemd) keeps track of service dependencies and ensures that those are satisfied before starting services.
init/rc scripts are fairly slow, and that can't be avoided. Honestly, for traditional uses of UNIXes on traditional servers, this doesn't matter. For desktops, laptops, and servers that expect frequent reboots for updates (CoreOS's update strategy comes to mind), it does matter, and that's one of the places where systemd inarguably shines. IME I haven't seen similar speed from SMF, but it's honestly so rare that I reboot a Solaris box anyway.
init/rc scripts don't have any reliable mechanism for determining whether or not a service is running. PID files are very racy and unreliable. pgrep can also be suspect, depending on what else happens to be running on the system at the time. SMF and systemd reliably keep track of running processes, rather than leaving it up to chance.
Another point not well made in this article is that writing init scripts can be awful. Package maintainers save you from this pain 99% of the time, but if you ever have to write your own on Debian or Red-Hat, there’s a lot of boilerplate you need to begin with. To be fair, OpenBSD proves that this doesn’t have to be as painful as Linux distros made it.
Saying that it’s not a difficult problem or that it’s all already solved by sysvinit/openrc/rc doesn’t really cut it, because it’s straight up not honest. SMF does solve these problems, and it solves them well and reliably. I used to be a systemd fan, but over time I’ve grown much more skeptical given some of the defaults and the attitude the systemd maintainers have towards the rest of the community at large.
I’d love a tool like SMF for Linux and the BSDs. I’ve used runit before, but man is it awful.
Related: A history of modern init systems (1992-2015)
Covers the following init systems:
IBM System Resource Controller (1992)
daemontools (1997) + derivatives (1997-2015)
rc.d (2000)
simpleinit, jinit and the need(8) concept (2001-3)
minit (2001-2)
depinit (2002)
daemond (2002-3)
GNU dmd (2003)
pinit (2003)
initng (2005)
launchd (2005)
Service Management Facility (SMF) (2005)
eINIT (2006)
Upstart (2006)
Asus eeePC fastinit + derivatives (2007-2015)
OpenRC (2007)
Android init (2008)
procd (2012)
Epoch (2014)
sinit (2014)
I was very happy with runit when I was using void (but this was only for a laptop and personal server). Can you elaborate at all on what you don’t like about it?
I believe I responded to you before about this :) I can elaborate more on that if you want.
I imagine it works “well enough” on a desktop, so in that case it’s fine. But honestly, it’s not what I’m looking in a modern init system.
Porting some nontrivial init scripts from Fedora to Debian (or was it the other way around? I can't remember) has been the most painful operation I have done in my career so far. Several hundred lines of shell script are never fun, even more so when the script uses crude distro-dependent hacks to get everything bootstrapped in the right order.
If systemd can lift that maintenance horror/pain from someone's shoulders, I am happy for it, even if I don't personally like some of the decisions systemd has made.
Just because they work doesn’t mean they are good. Windows always boots just fine for me when I want to play games, does that mean I would want to ever, ever, ever, ever touch anything system level in Windows? Certainly not.
Maybe you have a better idea of the issues here than you’re presenting, but I’m reading your comment as an “I hate systemd and I always will” knee-jerk response. The original post clearly articulates new issues that have cropped up in modern Linux systems, that systemd resolves. Instead you choose to blatantly ignore those issues. Have you actually dealt with lazy-loading hardware issues? Reproducible boots across 10k+ machines in a cluster? Have you actually worked with an init system in a meaningful capacity? Because it’s complete bullshit.
I acknowledge this comment is aggressive, but I’m sick and tired of systemd whining from people who don’t understand how much pain systemd resolves.
In my biased opinion, your post doesn’t actually articulate the problem many people have with systemd. It’s not that making a better init system may not be desirable but it’s that they don’t feel systemd actually does this well.
pkg install it and add <service>_enable="YES" to /etc/rc.conf. Done. It works. Now, some parts of the FreeBSD community are talking about moving to something like launchd, but… I don't know what your experience is, but your post in no way makes me think "hrm, maybe systemd is a good idea". Instead it makes me think "what kind of a chop shop is Linux if you have the problems you listed and systemd is the answer?" It reminds me of the 7-Up commercials[0] where they have a blind taste test, and 7-Up wins, hands down, next to detergent and bile.
Well, you asked, so by way of context, I was a Unix sysadmin dealing with hot-plugged hardware (ranging from SCSI-1 up to entire CPUs and bus subsystems), appearing/disappearing dev entries, “socket activation”, and dependency management in 1988.
The original post is actually a pack of hilarious cockamamie buzzword malarkey, as is “reproducible boots across 10k+ machines in a cluster”. But then, bluster and bravado are the fundamental atom of the red hat/freedesktop playbook, apparently.
I see. The Arch init scripts were particularly useless, so the switch definitely made sense for them. Given your expertise, I’d like to shift my stance and instead ask how you dealt with all these issues? Systemd is a glorified event loop, which deals with the problem nicely, but I don’t see how classic init-script based systems handle this at all. And I didn’t see them handling these issues when I was working on cluster management.
0) every init system is a glorified event loop (https://github.com/denghuancong/4.4BSD-Lite/blob/c995ba982d79d1ccaa1e8446d042f4c7f0442d5f/usr/src/sbin/init/init.c#L1178).
1) the init scripts were written correctly. I will 100% grant you that the Linux sysv init scripts were increasingly written without regard for quality or maintainability. Nevertheless, that is the fault of the maintainers rather than of the init system. If you’d like more of an education in this regard, download OpenBSD or FreeBSD and check out their rc scripts, which are quite clean, fast, effective, extensible, and comprehensible, while being even less featureful than sysvinit.
2) hot plug was handled differently by different vendors, but generally the device would appear or disappear in /dev (kernel drivers discovered the device arriving or departing from the (hardware) bus) and a device- or subsystem-specific userland program would get notified by the change, and, e.g., mount/unmount file systems, etc. Also not rocket surgery.
3) we referred to “socket activation” by its original name, “inetd”, and we got rid of it as soon as we could.
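The inetd comparison is easy to make concrete. Below is a sketch with a hypothetical "echod" service (the name, port, and binary path are all made up); the point is only that the two mechanisms express the same idea of a supervisor owning the listening socket:

```
# /etc/inetd.conf line (hypothetical echod service): inetd owns the
# listening socket and forks the daemon once per connection ("nowait").
echod  stream  tcp  nowait  nobody  /usr/libexec/echod  echod

# echod.socket: the systemd spelling of the same idea.  Accept=yes
# mirrors inetd's "nowait" (one service instance per connection).
[Socket]
ListenStream=7777
Accept=yes

[Install]
WantedBy=sockets.target
```

With Accept=yes, systemd expects a matching echod@.service template and hands each accepted connection to a fresh instance, much as inetd hands the socket to the spawned program on its standard descriptors.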
4) dependency management was generally resolved with runlevels and sequential scripts. Even loaded down with crap, modern Unix machines boot in a matter of seconds with any init system. Unix machines generally stay up for years, so ‘super fast boot’ and parallel boot were things nobody cared about until the Desktop Linux crowd, who were so incompetent at getting CPU P-states to work properly that they actually rebooted their systems every time they opened their laptop lids. For years.
5) I do “reproducible boots across 10k+ machines in a cluster” with, like, whatever came after BOOTP. Maybe DHCP? In any case, that has nothing at all to do with any init system anywhere.
The easiest thing is to ask a few more whys. Or instead of how do we solve this problem, why do we have this problem? The best code is no code. :)
In the grander scheme of things, this is something that I believe is unfortunately common. My prime example of this is the Hadoop ecosystem. Hadoop is junk. Absolute junk. And every solution to Hadoop is…to add more code to, and around, Hadoop in hopes that somehow adding more will make a piece of junk less junk. In the end you get something that looks like a caricature of the Beverly Hillbillies truck and works twice as badly.
Wait, what? I remember multiple flavours of Unix running with read-only NFS-mounted /usr, and I’m quite sure I wasn’t hallucinating at the time.
Nah, you’re wrong. https://freedesktop.org/wiki/Software/systemd/separate-usr-is-broken
Freedesktop said it’s broken. Must be true.
As in many other things, I’ve always been impressed by Twisted Python’s approach to platform support. You Too Can Have A Tier-1 Platform, if you provide a build agent on that platform for the continuous integration system, and as long as somebody’s around to help other contributors whose contributions accidentally break that platform.
No matter how much work you do up-front, it’s not reasonable to expect upstream to carry and maintain your patches forever after, if it’s not code they would have ever gotten around to writing themselves.
Hi, I’m the person doing the port. Just for context, as I know scrolling up is not nice: I notified upstream on September 02 2015 that I intended to provide & support a full OpenBSD port, and asked how they wanted to coordinate the effort. I wasn’t asked a single time to provide a build platform/host or to maintain the code, but I was willing to do both. So please don’t imply that I wanted upstream to carry my work for me. This ticket is 3 years old, and I had a fully working OpenBSD port of the runtime (without dartium), a person with a working FreeBSD runtime, and a person willing to do a NetBSD port.
As someone who has worked on a V8/Chromium fork and pushed patches upstream… it really feels like both of those projects are open source but definitely not open development. Patches and issues that Googlers were not actively working on just got ignored most of the time.
The Dart team is mostly (?) old V8 people; they probably brought the attitude with them.
As an outsider, I guess I don’t see the problem with adding BSD support in general. Yes, cross-platform support introduces “problems” when you add new things, but from my quick read of that issue, they already rely on procfs on Linux.
I’d hope they would take this as an opportunity to fix the built-in Linux-isms in their code and to reflect on how they can avoid making it less portable in the future.
Contributions are a two-way street, and I admire the Twisted Python project because they cover both directions pretty well. Good on you for holding up your end of the bargain, and shame on Google for not being prepared to handle platform-support contributions.
That said, I think the headline here is click-baity, suggesting that Google has something against BSD in particular, rather than just third-party-supported platforms in general.
Isn’t this how pretty much all data works on the OS? If you want to be sure that nobody else will read the values you had there you have to zero it out before giving it back. And let’s not forget the swap…
Yeah. As someone mentioned in r/netsec (where I found this), it’s really a bug with Chrome.
What it highlights is a lack of memory safety in video memory that people often forget about. With WebGL this becomes scary stuff for driveby full screen capture/recording.
Yep, a bug with Chrome incognito mode. On Windows, is memory cleared before being returned by malloc/new? I thought that was only the behaviour of secure OSs and/or secure alloc alternative functions?
OS memory management != GPU memory management. Think of the GPU as a completely separate computer, only this one is designed and optimized purely for speed (and benchmarks are super important) with very very basic security. This means that the constant cost of zeroing out when malloc’d or some other GPU/driver imposed changes to the API are seen as large negatives.
Re-reading your comment I see what you were saying. I was probably looking for an excuse to write that heh.
It is possible to get back pages that the process itself has already used, but never pages from other processes. The kernel does what is required for this (it maps all pages to a zero page and, when one is actually written, hands back a zeroed page). This is also the reason why bugs caused by reading uninitialized memory rarely show up during unit-test runs or the short manual testing a developer does: all memory the process initially gets is zeroed.
This also happens during rebooting. Sometimes when I reboot from Windows to Linux, I get a portion of the last frame from Windows before the first refresh from xrandr.
I love how he promotes the use of VLAs, only to give plenty of warnings later on about how easily they can fail for larger objects and how a user can exploit that to crash your program. At least with malloc, you can check whether the allocation failed. With VLAs, you literally just have to live with the stack overflow if you requested too much stack at once.
The only thing I agree on is the stdint.h-usage. It greatly improves the readability. Most of the other points were more of an experimental nature or don’t matter (personal taste).
1) “C99 allows variable declarations anywhere” / “C99 allows for loops to declare counters inline”
Why is it “bad practice” to declare the variables at the top of the function? Just because you can doesn’t mean you should. And if your functions grow too large, you might have to think about splitting them up a bit, not scattering your variable declarations all over the place. That way you end up with more cruft in the end, not less.
2) “#pragma once”
Way to go for portability. If you only care about the gcc/clang-monoculture, this may seem logical, but it’s non-standard, so don’t use it. :P
3) “restrict-keyword”
If you do numerical mathematics, go ahead, use it. I use it for my work as well. Most people, however, don’t even know how restrict actually works and just add it everywhere, thinking it’s safe. In most cases the speed benefit won’t matter anyway, because your program is stuck in I/O 99% of the time.
4) “Return Parameter Types”
The convention ‘0’ for success and ‘1’ for error is common knowledge. The bool proposal was kind of stupid, because you end up setting up conventions there as well: does returning ‘true’ mean error or success?
5) “Never use malloc, use calloc”
Seriously? This can actually mask bugs in your program (forgotten 0-terminators on dynamic strings) which can fuck things up later on. Also, it’s slower. If you use calloc everywhere, you basically admit that your data structures are messed up and you have let your program grow too much. Or that you have simply not understood the language/machine.
And the most important point: make up your own minds, people! If you prefer your own coding style, then use it. If it’s too weird, people might hesitate to contribute, but in C you can’t go too far wrong anyway. Nevertheless, I like the gofmt approach. :) Also, always take those “how to’s” with a grain of salt. This is merely a reflection of the author’s opinion. Hell, take what I say with a grain of salt. Read the docs, read the standards(!) and inform yourself. C is simple enough that you can make up your own mind on those technical details. If you are still thinking about using VLAs in your code, take a look at the GCC implementation.
Guides like this are the reason so many people are still writing bad code, because they let others think for them instead of informing themselves.
I don’t think 1 for error is that common a convention, though I agree that non-zero is a relatively common way to signal failure. In a lot of the code I work on, almost everything returns an int which is 0 for success and -1 for failure.
I personally think it’s a “bad practice” (whatever that means – to be avoided, I guess) to declare variables outside the scope in which they are used. If you need a variable inside one arm of an if statement, put it in there, not at the top of the block. Inline loop counter declaration is essentially the same thing.
Regarding (1), declaring variables as needed instead of at the beginning of the block can help, in my experience. In ANSI C, it is easy to miss that a variable has not been initialized or has actually vanished from the code. Also, patterns of variable reuse (like “I am going to reuse i here…”) probably don’t emerge as often.
So it’s not necessarily that declaring variables at the top is bad; it is just nicer to declare them as you go.
In the end it doesn’t matter. I often reuse my loop variables; you probably don’t. I guess even if we worked on a project together this wouldn’t be too much of an issue; nothing else about it really matters.
Way to go for portability. If you only care about the gcc/clang-monoculture, this may seem logical, but it’s non-standard, so don’t use it. :P
I actually think this as nice bonus! :P
Every time I have used some compiler other than gcc/clang, there have been horrible headaches in every corner (especially with IAR, damn it!). Though I must say that all my experience outside the gcc/clang world has been with proprietary compilers. I might be somewhat biased.
Try PCC. You might be pleasantly surprised? :)
It allocates a struct x on the stack and gives you a pointer to it. It’s equivalent to:
struct x fdsa;
struct x * asdf = &fdsa;
That’s…weird. I haven’t run into that idiom yet. Could you provide a larger sample to help eradicate my ignorance?
It looks like the author prefers to keep all structure references as pointers, instead of some being pointers and others being values. So instead of:
struct timeval tv;
struct timezone tz;
gettimeofday(&tv, &tz);
do_something(&tz);
do_something_else(tz.tz_minuteswest);
The author would prefer,
struct timeval tv[1];
struct timezone tz[1];
gettimeofday(tv, tz);
do_something(tz);
do_something_else(tz->tz_minuteswest);
I don’t have a real preference between these styles when taken on their own. One advantage of the 2nd one is that code in functions that take structs as parameters, and code in functions that allocate the storage themselves, is written in the same way.
So, there’s a style question.
I generally prefer to typedef my struct types, and then I don’t have to explicitly struct everywhere in functions and whatnot…why don’t people do that more?
I really hate typedeffing structs and unions because it hides information, and information that is understood wrongly causes a big chunk of the bugs in C programs.
With FancyType tmp I have to think about ALL the possible things that could go wrong with allocating and passing a reference to that variable (oh wow, there are a lot of those). With basic types, the world is so much simpler. In short, I want to see at a glance as many potential pitfalls as possible, and typedeffing works against that.
I think it depends on your codebase. In SmartOS and illumos, folks are definitely strongly encouraged (through review, etc) to use the style:
typedef struct new_type {
    int nt_num;
    char nt_path[MAXPATHLEN];
    boolean_t nt_yesno;
} new_type_t;
Our struct proc is better spelt proc_t, etc. Note, also, the struct members are prefixed with a common and relatively unique string to make searching and reading the code easier.
Depending on where this is used, you have to keep in mind that _t is part of the reserved namespace in POSIX.
That’s pretty much exactly what I’ve done in my codebases, given the opportunity. I was just curious why people do it any other way.
Well, it gives you an array object that will (often, but not always) decay to a pointer, not an actual pointer, so no, not actually equivalent. Beware of surprises when applying sizeof…
Can someone with a cognitive science background back me up on my gut feeling? Everyone has a hard time learning new skills and everyone uses search engines occasionally, but going to these lengths to cheat actually impairs learning.
Unless you count learning how to use the cheat sheet as a valuable skill.
I never thought of cheat sheets as literally cheating, more like terse examples of tried-and-true patterns (in other words, not cheating but avoiding reinventing the wheel over and over again).
Absolutely right! And I was feeling guilty while creating cheat.sh. That is why we are developing some countermeasures. If we manage to implement them, cheat.sh will have the most important feature of any real cheat sheet: it will help you not only to cheat, but to learn as well.
Thank you for this comment. That is actually the comment that made me register here. This is a very, very deep and important thought. Thank you.
There is a difference between learning skills and memorizing small details (which are irrelevant in the bigger picture). Cheat sheets help with the latter.
The stuff usually found on cheat sheets is “How do I reverse a list in Python?” and not some insanely advanced skill.
Usually the answer is something inefficient and misleading, when it should be “Mu! You do not.” - and even then it’s not a hard thing to learn.
Learning how to deal with your language’s and framework’s etc reference manuals, now that’s a worthwhile skill. A gift that keeps on giving.
But it could also potentially grow the cargo-cult culture of programming: people who don’t understand the language and don’t care, but copy and paste snippets together and bash on them until they reach some definition of success. The snippets are helpful reminders for those familiar with the deep details, but as uninformative and relatively context-free crutches they can keep newer programmers from understanding why that particular snippet needed to be different to be the right answer to their actual problem, in the context of the code they’re actually working on.
Yep.
Sometimes I use snippets for complex operations that I don’t really care about.
I think this is a great example. https://stackoverflow.com/questions/39799999/parsing-a-soap-message-using-xpath-in-java
Let it suffice to say that I need that for some reason, and it’s not performance critical, and I fully understand that I’m going to get really deep call stacks and great gooey gobs of complex objects in memory during runtime. I can picture some approximation of it in my mind. (I use rectangular prisms, nested like Russian dolls, in primary colors, megabytes upon megabytes of them, and the tiny little strings that are actually useful data are glowing, everything else is dusty-translucent.)
I’m also going to literally println a CSV file on the other end, throw it in a Scheduled Task (no, I didn’t say cronjob), and move on to a more important project.
Using the snippet is the right choice for this part of this project. All I need are those little (glowing) strings.