Threads for inactive-user

  1. 1

    Heh, well, I bought an ACOG optic for my AR-15 and I’m going to zero it and start training myself to get familiar with its use. I’ve been behind enough screens this week; time to responsibly punch holes in things from 100-200 yards away in a controlled environment. Downvote away, I’m not going to flush my life away on too much software development.

    1. 1

      Yes! Any non-trivial fizz-buzz-type interview is going to be considerably stressful, on average, even for a senior-level software engineer. This blog article is needlessly naive in coming to that conclusion. Was this really the best topic to address?!

        1. 2

          Are you offering show-in-finder in that init as a simpler alternative? It doesn’t seem to handle multiple file selection like the post’s animation does?

        1. 1

          Company: Black Lantern Security

          Company site: https://www.blacklanternsecurity.com/

          Position(s): Senior/Junior/Web Penetration Tester, IR Analyst / Blue team

          Location: Charleston, SC, or possible remote (depends on role etc.)

          Description: About Black Lantern Security: Founded in 2013, Black Lantern Security helps financial, retail, service, and a variety of other companies learn how to defend their networks by exposing them to attackers’ Tactics, Techniques, and Procedures (Attack to Defend). We are dedicated to developing security solutions specifically tailored to the customer’s business objectives, resources, and overall mission.

          Tech stack: See job descriptions for more details on our website.

          Compensation: Role etc. dependent.

          Contact: Email the listed contact in the job page on our site.

          1. 5

            A little bit more context would be nice. I have some questions which are relevant to get an opinion on this PR:

            • Who originally wrote this software?
            • Who is the owner of the repo/project?
            • Who has contributed?
              • Under which license?

            does anyone have any opinions on how we can push companies that do contribute to OS to use more progressive licenses?

            I assume by “more progressive licenses” you mean copyleft licenses. I don’t think this is the goal, because I believe everyone should be allowed to publish/sell their software under whichever license they want[0] (this also includes non-free licenses). Nobody forces you to contribute[1] to a project or use it. If you’d like to contribute to a non-copyleft project but want your code only under a copyleft license, you are free to fork the code and change the license[2]. Yes, a fork is not an easy task and requires time. But this is also true for the original work. Free software means I allow you to use my code, change it, and publish the changes. It doesn’t mean that I’m required to even look at your comments or changes.

            [0] with some legal restriction to customer protection

            [1] And yes, I have declined to contribute to projects because I didn’t like the license or the required contributor agreement

            [2] you must still follow the original license and watch out for some other legal foo

            1. 1

              i don’t really mean copyleft, i mean any license which enforces some “code of conduct” like constraint; i.e. “anyone can use this code; but not a fossil fuel company.”

              certainly the goal is to limit use.

              1. 3

                certainly the goal is to limit use.

                Limiting use is not possible with free / open-source software, because free software requires allowing the use of the software for whatever purpose the user wants.

                But in general it’s up to you to add such restrictions to your software[0]. And it’s up to others not to use or contribute to your software when they don’t like your restrictions. The other way around, it’s up to you not to use or contribute to software projects which don’t have the restrictions you would like to have. You might ask them to add such restrictions, but you should be prepared for them to reject your request (by the way: rejecting such requests doesn’t imply they don’t share your values).

                [0] there should be (and are, depending on your restriction) some legal restrictions on what you are allowed to require/enforce

                1. 1

                  saying this isn’t “open-source software” is a bit vacuous; the point is to reinterpret how we can do open-source stuff while also having some say about how our code is used, just as we would do when we are working.

                  1. 4

                    You can’t, because this wouldn’t be open-source by definition (see points 5 and 6). I prefer the argument based on the four essential freedoms: what you suggest would violate freedom 0. Yes, you can add such restrictions to your software, but then it wouldn’t be open-source anymore.

                    I believe you’re asking the wrong question. You should ask: Is it a good idea to limit the usage of software for some users but still keep the source code as open as possible? How could such a license look? What restrictions should it hold? What could we name this licensing system?

                    To answer the first question: I would say it’s a bad idea, because it tries to solve social issues with legal limitations. Also, how would you reach consensus about which usage/which users the license should restrict? We currently have problems with incompatible open-source licenses and discussions about which is the “best” license.

                    1. 1

                      it wouldn’t be “definitionally” open source according to that definition; but the whole conversation we’re having here is what it would look like if we decided to treat re-use just like we treat our communities. there’s no point arguing about the definitions some privileged people wrote down many years ago.

                      as to your second point; isn’t the whole essence of the state about solving social issues with “legal” means; i.e. the state itself? i.e. what’s the point of a legal apparatus at all but to tackle social problems.

                      1. 3

                        OK, I understand now what you want, and I must say I hate this type of argumentation. You suggest redefining an already-established term until it means what you currently want. This is bad because it poisons the discussion if everyone uses the same word with different meanings.

                        As for your suggestion, I believe you just burn your own and other people’s time by restricting the use of the software. There are many important issues, but these issues are not solved by software licenses. Also, it is not easy to define (and redefine) which restrictions the license should enforce. So, for example, you exclude some “evil” company. Later the company changes and wants to create some “good” product with your code. But one of your core contributors doesn’t want to allow the company to use his code, because he has had a fight with the CEO of the company. Another example: you might find out in 5 years that one of the restrictions is harming “good” use, but you have lost contact with one of the core contributors, so you can’t change the license.

                        In general, I see it more and more that projects which started to solve one issue now try to solve all issues. Most of the time in a way where you are only accepted if you share all their values and believe the proposed solution is the best one (and most of the time there is at least one point where I believe the proposed solution is harmful). I avoid these communities, because I don’t have the mental capacity to save the world.

                        1. 1

                          i’m not against the idea that software licenses are a totally pointless mechanism for change, aside from the fact that they clearly are a mechanism for change, i.e. the mere idea of the GPL, the abundance of MIT, the fact that open-source powers literally everything on the planet at the moment, etc, etc.

                          i’m just curious if there is any way to also consider social good. maybe the answer is “no”, but it seems a shame.

                          1. 1

                            i’m not against the idea that software licenses are a totally pointless mechanism for change

                            I have never implied this; I would guess you actually believe the opposite. I also think software licenses can help to solve some issues. But they’re not the only tool available. Also, free software communities are not the only communities available for solving social issues. The free software definition/community hasn’t solved the problems that led to the free software movement and definition, so adding more issues to solve doesn’t look like a good idea.

                            i’m just curious if there is any way to also consider social good

                            In German there is the word “eierlegende Wollmilchsau”, which translates to “a pig that lays eggs while producing milk and wool”. It is used when someone proposes an all-in-one solution which sounds too good to be real. What you propose sounds like you want an “eierlegende Wollmilchsau” license. So to answer your question: I don’t think the free software definition is the best definition for a “social good” (whatever that means) type of license, but I haven’t seen a better one, or one which might lead to a better definition.

                            Also, your question sounds like you want to solve all (or at least a lot of) social issues with just one magic switch. But social problems aren’t that easy, and there is no single switch/law which just needs to be enabled and the problem is gone. It’s a time-consuming process to understand the issue, understand the reasons for the issue, and find ways to solve/mitigate it. Also, the issues don’t stand alone. A solution for one problem might cause some issues somewhere else.

                            So can your license influence how some issues are solved? Yes, of course, but be careful, because you might make things worse or harm something unrelated. Also keep in mind: “Prediction is very difficult, especially about the future”.

            1. 12

              Free software with restrictions on use isn’t free software. Aside from it being really hard to identify what does/doesn’t do harm (in essence making the Hippocratic license unenforceable - is such software prohibited in United Nations humanitarian operations, since they are staffed by military personnel?), trying to put our current morals into license form just doesn’t work.

              50 years ago, homosexual and transgender people were considered harmful, and would be prohibited under such a license, and at the time people thought this was right, moral, and just. I think it would be arrogant to pretend that we’re at the pinnacle of moral judgement now.

              So let’s leave licenses open to all use, rather than ruling out behaviour we think is immoral now at the cost of the progressive future.

              1. 1

                how do you feel about a code of conduct to enforce norms in a community? the same way, or differently? why?

                1. 7

                  Codes of conduct describe practices for how people develop the software; they don’t change the way free software can be used by users, or who is even allowed to be a user.

                  If you want to set rules on how you work as a team, that’s just fine, but that’s different from then prohibiting certain uses by people or organisations outside your team because you don’t agree with those uses.

                  To send a question back at you, how do you view the morality of use for things like evacuation and disaster relief efforts that involve the military (for example, in the UK a lot of COVID support was done by the military), which under the Hippocratic license would be, at first blush, prohibited?

                  1. 1

                    If you want to set rules on how you work as a team, that’s just fine, but that’s different from then prohibiting certain uses by people or organisations outside your team because you don’t agree with those uses.

                    i’d say it’s exactly the same; setting the code of conduct enforces who is in and who is out; same with the license.

                    how do you view the morality … which under the Hippocratic license would be, at first blush, prohibited?

                    i’d say it’s within the spirit of acceptable usage; so it would be fine.

                    1. 7

                      Unfortunately licenses don’t operate on spirits or hopes, and the Hippocratic license says:

                      3.1. The Licensee SHALL NOT, whether directly or indirectly, through agents or assigns:
                      3.1.20. Military Activities: Be an entity or a representative, agent, affiliate, successor, attorney, or assign of an entity which conducts military activities;

                      This clearly sets out that if you, the licensee, are a representative of an org that conducts military activities, you cannot use the software. It’s cut and dried.

                      But the point that is being hit on here, by both of us, is that there is clearly nuance in use of the software. The intent of the relicensing is to limit it to peaceful, progressive, humanitarian use. The problem is that the legal wording of the license actually prohibits this use if you happen to be from the wrong team while pursuing those progressive/humanitarian goals. The software could not be used in efforts such as rapidly deployed military hospitals in disaster zones, military helicopter search and rescue teams, the coast guard, etc.

                      But, I promise I’m not trying to say “military good” - the underlying point is that software ends up being used in all sorts of delicately nuanced and varied situations that we cannot possibly predict, and so by trying to suggest that we can predict all these nuanced cases ahead of time we will be either overly restrictive or not restrictive enough. Given that the nature of progress is to improve upon ourselves, I would rather be less restrictive, allowing for uses I couldn’t have predicted, than stifle them because we are relatively backwards compared to our progressive peers in the future.

                      1. 1

                        licenses, like all legal agreements, are merely systems through which the world is interpreted; i.e. it’s the spirit of the intention.

                        i’m not saying the hippocratic license is perfectly worded; and of course i didn’t design it; but it’s certainly possible to have different interpretations of a piece of legal writing.

                        i think i agree with you that i don’t want to be overly specific, and i’d probably agree that the hippocratic license is a bit too specific; so i’m open to alternatives (hence this conversation)

                        i’d hope there’s a middle ground between MIT and the Hippocratic license; and i think i’m arguing that i’d prefer to err towards hippocratic vs MIT, because at least that enables me to say something about what i want.

                        1. 2

                          certainly possible to have different interpretations of a piece of legal writing

                          This is actually what lawyers try very hard to remove. They like things that are clear and settled.

                          1. 2

                            This is actually what lawyers try very hard to remove. They like things that are clear and settled.

                            for what it’s worth, while i think this is a side issue to the central point - namely, how can we as programmers have some say in how software is used, and in particular try to push our industry towards positive applications of software, or at least not planet-destroying usages - i don’t think you’re right at all.

                            law is all about interpreting the essence in certain settings; so while i’m sure the hippocratic license doesn’t get it perfect; i’m sure there is a way to make a best effort, that does not necessitate total prediction of the future.

                            1. 3

                              Licenses based in morality are guaranteed to be restrictive, since morality itself is relative. Same with “positive applications”, or “not planet-destroying usages” - relative topics, and licenses based on these are bound to be restrictive. As an example, software that cannot be used for deforestation applications (say controlling the mechanical saw) cannot be used in locations where primary source of fuel is wood and no alternative exists.

                              In my experience, it is a futile effort to try and enforce some arbitrary definition of “positive applications”, “not planet-destroying” etc., without also restricting valid, legitimate, and moral use (moral as per the license author, who actually wishes to allow moral use).

                      2. 3

                        A project with a proper FOSS license and a highly restrictive CoC can still be legitimately forked into a community with a different or even contradicting CoC.

                        A project with a restrictive license can’t be legitimately forked away into a contradicting license.

                        1. 1

                          indeed!

                          and that’s exactly what i’m going for :)

                          1. 4

                            My point is that this is what makes it not be “exactly the same” that you responded above. :)

                            A license ties your moral judgements to the code, a CoC ties your moral judgements to your community.

                            I understand that you want to tie moral judgements to code, but that’s where the disagreement lies. I, and I suspect other people you’re debating here, believe that we should be free to legitimately fork away from moral judgements. It’s less of a debate of whether the moral judgements are objectively correct or absolute or pious or whatever.

                            1. 1

                              i see

                              i suppose what i’m getting at is, at what point do we as a tech community take a stand against various injustices? one way is through the companies we work for, and the communities we support. but what about the open-source work we do? are we doomed to always be left open to abuse and misuse; or is there some avenue by which we can exercise personal judgement there as well? clearly there’s some level at which people are “okay” with this (i.e. GPL licenses, etc; which, while maybe somewhat frustrating, also get traction). my interest lies in exploring that domain where we’re concerned with social good.

                              it seems a shame to not at least attempt to explore this space, given how pervasive software is.

                              1. 1

                                I don’t think there’s much disagreement about the existence of injustices (no matter that our definitions of injustice change over time) and the need to take action against them.

                                The disagreement is more about whether action should be taken at all layers and aspects of life/society/technology, or whether there are some places where it’s more appropriate to encode restrictions vs others where it’s less appropriate.

                                In my view, the community code of conduct is a very appropriate avenue for this. We can create or think of other avenues, too! I don’t feel that the code license is a good fit, for many reasons already expressed elsewhere in this debate. :)

                                I understand the urge to be absolute and complete in sanctioning people we disagree with, and maybe it’s a political axis spectrum thing. I tend to land more on sanctions through voluntary relations (deplatforming, refusing to trade, etc) rather than through mechanical means (restricting access to technology, safety, food, oxygen, whatever extreme we can imagine). I’m sure it’s a varying spectrum for many people. I’ve seen some people express this as “higher level” (social) vs “lower level” (physical).

                1. 2

                  I would take a step back and ask why you are making the code open source in the first place. If you want just to share your work with others, then pick whatever you want. As long as you own all of the intellectual property related to the code, you can pick whatever license you desire.

                  However, if you want your project to be adopted within a corporate environment, you can’t expect things outside their standard set to get a lot of traction. That set was picked by lawyers to reduce the risk of the company having future issues with clean intellectual property rights to their product. Even if it was adopted by a company before they were big enough to have lawyers who cared, one day they will grow, get acquired, or IPO, and there will be a team of people running license checks for stuff outside of the approved set. That is especially true for relatively unknown licenses like the one in this case. At that point, they’re likely going to stop engineering work to replace the affected components to, once again, reduce risk.

                  Here is a hypothetical. A company adopts this component with the license as it was; they get acquired by a large, multinational public company. There is not a lawyer that would read this license and agree to run down every aspect of this license and ensure they’re complying with it. Some are easy, but many are vague enough to be a pain. So instead, they tell engineering to yeet it from the product.

                  Given all of that, to answer your prompt: you don’t. Companies are not taking a risk on small open-source components. If you want to get the Hippocratic License added to the set of approved licenses, it is a Sisyphean effort. The only way I see it happening is if your project gets to the level of something like Kubernetes or Linux, which (in a catch-22) often doesn’t happen without corporate support.

                  1. 2

                    why open-source it? clearly, to provide some benefit. it’s a useful library.

                    i personally don’t care a great deal about adoption; what i do care about is “good use”. i personally don’t want to support the military or fossil fuel companies, say. just like i wouldn’t work at those companies.

                    i’m curious to gauge peoples views about expressing such sentiments via licenses. it seems like the hippocratic license - https://firstdonoharm.dev/ - is a very clear approach to do this; yet it seems to be met with quite some anxiety by people who think tech should somehow be “neutral”. it’s long been shown that neutrality only rewards the privileged; to make social change one needs to step out, somehow.

                    so my question is, as a tech community at large, do we just completely give up on licenses (aside from the standard few)? or is there some room to innovate; some way to create social change for ourselves, our users, and the broader community? and if so, what is that mechanism?

                    1. 1

                      I’ll ask it a different way. In an ideal world, would a company change its policies to adopt your open source software? If you want to change corporate governance, I don’t think you do it with progressive open source licenses. No engineering leader is going to go to a board and ask them to change broad policy so they can use an open source library.

                      1. 1

                        and let me ask you in a different way - what would make them change?

                        1. 3

                          IMO, probably only government regulation and popular opinion.

                          1. 1

                            A plurality of US states – Delaware (the important one for corporate governance!) included – allow corporations to incorporate or reincorporate as a public benefit corporation. It’s conceivable that a corporation could be subject to enough pressure by its employees and shareholders that it would reincorporate as a B corporation.

                            But while I think a niche could exist in B corporations for software licensed under the Hippocratic license & similar, it’s important to not mix cause & effect: your Hippocratic licensed software may be eligible for selection by a company because they chose to become a B corp, but it strikes me as exceptionally unlikely that a company will ever become a B corp to use your Hippocratic licensed software.

                            1. 1

                              how is B-corp and the license even related?

                              i.e. we’re just talking about a simple license here, where the terms are of course only enforceable through some (hypothetical) lawsuit; i.e. the license really just expresses some notion of personal preference, enforceable only if i feel like suing random companies that use it.

                              maybe one thing i could point out is the difference between a code of conduct and a license. we all feel (somewhat?) comfortable with a code of conduct expressing behaviour wanted in our spaces; why not licenses for those same desires?

                              1. 1

                                how is B-corp and the license even related?

                                only if i feel like suing random companies that use it.

                                maybe one thing i could point out is the difference between a code of conduct and a license

                                Corporate governance seems like the thing being discussed here. You hope to impact governance through clauses in a license. However, governance is not limited to the time when you decide to sue some companies. Companies are bound to various agreements which require them to make some attempt to mitigate risk so that they can achieve the outcomes that the owners desire. The result is that they pick and choose which risks they want to take on by limiting the number of licenses they support and the scope of these licenses.

                                Regular corporations (and, I suspect, B-corps too) are unlikely to want to increase the number of risks they are dealing with by using software with the Hippocratic license. We already know that many companies rule out GPL and derivative licenses entirely just to limit their risk. Some will pick and choose, but only when they have the resources to review and fit it into their business.

                                Above I used terms like “various agreements” because I don’t have the time to write at the level of detail I’d like to. Agreements come in many forms, and we care most about the explicit ones, which are written like contracts. Some agreements are more implicit and, while still important, I’m ignoring these to simplify. Agreements include but aren’t limited to:

                                • Founding documents between the founders, or between the government and the founders.
                                • Partnership agreements with others selling/integrating your product, or providing code for your product.
                                • Agreements with organizations that represent employees.
                                • Customer contracts.
                                • Funding agreements with VCs, or banks.

                                For your license to succeed, you need to navigate all of these agreements. A license like MIT is relatively compatible because it’s limited in scope.

                                1. 1

                                  i see

                                  i mean, suppose you are a regular developer living your life, and you feel like sharing code. clearly, i don’t want to engage at the level you mention with anyone who uses the code.

                                  licenses seem like a reasonable way, no? or no. would you suggest there is no way? we should just give up and MIT everything?

                                  1. 2

                                    licenses seem like a reasonable way, no? or no. would you suggest there is no way? we should just give up and MIT everything?

                                    There is no way to achieve what you desire to any great extent with your approach. The trade-offs are for you to decide.

                                    I would posit that most people don’t want to have relationships based on the requirements of the license you put forth. If you want to define your relationships and engagement through that license for your code, or companies you run, then that’s 100% fine. Many types of small communities can be sustained with hard work.

                                    When you go in that direction, don’t expect other people to reciprocate in the various ways that they can in the open source world: code use, testing, bug reporting, doc writing, etc. If you use MIT then you’ll open the door to a lot more collaboration and usage. For many people whose livelihood depends on open source, this is the only approach. When your livelihood doesn’t depend on open source it’s easier to pick and choose licenses, but even then the decision can limit who will engage with you.

                    2. 1

                      You’ve forgotten one more potential situation: you want other open source projects and people to be able to use it, but don’t care at all about corporate usage, or even want to discourage it.

                      In such situations, licenses like the unlicense, AGPL, Hippocratic license, etc can be useful.

                      1. 2

                        I bucket that under the first point: sharing your work with others.

                    1. 12

                      I’m sorry, but as much as I dislike copyright, I don’t get it.

                      Trust

                      You cannot trust non public domain information products. You can only make do. By definition, non public domain information products have a hidden agenda.

                      What hidden agenda would a software licensed under the BSD-2-Clause have “by definition”, that it wouldn’t have if it was in the public domain?

                      Speed

                      Public domain products are strictly faster to use than non public domain products. Not just faster, orders of magnitude faster.

                      Cost to build

                      Public domain products are far cheaper to build than non public domain products. Failure to embrace the public domain increases the cost to build any information product by at least an order of magnitude.

                      Again, I’m sorry but without any attempt at a proof, it is more wishful thinking than actionable information. What sources do you use, what makes you believe that?

                      1. 3

                        I think the verbiage can be better around “hidden agenda”. Thanks for the question. Let me explain (and maybe I’ll come up with a tweak to improve the post).

                        What hidden agenda would a software licensed under the BSD-2-Clause have “by definition”, that it wouldn’t have if it was in the public domain?

                        Every human has hidden agendas. You cannot see inside someone’s brain. I’m not saying most people walk around with evil hidden agendas—far from it. But most people don’t necessarily have your interests at the top of their mind either—why should they? Everyone’s got their own problems.

                        The “BSD-2-Clause” comes with conditions. If you don’t want to follow those conditions, the only way to legally do so would be to seek the permission of the author, who, as a human, of course has a hidden agenda.

                        Therefore, by definition even BSD-2-Clause-licensed software has a hidden agenda. Assuming that was your strongest counterexample, it follows that the situation gets worse as the license gets worse, and most things have worse licenses than BSD-2.

                        [Not sure if I’ve made a good enough argument, will have to reread this tomorrow]

                        Again, I’m sorry but without any attempt at a proof, it is more wishful thinking than actionable information. What sources do you use, what makes you believe that?

                        It’s a fair point. It would be good to build up a big public dataset on this. I think that would make the argument stronger. I have an internal mental dataset from decades of experience but a good ole CSV would be 100x better. I’ve added a commit to that post with a todo and will start that soon.

                        1. 5

                          Why must their agenda be hidden, though? Isn’t it possible that people who choose, say, a “copyleft” license like AGPL have a fairly obvious agenda of not wanting modified versions of their software to be run by giant cloud companies without releasing the modified source? That is what such licenses are designed to prevent, so doesn’t it follow that someone using such a license likely just wants to prevent that? Nothing is hidden here, it’s just the right tool for the job.

                          But if you’re still worried about agendas being hidden in cases like that, would it suffice for the author to have a statement of intent for the software including reasoning around their choice of license?

                          1. 4

                            the author, who, as a human, of course has a hidden agenda

                            Of course, in that sense every human has a hidden agenda. It follows that the author of a public domain work also has one. My point is that, while I cannot see inside someone’s brain, I would guess that those hidden agendas are generally similar whether the software is in the public domain, or licensed under a permissive license.

                            Edit: by the way, are you familiar with this recent article?

                            1. 2

                              It follows that the author of a public domain work also has one.

                              For sure, but the key difference is that when the product is public domain the author’s hidden agenda is not transferred to the product. The product is free, with zero strings attached back to the author. Even if someone uses a very liberal “license”, say a string 1 micron wide, there is a big difference between 1 and 0.

                              1. 8

                                Even if someone uses a very liberal “license”, say a string 1 micron wide, there is a big difference between 1 and 0.

                                The trouble with that reasoning is that for some of us[1], the 0 is not an option. For example in France, dedicating my work to the public domain is not a thing and using a very permissive license is the best alternative we’ve got.

                                [1] Perhaps for most of us since, as argued in the article I linked to in a late edit, the situation might not be much better in the US.

                                1. 1

                                  In the USA, federal agencies cannot hold copyright. For example, when NASA shares photography with us, the data is in the public domain. This might be a good example of a situation where the public domain is superior – NASA is generally considered one of the leaders in space exploration.

                                  1. 1

                                    For example in France, dedicating my work to the public domain is not a thing

                                    !!!! I didn’t expect things to be worse in Europe. Thanks for the information.

                                    1. 1

                                      This article is mostly about books but there are lots of exceptions

                                      https://medium.com/copyright-untangled/public-domain-why-it-is-not-that-simple-in-europe-1a049ce81499

                                      I believe the real issue is that there’s no “positive” way to declare a work in the public domain in Europe, like in the US for works funded by the federal government. So everything is under copyright until 70 years after the author dies.

                            2. 2

                              Thank you @jmiven for the very helpful feedback.

                              I just updated the paragraphs about Speed and Cost: https://github.com/breck7/breckyunits.com/commit/65bc0aa36eaaa9345350181c444ab0b41c54e434

                            1. 2

                              The same thing I’ve been reading for quite some weeks now: https://thewanderinginn.com , currently nearing the end of the fourth volume as an ebook. A fantasy web serial. Light reading, but quite extraordinarily long. I think someone claimed it’s longer than The Wheel of Time. And still being written.

                              1. 3

                                fyi, that’s not the right link, you want - https://wanderinginn.com/ - the one you’ve given seems to be squatted by some weird thing.

                                1. 1

                                  Thanks. I noted the same thing while writing the comment, but managed to copy from the wrong tab. And can’t edit anymore =(

                              1. 2

                                My first (and only) experience with FreeBSD happened because I received a GitHub issue where one of my open source projects didn’t compile on FreeBSD due to a missing include. I downloaded the FreeBSD ISO and spun up a VM.

                                After installation and setup, I downloaded git, cloned my project, and fixed the issue. So far so good.

                                But then I had to set up my git user, which is when I found out that FreeBSD’s TTY doesn’t support the ‘ø’ in my name. Great. After some googling, I found out that there’s no workaround other than to install X11.

                                So I did that. But even though I had set up a Norwegian keymap with setxkbmap, I still couldn’t enter the ‘ø’ in xterm.

                                In the end I had to do some ridiculous hack to get it working. I think what ended up working was to install Firefox, navigate to a website containing ‘ø’, and copy-paste it into xterm. Or maybe I gave up on getting the terminal to cooperate and edited the vim config in a text editor. Or maybe I gave up entirely and manually made my changes on the host Linux system. I honestly can’t remember, it’s been a few years. But that experience told me that FreeBSD isn’t for me.

                                And yes. I know that you can work around this. There are, potentially, non-American FreeBSD users. But the fact that they just don’t seem to care at all about this stuff told me more than enough about their priorities; people with my kind of name aren’t in their target market.

                                It doesn’t surprise me that all other FreeBSD defaults are similarly fucked up.

                                1. 5

                                  On the contrary, we do care very much about this sort of thing - we want to make sure that folks have a good, “low friction” experience with their first exposure to FreeBSD. I’m interested in both making sure that this works in general, as well as reproducing the specific case you encountered. Would you kindly let me know what FreeBSD version you tried, and which VM software you were using?

                                  1. 4

                                    FreeBSD’s console has supported UTF-8 since 2013 (as the default since 2014). https://wiki.freebsd.org/Newcons

                                    I don’t know what would be the problem with X11 then, but FreeBSD ports does not change Xorg’s default configuration, nor does FreeBSD change Vim’s default configuration.

                                    If you had not time traveled from 2013, it would be useful to submit the console bug to FreeBSD, and submit the second bug to Xorg. Or maybe Vim.

                                    1. 2

                                      You can use kbdmap to set up the keyboard in the console; e.g. with US-International you can get é with “alt + ’ + e”. ø isn’t in US-International it seems, but you can select a Norwegian keyboard and get ø, or modify the standard US-International layout to add it. As mentioned, this has worked since 2014, and getting Unicode support was a major reason for working on it in the first place; the old syscons was from the early 90s when things like multibyte encodings weren’t really on the radar.
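To make a console layout persist across reboots, the handbook's route is rc.conf rather than re-running kbdmap each time; a minimal sketch (the short keymap name "no" is an assumption for newer vt(4) releases; check /usr/share/vt/keymaps, since older syscons releases used names like norwegian.iso.kbd):

```shell
# /etc/rc.conf: persist a Norwegian console keymap across reboots
keymap="no"    # newer vt(4) keymap name; older releases used e.g. "norwegian.iso.kbd"
```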

                                      IIRC it asked for this during installation too, but not 100% sure as it’s been a while. I think even the old “sysinstall” installer asked for it (but that has been replaced years ago).

                                      I haven’t seriously used FreeBSD since 2015 or so; I just checked the handbook’s Localization - i18n/L10n Usage and Setup chapter and tested it in my QEMU VM just to verify it works (it does).

                                      Setting up UTF-8 inside Xorg and xterm on FreeBSD is pretty much identical to Linux, and is something that has worked for decades.

                                      I appreciate you had a frustrating time, but “lemme just do this real quick” on an unfamiliar system is often frustrating, for many different tasks. Your rant is misplaced and misinformed.

                                      1. 3

                                        I just checked Norwegian keyboard input on my ~FreeBSD 13 laptop:

                                        • Switch to a virtual console
                                         • Run kbdmap, choose Norwegian
                                        • Press ;-labelled key
                                        • Observe ø on the console
                                        1. 1

                                          I was definitely switching to a norwegian keyboard in the TTY. It just didn’t work.

                                          Setting up UTF-8 inside Xorg and xterm on FreeBSD is pretty much identical to Linux

                                          I have never “set up UTF-8 inside Xorg and xterm” in Linux. Linux systems all default to UTF-8. Systems which don’t are simply broken by default.

                                          1. 3

                                            So maybe you did something wrong?

                                            I am so tired of this rant culture. It was a mistake to come back.

                                        2. 1

                                          That’s really shitty and I’m sorry that happened to you.

                                        1. 1

                                          You need to trick the user in to running a VBScript or PowerShell wrapper to set the environment, or modify the registry. What practical security problem does this pose that’s not already present if the attacker can trick you in to running code or change the registry?

                                          I’m not that familiar with Windows; maybe I’m missing something, but I don’t really see the practical problem, and it seems one of those “imagine an attacker has access to your computer, then they could …”-type of “threats”.

                                          1. 1

                                             PATH handling is usually not great and has been annoying for ages. Under certain circumstances you needed a reboot for e.g. Java or other toolchains to be recognized, so if a tool requires a JVM, an attacker could easily provide bogus instructions that the user will happily follow, I guess.

                                          1. 2

                                             Someone told me once that there’s no such thing as a “10x engineer”, unless one counts the effect an engineer might have on the rest of the team (e.g. if one person is able to multiply their team’s output by 1.1, then, for an appropriately sized team, that engineer might have had a 10x impact). I didn’t agree, and this submission shows an example of why: S.D. Adams reduced the problem to the 1/10th core that actually needs to be solved to meet the goal, and appears to have been 10x as productive as a result.

                                            1. 8

                                              While not solving the problem I have with data loss on OpenBSD: Any OpenBSD machine can, on losing power, require human intervention to bring up again (run the fsck, hope it comes out OK).

                                              Until I can bounce OpenBSD boxes like I can consumer routers, or linux boxes, I can’t install OpenBSD in all the places I want to. These weaknesses are a layer below muxfs’s area of concern.

                                              The problem space was reduced, yes, without solving the problem space where I’m most interested.

                                              1. 3

                                                I have a bunch of QEMU boxes for testing/building things on different platforms, and very occasionally one doesn’t have a clean shutdown (while doing nothing) and OpenBSD is the only one that gives me grief with “starting in single-user mode, run fsck manually” regularly. I usually start these things without video connected, so it will just “hang” forever. Thus far I’ve never had this with NetBSD, FreeBSD, Dragonfly, or anywhere else. I “solved” it by just taking a snapshot and reverting to that, but meh….

                                                1. 3

                                                  Most operating systems ‘solved’ this by introducing journalling filesystems. This guaranteed that the on-disk representation always had enough state to be able to either discard or complete in-flight transactions. FreeBSD provided an alternative called soft-updates, which enforced an ordering on the writes such that a failure would leave the disk in a well-defined state that fsck could recover and allowed fsck to run while the filesystem was mounted (an unclean shutdown would leak blocks, so fsck was needed to go and find unlinked ones and return them to the free pool). This was later combined with journalling, which eliminated the need for background fsck to scan all of the inodes. Copy-on-write filesystems such as ZFS are typically self-healing and so don’t ever enter a situation where the on-disk structure can be invalid except in the case of a drive failing.

                                                  To my knowledge, OpenBSD never pulled in the soft-update or journalling code from FreeBSD. NetBSD has an independent implementation of journaling for UFS (I vaguely remember that they added journaling before FreeBSD, but I think the FreeBSD version did so in a way designed to compose with soft-updates, whereas NetBSD’s is independent).

                                                  1. 1

                                                    OpenBSD does seem to support “soft dependencies” – which seems similar or identical to soft updates – with mount -o softdep, but it’s not enabled by default.
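For completeness, the persistent way to get softdep is an fstab option rather than a manual mount; a sketch with a made-up disklabel UID:

```
# /etc/fstab on OpenBSD: enable soft dependencies on a data partition.
# The "52fdd1ce48744600" DUID below is a placeholder; use your own.
52fdd1ce48744600.e /home ffs rw,softdep,nodev,nosuid 1 2
```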

                                                    I don’t remember FreeBSD being quite this bad; soft-updates were generally discouraged for your root fs and not enabled by default for it, and I rarely had a “booting in single-user mode, run fsck manually” in many years of desktop usage. OpenBSD is much more fragile, either due to the design of their FFS/UFS implementation, fsck implementation, or choices they made in when to boot to single-user mode.

                                            1. 1

                                              In zsh you can also use setopt no_flow_control to disable ^S and ^Q. The difference with stty is that it only applies to ZLE (the zsh line editor) rather than everything, which may be better or worse, depending on what you want.
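For comparison, a sketch of where each knob lives (the file paths are just the usual conventions):

```shell
# ~/.zshrc: disable ^S/^Q handling only inside the zsh line editor (ZLE)
setopt no_flow_control

# ~/.zprofile (or any startup file): disable flow control for the whole
# terminal instead, affecting every program that runs in it
stty -ixon
```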

                                              1. 5

                                                Leaving aside the technical reasons, FreeBSD “feels like home”. I’ve just purchased a refurbished X250 and plan to switch back to FreeBSD from Ubuntu as my daily driver for everything other than streaming media and gaming.

                                                Which is odd, because I grew up with Linux in the 1990s, and only discovered FreeBSD in ~ 2014.

                                                1. 2

                                                  Arguably the Linux of 2022 is unrecognisable from the Linux of the 1990s due to the insane amount of software churn, whereas FreeBSD hasn’t changed that much. When the BSDs adopt something new, they try to make it fit in with the rest of the system and make sure it’s of a solid enough design that it doesn’t have to be replaced in a few years. And the rate of change is much slower, too.

                                                  1. 1

                                                    On the other hand, in the (late) 1990s and 2000s running Linux or BSD on your desktop was more or less the same experience: loads of stuff wouldn’t work “because $X only supports Windows”.

                                                    Today things have expanded a little bit with “$X support Windows, macOS, and Linux”. Of course, lack of support for $X is often not FreeBSD’s fault, but I find it’s a good pragmatic reason to prefer Linux over any of the BSD systems, especially for daily desktop use. Not that I have a particularly great love for the “Linux ecosystem”, but “Just Works™” (most of the time) counts for a lot as I’m too old to be dealing with that kind of stuff.

                                                    1. 2

                                                      That’s why I switched back to Linux from (Net)BSD, too. Not enough time for ceaseless yak shaving and working around deficiencies.

                                                      1. 1

                                                        Today things have expanded a little bit with “$X support Windows, macOS, and Linux”. Of course, lack of support for $X is often not FreeBSD’s fault, but I find it’s a good pragmatic reason to prefer Linux over any of the BSD systems, especially for daily desktop use.

                                                        I’d argue that it’s a good short-term reason, but pragmatically, supporting OSs outside of that group will result in a healthier ecosystem in the future.

                                                        Edited to add: my experience is that a lot of open source software that doesn’t work well on FreeBSD just needs a little help (bug reports, suggestions, testing) to get over the line.

                                                  1. 1

                                                    5GB seems very low! I wonder if they’re worried specifically about the size of storage or if it’s a proxy for general workload. Or maybe all the bigger projects are not things they want; people storing porn or whatever.

                                                    1. 9

                                                      It seems very low, but it’s still quite a lot, assuming you’re using repositories for source code. I’m currently using ~2.2Gb of storage for my git repos (on my own server mind you), 1.7Gb of which is my brother’s website with huge photos. The other ~500mb are ~170 repositories of various sizes (including a mirror of Guix, accounting for about half of that). My repositories, even those with a long history and a fair amount of commits clock in below 100mb, but most of them are even smaller than that. For reference, Gitea, a 8+ year old project with over 13k commits and 140 tags is only about 250mb. You’d need to have ~20 Gitea-scale (or ~15 Guix-scale) projects to exceed the 5Gb storage limit.

                                                      You can fit a lot of source code into 5Gb, so that quota seems quite reasonable to me.
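For anyone wanting to check their own numbers, `git count-objects -vH` reports the packed size of each repository's object store; a rough sketch, assuming repositories are checked out under ~/src:

```shell
# List repositories by the on-disk size of their packed git objects,
# largest first. The ~/src layout is an assumption; point it anywhere.
for dir in "$HOME"/src/*/; do
    [ -d "$dir/.git" ] || continue
    size=$(git -C "$dir" count-objects -vH | awk -F': ' '/^size-pack/ {print $2}')
    printf '%s\t%s\n' "$size" "$dir"
done | sort -rh
```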

                                                      1. 5

                                                        I am a little amused by their progressive size steps, starting at 45,000 GB and working their way (rapidly) downward. I have to wonder what proportion of their free tier users are consuming >45,000 GB of space!

                                                        1. 3

                                                           Yeah, the step sizes are a bit baffling indeed, and I’d love to see some numbers: how many repos/accounts are affected by each, and so on. I’m not entirely sure I understand why those steps are necessary, seeing as the delay between 45GB and 5GB is only 3 weeks, with the biggest drop (45,000GB->7,500GB) being only a single day. Might as well start at 7,500GB then, or, since repos are only going to be locked and ample preparation time was given, just flag-day the 5GB.

                                                           They could just notify people now and introduce the 5GB limit on October 19th. That’d give people more time than when they start getting in-app notifications on October 22nd. But maybe I misunderstood when they’ll start notifying people.

                                                          1. 5

                                                            I mean, I can understand the gradual roll-out; first to see if it works at all, and second to give their infrastructure and customer support people both a chance to ramp up slowly. It’d be cool to see numbers though; I would have divided it up such that 10 repos get affected by the first change, 100 on the second, 1000 on the third, and so on, but there’s other valid ways of doing it they may want instead.

                                                        2. 3

                                                          The storage counts build artifacts as well, not just the git repo.

                                                          1. 1

                                                            To be specific:

                                                            Storage types that add to the total namespace storage are:

                                                            • Git repository
                                                            • Git LFS
                                                            • Artifacts
                                                            • Container registry
                                                            • Package registry
                                                             • Dependency proxy
                                                            • Wiki
                                                            • Snippets

                                                            https://docs.gitlab.com/ee/user/usage_quotas.html#namespace-storage-limit
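So a namespace's usage is the sum of those buckets; a toy sketch of the arithmetic against the coming 5GB quota (the sizes are made-up example values in bytes, not real API output):

```shell
# Example per-type sizes in bytes (made up for illustration):
repository=4294967296   # 4 GiB of git history
lfs=0
artifacts=2147483648    # 2 GiB of CI artifacts
registry=0; packages=0; wiki=0; snippets=0

total=$((repository + lfs + artifacts + registry + packages + wiki + snippets))
quota=$((5 * 1024 * 1024 * 1024))

if [ "$total" -gt "$quota" ]; then
    echo "over quota by $((total - quota)) bytes"
else
    echo "within quota, $((quota - total)) bytes free"
fi
```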

                                                          2. 2

                                                            You can fit a lot of source code into 5Gb, so that quota seems quite reasonable to me.

                                                            My ~/code directory is currently 872M. This includes projects going back almost 20 years (some of which aren’t even on GitHub but were on Sourceforge, Google Code, or BitBucket back in the day), a bunch of cache/data files that aren’t in git, projects I worked on but haven’t pushed (yet), generated/cached stuff that’s not in git, etc. All of that combined is probably at least 100-200M less, but I didn’t bother to really check. There are 398 directories/projects in total.

                                                             This doesn’t include binary uploads though; for example all of uni‘s git history is ~16M (it’s comparatively large as it includes a unicode database), but there are also ten releases, each with 10 binaries of ~1.5M, so that’s ~150M extra.

                                                            Still, I have a lot more projects than most, and thus far I probably would have had enough within the 5G limit. As you mentioned, even large projects tend to be on the order of several hundred M.

                                                            I feel it’s regrettable that the only way to upgrade is $19/month; I pay $5/month to FastMail and get 30G of storage with that. I guess it makes business sense to focus on the business customers.

                                                            1. 1

                                                              It seems very low, but it’s still quite a lot

                                                              Just don’t work on Linux (or Chromium or Firefox)

                                                              1. 3

                                                                How does GitLab account for forks? I believe GitHub internally uses a content-addressable store and so multiple clones of mostly-identical repos use the same storage. If one person pushes Chromium then they’re adding a lot of data, but then every subsequent person who pushes the same repo just adds a reference count. This is why they don’t care too much about large popular repositories: the cost is amortised over the users who have forks. It’s only large private or unpopular repositories that have a large per-user cost to the platform.
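Locally, git exposes roughly the same trick through its "alternates" mechanism; a sketch with throwaway paths (--reference does the same thing between unrelated clones):

```shell
# A second clone borrows objects from the first instead of copying them,
# which is roughly how a host can store many forks of one repo cheaply.
cd "$(mktemp -d)"
git init -q upstream
( cd upstream \
  && printf 'data\n' > f \
  && git add f \
  && git -c user.email=a@example.com -c user.name=a commit -qm one )
git clone -q --shared ./upstream fork
cat fork/.git/objects/info/alternates   # points at upstream's object store
```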

                                                                1. 1

                                                                  Neither of those are developed on GitLab to begin with, and I see no point in having a personal mirror there, either.

                                                                  1. 1

                                                                     There’s a few reasons. First, you may want to share a branch with other developers. This can be because it is a work in progress (and thus not suitable for submission yet). Pushing a branch which is a compilation of multiple separate patch series can let other users/developers test your work more easily. You may also want to run third-party CI (Azure, Travis (although I suppose that is less common now), etc.) against your branch.

                                                                2. 0

                                                                  1.7Gb of which is my brother’s website with huge photos

                                                                  Do not store large binary files in Git.

                                                              1. 20

                                                                 As somebody who has worked with Gitlab on a day-to-day basis for the last 3 years: this really shows that Gitlab is getting desperate.

                                                                 Since before the IPO, I have thought that the way they run their product lineup, favoring breadth over depth, is unsustainable. Even after the IPO, I still see them expanding their lineup into areas they have no talent strategy to back: MLOps, monitoring, remote development environments, code search… while core offerings such as Merge Requests, CI, Runner, and Gitaly barely get staffed to fix critical bugs.

                                                                 As Github has ramped up feature delivery in the last few years, I think very soon (if not already) they will surpass Gitlab in core feature depth. I have strong doubts about Gitlab as a company.

                                                                1. 3

                                                                   Since before the IPO, I have thought that the way they run their product lineup, favoring breadth over depth, is unsustainable

                                                                  Pretty much this right here is why I’ve been so resistant to give SourceHut a real go, even though the product matches more closely what I want in terms of usage. I can’t imagine how it won’t all come crashing down under the weight of all that breadth, and the numerous side projects (and presumably consulting) that they engage in to keep it afloat. (Yes, I recognize that giving them my money will help here…)

                                                                  I mean, if GitLab can’t do it, how can a team of 3?

                                                                  1. 8

                                                                    I mean, if GitLab can’t do it, how can a team of 3?

                                                                     In recent months, I have been trying to figure out why the hell Gitlab takes so long to address some of the obvious issues in their backlog that affect 30-40 paying enterprise customers. I spent the time to filter through their backlog and watch their PMs’ update videos on the Unfiltered channel.

                                                                     Side note: one thing Gitlab gets right is a super-transparent approach to running the company. Thanks to this, you can just dive in and get this information yourself.

                                                                     Turns out, the reason the feature keeps getting delayed is… the entire team that owns it is 3 engineers + 1 shared(?) PM + 1 shared(?) UX designer working full time. They have one huge backlog of high-priority ‘new’ features to further expand their product breadth, plus critical bug fixes: pretty much enough work to drown a team of this size for another year.

                                                                     They will never get to the feature request I have been watching because it only has impact on big enterprise customers. Gitlab’s product development pipeline is not optimized for that audience; it is optimized for the 5-person startup that is (1) smaller than Gitlab itself and (2) needs something quick to bootstrap everything. No feature depth is required for that target audience.

                                                                     I think such a product strategy made sense pre-IPO: they spent 3 years hinting at an IPO via different media channels while trying to make their product offering cover a massive range of things. This makes the company look very nice in the eyes of non-technical investors: the growth potential seems infinite. But as a result, their core product offering rotted away:

                                                                     • The Merge Request UI got slow, with a terrible bug that would leave the mergeability check spinning forever until the user manually refreshed the page.

                                                                     • CI Merge Trains were way ahead of everything else, but got no love because… the talent who worked on them was directed to lead new product lines. Now Github is coming out with a better feature offering.

                                                                     • Gitlab CI YAML was so feature-rich; all it needed was a saner syntax. They did… nothing there. Github Actions came out with proper YAML syntax versioning and a much better plugin ecosystem. BuildKite and CircleCI are also getting a lot better.

                                                                    Here are the things I think Gitlab should never have gotten into producing:

                                                                    • Kubernetes buzz: they used to advertise support for Serverless and WAF, both of which are deprecated today. A waste of effort.

                                                                    • Package registry: they underestimated the serious amount of talent required to build something that competes with Jfrog’s Artifactory and ‘s package registry solution. Their product has very little competitive edge except for “it’s already on Gitlab and we have Gitlab, so let’s just use it”.

                                                                    • The things they are trying to get into today:

                                                                      • MLOps: It does not work, don’t make it a thing
                                                                      • Semantic code search: trying to compete with SourceGraph using the search indexing engine that SourceGraph maintains?
                                                                      • Remote dev containers: competing with Github’s Codespaces, Gitpod, JetBrains Space, and Replit. If I understand this correctly, Gitlab is starting from behind, with a third to a half of the headcount that any of the players in this space dedicate to it.
                                                                      • Security scanning: this one is actually a sound product strategy, but instead of building things in-house, they should have aimed to enable better integration with third-party solutions.
                                                                    1. 7

                                                                      I mean, if GitLab can’t do it, how can a team of 3?

                                                                      Some of it has got to be the inefficiencies innate in a large organization, or having people who are not enthusiastic about just making a good product. Other parts are the inefficiencies in having engineering time doled out by those who don’t share the priorities of those doing the work.

                                                                      There are many things a small company can do, and do well, that a large company can’t hope to do better. Inertia is a factor in engineering organizations.

                                                                      1. 1

                                                                        Yeah, as an employee of a many-hundred-person company it doesn’t surprise me at all that a team of 3 can outmaneuver a VC-backed corporation. Sometimes when I compare what I can do within the structures that are enforced at work vs what I can do in my free time I feel like I’m a hundred times more productive on my own doing what I want to do. I can think of a few factors:

                                                                        • allowing myself the room to do something right the first time VS cutting corners to hit a management-enforced deadline and claiming we’ll go back later to clean it up but never actually doing it
                                                                        • decisions about what to do next are made by people who use the software every day and possess a deep knowledge of how it works and why
                                                                        • not being affected by political maneuverings of managers trying to advance their own career (obviously larger OSS orgs can have plenty of political drama, but usually this doesn’t manifest until you reach a certain size)
                                                                        • not having to ever use Jira
                                                                      2. 5

                                                                        Worth noting that SourceHut publishes yearly finance reports. In particular, the 2021 report mentions:

                                                                        We have three full-time staff paid under the standard compensation model, and about $700/mo in recurring expenses. Currently we make about $9200/mo from the platform, putting our monthly profit at about $1000, without factoring in consulting revenue. Thus, the platform provides for a sustainable business independently of our other lines of business.

                                                                        1. 4

                                                                          I should have chosen a better word than “afloat.” They charge money to use the service, presumably enough to keep things paid.

                                                                          However, my concern is more with keeping the service actually running given how much it does, and it scaling. There was this a few years ago, that made me wonder if this is the scaling strategy, or if there’s another plan. The site has a large number of services and a very small team. That’s generally a risky bus factor, but I am probably being over cautious.

                                                                          1. 5

                                                                            Bro, they have $1000/month in profit. They’re good. Nothing could possibly go wrong with that massive warchest in reserve. /s

                                                                            For real: one decent lawsuit would bankrupt them.

                                                                            1. 2

                                                                              What would you sue them for? Daring to compete with our dear overlord Microsoft?

                                                                              1. 1

                                                                                Literally anything, make up your own lawsuit. It just needs a sheen of merit. You don’t need to win, just run down the clock so they can’t afford to defend themselves.

                                                                                Lack of accessibility under ADA. DMCA or copyright violations. Some OSS licensing or patent garbage.

                                                                                1. 4

                                                                                  Drew has moved to the Netherlands, and it seems the Sourcehut company registration moved with him, so these kinds of US-style frivolous lawsuits with huge bills are probably less of an issue.

                                                                                  I get your point, but what other options are there if you’re just $some_guy looking to make a small tech business? Starting any business is always a matter of risk.

                                                                                  1. 1

                                                                                    what other options are there if you’re just $some_guy looking to make a small tech business?

                                                                                    For sure, initially you’re running on fumes but no one sues a party with no cash and no influence. But sourcehut has influence now. That means they need to raise prices and create a savings reserve. Ideally, make enough profit (e.g. $10k/mo) to pay for a lawyer to handle any legal cases. $1k/mo isn’t enough.

                                                                              2. 1

                                                                                one decent lawsuit

                                                                                which would force Dr. Evil to out himself.

                                                                          2. 2

                                                                            I can’t imagine how it won’t all come crashing down under the weight of all that breadth

                                                                            Yeah, but the difference here is that sourcehut is fully open source. If it crashes and burns (and I hope it doesn’t) you can just host it yourself. That’s one of the reasons I switched to sourcehut from Gitlab a while ago, despite being quite happy with Gitlab.

                                                                            I mean, if GitLab can’t do it, how can a team of 3?

                                                                            I don’t mean any disrespect to the folks at Gitlab, but a small team actually gives me confidence. Per Kelly’s rules for the Lockheed Skunk Works:

                                                                            1. The number of people having any connection with the project must be restricted in an almost vicious manner. Use a small number of good people (10% to 25% compared to the so-called normal systems).
                                                                            1. 2

                                                                              That’s one of the reasons I switched to sourcehut from Gitlab a while ago, despite being quite happy with Gitlab.

                                                                              You mean like GitLab? I’m sure neither SourceHut nor GitLab is trivial to host, but yeah, both are possible, at least.

                                                                              1. 3

                                                                                “GitLab’s open core is published under an MIT open source license. The rest is source-available.”

                                                                                Not all of GitLab is possible to self-host, at least not at the moment.

                                                                                1. 1

                                                                                  Fair.

                                                                            2. 1

                                                                              Looks like Gitlab is confident they can, (just) if they charge.

                                                                          1. 2

                                                                            Is there somewhere I can read this that’s not GitHub?

                                                                            1. 3

                                                                              Vim? Just clone it.

                                                                              1. 1

                                                                                I’m reading this on my phone. Oh well.

                                                                              2. 1

                                                                                It’s a peeve to me too. The line length is way too long, Markdown isn’t very well suited for blogging (too limiting), and seeing all the distracting Microsoft GitHub UI elements doesn’t help a reader, which is further compounded by Firefox’s Reader Mode not being available because it’s not a blog (but should be).

                                                                                1. 1

                                                                                  Reader Mode only works on blogs?! This is a weird limitation (and how does it know it’s a blog??)

                                                                                    I use the EasyReader add-on for Chrome; it works OK for this.

                                                                                  1. 6

                                                                                    https://github.com/mozilla/readability/blob/master/Readability-readerable.js

                                                                                    This is what Firefox uses to determine if a page is Reader Mode ready. It looks like it wants some HTML5 semantic elements so it can be fairly sure this is an article and not a e-commerce site, web app, etc. Microsoft’s GitHub is a web app GUI for a Git forge–so of course it shouldn’t be considered an article. Getting your message out there as the first priority makes a lot of sense, but folks should consider rendering their content on a blog if that’s what the content really is (e.g. our linked article is not source code).

                                                                                    Chrome

                                                                                    No. No to Google. No to Blink hegemony.

                                                                                  2. 1

                                                                                    Reader Mode in Firefox at least on Linux Desktop is available on GitHub, not on Lobste.rs for example though. A super bad hack on the line length can be resizing the window though (and praying that a particular website doesn’t switch to mobile mode).

                                                                                    1. 1

                                                                                      Reader Mode in Firefox at least on Linux Desktop

                                                                                      Interesting. Can confirm, however, Firefox on Android is a sad trombone. News sites and blogs almost always work though.

                                                                                1. 8

                                                                                  There seems to be some lede-burying going on. Not having heard of this before, even after reading the first few pages of the spec I’m still wondering what the justification is for reinventing floating-point arithmetic. Is this format more accurate? Faster? Easier to implement?

                                                                                  It’s been kind of nice having a single standard for floats. Aside from endianness, you don’t have to worry about which format some library uses, or having to tag the type of a value, or choosing which format to use in new code. Unlike, say, character encodings, which used to be a horrible mess before UTF-8 took over.

                                                                                  1. 10

                                                                                    The main sales pitch for Posits is that for a given number of bits, typical numerical computations retain more accuracy. Or conversely, you can use fewer bits to achieve a given level of accuracy.

                                                                                    Posits have better semantics. As a language implementor, I have beefs with the semantics of IEEE floats, which do not map properly onto real numbers, and have shittier mathematical properties than is necessary. The worst problem is NaN, and the rule that NaN != NaN. My language supports equational reasoning, and has an equality operator that is an equivalence relation: a==a, a==b implies b==a, a==b and b==c implies a==c. The semantics of negative 0 is also a big problem. The infinities are easier to deal with. These problems are fixed by Posits.

                                                                                    1. 6

                                                                                      Not all mathematical entities obey transitive equality, though, e.g. infinities. The behavior of NaNs is useful because the end result of a computation can reflect that something within it overflowed or produced an illegal result; you don’t have to test every individual operation.

                                                                                      If Posits don’t support infinities nor NaNs, then operations on them need different error handling — division by zero has to return some kind of out-of-band error code or throw an exception, and then the code that calls it has to handle it. That would be an issue for languages like JavaScript, where division by zero or sqrt(-1) don’t throw an exception, rather return an infinity or NaN.
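                                                                                      The propagation behavior described above can be seen in a minimal Python sketch (Python floats are IEEE 754 doubles):

```python
import math

# IEEE arithmetic never traps: out-of-range results become inf and
# invalid ones become NaN, and both propagate through later operations.
x = 1e308 * 10     # overflows to +inf
y = x + 1e6        # inf absorbs finite values: still +inf
z = x - x          # inf - inf is undefined, so this is NaN

# A single check at the end of a pipeline detects that something went
# wrong, instead of testing every individual operation.
print(math.isinf(y), math.isnan(z))   # True True
```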

                                                                                      1. 4

                                                                                        In IEEE floats, there are a bevy of non-real values: -0, +inf, -inf, and the NaNs. Posit has a single unique error value called NaR: Not a Real. This is returned for division by zero.

                                                                                        In IEEE float, positive underflow goes to 0 and negative underflow goes to -0. So 0 ends up representing both true zero and underflowed positive reals. -0 represents underflowed negative reals, in some sense, but it’s messier than that. This design is also not symmetric around zero. -0 is neither an integer nor a real, and in practice every numeric operation needs to make arbitrary choices about how to deal with it, and there is no mathematical model to guide those choices, so different programming languages make different choices. What’s sign(-0)? It could be 0, -0 or -1 depending on what mathematical identities you want to preserve, or depending on an accident of how the library code is written.
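                                                                                        A quick Python sketch of the -0 quirks (again, stdlib IEEE-double behavior):

```python
import math

# -0.0 compares equal to 0.0, yet the sign is observable wherever it
# leaks through, e.g. in copysign and atan2:
print(-0.0 == 0.0)                  # True
print(math.copysign(1.0, -0.0))     # -1.0
print(math.atan2(0.0, 0.0))         # 0.0
print(math.atan2(0.0, -0.0))        # 3.141592653589793 (pi)

# Negative underflow produces -0.0, as described above:
print(-5e-324 / 2)                  # -0.0
```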

                                                                                        In Posit, 0 denotes true zero, which is easy to understand mathematically. Positive numbers underflow to the smallest positive number. Negative numbers underflow to the smallest negative number. This design is simple, symmetric around zero, and doesn’t introduce a non-real number with unclear semantics.

                                                                                        1. 1

                                                                                          The difference between 0 and -0 is important for many numeric applications, +/-Infinity is important, NaN is important vs Infinity. 1/0 and 0/0 are mathematically distinct, those are real values, -1/0 is again a mathematically distinct value, saying they’re “non-real” is nonsense and does not match the most basic of mathematics.

                                                                                          What is important for the normal day-to-day needs of a person does not match what is important when you’re actually performing the kind of numerical analysis that is needed in scientific computation. So please don’t say that these are non-real, and please don’t claim that they aren’t necessary just because you personally don’t use them.

                                                                                          1. 5

                                                                                            You raise an important point. The precise semantics of IEEE floats are important to a lot of numeric code, because people code to the standard. Posits are not backward compatible with IEEE floats, and this is a serious issue that will hinder adoption. Posits break some of my code as well.

                                                                                            But there’s nothing sacred about IEEE floats. They aren’t the best possible design for floating point numbers. The Posit proposal comes out of a community that has discussed a variety of alternative designs: they write papers and hold conferences. These people work in high performance computing and are numerical analysts. There are papers on the new idioms that must be used to write numeric code using Posits, explaining the benefits.

                                                                                            Please do not claim that 1/0 and 0/0 are real numbers. This is not mathematically correct. These entities are not members of the set ℝ of real numbers. In mathematics, there are a variety of extensions to the reals that add additional values (such as infinities), but those additional values are not real numbers.

                                                                                            1. 2

                                                                                              I think that some of the things you describe as “coding to the standard” are the post-factum view of the IEEE standard being specifically designed to handle cases that were difficult to handle in other schemes. (Please note that I don’t want to claim anything bad about Posits by this (I haven’t studied the standard in enough detail) – I just want to point out that some things that we are occasionally annoyed by in IEEE 754 really do have practical use).

                                                                                              IIRC signed zero, for example, didn’t arise because the representation is nasty, nor as an unpleasant compromise to make other, more important things possible at the expense of an ambiguous representation. It was in fact a deliberate choice, which ensured that, for complex analytic functions, expressions that represent the same function inside their domain will usually have the same value on the boundary as well. This is a pretty useful property, as lots of engineering problems derive from, or are defined in terms of, boundary conditions. Many systems that don’t have signed zero require that you e.g. be careful to use either sqrt(z^2 - 1) or sqrt(z+1)*sqrt(z-1) depending on boundary conditions, even though they both mean the same thing.

                                                                                              The same thing goes for signed infinities. These aren’t error cases, they’re legit values that propagate through calculations. Pre-IEEE754 real number representation systems that didn’t allow infinities usually did so either at the expense of ambiguity in e.g. inverse trigonometric functions, or by quietly introducing special non-propagating extensions to handle these cases. (I don’t recall the specifics of any representation system that used projective extension. Somehow I doubt that would’ve floated too many engineers’ boats – I’d have a hard time coping with the existential dread induced by a discontinuous exponential function, for example. Even if it turns out to be numerically irrelevant in most cases, I’d have to either be careful about mixing numerical results with analytically-derived conclusions, or carefully rewrite all the math involved in analysing transient systems to account for discontinuities, and I’m really not looking forward to that).

                                                                                              Maybe Posits avoids these problems – like I said, I haven’t studied the standard in detail, and I’m not trying to bash on it. Just wanted to point out that lots of things which now look like standard warts were actually deliberate decisions made to handle real-life situations, not compromises introduced to allow for better handling of other things.

                                                                                              1. 1

                                                                                                IIRC signed zero, for example, didn’t arise because the representation is nasty, nor as an unpleasant compromise to make other, more important things possible at the expense of an ambiguous representation. It was in fact a deliberate choice, which ensured that, for complex analytic functions, expressions that represent the same function inside their domain will usually have the same value on the boundary as well.

                                                                                                Yes; see much ado about nothing’s sign bit.

                                                                                              2. 1

                                                                                                I was using “real” in the sense that these are mathematical concepts that exist in the reality of math. Much like 0, this was not acknowledged for most of history, and as such ℝ does not include them. Claiming that they do not exist in ℝ does not mean that they magically cease to exist, any more than 0 does not exist, or that irrational numbers do not exist.

                                                                                                1/0 is well defined and has a sound mathematical definition; it may not be in ℝ, but that doesn’t make it cease to exist, and its absence is simply an artifact of the age of ℝ. That there is a group of people doing arithmetic who don’t need a floating point format that reflects the possible non-finite values does not negate that those values exist, nor does it negate their value to other users.

                                                                                                Posits do not offer any particularly meaningful improvement in what can be represented (they demonstrably reduce it), and the circuitry to implement them uses more area and is slower.

                                                                                                1. 1

                                                                                                  Posits are meant to represent approximations of members of ℝ, the Real numbers. Therefore, it doesn’t make sense to include representations for things that aren’t members of ℝ.

                                                                                                  1. 2

                                                                                                    In that case posits aren’t a replacement for IEEE floating point, and should stop claiming that they are. The values being disregarded by posits because they aren’t in ℝ are useful; that is why they are there. In the early specification process, every feature was under a lot of performance pressure given the technology of the era. Even something we take for granted, gradual underflow, was on the chopping block until Intel shipped the x87 to show that other manufacturers were wrong to say that what was being specified was “impossible” to implement efficiently (it’s also why fp80 has a decidedly more wonky definition than fp32, fp64, etc.).

                                                                                                    So the perf gains that posits get from eliding these features were known, and very heavily hashed out, in the ’80s, when there was much more pressure against additional logic than there is today; yet even in that environment they decided to keep those features.

                                                                                                    So it isn’t surprising that eliding those values makes posits “simpler”, but you could also make them simpler and faster by having a fixed exponent. That would greatly reduce their usefulness, of course, but I give this absurd extreme to demonstrate that everything is a trade-off. Posits dropped values that are useful for real-world purposes because posit folks don’t use them, and that’s fine, but you don’t get to claim you have a replacement when you are fundamentally not solving the same problem.

                                                                                                    Also, as one final thing: posits elide those values to gain some performance back, yet despite that, hardware implementations are slower and use more area. So to me posits seem like a lose/lose proposition.

                                                                                        2. 1

                                                                                          Also, posits always use banker’s rounding.

                                                                                          1. 5

                                                                                            Yup, but there are real reasons you might want different rounding modes, which is why IEEE 754 specifies them.

                                                                                            1. 2

                                                                                              Okay, but the need to control rounding modes is pretty rare, and support is hit and miss. Hardware doesn’t provide a consistent way to control the rounding modes, if they are supported at all, and most programming languages don’t provide much, if any support. The current Posit standard focuses on just the core stuff that everybody needs, and that’s good. Features like rounding modes that not everybody is going to implement should be optional extensions, not mandatory requirements, and should be added later, if Posits take off.

                                                                                              I’ve personally not had a use for rounding modes, other than in converting floats to ints. The only rationale I’ve seen for rounding control on arithmetic operations is as a way for numerical analysts to debug precision problems in numeric algorithms by using interval arithmetic. The Posit community has a separate proposal for doing this kind of interval arithmetic using pairs of Posits (“valids”) that is claimed to have better numeric properties than using IEEE rounding modes, but I haven’t read more about that than the summary.

                                                                                              1. 1

                                                                                                Okay, but the need to control rounding modes is pretty rare, and support is hit and miss.

                                                                                                The need to care about numerical accuracy for floating point numbers in general is pretty rare. A lot of uses of floating point numbers are very happy with a hand-wave probably-fine approximation. For example, a lot of graphics applications have a threshold of human perception that is far higher than any floating point value (though can have some exciting corner cases where you discover that your geometry abruptly becomes very coarsely quantised when you render an object far from the origin).
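                                                                                                The far-from-origin quantisation effect is easy to reproduce by round-tripping through single precision; a Python sketch using only the stdlib `struct` module:

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (an IEEE double) to the nearest float32."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Near the origin, float32 resolves steps of around 1e-7...
print(to_f32(1.0 + 1e-6) == 1.0)                   # False
# ...but past 2**24 it can no longer represent consecutive integers,
# so geometry far from the origin snaps to a coarse grid:
print(to_f32(16777217.0) == to_f32(16777216.0))    # True
```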

                                                                                                For applications that do care, support is generally very good. Fortran has had fine-grained control over rounding modes for decades and it is supported by all Fortran compilers that I’m aware of. Most of the code that cares about this kind of thing is written in Fortran.

                                                                                                C99 also introduced fine-grained control over rounding modes into C. As far as I know, clang is the only mainstream C compiler that doesn’t properly support them (or, didn’t, 10 years ago - I think the Flang work has added the required support to LLVM and the front-end parts are fairly small in comparison). GCC, Visual Studio, XLC, and ICC all support them.

                                                                                                1. 1

                                                                                                  In that case there is no difference in rounding modes; “banker’s rounding” is what I would call “round to even”, though I think the more formal name is “round to nearest, ties to even” or some such.

                                                                                          2. 8

                                                                                            Posits is one of those “obviously better” things that appear from time to time in techie circles, a bit like tau instead of pi.

                                                                                            I found the following previous submissions:

                                                                                            Edit: unums seem to be a superset of posits, here’s a submission about them: Unums and the Quest for Reliable Arithmetic. And Unums: A Radical Approach to Computation with Real Numbers (Gustafson’s paper).

                                                                                            1. 7

                                                                                              Legit though, tau is better

                                                                                              1. 4

                                                                                                I await your 1.5 hour YouTube video explaining it ;)

                                                                                                1. 7

                                                                                                  No need for a youtube video! A circle is uniquely defined by its center point and radius, but π is the ratio of the circumference to the diameter. This makes π exactly half the “elegant” value, so a lot of equations add a factor-of-two “correction” that goes away if you use τ instead:

                                                                                                  • A 1/4 turn of a circle is π/2 radians (instead of τ/4 radians)
                                                                                                  • sin and cos are periodic around 2π (instead of τ)
                                                                                                  • Most double integrations are of the form 1/2 Cx²: displacement is 1/2 at², spring energy is 1/2 kx², kinetic energy is 1/2 mv², etc. The one exception is the area of a circle, which is 1·πr² (instead of 1/2 τr², which would fit the pattern).

                                                                                                  It’s not like the end of the world that we use π instead, it’s just inelegant and makes things harder for a beginner to learn.

                                                                                                  1. 7

                                                                                                    I was a member of the cult of τ back in high school and in my first years of engineering school, mostly because I was on really bad terms with my math teacher :-D. So at one point I τ-ized some of the courses I took.

                                                                                                    I can’t say I recommend it, at least not for EE. It’s not bad, but it’s not better, either. I was really in awe about it before, because it made the basic formulas “more elegant” and “mathematically beautiful”. But once I did enough math to run into practical issues, it just wore off, I found the effect negligible at best, and in some cases it just made some easy things easier at the expense of making hard things a little harder.

                                                                                                    First off, I found you wind up playing a lot of correction factor whack-a-mole. For example, working with τ instead of π makes it easier to work with sine signals (and Fourier series of periodic signals), because they’re periodic over τ. But it makes it harder to work with rectified sine signals, because those are now periodic over τ/2.

                                                                                                    Most of the time, I found that working in terms of τ just moved the correction factors from pages 1-2 of my notes from each lecture to page 3 and onwards. (Note that I’m also using “rectified” rather loosely here – lots of quantities wind up effectively looking like rectified versions of other quantities, not just voltage fed to a rectifier).

                                                                                                    Then there were a bunch of cases where the change was basically inconsequential. For example, lots of the integrals that were brought up in various τ-related topics on the nerd forums I frequented were expressions written in terms of 2π, which seemed annoying to work with. Then I ran into the same integrals in various EE classes, except everyone was just writing (and using them) in terms of ω, as in 2πf. Whether you define it as 2πf or τf has pretty much no effect. You derive lots of stuff in terms of ω anyway, but ultimately, you really want to end up with expressions in terms of f, because that’s what you can actually measure IRL.

                                                                                                    In most of these cases, working in terms of τ just means you end up with an expression that starts with 1/τ instead of 1/2π (or τ instead of 2π), which hardly makes much of a difference. The expressions you end up with are all in the frequency domain, so their physical interpretation is in terms of “how fast is it spinning on the circle?”, not lengths or ratios of lengths, so τ and π work equally well.

                                                                                                    And then there were a whole lot of cases that you could simplify much more efficiently by applying some slightly cleverer math. For example, working in terms of τ does simplify a bunch of nasty integrals relevant to transient or oscillating regimes, as in, you don’t have to carry an easily-lost constant term in front of the integral. What really simplifies it though is working in s-domain via the Laplace transform, which you can do without caring if it’s τ or π because you’re working in terms of ω anyway, and which allows you to skip the whole nasty integral part entirely.

                                                                                                    Finally – I didn’t know it then, but I did think about it later – there are various things that work worse in terms of τ, like some of the discrete cosine transforms, which have nice expressions in terms of π, not 2π.

                                                                                                    Basically I wasted a couple of weeks of a summer vacation 15+ years ago to find out that, overall, it sucks just as much with both, it’s just that the parts that suck with π are different from the ones that suck with τ. I think that’s when I realised I should’ve really become a musician or, like, drop it all and go someplace nice and raise goats or whatever :(.

                                                                                                    (FWIW a lot of math I learned in uni was basically “how to avoid high school math”. I knew from my Physics textbook that calculus is really important for studying transient regimes, so by the time I finished high school I could do pretty hard integrals in my head. Fast forward to my second year of EE and ten minutes into the introductory lecture my Circuits Theory prof goes like okay, don’t worry, I know the math classes you guys took don’t cover the Laplace transform – I’m going to teach you about it because *gestures at a moderately difficult integral* I haven’t the faintest clue how to solve this, I haven’t done one of these since I was in high school and that was like forty years ago for me).

                                                                                                    1. 1

                                                                                                      Eh, I’m not convinced by the special pleading for the tau version of Euler’s identity:

                                                                                                      https://tauday.com/tau-manifesto#sec-euler_s_identity

                                                                                                      I prefer the original.

                                                                                                      (Apparently Euler was the one who popularized pi the symbol, and he vacillated between it meaning pi or tau.)

                                                                                                      I like pi because it hearkens back to the primeval discovery that if you have a round object, and measure its diameter with a piece of string (more easily done than its radius), then its circumference, the lengths don’t divide easily. Why is that?

                                                                                                      1. 4

                                                                                                        TBH I don’t understand why people find e^iπ + 1 = 0 so elegant. Why +1? You’re sneaking negative numbers in there to make the equation nice.

                                                                                                        I like pi because it hearkens back to the primeval discovery that if you have a round object, and measure its diameter with a piece of string (more easily done than its radius), then its circumference, the lengths don’t divide easily. Why is that?

                                                                                                        Even easier than measuring the diameter of a circle with a string is measuring the diagonal of a square, which gives you the even more primeval (and much easier to prove!) discovery that the diagonal doesn’t divide the sides of the square. It’s a lot easier to prove sqrt(2) is irrational than pi is irrational!
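For comparison, the classic parity argument (the standard textbook proof, not something from the thread) fits in one line:

```latex
\sqrt{2} = \tfrac{p}{q} \text{ in lowest terms}
\;\Rightarrow\; p^2 = 2q^2
\;\Rightarrow\; p = 2k
\;\Rightarrow\; q^2 = 2k^2
\;\Rightarrow\; q \text{ is even as well, contradicting lowest terms.}
```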

                                                                                                        1. 1

                                                                                                          Since the invention of the potter’s wheel, accessing an object that’s close to perfectly circular has been easier than one that’s perfectly square. According to El Wik, the potter’s wheel is from ~4000 BCE, so a curious kid in ancient Babylon could wonder about the ratio of the circumference to the diameter long before a more privileged one in ancient Greece learned how to construct a square using straight-edge and divider and measured the diagonal (and got murdered by the Pythagoreans for exposing the secret)[1]

                                                                                                          In day-to-day use, diameters are almost universally used: pipes, firearm calibers, screws… I have a tape measure with a scale that’s multiplied by pi so that you can get the diameter by wrapping the tape around an object.

                                                                                                          Sure, all of this can be handled by tau too, but outside the classroom, a radius is much more abstract than a diameter.

                                                                                                          [1] actual murder probably apocryphal

                                                                                                        2. 2

                                                                                                          https://tauday.com/tau-manifesto#sec-euler_s_identity

                                                                                                          I was already convinced that tau is better for constructing radian arguments to trig functions (tau is a full turn). But the Euler identity is so much more elegant using tau. The pi version never made intuitive sense, but the tau version does make intuitive sense to me. Thanks for pointing it out.

                                                                                                          1. 2

                                                                                                            I find the Euler identity most elegant in its full form.

                                                                                                             e^(ix) = cos(x) + i sin(x)
                                                                                                            
                                                                                                    2. 4

                                                                                                      I prefer the Indiana definition :D

                                                                                                      1. 9

                                                                                                        I wrote about the history of that redefinition! It’s wild. https://buttondown.email/hillelwayne/archive/that-time-indiana-almost-made-p-32/

                                                                                                        1. 1

                                                                                                          I do love that they didn’t even get a correctly rounded value :D

                                                                                                1. 1

                                                                                                  This is weird to me:

                                                                                                  RHEL 9 also maims the openssl library by disabling SHA1 support by default.

                                                                                                  But isn’t SHA1 disabled for security reasons?

                                                                                                  https://www.zdnet.com/article/openssh-to-deprecate-sha-1-logins-due-to-security-risk/

                                                                                                  1. 2

                                                                                                    ssh-rsa keys are to be phased out, but (imho) you can either take the stance of not breaking existing ones, or prod users to change by telling them it “won’t work on this machine”. I wouldn’t say “security reasons” alone covers it, because the other end still has the public key… How well that transition has worked out worldwide is another story; I’m not exactly hopeful. Unless you’re only talking about the use of “maim” - same thing, basically.

                                                                                                    1. 2

                                                                                                      The problem isn’t the keys, it is the key exchange protocol. You can use RSA keys with SSH still, you just can’t use the confusingly named ssh-rsa protocol, which uses a known-to-be-weak hash, which would allow an attacker to craft a malicious host key that would impersonate a valid host key and intercept your connections.

                                                                                                      The protocol is still there, it’s just disabled by default. It’s easy to explicitly enable it for specific hosts if you’re willing to take the risk, but I’d rather that SSH had a policy of secure by default and required me to explicitly opt into downgrading than having a policy of insecure by default and requiring me to explicitly disable insecure options.
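For the record, the per-host opt-in is a couple of lines in ~/.ssh/config (the host alias is made up; these are standard OpenSSH options, though PubkeyAcceptedAlgorithms was spelled PubkeyAcceptedKeyTypes before OpenSSH 8.5):

```
Host legacy-box
    HostKeyAlgorithms +ssh-rsa
    PubkeyAcceptedAlgorithms +ssh-rsa
```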

                                                                                                      1. 1

                                                                                                        Zero people I’ve spoken to have migrated to rsa-sha2-256/512, so from my (limited?) vantage point ed25519 is the new de facto default - but I agree with your point; I was overly brief.

                                                                                                  1. 1

                                                                                                    The history of ERNIE 1 through ERNIE 5 is fascinating, but you have to wonder about the practicality of using a hardware RNG for all your randomness today, even if “quantum”. A properly built fast-key erasure CSPRNG, initially seeded with a true random seed of course, is generally preferred over a HWRNG for high quality randomness. The Linux RNG uses a ChaCha20 fast-key erasure setup, and its performance is respectable:

                                                                                                    $ dd if=/dev/urandom bs=16M | pv -S -s 100G > /dev/null
                                                                                                     100GiB 0:04:45 [ 358MiB/s] [================================>] 100%
                                                                                                    
                                                                                                    1. 2

                                                                                                      Does anyone know about the legal standing of random number generators with regards to lotteries (which the Premium Bonds were)? Maybe they have to be hardware-random because of statute.

                                                                                                      I seem to remember the Nevada Gaming Commission getting pretty technical when it came to slot machines.

                                                                                                      1. 5

                                                                                                        I made a Freedom of Information Request to the Government Actuary’s Department in the U.K. I’ll report back with their response.