1.  

    The world record in question appears to be “number of digits computed”, which wasn’t totally obvious from a skim of the text.

    1. 16

      So, what’ve you been paid and what are you being paid now?

      Here, putting my money where my mouth is: 55K -> 60K -> 125K -> 160K now, not including contracting and consulting and founding and other misadventures. All base, not counting (usually laughable, never worth it) equity.

      1. 12

        Approximations from memory, in some kind of parseable format:

        year,salary,tc,cause
        2008,37000,39000,first dev job
        2009,42000,48000,merit raise
        2010,53000,62000,merit raise
        2011,64000,70000,merit raise
        2012,75000,115000,merit raise + acquisition
        2013,81000,83000,COL raise
        2014,115000,120000,role change
        2015,117000,121000,COL raise
        2016,124000,127000,merit raise
        2017,140000,140000,retention raise
        2017,176000,195000,new job with reports
        2018,183000,202000,COL raise
        2019,140000,170000,laid off at end of 2018 with new job early 2019
        2020,140000,174000,new bonus and RSU structure kicks in
        
        1.  

          Can I ask, what is tc?

          1. 7

            My guess is “total compensation” i.e. salary + bonuses

            1. 7

              Total Compensation, which is generally calculated as salary + bonuses + equity if RSUs and not options. Some folks will include 401k in it, but that’s rare because 401k matches are all over the place and are a function of your salary anyway.

              1.  

                That helps, thank you both.

          2.  

            From 2012 to now: $60k -> €63k -> €68k -> €78k -> €110k -> €98k. A couple of those years I also got around €35k in bonuses, but those will probably prove to be outliers in the long run.

            I took a pay cut at the start of the year to have a job with more flexible hours and less stress so I could spend more and better time with my family. It has been 100% worth it and I wish I’d done it sooner.

            1.  

              In terms of cash, I’ve gone:

              [Pittsburgh]

              2010-11: $7.25-$10/hr (I was a high schooler / college freshman)

              2012: $15/hr interning where @colindean was at the time

              [Chicago]

              2013: $25/hr at a startup

              2014: $75k/yr + RSUs at my first long-term full-time job

              2015: $90k/yr + RSUs (promotion)

              2016-2018: $90-150/hr doing freelancing

              2018: $95k/yr + a little equity working 3/4 time at a startup

              2019: $145k/yr + more equity switching to full-time and also getting a raise

              Almost all of this has been full-stack web development in Rails or Clojure.

              1.  

                Oh hai!

                1.  

                  Hope things are going well in your post-IBM life! I miss the burgh!

              2.  

                Boston area, software developer, primarily backend.

                • 66 (base, thousands of USD), starting job out of college in 2011, where I had interned before
                • 69, standard raise
                • 88, changed employer, 2013, did not negotiate
                • 99, when manager noticed how little I was paid
                • 103, standard raise
                • 106, standard raise
                • 108, standard raise
                • 118, raise when I pointed out how badly underpaid I was
                • 142, changed employer in 2019 and actually negotiated my salary (although insurance plan not as good, which cuts several thousand out of this)

                I could probably be making 150+ depending on employer, or 180+ if I worked for an employer I hated.

                I’ve tried mentioning my salary to other developers in contexts when it made sense, but they’ve never offered, and I’ve never asked. Not really sure how to get that conversation going.

                1.  

                  A friend of mine did very well on his startup equity at 4 startups in a row. But yeah, your mileage will vary.

                1. 38

                  Rust.

                  The dev experience is so much nicer than my usual C/C++. After spending a lot of time writing and doing code reviews of C, C++, and Rust, I am pretty convinced that it is much easier to write correct code the first time in Rust than it is in the others, and Rust has equally nice performance properties but is much easier to deploy.

                  I spend most of my day working on high performance network software. I care about safety, correctness, and performance (in that order). The Rust compiler pretty much takes care of the first item without any help from me, makes it very easy to achieve the second one, and is just as good as the alternatives for the third.

                  1. 6

                    I’m curious if you’ve ever tried another, non-C/C++/Rust, language (anything garbage collected or dynamically typed) for projects where you don’t necessarily care about the fastest runtime? Is that ever relevant, or do you really only work on “high performance network software”?

                    1. 8

                      I work in games, and my experience is very similar to mortimer’s. I would go Rust with no hesitation.

                      I’ve done a lot of C# with Unity, and quite a bit of Go. I’d pick Rust over both of them any day of the week.

                      The big thing with C# in games is that you lack control, and you generally end up doing more memory management than even in C++; working around the garbage collector is not fun.

                      1. 7

                        Sure, there is some stuff where performance doesn’t matter too much, and for those we’re free to choose something else. Python is pretty popular in this space, though even for these things I’d still consider using Rust instead just because the compiler makes it harder to screw up error handling and such.

                        I did a transparent network proxy in Ruby once, and that was super nice because Ruby is super nice, but if I were to do it again today I’d pick Rust. Most of the code wasn’t something you’d get from a library, and the vast bulk of the bugs I had to handle would have been squashed by a better type system (this thing that is usually a hash is suddenly an array!) and better error handling (this thing you thought would work did not, and now you have a nil object!).

                        Ruby (and Python too) just doesn’t help you at all with these things, because it’s dynamically typed and will usually return nil to indicate an error (or Python will sometimes throw, which is just offensive). This paradigm, where the programmer has to manually identify all the places where errors can happen by reading the documentation and then actually remember to do the check at runtime, is really failure-prone: inevitably someone forgets to check, and then you get mystery failures at runtime in prod.

                        Rust’s Result and Option types force the programmer to deal with things going wrong, and translate the vast bulk of these runtime errors into compile-time errors (or super-obvious-at-code-review-time unwrap()s that you can tell them to go handle correctly).
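
                        A minimal made-up sketch of what that buys you (parse_port and the host:port format are invented for illustration, not from the proxy):

                        fn parse_port(pair: &str) -> Option<u16> {
                            // None if there is no ':' -- the "hash is suddenly an array!"
                            // class of surprise becomes a visible case instead of a nil.
                            let (_host, port) = pair.split_once(':')?;
                            port.parse().ok() // None if the port is not a valid u16
                        }

                        fn main() {
                            assert_eq!(parse_port("example.com:8080"), Some(8080));
                            assert_eq!(parse_port("example.com"), None); // Ruby would likely hand you nil here
                            assert_eq!(parse_port("example.com:x"), None);
                            // The compiler will not let the failure case be silently ignored:
                            match parse_port("example.com:443") {
                                Some(port) => println!("connecting on {port}"),
                                None => eprintln!("bad host:port pair"),
                            }
                        }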

                        I haven’t really done any professional Java dev, but the people I know who do seem happy with it. They don’t have any complaints about performance, and they deploy in places where performance matters. When they do complain about Java, they complain about the bloat (?) of the ecosystem: FactoryFactoryFactories, 200-line backtraces, needless layers of abstraction, etc. I don’t think they’re looking to change, so they must be happy enough. When I did Java in school I remember lots of NullPointerExceptions though, so I assume the same complaint I have about Ruby / Python / C / C++ error handling would apply to Java.

                        For personal projects it was usually Ruby (because Ruby is super nice), but lately all the new stuff is Rust, because the error handling is so much better and it’s easier to deploy. Even when I don’t care about it being fast, I do care about it being correct.

                      2. 1

                        Another reason: Attract good developers!

                        That’s the flip side of all the good technical reasons, plus actually some of the bad ones: learning curve and newness.

                        There are too few Rust and Haskell jobs, an OK number of C and C++ jobs, and absurdly many Java jobs.

                        1. 11

                          To test the ‘learning curve for newbies’ concern, I actually gave Rust to a new employee (fresh out of uni) to see what would happen. They had a background in Java and hadn’t heard of Rust before then. I gave them a small project and suggested they try Rust, then sat back to see what happened. They were productive in about a week and had finished the project in about two weeks, and that project has been running in production without any additional care or feeding for over a year now. This experience really cemented for me that Rust isn’t that hard to learn, even for newbies. The employee also seemed to enjoy it (this is a bit of an understatement), so if new staff can be both productive and happy then I’m not too concerned about learning curves and stuff.

                          1. 4

                            The vast majority of people who write about Rust online mention fighting the borrow checker. Your new folks didn’t have that problem?

                            1. 8

                              Having helped a few co-workers and a fresh intern with Rust questions as they learned it, I’ve come up with a theory: fighting the borrow checker is a symptom of having internalized manual memory management in some previous language before learning Rust. And the especially severe cases come from having internalized some aspect of manual memory management wrong. People who don’t have that are much more likely to be open to listening to the compiler than people who “know” they’re already implementing it right and just need to “convince the compiler”.

                              1. 8

                                I find that I can often .clone() my way out of problems for now and still be correct.

                                Sometime later I can revisit the design to get better performance.
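
                                A tiny made-up example of the pattern (Config and consume are hypothetical):

                                #[derive(Clone, Debug)]
                                struct Config { name: String }

                                fn consume(cfg: Config) -> String { cfg.name }

                                fn main() {
                                    let cfg = Config { name: "prod".to_string() };
                                    // Passing cfg by value moves it; using it again afterwards is
                                    // exactly what the borrow checker would reject. The clone
                                    // sidesteps the fight at the cost of a copy, and can be removed
                                    // later by borrowing (&Config) once the design is revisited.
                                    let label = consume(cfg.clone());
                                    println!("{label}, original still usable: {:?}", cfg);
                                }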

                                1. 4

                                  Oh yes, new people fight the borrow checker but it just isn’t that bad (at least not in my experience) and they seem to get past it quickly. The compiler emits really excellent error messages so it’s easy to see what’s wrong, and once they get their heads around what kinds of things the borrow checker is concerned about they just adapt and get work done.

                                  1. 3

                                    I felt that I wasn’t fighting it. It was difficult, but the compiler was so helpful that it felt more like the compiler was teaching me.

                                    (That said, I was coming from Clojure, which has terrible compilation errors.)

                                    1. 1

                                      Not sure about his employee’s perspective. But, I’m new to writing in Rust, and I think the frustration with the borrow checker is not understanding (or maybe just not liking?) what it is trying to do. My experience has been that at first I wanted to just try to build something in Rust and work through the documentation as I go. In that case the borrow checker was very frustrating and I wanted to just stop. But, instead I worked my way through the Rust book and the examples. Now I’ve picked up the project again, and it isn’t nearly as frustrating because I understand what the borrow checker and ownership stuff is trying to do. I’m enjoying working on the project now.

                                    2. 2

                                      This experience really cemented for me that Rust isn’t that hard to learn, even for newbies.

                                      Counter anecdata: we have a team at $job that works entirely in Rust, and common complaints from the team are:

                                      1. The steep learning curve and onboarding time for new team members
                                      2. The Very Slow compile times

                                      We aren’t hiring many folks straight from uni though, so perhaps counter-intuitively, having more experience in other languages may make learning Rust more difficult for some, and not less? Unsure.

                                1. 5

                                  Linux user for 15 years, but I’m still unclear on something: Are all of these things literally in the kernel, or do these line items refer to APIs and other support code being added to the kernel to enable integrations for these things?

                                  1. 7

                                    It’s all literally in the kernel. The APIs, most drivers, etc. are all part of the kernel, due to Linux’s monolithic design.

                                  1. 1

                                    and wanting to override the seemingly massive default browser font size of 16px (which, it turns out, is correct)

                                    So while this guy’s blog renders at 16px, the linked article renders at an obnoxiously-large 22px…

                                    1. 3

                                      Heads-up: Author is a woman.

                                      1. 0

                                        Please don’t refer to the author of this piece as a “guy”.

                                      1. 2

                                        Ooh, this makes me uncomfortable. The basic rationale of structure-sharing is interesting, but that is at heart a performance decision. The use cases people have talked about are also interesting, but I don’t think most use of dicts relies on or even benefits from this behavior.

                                        My preference would have been to expose OrderedDict as a supported implementation (if it wasn’t already), but left it behind the Dict abstraction.

                                        If a future version of Python wants a different behavior, this will be impossible to safely undo at scale, since there’s nothing static analysis can do to check for reliance on this behavior.

                                        1. 3

                                          Yeah, the not-constraining-future-implementations way to go would have been to keep the order unspecified and to actually add randomness to the hash iterator at the same time as the storage was made deterministic, so as to render any code that depends on the ordering obviously broken.

                                          For a lot of years, Perl advertised hashes as being unordered, but the reality was that the retrieval order on iteration was deterministic in 99.99% of cases, and a few packages, entirely by accident, ended up with behavior reliant on this (mostly test cases relying on output being in a certain order, but also a few cases where e.g. an object depended on one of its slots being initialized before another, and that just happened to be true with the fixed “random” order). Then, in order to harden Perl against potential DoS attacks, a change came along that perturbed all hash values using a random seed chosen at process startup. And it got pushed through and released by vendors pretty quickly because it was considered a security fix. Needless to say, all of those buggy packages that were relying on the previous ordering got detected very quickly :)
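
                                          For what it’s worth, Rust’s standard HashMap works the way you describe: iteration order is unspecified and the hasher is randomly seeded per map (RandomState), so order-dependent code tends to break on the very next run rather than years later. A quick sketch:

                                          use std::collections::HashMap;

                                          fn main() {
                                              let mut slots = HashMap::new();
                                              for key in ["alpha", "beta", "gamma", "delta"] {
                                                  slots.insert(key, key.len());
                                              }
                                              // The print order usually differs from run to run, so any
                                              // accidental ordering assumption surfaces immediately.
                                              for (key, len) in &slots {
                                                  println!("{key}: {len}");
                                              }
                                          }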

                                        1. 9

                                          I love everything about this. What a great solution. :-)

                                          1. 4

                                            Yep, somehow it’s really nice to see these amazing results!

                                            The sad thing is that an hour of an average engineer’s time is worth more on the open market than this TV, so rationality would say these efforts are rarely worth the time spent. Yet we do it anyway; there’s a less tangible recreational value in it.

                                            I’ve got a flaky 10-year-old TV myself. It’s probably not worth anything, but it’s good enough for me. Lately it had problems powering on properly, and a friend using the same model had the same problem. We fixed both our TVs by taking them apart and, sure enough, finding the same cheap capacitor bulging like there’s no tomorrow, then procuring replacements and swapping it out (and the rest of them while at it). Combined with the time spent unmounting the TV from the wall and so on, this must have far outweighed both its value and the price of a new TV, yet there’s a rewarding feeling in fixing it and knowing that I’ll be able to use it a couple of years longer.

                                            1. 7

                                              If the time did not displace working time, then there’s no loss of income for you from doing this.

                                              Electronics are also likely priced too cheaply because environmental and labour costs are discounted through poor living standards for the workers and lax environmental regulation. If we lived in a world with a flatter income distribution and better environmental controls then we’d probably reuse, repair and recycle a lot better.

                                            2. 2

                                              I want to have friends doing stuff like this!

                                              1. 0

                                                For me, it’s more at the level of “not bad”. Here’s the disappointing bit:

                                                It would be nice to apply the corrective filter to the whole screen instead of just a video playing in an application, but I couldn’t think of a way to do it.

                                                If the author had managed to get the filter into the TV’s firmware or something like that, I would be truly impressed.

                                                1. 4

                                                  It would indeed be sweet to have the correction running all the time. An FPGA dev board with two HDMI ports could be a realistic solution here.

                                                  However, I can’t even begin to imagine the toll such hardware hacking would take on my free time. Sometimes an “80% there” solution is good enough.

                                              1. 2

                                                 I can’t put my finger on it, but I don’t like this article. I also find it a bit weird to first write an article on how hard it is, then one using OpenBSD (that will surely go over well with people who have never administered a web server OR used the OS), and then this one.

                                                 Not that I’m disagreeing with the basic premise and outcome of the article (I… guess?), but that doesn’t mean I have to like the article itself.

                                                 Disclaimer: I’ve self-hosted a mail server for 15+ years and have done so at work. I also ran OpenSMTPd on OpenBSD for a while and found it lacking/too complicated to replicate the features I wanted, which I’d describe as “not too fancy”, but that was a few years ago. I’m generally a fan of Postfix.

                                                1. 6

                                                   I interpreted the article about mail hosting being hard as attempting to inform people that running your own mail server isn’t as incredibly difficult as it’s commonly said to be.

                                                  1. 0

                                                    I’m not sure that everyone read the whole thing instead of the headline and opening paragraph. :P

                                                    1. 1

                                                       I skimmed that one, and I don’t understand why the blog post title contradicted the text. I wonder if it was intended to be sarcastic or mocking, somehow.

                                                1. 9

                                                   How the hell is it a headache to zip a git checkout? Also, what does the entry-level blurb have to do with git bundle? If you can’t get it to work, it’s probably the code’s fault and not that it wasn’t properly zipped? I don’t understand any of the premise or the problem. Thanks for the hint about git bundle, though.

                                                  1. 2

                                                    Note that a simple archive of the entire repository will include site-local data (perhaps even sensitive data like access credentials for databases or API endpoints) that is otherwise excluded from the repository by gitignore rules. If I frequently had to create repository archives, I would probably grow annoyed with cleaning up those files before archiving, or I might forget to do so on occasion. git bundle exists for this exact use case, which is distinct from simply archiving your work tree that happens to have a git repository in it.

                                                    1. 1

                                                      Not to mention that the zip file could include hook scripts, which might be a security risk.

                                                      1. 1

                                                        I have nothing against the use of git bundle and I’m happy I now know it exists, my criticism was mostly aimed at the perceived motivation of the blog post :)

                                                      2. 0

                                                         Also, bundle gives you smaller files than simply creating a zip file or a tarball, without the need for an additional extraction tool. And it’s very easy to screw up the creation/extraction of archives such that you end up with a whole bunch of crap in your current working directory rather than in the subdirectory you expected it to be extracted to.

                                                        The scenario is a bit contrived, but the problems it solves are not.

                                                        1. 1

                                                           I’m not disagreeing here; there’s a reason I inspect every zip file AND always make a new dir and unzip it in there, on principle.

                                                           But I’m not really sure the overlap of “people who create a mess with zip files” and “people who would know about git bundle” is meaningful.

                                                          1. -1

                                                            I would imagine that the set of people who realise git bundle exists in the first place is very small, but has now grown a bit. git bundle is a superior solution to using archives for this kind of thing, in spite of its command line interface being leaky.

                                                            Also, when a tool makes it really easy to shoot yourself in the foot, I blame the tool, not the user. It’s far too easy to accidentally create a zip file that will bite the person extracting it, and similarly easy to forget to check on the receiving side.

                                                      1. 1

                                                        AT&T ZTE 223 that I bought unlocked off Amazon for like $40. (The seller had clearly purchased it, cut open the package, unlocked the phone, and taped it back in.) Works pretty well! Good battery life (a week or more), assignable ring tones, decent calendar and alarms.

                                                        Big downside is that it doesn’t have an SD card, so I have to turn on Bluetooth to load custom ringtones. And it doesn’t have call recording.

                                                        I use a laptop when I need internet. Laptops have gotten very thin and light these days, for better or worse.

                                                        1. 6

                                                          I disagree with more things in this list than I agree with, a few things seem particularly pathological:

                                                          37: This meme is just outdated. Please stop perpetuating it.

                                                          49: This is not that hard to fix, even in small organisations.

                                                          32: As edef put it, this is how you get low-frequency, high-impact bugs to hang around forever.

                                                          A lot of the technical things in this post are issues that a smaller organisation without the budget to build sophisticated tooling would have (e.g. 10) or that are indicators of larger organisational issues (e.g. 26, 5).

                                                          1. 3

                                                            37: This meme is just outdated. Please stop perpetuating it.

                                                            Nah, I see this a lot both at my work and where I volunteer teaching programming. Most people memorize a few commands that usually work. When those don’t work, they turn to StackOverflow or one of the few people who do understand it.

                                                            1. 1

                                                              49 is still valuable. If your org has solved it, that’s great, but you still need to use the solution the org has come up with, rather than just looking at master.

                                                              1. 1

                                                                I’m in an org that is switching to GitLab from a long history with TFS. It’s amazing how many of my “intermediate” and “senior” software engineers can’t get code out of git without using Visual Studio. And God forbid they have to use the git CLI. So I think 37 isn’t a meme for a lot of us.

                                                                And 49 is great till you find out somebody switched a pipeline to use a branch for some reason and never moved it back.

                                                              1. 16
                                                                1. 6

                                                                  Thank you! I basically never watch videos, so I’m glad I clicked into the comments here and saw your link. :-)

                                                                  Developers could theoretically build an ECC implementation with terrible parameters and fail to check for things like invalid curve points, but they tend to not do this. A likely explanation is that the math behind ECC is so complicated that very few people feel confident enough to actually implement it. In other words, it intimidates people into using libraries built by cryptographers who know what they’re doing. RSA on the other hand is so simple that it can be (poorly) implemented in an hour.

                                                                  Yyyyyyep. I implemented RSA as a teenager. It was way too easy. Deceptively easy. And my implementation didn’t use any padding, so it would have been vulnerable.
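
                                                                  For anyone curious how little code “textbook” RSA takes, here is a deliberately insecure Rust sketch using the classic toy numbers (p=61, q=53). Real RSA needs big integers, padding, and constant-time math; this has none of those, which is exactly the trap:

                                                                  // Square-and-multiply modular exponentiation; u128 only works for toy numbers.
                                                                  fn mod_pow(mut base: u128, mut exp: u128, m: u128) -> u128 {
                                                                      let mut result = 1;
                                                                      base %= m;
                                                                      while exp > 0 {
                                                                          if exp & 1 == 1 {
                                                                              result = result * base % m;
                                                                          }
                                                                          exp >>= 1;
                                                                          base = base * base % m;
                                                                      }
                                                                      result
                                                                  }

                                                                  fn main() {
                                                                      let n = 61u128 * 53; // public modulus (3233)
                                                                      let e = 17u128;      // public exponent
                                                                      let d = 2753u128;    // private exponent: e*d = 1 mod lcm(60, 52)
                                                                      let msg = 65u128;
                                                                      let cipher = mod_pow(msg, e, n);        // "encrypt": no padding, so it is
                                                                      assert_eq!(mod_pow(cipher, d, n), msg); // deterministic and malleable
                                                                      println!("ciphertext: {cipher}");
                                                                  }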

                                                                1. 7

                                                                  It is very expensive and time consuming to build datasets and make data driven statements without data errors, so am I saying until we can publish content free of data errors we should stop publishing most of our content? YES! If you don’t have anything true to say, perhaps it’s best not to say anything at all.

                                                                  If you truly believe this, then why have you written this blog post? It is teeming with uncited statements, as you yourself note.

                                                                  1. 1

                                                                    I think it is fine to post content with data errors as long as you are aware of them and disclose them.

                                                                    1. 2

                                                                      But you didn’t even mark and disclose all of your data errors.

                                                                      1. 1

                                                                        “click the button below to highlight just some of the data errors on this page alone.”

                                                                        1. 2

                                                                          Yes. And you didn’t bother to highlight all of your data errors. I find this telling, because you’re asking people to do a huge amount of work to mark data errors, and you didn’t even do it to completion in a single post.

                                                                          1. 1

                                                                            Exactly. It’s a huge amount of work to mark data errors. So I’m hoping someone invents something new to make that less work.

                                                                  1. 4

                                                                    i’d always like to hear what provoked the “condescending replies”. sounds like a fun guy :)

                                                                      1. 27

                                                                        I don’t read anything condescending. He doesn’t know anything about you; he has to explain things clearly and can’t assume anything about what the end user does or does not understand.

                                                                        1. 12

                                                                          Yeah, me neither.

                                                                          “I’m a contributor to Mercurial, but thanks for explaining how it’s designed to me.” is a pointlessly aggressive response unless you expect him to somehow know (and remember) that.

                                                                          1. 12

                                                                            On the other hand, stating that “hg is not suitable for your use case” strikes me as rather patronizing. It’s demonstrably false as evidenced by the fact that this repo exists, and has been working like this for a while on BitBucket. So clearly it works and Drew’s un-nuanced assertion is false.

                                                                            Drew’s case would have been much better if he had just stated “sorry, we don’t support this particular use case” instead of saying that “you’re doing it wrong”.

                                                                            I’m not trying to defend Steve here, but no one is exactly smelling like roses in this conversation. Both parties could have done better.

                                                                            1. 2

                                                                              That’s fair.

                                                                          2. 7

                                                                            Ditto, the only one being condescending as far as I can tell was @sjl. @ddevault acted in a professional manner.

                                                                            1. 4

                                                                              Professional in terms of tone. Not sure I would describe his decisions as professional, but that’s clearly more subject to debate, based on the number of comments here.

                                                                              1. 2

                                                                            My interpretation of the interaction was that @ddevault’s initial email was giving @sjl a heads up and wasn’t specifically ordering him to take down his files. @sjl took down his files voluntarily, but I think @ddevault’s initial email left open the possibility of a discussion / negotiation, which seems courteous and professional to me. I.e., there was no explicit decision made on @ddevault’s part beyond the initial warning. Maybe I’m wrong in my interpretation.

                                                                      1. 10

                                                                        @ddevault Would it be possible to get a clear “Terms of Service” clarifying these sorts of use cases? 1.1 GB seems like an excessive repository size, but having a crystal clear & mutually agreed upon set of rules for platform use is essential for trust (more so for a paid service), and right now users don’t know what does and does not constitute reasonable use of the service.

                                                                        1. 37

                                                                          No, they’re intentionally vague so that we can exercise discretion. There are some large repositories which we overlook, such as Linux trees, pkgsrc, and nixpkgs; even mozbase is overlooked despite being huge and expensive to host.

                                                                          In this guy’s case, he had uploaded gigabytes of high-resolution personal photos (>1.1 GB; it takes up more space and CPU time on our server than on your workstation because we generate clonebundles for large repos). It was the second largest repository on all of SourceHut. SourceHut is a code forge, not Instagram.

                                                                          1. 40

                                                                            No, they’re intentionally vague so that we can exercise discretion.

                                                                            I like to call this “mystery meat TOS”. You never know what you’ll get until you take a bite!

                                                                            1. 24

                                                                              I mean, honestly, a small fraction of our users hit problems. I’ve had to talk to <10 people, and this guy is the only one who felt slighted. It’s an alpha-quality service, maybe it’ll be easier to publish objective limits once things settle down and the limitations are well defined. On the whole, I think more users benefit from having a human being making judgement calls in the process than not, because usually we err on the side of letting things slide.

                                                                              Generally we also are less strict on paid accounts, but the conversation with this guy got hostile quick so there wasn’t really an opportunity to exercise discretion in his case.

                                                                              1. 30

                                                                                the conversation with this guy got hostile quick

                                                                                Here’s the conversation, for folks who want to know what “the conversation got hostile” means to Source Hut: https://paste.stevelosh.com/18ddf23cb15679ac1ddca458b4f26c48b6a53f11

                                                                                1. 31

                                                                                  i’m not a native speaker, but have the feeling that you got defensive quickly:

                                                                                  Okay. I guess I assumed a single 1.1 gigabyte repository wouldn’t be an unreasonable use of a $100/year service. I certainly didn’t see any mention of a ban on large binary files during the sign up or billing process, but I admit I may have missed it. I’ve deleted the repository. Feel free to delete any backups you’ve made of it to reclaim the space, I’ve backed it up myself.

                                                                                be   it’s a pay-what-you-like alpha service, not backed by venture capital. you got a rather friendly mail notifying you that you please shouldn’t put large files into hg, not requesting that you delete them immediately.

                                                                                ddevault’s reply was explaining the reasoning, not knowing that you are a Mercurial contributor:

                                                                                  Hg was not designed to store large blobs, and it puts an unreasonable strain on our servers that most users don’t burden us with. I’m sorry, but hg is not suitable for large blobs. Neither is git. It’s just not the right place to put these kinds of files.

                                                                                  i’m not sure i’d label this as condescending. again I’m no native speaker, so maybe i’m missing nuances.

                                                                                  after that you’ve cancelled your account.

                                                                                  1. 13

                                                                                    As a native speaker, your analysis aligns with how I interpreted it.

                                                                                    1. 9

                                                                                      Native speaker here, I actually felt the conversation was fairly polite right up until the very end (Steve’s last message).

                                                                                    2. 27

                                                                                      On the whole, I think more users benefit from having a human being making judgement calls in the process than not, because usually we err on the side of letting things slide.

                                                                                      Judgement calls are great if you have a documented soft limit (X GB max repo size / Y MB max inner repo file size) and say “contact me about limit increases”. Your customers can decide ahead of time if they will meet the criteria, and you get the wiggle room you are interested in.

                                                                                      Judgement calls suck if they allow users to successfully use your platform until you decide it isn’t proper/valid.

                                                                                      1. 12

                                                                                        That’s a fair compromise, and I’ll eventually have something like this. But it’s important to remember that SourceHut is an alpha service. I don’t think these kinds of details are a reasonable expectation to place on the service at this point. Right now we just have to monitor things and try to preempt any issues that come up. This informal process also helps to identify good limits for formalizing later. But, even then, it’ll still be important that we have an escape hatch to deal with outliers - the following is already in our terms of use:

                                                                                        You must not deliberately use the services for the purpose of:

                                                                                        • impacting service availability for other users

                                                                                        It’s important that we make sure that any single user isn’t affecting service availability for everyone else.

                                                                                    Edit: did a brief survey of competitors’ terms of service. They’re all equally vague, presumably for the same reasons.

                                                                                        GitHub:

                                                                                        [under no circumstances will you] use our servers for any form of excessive automated bulk activity (for example, spamming or cryptocurrency mining), to place undue burden on our servers through automated means, or to relay any form of unsolicited advertising or solicitation through our servers, such as get-rich-quick schemes;

                                                                                        The Service’s bandwidth limitations vary based on the features you use. If we determine your bandwidth usage to be significantly excessive in relation to other users of similar features, we reserve the right to suspend your Account, throttle your file hosting, or otherwise limit your activity until you can reduce your bandwidth consumption

                                                                                        GitLab:

                                                                                        [you agree not to use] your account in a way that is harmful to others [such as] taxing resources with activities such as cryptocurrency mining.

                                                                                        At best they give examples, but always leave it open-ended. It would be irresponsible not to.

                                                                                        1. 17

                                                                                          The terms of service pages don’t mention the limits, but the limits are documented elsewhere.

                                                                                          GitHub:

                                                                                          We recommend repositories be kept under 1GB each. Repositories have a hard limit of 100GB. If you reach 75GB you’ll receive a warning from Git in your terminal when you push. This limit is easy to stay within if large files are kept out of the repository. If your repository exceeds 1GB, you might receive a polite email from GitHub Support requesting that you reduce the size of the repository to bring it back down.

                                                                                          In addition, we place a strict limit of files exceeding 100 MB in size. For more information, see “Working with large files.”

                                                                                          GitLab (unfortunately all I can find is a blog post):

                                                                                          we’ve permanently raised our storage limit per repository on GitLab.com from 5GB to 10GB

                                                                                          Bitbucket:

                                                                                          The repository size limit is 2GB for all plans, Free, Standard, or Premium.

                                                                                          1. 8

                                                                                        I see. This would be a nice model for a future SourceHut to implement, but it requires engineering effort and prioritization like everything else. Right now the procedure is:

                                                                                            1. High disk use alarm goes off
                                                                                            2. Manually do an audit for large repos
                                                                                            3. Send emails to their owners if they seem to qualify as excessive use

                                                                                            Then discuss the matter with each affected user. If there are no repos which constitute excessive use, then more hardware is provisioned.

                                                                                            1. 11

                                                                                              Maybe this is something you should put on your TOS/FAQ somewhere.

                                                                                          2. 8

                                                                                            This informal process also helps to identify good limits for formalizing later.

                                                                                            Sounds like you have some already:

                                                                                            • Gigabyte-scale repos get special attention
                                                                                            • Giant collections of source code, such as personal forks of large projects (Linux source, nix pkgtree) are usually okay
                                                                                            • Giant collections of non-source-code are usually not okay, especially binary/media files
                                                                                            • These guidelines are subject to judgement calls
                                                                                            • These guidelines may be changed or refined in the future

                                                                                            All you have to do is say this, then next time someone tries to do this (because there WILL be a next time) you can just point at the docs instead of having to take the time to explain the policy. That’s what the terms of service is for.

                                                                                        2. 8

                                                                                          Regardless of what this specific user was trying to do, I would exercise caution. There are valid use cases for large files in a code repository. For example: Game development, where you might have large textures, audio files, or 3D models. Or a repository for a static website that contains high-res images, audio, and perhaps video. The use of things like git-lfs as a way to solve these problems is common but not universal.

                                                                                          To say something like, “SourceHut is a code forge, not Instagram” is to pretend these use cases are invalid, or don’t exist, or that they’re not “code”, or something.

                                                                                          I’ve personally used competing services like GitHub for both the examples above and this whole discussion has completely put me off ever using Sourcehut despite my preference for Mercurial over Git.

                                                                                          1. 3

                                                                                            I agree that some use-cases like that are valid, but they require special consideration and engineering work that hg.sr.ht hasn’t received yet (namely largefiles, and in git’s case annex or git-lfs). For an alpha-quality service, sometimes we just can’t support those use-cases yet.

                                                                                          The Instagram comparison doesn’t generalize; in this case, this specific repo was just full of a bunch of personal photos, not assets necessary for some software to work. Our systems aren’t well equipped to handle game assets either, but the analogy doesn’t carry over.

                                                                                      2. 4

                                                                                      I don’t think the way you’re working is impossible to describe; I think it’s just hard, and I think most people don’t understand the way you’re doing and building business. This means your clients may expect a ToS or customer service level that you cannot or will not provide.

                                                                                      To strive towards a fair description that honours how you are actually defining things for yourself, and to make that more transparent without needing to enumerate specific use cases, perhaps there is a direction with wording such as:

                                                                                      • To make a sustainable system, we expect the distribution of computing resource usage and human work to follow a normal distribution. To preserve quality of service for all clients, to honour the sustainability of the business and the wellbeing of our staff, and to provide a reasonably uniform and understandable pricing model, we reserve the right to remove outliers who use an unusually large amount of any computing and/or human resource. If a client is identified as using a disproportionate amount of service, we will follow this process: (describe a fair process with notification, opportunity for communication/negotiation, fair time for resolution, and clear actions if resolution is or is not met).
                                                                                      • This system is provided for the purposes of XYZ, and in order to design/optimise/support it well we expect all users to use it predominantly for this purpose. It may be possible to use our system for other things; however, if we detect this we reserve the right to (cancel service), to ensure that we do not arrive at a situation where an established client is using our service for another purpose which may perform poorly for them in the future because it is not supported, or which may become disproportionately hard for us to provide computing resources or human time for because it is not part of XYZ. This will be decided at our discretion, and the process we will follow if we identify a case like this is (1, 2, 3).
                                                                                        1. 1

                                                                                          No, they’re intentionally vague so that we can exercise discretion.

                                                                                          Funny way to say “so I can do whatever I want without having to explain myself”

                                                                                          1. 14

                                                                                            I think that’s unfair. He did in fact explain himself to the customer and it was the customer who decided to cancel the service. I’d agree if the data was deleted without sufficient warning, but that is not the case here.

                                                                                          2. 1

                                                                                            Would it be possible to get a clear “Terms of Service” clarifying these sorts of use cases?

                                                                                            No, they’re intentionally vague so that we can exercise discretion. There

                                                                                            May I suggest, perhaps: “ToS: regular repositories have a maximum file size X and repository size Y. We provide extra space to some projects that we consider important.”

                                                                                        1. 3

                                                                                          So my question now is, how much does this affect SHA-256 and friends? SHA-256 is orders of magnitude stronger than SHA-1, naturally, but is it enough orders of magnitude?

                                                                                          Also, it’s interesting to note that based on MD5 and SHA-1, the lifetime of a hash function in the wild seems to be about 10-15 years between “it becomes popular” and “it’s broken enough you really need to replace it”.

                                                                                          1. 8

                                                                                            […] the lifetime of a hash function in the wild seems to be about 10-15 years […]

                                                                                            That’s assuming that we’re not getting better at creating cryptographic primitives. While there are still any number of cryptanalysis techniques remaining to be discovered, at some point we will likely develop Actually Good hashes etc.

                                                                                            (Note also that even MD5 still doesn’t have a practical preimage attack.)

                                                                                            1. 3

                                                                                              It would stand to reason that we get as good at breaking cryptographic primitives as we get at creating them.

                                                                                              1. 1

                                                                                                Why? Do you believe that all cryptographic primitives are breakable, and that it’s just a matter of figuring out in what way?

                                                                                                1. 1

                                                                                                  I have no idea but that sounds like a GREAT theoretical math problem!

                                                                                              2. 2

                                                                                                This seems likely, but we won’t know we’ve done it until 30-50 years after we do it.

                                                                                              3. 5

                                                                                                In the response to the SHA1 attacks (the early, theoretical ones, not the practical ones) NIST started a competition, in part to improve research on hash function security.

                                                                                                There were voices in the competition saying it shouldn’t be finished, because during the research people figured out that the SHA2 family is maybe better than they had thought. In the end those voices didn’t prevail, and the competition was finished with the standardization of SHA3, but in practice almost nobody is using SHA3. There’s also not really a reason to think SHA3 is inherently more secure than SHA2; it’s just a different approach. Theoretically it may be that SHA2 stays secure longer than its successors.

                                                                                                There’s nothing even remotely concerning in terms of research attacking SHA2. If you want my personal opinion: I don’t think we’re going to see any practical attack on any modern hashing scheme within our lifetimes.

                                                                                                Also, about the “10-15 years” timeframe - there’s hardly a trend here. How many relevant hash functions have been broken overall? Basically two (MD5 and SHA1). Cryptography simply hasn’t been around long enough for a real trend to exist.

                                                                                                1. 5

                                                                                                  As any REAL SCIENTIST knows, two data points is all you need to draw a line on a graph and extrapolate! :D

                                                                                                  1. 1

                                                                                                    FWIW, weren’t md2 and md4 both used in real-world apps? (I think some of the old filesharing programs used them.) They were totally hosed long before md5.

                                                                                                    1. 1

                                                                                                      I considered those as “not really in widespread use” (also as in: cryptography wasn’t really a big thing back then).

                                                                                                      Surprising fact, by the way: MD2 is more secure than MD5. I think there’s still no practical collision attack against it. (That doesn’t mean you should use it - an attack is probably just a dedicated scientist and some computing power away - but it’s one more point against a trend.)

                                                                                                      1. 1

                                                                                                        I have a vague (possibly incorrect) recollection of hearing that RIAA members used hash collisions to seed broken versions of mp3 files on early file-sharing networks. Those networks used very insecure hashing - possibly md4 (iirc it was one where you could find collisions by hand on paper). Napster and its successors had pretty substantial user bases that I’d call widespread. :)

                                                                                                  2. 2

                                                                                                    The “orders of magnitude” figure is the product of many years of cryptanalysis of the algorithm and the underlying construction. In this case (off the top of my head), it’s mostly related to weaknesses in Merkle–Damgård, which sha256 only partially uses.
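
                                                                                                    For context, here’s a toy sketch of the Merkle–Damgård iteration being referred to (my own illustration - ToyMD and its compress function are made-up stand-ins, not any real hash): pad the message into fixed-size blocks, chain a compression function over them, and finish by folding in the message length. The construction matters because any collision in the chaining state propagates all the way to the final digest.

                                                                                                        import java.util.Arrays;

                                                                                                        // Toy Merkle–Damgård: pad, split into fixed blocks, chain a compression function.
                                                                                                        public class ToyMD {
                                                                                                            static final int BLOCK = 8; // bytes per block (real designs use 64 or 128)

                                                                                                            // Stand-in compression function; real hashes use a carefully designed one.
                                                                                                            static long compress(long chain, byte[] block) {
                                                                                                                long h = chain;
                                                                                                                for (byte b : block) h = h * 31 + b; // NOT cryptographic, illustration only
                                                                                                                return h;
                                                                                                            }

                                                                                                            static long hash(byte[] msg) {
                                                                                                                int blocks = msg.length / BLOCK + 1;              // room for zero padding
                                                                                                                byte[] padded = Arrays.copyOf(msg, blocks * BLOCK);
                                                                                                                long h = 0x6a09e667L;                             // arbitrary IV
                                                                                                                for (int i = 0; i < blocks; i++)
                                                                                                                    h = compress(h, Arrays.copyOfRange(padded, i * BLOCK, (i + 1) * BLOCK));
                                                                                                                // "MD strengthening": fold in the message length as a final block.
                                                                                                                byte[] len = new byte[BLOCK];
                                                                                                                long v = msg.length;
                                                                                                                for (int i = BLOCK - 1; i >= 0; i--) { len[i] = (byte) v; v >>>= 8; }
                                                                                                                return compress(h, len);
                                                                                                            }

                                                                                                            public static void main(String[] args) {
                                                                                                                System.out.printf("%016x%n", hash("hello".getBytes()));
                                                                                                            }
                                                                                                        }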

                                                                                                    1. 1

                                                                                                      How funny!

                                                                                                      What are your relevant estimates for the time periods?

                                                                                                      When was the SHA-256 adoption, again?

                                                                                                      1. 12

                                                                                                        Here’s a good reference for timelines: https://valerieaurora.org/hash.html

                                                                                                        1. 2

                                                                                                          That site is fantastic, thank you.

                                                                                                    1. 6

                                                                                                      I was thinking about doing this last week when I moved from AWS WorkMail to Fastmail, thanks to the fact that Fastmail lets you not only receive emails at aliases, but also send them as such (which some sites might need for authentication purposes when contacting support, etc.).

                                                                                                      I’d like to hear the downsides of this approach, if any.

                                                                                                      1. 1

                                                                                                        I’ve done this for over a decade, and at Fastmail for the last few years. In my experience the downsides are:

                                                                                                        • This doesn’t work so well for mailing lists. It’s best to use your real address for mailing lists.
                                                                                                        • Gravatars are obnoxious — there’s no such thing as a catch-all Gravatar. Of course, Gravatar is problematic from a privacy perspective anyway, but many sites don’t allow you to configure an avatar any other way.

                                                                                                        Otherwise it works great when you use Fastmail’s (okayish) web UI to respond — it automatically selects the correct identity.

                                                                                                        1. -1

                                                                                                          I’d say the main downside is that you are “tied” to Fastmail. If at some point you want to move to another email provider, migrating all these aliases could take some time. I’d rather recommend (subjectively, obviously) a solution like SimpleLogin that focuses solely on email aliasing.

                                                                                                          1. 1

                                                                                                            Fastmail also lets you configure a catchall address, which allows you to make up addresses on the fly. (Of course, then you can also get spam at addresses that someone else made up… That said, it has worked well for me.)

                                                                                                        1. 5

                                                                                                          valueOf will return one of two fixed Boolean instances, while calling the constructor will always allocate a new object.

                                                                                                          Also, new Boolean(false) will wreak utter havoc if you ever use == (identity comparison) on boxed booleans by accident, rather than .equals(). It’s especially bad in Clojure, where new Boolean(false) will behave as true but print as false.
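
                                                                                                          A minimal sketch of the difference (my own snippet, standard JDK behavior):

                                                                                                              public class BooleanPitfall {
                                                                                                                  public static void main(String[] args) {
                                                                                                                      Boolean a = Boolean.valueOf(false);   // returns the cached Boolean.FALSE
                                                                                                                      Boolean b = Boolean.valueOf(false);   // same cached instance
                                                                                                                      System.out.println(a == b);           // true: identical object

                                                                                                                      @SuppressWarnings("deprecation")
                                                                                                                      Boolean c = new Boolean(false);       // deprecated since Java 9; fresh allocation
                                                                                                                      System.out.println(a == c);           // false: identity differs...
                                                                                                                      System.out.println(a.equals(c));      // ...even though the values are equal
                                                                                                                  }
                                                                                                              }

                                                                                                          The Clojure oddity follows from the same identity issue: iirc its compiled truth test checks against the canonical Boolean.FALSE instance, so a freshly allocated false object passes as truthy even though it prints as false.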

                                                                                                          1. 1

                                                                                                            I don’t see this on my iPad

                                                                                                            1. 1

                                                                                                              I suspect that has to do with scaling.

                                                                                                            1. 3

                                                                                                              No problems here on the Lenovo E585 (Windows), Thinkpad Carbon X1 3rd Gen (Windows) and Thinkpad T480s (Linux) I have to hand.</smugmode>

                                                                                                              1. 2

                                                                                                                Have you checked the inversion pattern test pages mentioned in kevinmehall’s comment? http://www.techmind.org/lcd/#inversion (epilepsy warning on subpages)

                                                                                                                I have a Thinkpad T520, and while OP’s page doesn’t show anything, I get flicker for these test pages:

                                                                                                                • Line-paired pixel dot-inversion (160,160,160)-(0,0,0)
                                                                                                                • Line-paired RGB subpixel dot-inversion (160,0,160)-(0,160,0)
                                                                                                                • Line-paired dot-inversion (green) (0,160,0)-(0,0,0)

                                                                                                                …with the middle one being the worst.

                                                                                                                ETA: My T430s doesn’t have an inversion pattern matching any of those, interestingly. Same for my T420, although that one shows some faint “crawlies” on a few of the dot inversion and paired row-inversion pages.
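
                                                                                                                If anyone wants to generate such a pattern themselves, here’s a rough sketch (my own guess at the geometry, not code from that page) of a “line-paired dot-inversion” style test image: a checkerboard of gray 160 and black whose phase flips every two rows, so on a line-paired panel the bright pixels all land on one drive polarity:

                                                                                                                    import java.awt.image.BufferedImage;
                                                                                                                    import java.io.File;
                                                                                                                    import javax.imageio.ImageIO;

                                                                                                                    // Guessed geometry for a "line-paired pixel dot-inversion" test pattern:
                                                                                                                    // checkerboard of (160,160,160) and (0,0,0), phase flipped every two rows.
                                                                                                                    public class InversionPattern {
                                                                                                                        public static void main(String[] args) throws Exception {
                                                                                                                            int w = 512, h = 512;
                                                                                                                            BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
                                                                                                                            int gray = (160 << 16) | (160 << 8) | 160;  // the (160,160,160) level
                                                                                                                            for (int y = 0; y < h; y++) {
                                                                                                                                int phase = (y / 2) & 1;                // "line-paired": flip every 2 rows
                                                                                                                                for (int x = 0; x < w; x++) {
                                                                                                                                    boolean bright = ((x + phase) & 1) == 0;
                                                                                                                                    img.setRGB(x, y, bright ? gray : 0x000000);
                                                                                                                                }
                                                                                                                            }
                                                                                                                            ImageIO.write(img, "png", new File("inversion-test.png"));
                                                                                                                        }
                                                                                                                    }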

                                                                                                                1. 3

                                                                                                                  Yep, got some flicker there…and it didn’t trigger my epilepsy…..fffzzzttttt NO CARRIER :)