1. 1

    More inclusive and open

    Male and female emoji have been merged into gender-neutral emoji that are relevant to you

    I fail to see why this is “more inclusive” or “open”. I mean, I’m all for people who are gender-neutral, I have no problems with this, but considering that the vast majority of the population isn’t gender-neutral, it’s actually less inclusive, by definition. Either provide more possibilities or go ahead and just say that there are fewer options. But don’t bend over backward to appease a small number of people by normalizing everyone. :(

    1. 6

      The way I see it is that Unicode emoji tried to be gender-inclusive by adding a “female” modifier (emoji + zero-width joiner + ♀️).

      This reinforces a false binary, and the way it’s implemented in mainstream emoji design reinforces gender stereotypes. Women have long hair, makeup, and pink clothes; men have short hair and blue clothes; that sort of thing.

      Gender isn’t something that can be seen, and so I don’t think adding more and more gender options makes sense. In my opinion, it would be much better to offer stylistic choice, such as “long hair” or “makeup”, as options unrelated to gender, since that’s what they are.

      I don’t agree that having fewer gender options makes it less inclusive. It isn’t saying that every emoji represents a non-binary or agender person or anything. It simply does not specify a gender, and so they aren’t excluding anyone on basis of gender.

      Edit: it’s like how the basic facial expression emoji were never gendered to begin with, and so there is no need to add more gender options to them.

      1. 0

        Gender isn’t something that can be seen

        Most of the time it can, but not on something as tiny and reductive as an emoji without resorting to cues like clothing color.

        1. 0

          Sure, you can guess, and odds are you’d be right a lot of the time, but it still isn’t defined by any kind of outward appearance.

      2. 2

        Additionally, I find it odd that they’ve chosen to reduce the set of available choices when it comes to gender, but they’ve greatly expanded the choices when it comes to race. I think they should decide whether they want to provide options for things like race, gender, sexuality or not provide options.

        1. 1

          I poked around the FAQ and found a relevant pair of entries on the race and gender questions. They don’t specifically compare the two decisions, but the rationale for each is there.

      1. 2

        Registered a domain and am currently setting up group chat software for queer folks in this tiny country that has become my home.

        Facebook is very entrenched here, but I spoke to some friends who recently deleted their accounts, and we figured it’s worth a shot. I hope it gets some uptake.

        1. 8

          Imagining a time when I can go to the local maker space and print some open source garments that I have modified to have larger pockets.

          In the present, I just stick to what works and get a lot of stuff from the same brand, since I know it fits me and my essential items well.

          1. 5

            A programmable sewing machine would be quite amazing.

            1. 1

              http://softwearautomation.com/li-fung-announce-partnership-softwear-automation/

              Softwear’s revolutionary digital t-shirt SEWBOT® Workline is fully autonomous and requires a single operator, producing one complete t-shirt every 22 seconds…

              1. 1

                Found this as well: https://www.youtube.com/watch?v=qXFUqCijkUs Seems like clothes would have to be ‘re-architected’ for this method.

                1. 1

                  Why would they need to be rearchitected? The video doesn’t seem to point to that directly.

                  1. 1

                    Not all clothes are assembled from fabric panels that stack neatly on top of each other; consider the crotch of a common pair of pants, or the double inner seam of jeans.

              2. 1

                Ha, there’s actually some work on this; I think there’s a DARPA project on robotic garment assembly as well. I’ve given this topic a lot of thought myself. Robots can weld, why not sew?

            1. 13

              Participating in the annual “queer days” festivities.

              1. 1

                And of course someone is upset about this and marks it as spam.

                1. 2

                  It’s sometimes hard to tell whether people just disagree or are haters. This is a thread about literally anything we might be doing this weekend. Someone marked that as spam. Clearly a hater. Ignore them and enjoy your weekend. ;)

              1. 7

                Bad idea, it should error or give NaN.

                1/0 = 0 is mathematically sound

                It’s not mathematically sound.

                a/b = c should be equivalent to a = c*b

                this fails with 1/0 = 0 because 1 is not equal to 0*0.

                Edit: I was wrong, it is mathematically sound. You can define x/0 = f(x) for any function of x at all. All the field axioms still hold, because they all have preconditions that ensure you never look at the result of division by zero.

                There is a subtlety, because some people say (X) and others say (Y):

                • (X) a/b = c should be equivalent to a = c*b when the LHS is well defined

                • (Y) a/b = c should be equivalent to a = c*b when b is nonzero

                If you have definition (X) in mind, it becomes unsound; if you are more formal and use definition (Y), then it stays sound.

                It seems like a very bad idea to make division well defined but the expected algebra rules not apply to it. This is the whole reason we leave it undefined or make it an error. There isn’t any value you can give it that makes algebra work with it.

                It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.

                1. 14

                  I really appreciate your following up to say you were wrong. It is rare to see, and I commend you for it. Thank you.

                  1. 8

                    This is explicitly addressed in the post. Do you have any objections to the definition given in the post?

                    1. 13

                      I cover that exact objection in the post.

                      1. 4

                        It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values

                        That was my initial reaction too. But I don’t think Pony’s intended use case is numerical analysis; it’s for highly parallel low-latency systems, where there are other (bigger?) concerns to address. They wanted to have no runtime exceptions, so this is part of that design tradeoff. Anyway, nothing prevents the programmer from checking for zero denominators and handling them as needed. If you squint a little, it’s perhaps not that different from the various conventions on truthy/falsey values that exist in most languages, and we’ve managed to get used to those.
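The “check for zero denominators yourself” approach is easy to sketch. Here is a minimal Go version (safeDiv is a hypothetical helper name of my own, not anything from Pony or Go’s standard library):

```go
package main

import (
	"errors"
	"fmt"
)

// safeDiv makes integer division partial: instead of silently
// producing a value for a zero denominator (as Pony's 1/0 = 0 does),
// it returns an explicit error the caller must handle.
func safeDiv(a, b int) (int, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	if q, err := safeDiv(1, 0); err != nil {
		fmt.Println("caught:", err)
	} else {
		fmt.Println(q)
	}
}
```

Returning an explicit error keeps the failure visible at the call site instead of letting a silent 0 propagate through later computations.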

                        1. 4

                          Those truthy/falsey values are often a source of errors.

                          I may be biased in my dislike of this “feature”, because I cannot recall when 1/0 = 0 would be useful in my work, but have no difficulty whatsoever thinking of cases where truthy/falsey caused problems.

                        2. 4

                          1/0 is integer math. NaN is available for floating-point math, not integer math.

                          1. 2

                            It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.

                            I wonder if someone making a linear algebra library for Pony has already faced this. There are many operations that might divide by zero, and you will want to let the user know if they divided by zero.

                            1. 7

                              It’s easy for a Pony user to create their own integer division operation that is partial. Additionally, a “partial division for integers” operator has been in the works for a while and will land soon. It’s part of a set of operators that will also error if you have integer overflow or underflow. Those will be +?, /?, *?, -?.

                              https://playground.ponylang.org/?gist=834f46a58244e981473c0677643c52ff

                          1. 2

                            I’ve never heard of this joke, but if you want to find a cycle in a linked list, the canonical way to do it is to use two pointers and have one walk one step at a time while the other walks two. If they are ever equal after they take a step, there’s a cycle. Using signals is – I think it’s safe to say – way over the top. (Unless that’s the joke.)
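The two-pointer walk described above can be sketched in Go (node and hasCycle are minimal names of my own):

```go
package main

import "fmt"

type node struct {
	next *node
}

// hasCycle implements the tortoise-and-hare check: the slow pointer
// advances one step per iteration, the fast pointer two. If the list
// has a cycle, the fast pointer eventually laps the slow one and they
// meet; if it has no cycle, the fast pointer runs off the end.
func hasCycle(head *node) bool {
	slow, fast := head, head
	for fast != nil && fast.next != nil {
		slow = slow.next
		fast = fast.next.next
		if slow == fast {
			return true
		}
	}
	return false
}

func main() {
	a, b, c := &node{}, &node{}, &node{}
	a.next, b.next = b, c
	fmt.Println(hasCycle(a)) // false: a -> b -> c -> nil
	c.next = a               // close the loop
	fmt.Println(hasCycle(a)) // true: a -> b -> c -> a
}
```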

                            1. 8

                              That is the joke. The way I heard it was: keep free()ing the nodes, and if there’s a crash (due to double-free) you found a cycle.

                              1. 2

                                I’ve heard of the two-pointer approach (with pointer variables typically given the names “tortoise” and “hare”), but I really like the double-free approach.

                                1. 1

                                  Of course, free(x) could be a no-op if you have a garbage-collected C (like Zeta-C)

                                1. 22

                                  After writing Go for 5 years, I’d recommend Rust for C developers. It’s more complicated than Go, for sure, but it also has more to offer. The lack of garbage collection and the support for generics are definitely a plus compared to Go.

                                  Go is a better language for junior devs, but I wouldn’t call C programmers junior. They should be able to digest Rust’s complexity.

                                  1. 9

                                    They should be able to digest Rust’s complexity.

                                    A non-trivial number of C programmers are still doing C to avoid additional complexity. Not everyone wants a kitchen-sink programming language.

                                    1. 6

                                      Rust can definitely get overly complex if the developers show no restraint (i.e. type golf), but the control afforded by manual memory management makes up for it, IMHO. Unless it’s a one-run project, performance will eventually matter, and fixing bad allocation practices after the fact is a lot harder than doing it right from the beginning.

                                      1. 1

                                        Couldn’t they just start with a C-like subset of Rust, adding to their arsenal from there whatever extra features they like? That’s what I was going to recommend to those trying it for safety-critical use, since they likely know C.

                                        1. 9

                                          I think it’s rather difficult to write Rust in a C-like manner. This contrasts with Go, where you can basically write C code, move the type declarations around, and end up with somewhat unidiomatic but working Go.

                                          1. 3

                                            I think C++ as a better C works because you still have libc alongside the STL, etc. The Rust standard library uses generics, traits, etc. quite heavily, and type parameters and lifetime parameters tend to percolate down to users.

                                            Though I think a lot of the value in Rust is in concepts that may initially add some complexity, such as the borrow checker rules.

                                            1. 3

                                              The problem with C++ is its complexity at the language level. I have little hope of teams of people porting to it the various tools for static analysis, verification, and refactoring that C and Java already have. Certifying compilers, either. C itself is a rough language, but smaller. The massive bandwagon behind it caused lots of tooling to be built, especially FOSS. So, I now push low-level stuff toward either safer C or something that ties into C’s ecosystem.

                                            2. 4

                                              You could argue the same for C++ (start with C and add extra features). Complexity comes with the whole ecosystem, from platform support (OS, arch) and compiler complexity (and hence subtle differences in feature implementations) to the language itself (C++ templates, Rust macros). It’s challenging to limit oneself to a very specific subset on a single-person project; it’s exponentially harder for larger teams to agree on a subset and adhere to it. I guess I just want a safer C, not a new C++ replacement, which seems to be the target of newer languages (like D & Rust).

                                              1. 4

                                                It’s challenging to limit oneself to a very specific subset on a single-person project; it’s exponentially harder for larger teams to agree on a subset and adhere to it.

                                                I see your overall point. It could be tricky, and it would probably stay niche. I will note that, in the C and Java worlds, there are tools that check source code for compliance with coding standards. That could work for a Rust subset as well.

                                                “I guess I just want a safer C, not a new C++ replacement, which seems to be the target of newer languages (like D & Rust).”

                                                I can’t remember if I asked you what you thought about Cyclone. So, I’m curious about that plus what you or other C programmers would change about such a proposal.

                                                I was thinking of something like it with Rust’s affine types and/or reference counting for when borrow checking hurts too much and the performance is acceptable. Also, unsafe stuff if necessary, with the module prefixed to flag it, like Wirth would do. Some kind of module system or linking types to avoid linker errors, too. Seamless use of existing C libraries. Then, an interpreter or REPL for the productivity boost. It extracts to C to use its optimizing and certifying compilers. I’m unsure what I’d default to for error handling and concurrency. A first round at error handling might be error codes, since I saw a design for statically checking their correct usage.

                                                1. 3

                                                  I can’t remember if I asked you what you thought about Cyclone. So, I’m curious about that plus what you or other C programmers would change about such a proposal.

                                                  I looked at it in the past, and it felt like a language built on top of C, similar to what a checker tool with annotations would do. It felt geared too much towards research versus use, and the site itself states:

                                                  Cyclone is no longer supported; the core research project has finished and the developers have moved on to other things. (Several of Cyclone’s ideas have made their way into Rust.) Cyclone’s code can be made to work with some effort, but it will not build out of the box on modern (64 bit) platforms.

                                                  However, if I had to change Cyclone, I would at least drop exceptions from it.

                                                  I am keeping an eye on Zig, and that’s closest to how I imagine a potentially successful C replacement, assuming it builds up enough community drive and gets some people developing interesting software with it.

                                                  That’s something Go nailed down really well. The whole standard library (especially the crypto and http libs) being implemented from scratch in Go instead of being bindings was a strong value signal.

                                                  1. 2

                                                    re dropping exceptions. Dropping exceptions makes sense. Is there another way of error handling that’s safer or better than C’s that you think might be adoptable in a new, C-like language?

                                                    re Zig. It’s an interesting language. I’m watching it at a distance for ideas.

                                                    re standard library of X in X. Yeah, I agree. I’ve been noticing that pattern with Myrddin, too. They’ve been doing a lot within the language despite how new it is.

                                                    1. 4

                                                      Dropping exceptions makes sense. Is there another way of error handling that’s safer or better than C’s that you think might be adoptable in a new, C-like language?

                                                      Yes, I think Zig actually does that pretty well: https://andrewkelley.me/post/intro-to-zig.html#error-type

                                                      edit: snippet from the zig homepage:

                                                      A fresh take on error handling that resembles what well-written C error handling looks like, minus the boilerplate and verbosity.

                                                      1. 2

                                                        Thanks for the link and tips!

                                          2. 7

                                            Short build/edit/run cycles are appreciated by junior and senior developers alike. Go currently has superior compilation times.

                                            1. 10

                                              Junior and senior developers also enjoy language features such as map, reduce, filter, and generics. Not to mention deterministic memory allocation, soft realtime, forced error checking, zero-cost abstractions, and (of course) memory safety.

                                              1. 3

                                                Junior and senior developers also enjoy language features such as map, reduce, filter, and generics.

                                                Those are great!

                                                deterministic memory allocation, soft realtime, forced error checking, zero-cost abstractions, and (of course) memory safety.

                                                Where are you finding juniors who care about this stuff? (no, really - I would like to know what kind of education got them there).

                                                1. 8

                                                  I cared about those things, as a junior. I am not sure why juniors wouldn’t care, although I suppose it depends on what kind of software they’re interested in writing. It’s hard to get away with not caring, for a lot of things. Regarding education, I am self-taught, FWIW.

                                                2. 1

                                                  Map, reduce and filter are easily implemented in Go. Managing memory manually, while keeping the GC running, is fully possible. Turning off the GC is also possible. Soft realtime is achievable, depending on your definition of soft realtime.

                                                  1. 1

                                                    Map, reduce and filter are easily implemented in Go

                                                    How? Type safe versions of these, that is, without interface{} and hacky codegen solutions?

                                                    1. 1

                                                      Here are typesafe examples for Map, Filter etc: https://gobyexample.com/collection-functions

                                                      Implementing one Map function per type is often good enough. There is some duplication of code, but the required functionality is present. There are many theoretical needs that don’t always show up in practice.

                                                      Also, using go generate (which comes with the compiler), generic versions are achievable too. For example like this: https://github.com/kulshekhar/fungen

                                                      1. 9

                                                        When people say “type safe map/filter/reduce/fold” or “map, reduce, filter, and generics” they are generally referring to the ability to define those functions in a way that is polymorphic, type safe, transparently handled by the compiler, and without added runtime overhead compared to their monomorphic analogs.

                                                        Whether you believe such facilities are useful or not is a completely different and orthogonal question. But no, they are certainly not achievable in Go and this is not a controversial claim. It is by design.

                                                        1. 1

                                                          Yes, I agree, Go does not have the combination of type safety and generics, unless you consider code generation.

                                                          The implementation of generics in C++ also works by generating the code per required type.

                                                          1. 5

                                                            The implementation of generics in C++ also works by generating the code per required type.

                                                            But they are not really comparable. In C++, when a library defines a generic type or function, it will work with any conforming data type. Since the Go compiler does not know about generics, with go generate one can only generate ‘monomorphized’ types for a set of predefined data types defined in an upstream package. If you want different monomorphized types, you have to import the generic definitions and run go generate for your specific types.

                                                            unless you consider code generation

                                                            By that definition, any language is a generic language, there’s always Bourne shell/make/sed for code generation ;).

                                                            1. 1

                                                              That is true, and I agree that go does not have support for proper generics and that this can be a problem when creating libraries.

                                                            2. 3

                                                              That’s why I said “transparently handled by the compiler.” ;-)

                                                              1. 0

                                                                I see your point, but “go generate” is provided with the Go toolchain by default. I guess it doesn’t qualify as transparent, since you have to type “go generate” or place that command in a build file of some sort?

                                                                1. 1

                                                                  Yes. And for the reasons mentioned by @iswrong.

                                                                  My larger point here really isn’t a technicality. My point is that communication is hard and not everyone spells out every point in precise detail, but it’s usually possible to infer the meaning based on context.

                                                                  1. -1

                                                                    I think the even larger point is that for a wide range of applications, “proper” and “transparent” generics might not even be needed in the first place. It would help, yes, but the Go community currently thrives without it, with no lack of results to show for it.

                                                                    1. 1

                                                                      I mean, I’ve written Go code nearly daily since before it was 1.0. I don’t need to argue with you about whether generics are “needed,” which is a pretty slimy way to phrase this.

                                                                      Seems to me like you’re trying to pick a fight. I already said upthread that the description of generics is different from the desire for them.

                                                                      1. -2

                                                                        You were the first to change the subject to you and me instead of sticking to the topic at hand. Downvoting as troll.

                                                  2. 1

                                                    By superior, I guess you meant shorter?

                                                    1. 2

                                                      Compiling a very large go project with a cold cache might take a minute (sub-second once the cache is warm).

                                                      Compiling a fairly small rust app with a warm cache has taken me over a minute (I think it’s a little better than that now).

                                                      1. 1

                                                        Yes, and superior to Rust in that regard. Also, the strict requirement to have no unused dependencies helps counteract dependency rot in larger projects.

                                                  1. 2

                                                    Suing abandonware archives is too mean. Personally, I find Nintendo franchises like all these Marios and Zeldas as disgusting as Hollywood stuff. They have done lots of aggressive marketing on social networks recently to ensure that “geek culture” is associated with their silly characters targeted at 5-year-old kids. I hope that if all these ROMs are removed from the internet, it will lower the popularity of Nintendo’s brands.

                                                    1. 12

                                                      It’s not abandonware when they’re maintaining their titles for Virtual Console on recent platforms. It’s not targeted at just 5-year-olds; it’s family entertainment that plenty of adults enjoy. Your comparison with Hollywood is far-fetched, and the adjectives you use are very trollish.

                                                      1. 3

                                                        when they’re maintaining their titles for virtual console

                                                        Except they’re not? On the Switch, the only VC Mario title is an arcade one. There’s no Zelda except BOTW (the latest one). The DS Zelda titles are only available second-hand as cartridges.

                                                        1. 4

                                                          They’re available on the 3DS VC. I’ve been playing through them all. And I’m in my thirties, FWIW. :)

                                                          1. 4

                                                            This was not the case at one point if memory serves. It is also no guarantee going forward.

                                                            1. 2

                                                              I think the point that people have been making is that Nintendo had no interest in re-releasing these games until they discovered how popular they were in the ROM scene and second hand markets.

                                                      1. 16

                                                        Moving from Linux, though, could have upsides for Google. Android’s use of the technology, which is distributed by Oracle Corp., is at the center of a lengthy, bitter lawsuit between the two companies.

                                                        I am confused. I thought they were confusing Linux with Java, but the very next paragraph addresses the Java situation.

                                                        A previous version of this story was corrected to make clear Oracle link with Linux.

                                                        🤔

                                                        1. 4

                                                          If I had to guess, the reporter writing the story couldn’t imagine them spending the resources to replace something in Android and have that thing not be what Oracle is suing them over.

                                                          1. 2

                                                            lol… I think they just referred to Java as “Linux” in the correction as well 🤣

                                                          1. 6

                                                            I live in a small island nation, that will continue to see drastic changes due to climate change.

                                                            I haven’t thought much about its impact on my work, but I’ve thought a lot about my work’s impact on it. And more generally, I didn’t know if I could stay in tech at all.

                                                            But, I found a job that would have me work on embedded systems to help reduce supply-chain waste on a large scale, so that seems like a positive use of my existing skills.

                                                            1. 5

                                                              Another Keyboardio Model 01 user. It’s the only keyboard of its kind (split, ortholinear) that I’ve tried where I can comfortably reach the most important keys. I have gone through periods of being unable to type at all, and have almost no pain now. I’ve had an Ergodox, a Kinesis Advantage, and a Kinesis Freestyle.

                                                              I have made some customisations, by modifying the stock firmware, to make it easier for me to use my main 3 natural languages, without having to switch the input language in my OS.

                                                              1. 4

                                                                re standard libraries

                                                                One idea I had when looking at transpilers and C-like LISPs was to make a Python-like LISP. I also thought of Nim, since they look similar. Anyway, the idea is to embed a Python-compatible language in a well-tooled LISP or Scheme, so that Python code could be easily ported. Start porting its standard library, and automate the process by matching Python semantics to the LISP. Eventually, one could do most things in the LISP using all those Pythonic libraries, with the extra benefit of a native-code compiler. Maybe also do it in reverse, where one can extract idiomatic Python for distribution to those who don’t know the LISP version. The last part of that brainstorm was possibly doing it in Racket, so people who started with or know Python can go to How to Design Programs to learn Scheme, then to the Pythonic Scheme, to get its power while reusing their existing knowledge and libraries.

                                                                Just a brainstorm folks might find interesting. Python has gotten pervasive and critical enough that I do keep thinking back on automated methods to optimize it, secure it, etc.

                                                                1. 8

                                                                  Not quite like what you’re describing, but have you seen Hy?

                                                                  1. 4

                                                                    I haven’t. Excerpting this:

                                                                    “This is pretty cool because it means Hy is several things:

                                                                    1. A Lisp that feels very Pythonic
                                                                    2. For Lispers, a great way to use Lisp’s crazy powers but in the wide world of Python’s libraries (why yes, you now can write a Django application in Lisp!)
                                                                    3. For Pythonistas, a great way to start exploring Lisp, from the comfort of Python!”

                                                                    That looks like it’s at least half of what I was aiming for. Cool stuff. Bookmarking it. Thanks for the link!

                                                                  2. 5

                                                                    There is a python implementation in Common Lisp, https://github.com/metawilm/cl-python. There is also a library to run a CPython interpreter inside a Common Lisp image and interface with it: https://github.com/mmontone/burgled-batteries

                                                                    Also Marijn Haverbeke, of code-mirror fame, had a similar idea but with JavaScript instead of Python so cl-javascript was born.

                                                                    1. 1

                                                                      That’s more like it! Even runs on multiple implementations of CL. Bookmarked. :)

                                                                  1. 2

                                                                    Ouch. I worked on some Python stuff a few jobs ago, and the optional_arg=[] feature is one I will never forget. I was made aware of it before I had a chance to shoot myself in my shared, mutable foot.

                                                                    Seems like it’s one of those things that can quite readily evade unit tests, too.
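                                                                    For anyone who hasn’t hit it: the default list is created once, when the function is defined, and then shared across every call. A minimal sketch (hypothetical function names, just to show the pitfall):

```python
def append_item(item, items=[]):
    # Pitfall: this default list is created once, at definition time.
    items.append(item)
    return items

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] -- the same list is reused across calls

def append_item_fixed(item, items=None):
    # Conventional fix: use None as a sentinel and build a fresh list per call.
    if items is None:
        items = []
    items.append(item)
    return items

print(append_item_fixed(1))  # [1]
print(append_item_fixed(2))  # [2]
```

                                                                    The shared default is also why it evades unit tests so easily: a test that calls the function only once never sees the stale state.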

                                                                    1. 1

                                                                      Nice article. I’ve had two Model M’s (the m key on the first one stopped responding) and I like the sturdy feel. However recently I’ve been finding the buckling springs a bit too loud and heavy. The keyboard is also pretty big so it’s hard to reach for the mouse. Maybe it’s time to go for a Spacesaver 104 :)

                                                                      1. 2

                                                                        it’s hard to reach for the mouse

                                                                        For this reason I have vi keybindings in all the programs I use most. It’s not a productivity thing, I just really dislike taking my paws off home row to reach for a pointing device.

                                                                        1. 2

                                                                          I have a mouse layer with movement on WASD, and mouse buttons on F/R, for the same reason, for more unruly programs.

                                                                          1. 1

                                                                            Same, but I also consider a TKL keyboard (CM MasterKeys S for the beginning) or 60% which may require a while to accustom.

                                                                        1. 1

                                                                          Does bitcode really make it that much easier to steal the “secret algorithm”? Those who are afraid of that sort of thing tend to obfuscate their binaries anyway.

                                                                          1. 9

                                                                            Rust

                                                                            • You sign a waiver, acknowledging that handling firearms is inherently risky, and that the compiler is not liable for any damages, and proceed to shoot yourself in the foot.
                                                                            1. 15

                                                                              As a junior developer doing my best to learn as much as I can, both technically and in terms of engineering maturity, I’d love to hear what some of the veterans here have found useful in their own careers for getting the most out of their jobs, projects, and time.

                                                                              Anything from specific techniques as in this post to general mindset and approach would be most welcome.

                                                                              1. 33

                                                                                Several essentials have made a disproportionate benefit on my career. In no order:

                                                                                • find a job with lots of flexibility and challenging work
                                                                                • find a job where your coworkers continuously improve themselves as much (or more) than you
                                                                                • start writing a monthly blog of things you learn and have strong opinions on
                                                                                • learn to be political (it’ll help you stay with good challenging work). Being political isn’t slimy, it is wise. Be confident in this.
                                                                                • read programming books/blogs and develop a strong philosophy
                                                                                • start a habit of programming to learn for 15 minutes a day, every day
                                                                • come to terms with the fact that you will see a diminishing return on new programming skills, and an increasing return on “doing the correct/fastest thing” skills (e.g. knowing what to work on, knowing which corners to cut, knowing how to communicate with business people so you solve their actual problems instead of chasing their imagined solutions, etc.). Lean into this, and practice this skill as often as you can.

                                                                                These have had an immense effect on my abilities. They’ve helped me navigate away from burnout and cultivated a strong intrinsic motivation that has lasted over ten years.

                                                                                1. 5

                                                                                  Thank you for these suggestions!

                                                                                  Would you mind expanding on the ‘be political’ point? Do you mean to be involved in the ‘organizational politics’ where you work? Or in terms of advocating for your own advancement, ensuring that you properly get credit for what you work on, etc?

                                                                                  1. 13

                                                                                    Being political is all about everything that happens outside the editor. Working with people, “managing up”, and figuring out the “real requirements” are all political.

                                                                                    Being political also means always doing your one-on-ones, because employees who do them are more likely to get higher raises. It’s understanding that marketing is often reality, and that you are your only marketing department.

                                                                                    This doesn’t mean put anyone else down, but be your best you, and make sure decision makers know it.

                                                                                    1. 12

                                                                                      Basically, politics means having visibility in the company and making sure you’re managing your reputation and image.

                                                                                      A few more random bits:

                                                                                  2. 1

                                                                                    start a habit of programming to learn for 15 minutes a day, every day

                                                                                    Can you give an example? So many days I sit down in front of my computer before or after work. I want to do something, but my mind is like, “What should I program right now?”

                                                                                    As you can probably guess nothing gets programmed. Sigh. I’m hopeless.

                                                                                    1. 1

                                                                                      Having a plan before you sit down is crucial. If you sit and putter, you’ll not actually improve, you’ll do what’s easy.

                                                                                      I love courses and books. I also love picking a topic to research and writing about it.

                                                                                      Some of my favorite courses:

                                                                                      1. 1

                                                                                        I’ve actually started SICP and even bought the hard copy a couple weeks ago. I’ve read the first chapter and started the problems. I’m on 1.11 at the moment. I also started the Stanford 193P course as something a bit easier and “fun” to keep variety.

                                                                                  3. 14

                                                                                    One thing that I’ve applied in my career is that saying, “never be the smartest person in the room.” When things get too easy/routine, I try to switch roles. I’ve been lucky enough to work at a small company that grew very big, so I had the opportunity to work on a variety of things: backend services, desktop clients, mobile clients, embedded libraries. I was very scared every time I asked, because I felt like I was in over my head. I guess change is always a bit scary. But every time, it put some fun back into my job, and I learned a lot from working with people with entirely different skill sets and expertise.

                                                                                    1. 11

                                                                                      I don’t have much experience either, but the best choice I made in the last year was to stop worrying about how good a programmer I was and to focus on enjoying life.

                                                                                      We have one life; don’t let anxieties take over, even if you intellectually think working more should help you.

                                                                                      1. 8

                                                                                        This isn’t exactly what you’re asking for, but it’s something to consider: someone who knows how to code reasonably well and something else is more valuable than someone who just codes. You become less interchangeable, and therefore less replaceable. There’s tons of work that people who purely code don’t want to do, but that is very valuable. For me, that’s documentation. I got my current job because people love having docs, but hate writing docs. I’ve never found myself without multiple options any time I’ve looked for work. I know someone else who did this, but his differentiator was being fluent in Japanese; Japanese companies love people who are bilingual in English. It made his resume stand out.

                                                                                        1. 1

                                                                                          I got my current job because people love having docs, but hate writing docs.

                                                                                          Your greatest skill in my eyes is how you interact with people online as a community lead. You have a great style for it. Docs are certainly important, too. I’d have guessed they hired you for the first set of skills rather than docs, though. So, that’s a surprise for me. Did you use one to pivot into the other or what?

                                                                                          1. 7

                                                                                            Thanks. It’s been a long road; I used to be a pretty major asshole to be honest.

                                                                                            My job description is 100% docs. The community stuff is just a thing I do; it’s not part of my deliverables at all. I’ve just been commenting on the internet for a very long time; I had a five-digit Slashdot ID, etc. Writing comments on tech-oriented forums is just part of who I am at this point.

                                                                                            1. 2

                                                                                              Wow. Double unexpected. Thanks for the details. :)

                                                                                        2. 7

                                                                                          Four things:

                                                                                          1. People will remember you for your big projects (whether successful or not) as well as tiny projects that scratch an itch. Make room for the tiny fixes that are bothering everyone; the resulting lift in mood will energize the whole team. I once had a very senior engineer tell me my entire business trip to Paris was worth it because I made a one-line git fix to a CI system that was bothering the team out there. A cron job I wrote in an afternoon at an internship ended up dwarfing my ‘real’ project in terms of usefulness to the company and won me extra contract work after the internship ended.

                                                                                          2. Pay attention to the people who are effective at ‘leaving their work at work.’ The people best able to handle the persistent, creeping stress of knowledge work are the ones who transform as soon as the workday is done. It’s helpful to see this in person, especially seeing a deeply frustrated person stand up and cheerfully go “okay! That’ll have to wait for tomorrow.” Trust that your subconscious will take care of any lingering hard problems, and learn to be okay leaving a work in progress to enjoy yourself.

                                                                                          3. Having a variety of backgrounds is extremely useful for an engineering team. I studied electrical engineering in college and the resulting knowledge of probability and signal processing helped me in environments where the rest of the team had a more traditional CS background. This applies to backgrounds in fields outside engineering as well: art, history, literature, etc will give you different perspectives and abilities that you can use to your advantage. I once saw a presentation about using art critique principles to guide your code reviews. Inspiration can come from anywhere; the more viewpoints you have in your toolbelt the better.

                                                                                          4. Learn about the concept of the ‘asshole filter’ (safe for work). In a nutshell, if you give people who violate your boundaries special treatment (e.g. a coworker who texts you on your vacation to fix a noncritical problem gets their problem fixed), then you are training people to violate your boundaries. You need to make sure that people who do things ‘the right way’ (in this case, waiting until you get back or finding someone else to fix it) get priority, so that over time you train people to respect you and your boundaries.

                                                                                          1. 3

                                                                                            I once saw a presentation about using art critique principles to guide your code reviews. Inspiration can come from anywhere; the more viewpoints you have in your toolbelt the better.

                                                                                            The methodology from that talk is here: http://codecrit.com/methodology.html

                                                                                            I would change “If the code doesn’t work, we shouldn’t be reviewing it”. There is a place for code review of not-done work, of the form “this is the direction I’m starting to go in…what do you think”. This can save a lot of wasted effort.

                                                                                          2. 3

                                                                                            The biggest mistake I see junior (and senior) developers make is key mashing. Slow down, understand a problem, untangle the dependent systems, and don’t just guess at what the problem is. Read the code, understand it. Read the code of the underlying systems that you’re interacting with, and understand it. Only then, make an attempt at fixing the bug.

                                                                                            Stabs in the dark are easy. They may even work around problems. But clean, correct, and easy to understand fixes require understanding.

                                                                                            1. 3

                                                                                              Another thing that helps is the willingness to dig into something you’re obsessed with, even if everyone around you deems it not super important. E.g., if you find a library / language / project fun and get obsessed with it, that’s great; keep going at it, and don’t let the existential “should I be here” or “is everyone around me doing this too / recommending this” questions slow you down. You’ll probably get on some interesting adventures.

                                                                                              1. 3

                                                                                                Never pass up a chance to be social with your team/other coworkers. Those relationships you build can benefit you as much as your work output.

                                                                                                (This doesn’t mean you compromise your values in any way, of course. But the social element is vitally important!)

                                                                                              1. 16

                                                                                                The biggest issue I have with the defaults, and the borrow checker is that places in FP where you would normally pass by copy — pass by value, in Rust instead it assumes you want to pass by reference. Therefore, you need to clone things by hand and pass the cloned versions instead. Although it has a mechanism to do this automatically, it’s far from ergonomic.

                                                                                                The argument of pass by reference, or borrowing is that it’s more performant than cloning by default. In general, computers are getting faster, but systems are getting more complex.

                                                                                                It’s actually not the case that computers are getting faster in general anymore - Moore’s law has been slowing as we get closer to fundamental physical limits in terms of how small we can build transistors, and actual effective clock speeds for hardware haven’t been increasing significantly for about a decade now. Consequently, programmers should be more leery than they are in practice about using non-performant but easy-to-write programming languages and constructs - even ignoring the fact that Moore’s law gains can no longer be counted on, it’s easy for people writing in the middle or towards the top of a large software stack to write non-performant code that stacks on top of other people’s non-performant code, leading to user-visible slowdown and latency even on modern, fast hardware. This is one of the huge issues with applications built on the modern web (in fact, my browser is chugging a little as I write this in the text box, which really shouldn’t be happening on a 2018 computer, and I think it’s the result of a shitty webapp in another tab).

                                                                                                In any case, one of Rust’s explicit design goals is to be a useful modern language in contexts where minimal use of computing resources like CPU time and memory is important, which is exactly why Rust generally avoids copies unless you explicitly tell it to with .clone() or something similar. Personally, I’ve written a fair amount of Rust code where I do make inefficient copies to avoid complexity (especially while developing an algorithm that I plan to make more efficient later), and I don’t find it particularly onerous to stick a few .clone()s here and there to make the compiler happy.
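                                                                                                As a toy illustration of that workflow (hypothetical names, not from the article): cloning hands the callee its own copy, so the original stays usable at the cost of the copy.

```rust
// Takes ownership of the vector and consumes it.
fn consume(v: Vec<i32>) -> i32 {
    v.iter().sum()
}

fn main() {
    let data = vec![1, 2, 3];
    // `consume(data)` would move `data` out of scope; cloning keeps it usable.
    let total = consume(data.clone());
    assert_eq!(total, 6);
    assert_eq!(data.len(), 3); // still valid because we passed a clone
}
```

                                                                                                Profiling later tells you which of those clones are actually worth replacing with borrows.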

                                                                                                1. 6

                                                                                                  I agree with you, and would go further and say that resource usage always matters. In my opinion, performance is an accessibility issue; programs that care about performance can be used on cheaper/older hardware. Not everyone can afford the latest, greatest hardware.

                                                                                                1. 31

                                                                                                  at this point most browsers are OS’s that run (and build) on other OS’s:

                                                                                                  • language runtime - multiple checks
                                                                                                  • graphic subsystem - check
                                                                                                  • networking - check
                                                                                                  • interaction with peripherals (sound, location, etc) - check
                                                                                                  • permissions - for users, pages, sites, and more.

                                                                                                  And more importantly, is there any (important to the writers) advantage to them becoming smaller? Security maybe?

                                                                                                  1. 11

                                                                                                    Browsers rarely link out to the system. FF/Chromium have their own PNG decoders, JPEG decoders, AV codecs, memory allocators or allocation-abstraction layers, etc. etc.

                                                                                                    It bothers me that everything is now shipping as an Electron app. Do we really need every single app to have the footprint of a modern browser? Can we at least limit them to the footprint of Firefox 2?

                                                                                                    1. 10

                                                                                                      but if you limit it to the footprint of firefox2 then computers might be fast enough. (a problem)

                                                                                                      1. 2

                                                                                                        New computers are no longer faster than old computers at the same cost, though – moore’s law ended in 2005 and consumer stuff has caught up with the lag. So, the only speed-up from replacement is from clearing out bloat, not from actual hardware improvements in processing speed.

                                                                                                        (Maybe secondary storage speed will have a big bump, if you’re moving from hard disk to SSD, but that only happens once.)

                                                                                                        1. 3

                                                                                                          moore’s law ended in 2005 and consumer stuff has caught up with the lag. So, the only speed-up from replacement is from clearing out bloat, not from actual hardware improvements in processing speed.

                                                                                                          Are you claiming there have been no speedups due to better pipelining, out-of-order/speculative execution, larger caches, multicore, hyperthreading, and ASIC acceleration of common primitives? And the benchmarks magazines post showing newer stuff outperforming older stuff were all fabricated? I’d find those claims unbelievable.

                                                                                                          Also, every newer system I’ve had since 2005 was faster. I recently had to use an older backup; it was much slower. Finally, performance isn’t the only thing to consider: newer process nodes use less energy and yield smaller chips.

                                                                                                          1. 2

                                                                                                            I’m slightly overstating the claim. Performance increases have dropped to incremental from exponential, and are associated with piecemeal attempts to chase performance increase goals that once were a straightforward result of increased circuit density through optimization tricks that can only really be done once.

                                                                                                            Once we’ve picked all the low-hanging fruit (simple optimization tricks with major & general impact) we’ll need to start seriously milking performance out of multicore and other features that actually require the involvement of application developers. (Multicore doesn’t affect performance at all for single-threaded applications or fully-synchronous applications that happen to have multiple threads – in other words, everything an unschooled developer is prepared to write, unless they happen to be mostly into unix shell scripting or something.)

                                                                                                            Moore’s law isn’t all that matters, no. But, it matters a lot with regard to whether or not we can reasonably expect to defend practices like electron apps on the grounds that we can maintain current responsiveness while making everything take more cycles. The era where the same slow code can be guaranteed to run faster on next year’s machine without any effort on the part of developers is over.

                                                                                                            As a specific example: I doubt that even in ten years, a low-end desktop PC will be able to run today’s version of slack with reasonable performance. There is no discernible difference in its performance between my two primary machines (both low-end desktop PCs, one from 2011 and one from 2017). There isn’t a perpetually rising tide that makes all code more performant anymore, and the kind of bookkeeping that most web apps spend their cycles in doesn’t have specialized hardware accelerators the way matrix arithmetic does.

                                                                                                            1. 5

                                                                                                              Performance increases have dropped to incremental from exponential, and are associated with piecemeal attempts to chase performance increase goals that once were a straightforward result of increased circuit density through optimization tricks that can only really be done once.

                                                                                                              I agree with that totally.

                                                                                                              “Multicore doesn’t affect performance at all for single-threaded applications “

                                                                                                              Although that’s largely true, people often forget one way multicore can boost single-threaded performance: simply letting the single-threaded app have more time on a CPU core, since other stuff is running on another. Some OSs, especially RTOSs, let you control which cores apps run on specifically to exploit that. I’m not sure if desktop OSs have good support for this right now, though. I haven’t tried it in a while.
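                                                                                                              Desktop Linux does expose this, for what it’s worth; Python’s stdlib wraps the affinity syscalls. A minimal sketch (Linux-only; `os.sched_setaffinity` isn’t available on every platform):

```python
import os

# Ask which CPUs this process may currently run on (Linux-only API).
allowed = os.sched_getaffinity(0)
target = min(allowed)  # pick one of them arbitrarily

# Restrict this process to that single core, leaving the others
# free for whatever else is running.
os.sched_setaffinity(0, {target})

print(os.sched_getaffinity(0))  # now a single-element set
```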

                                                                                                              “There isn’t a perpetually rising tide that makes all code more performant anymore, and the kind of bookkeeping that most web apps spend their cycles in doesn’t have specialized hardware accelerators the way matrix arithmetic does.”

                                                                                                              Yeah, all the ideas I have for it are incremental. The best illustration of where the rest of the gains might come from is Cavium’s Octeon line. They have offloading engines for TCP/IP, compression, crypto, string ops, and so on. On the rendering side, Firefox is switching to GPUs, which will take time to fully utilize. On the JavaScript side, maybe JITs could have a small, dedicated core. So, there’s still room for speeding the Web up in hardware. Just not Moore’s-law gains without developer effort, like you were saying.

                                                                                                    2. 9

                                                                                                      Although you partly covered it, I’d say “execution of programs” is good wording for JavaScript since it matches browser and OS usage. There are definitely advantages to them being smaller. A guy I knew even deleted a bunch of code out of his OS and Firefox to achieve that, on top of a tiny backup image. Dude had a WinXP system full of working apps that fit on one CD-R.

                                                                                                      As far as secure browsers go, I’d start with designs from high-assurance security, bringing in mainstream components carefully. Some are already doing that; an older one inspired Chrome’s architecture. I have a list in this comment. I’ll also note that there were few of these because high-assurance security defaulted to just putting a browser in a dedicated partition that isolated it from other apps on top of a security-focused kernel. One browser per domain of trust. Also common were partitioned network stacks and filesystems that limited the effect one partition using them had on the others. QubesOS and GenodeOS are open-source systems that support these approaches, with QubesOS having great usability/polish and GenodeOS being architecturally closer to high-security designs.

                                                                                                      1. 6

                                                                                                        Are there simpler browsers optimised for displaying plain ol’ hyperlinked HTML documents that also support modern standards? I don’t really need 4 tiers of JIT and whatnot to make web apps go fast, since I don’t use them.

                                                                                                        1. 12

                                                                                                          I’ve always thought one could improve on a Dillo-like browser for that. I also thought compile-time programming might make various components in browsers optional, so you could actually tune it to the amount of code or attack surface you need. That would require lots of work for mainstream stuff, though. A project like Dillo might pull it off.

                                                                                                          1. 10
                                                                                                            1. 3

                                                                                                              Oh yeah, I have that on a Raspberry Pi running RISC OS. It’s quite nice! I didn’t realise it runs on so many other platforms. Unfortunately it only crashes on my main machine; I will investigate. Thanks for reminding me that it exists.

                                                                                                              1. 2

                                                                                                                Fascinating; how had I never heard of this before?

                                                                                                                Or maybe I had and just assumed it was a variant of suckless surf? https://surf.suckless.org/

                                                                                                                Looks promising. I wonder how it fares on keyboard control in particular.

                                                                                                                1. 1

                                                                                                                  Aw hell; they don’t even have TLS set up correctly on https://netsurf-browser.org

                                                                                                                  Does not exactly inspire confidence. Plus there appears to be no keyboard shortcut for switching tabs?

                                                                                                                  Neat idea; hope they get it into a usable state in the future.

                                                                                                                2. 1

                                                                                                                  AFAIK, it doesn’t support “modern” non-standards.

                                                                                                                  But it doesn’t support JavaScript either, so it’s way more secure than mainstream ones.

                                                                                                                3. 8

                                                                                                                  No. Modern web standards are too complicated to implement in a simple manner.

                                                                                                                  1. 3

                                                                                                                    Either KHTML or Links is what you’d like. KHTML is probably the smallest browser you could find with a working, modern CSS, JavaScript and HTML5 engine. Links only does HTML <=4.0 (including everything implied by its <img> tag, but not CSS).

                                                                                                                    1. 2

                                                                                                                      I’m pretty sure KHTML was taken to a farm upstate years ago, and replaced with WebKit or Blink.

                                                                                                                      1. 6

                                                                                                                        It wasn’t “replaced”: Konqueror supports multiple backends, including WebKit, WebEngine (Chromium) and KHTML. KHTML still works relatively well for showing modern web pages according to HTML5 standards, and fits OP’s description perfectly. Konqueror lets you choose your browser engine per tab, and even switch on the fly, which I think is really nice, although this means keeping every engine you’re currently using loaded in memory.

                                                                                                                        I wouldn’t say development is still very active, but it’s still supported in the KDE frameworks, they still make sure that it builds at least, along with the occasional bug fix. Saying that it was replaced is an overstatement. Although most KDE distributions do ship other browsers by default, if any, and I’m pretty sure Falkon is set to become KDE’s browser these days, which is basically an interface for WebEngine.

                                                                                                                    2. 2

                                                                                                                      A growing part of my browsing is now text-mode browsing. Maybe you could treat full graphical browsing as an exception and go to the minimum footprint most of the time…

                                                                                                                  2. 4

                                                                                                                    And more importantly, is there any (important to the writers) advantage to them becoming smaller? Security maybe?

                                                                                                                    User choice. Rampant complexity has restricted your options to three rendering engines if you want to function in the modern world.

                                                                                                                    1. 3

                                                                                                                      When reimplementing malloc and testing it out on several applications, I found out that Firefox (at the time; I don’t know if this is still true) had its own internal malloc. It was allocating a big chunk of memory at startup and then managing it itself.

                                                                                                                      Back then I thought this was a crazy idea for a browser, but in fact it follows exactly the idea of your comment!
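That “grab one big chunk up front and manage it yourself” pattern is basically an arena allocator. A toy sketch of the idea, purely illustrative and nothing like Firefox’s actual allocator:

```c
#include <stdint.h>
#include <stdlib.h>

/* Toy arena: one malloc up front, then cheap bump-pointer allocation
 * out of it, and freeing everything at once by resetting a counter. */
typedef struct {
    uint8_t *base;   /* the big chunk */
    size_t   cap;    /* its size */
    size_t   used;   /* bytes handed out so far */
} Arena;

int arena_init(Arena *a, size_t cap) {
    a->base = malloc(cap);
    a->cap  = cap;
    a->used = 0;
    return a->base ? 0 : -1;
}

void *arena_alloc(Arena *a, size_t n) {
    size_t off = (a->used + 15) & ~(size_t)15;   /* 16-byte align */
    if (off + n > a->cap) return NULL;           /* chunk exhausted */
    a->used = off + n;
    return a->base + off;
}

/* "Free" everything in O(1): no per-object bookkeeping at all. */
void arena_reset(Arena *a) { a->used = 0; }
```

The win is that thousands of short-lived allocations become pointer bumps, and teardown is one reset instead of thousands of free() calls.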

                                                                                                                      1. 3

                                                                                                                        Firefox uses a fork of jemalloc by default.

                                                                                                                        1. 2

                                                                                                                          IIRC this was done somewhere between Firefox 3 and Firefox 4 and was a huge speed boost. I can’t find a source for that claim though.

                                                                                                                          Anyway, there are good reasons Firefox uses its own malloc.

                                                                                                                          Edit: apparently I’m bored and/or like archeology, so I traced back the introduction of jemalloc to this hg changeset. This changeset is present in the tree for Mozilla 1.9.1 but not Mozilla 1.8.0. That would seem to indicate that jemalloc landed in the 3.6 cycle, although I’m not totally sure because the changeset description indicates that the real history is in CVS.

                                                                                                                      2. 3

                                                                                                                        In my daily job, this week I’m working on patching a modern JavaScript application to run on older browsers (IE10, IE9 and IE8+ GCF 12).

                                                                                                                        The hardest problems are due to the differing implementation details of the same-origin policy.
                                                                                                                        The funniest problem has been one of the frameworks in use taking “native” as a variable name (“native” is a future reserved word in ES3, so older engines reject it): when people speak about the good parts of JavaScript, I know they don’t know what they’re talking about.

                                                                                                                        BTW, if browser complexity addresses a real problem (instead of being a DARPA weapon to take control of foreign computers), that problem is the distribution of computation over long distances.

                                                                                                                        That problem was never addressed well enough by operating systems, despite some mild attempts such as Microsoft’s CIFS.

                                                                                                                        This is partially a protocol issue, as NFS, SMB and 9P were all designed with local networks in mind.

                                                                                                                        However, IMHO browser OSes are not the proper solution to the issue: they are designed for different goals, and they cannot abandon those goals without losing market share (unless they retain that share through weird marketing practices, as Microsoft did years ago with IE on Windows and Google is currently doing with Chrome on Android).

                                                                                                                        We need better protocols and better distributed operating systems.

                                                                                                                        Unfortunately it’s not easy to create them.
                                                                                                                        (Disclaimer: browsers as OS platforms and JavaScript’s ubiquity are among the strongest reasons I spend countless nights hacking on an OS)

                                                                                                                      1. 2

                                                                                                                        “If you were to go back in time to 1987, this is probably similar to what would have replaced the Amiga if Jack Tramiel had never left Commodore.”

                                                                                                                        Cool project, but I don’t think this is true. The Amiga 500 had 512KB of RAM because RAM was bloody expensive. So did the majority of its competitors. Nobody would have put 1.5MB in a computer at that time, because it would severely reduce the number of units you could shift, for little benefit. Pretty much all software written at that point needed far less than that (even on the multitasking Amiga).

                                                                                                                        Also, I believe the 65C816 did not run at 14 MHz back then. Not many chips did, and both the Amiga and the Atari ST were running at 7-8 MHz.

                                                                                                                        1. 3

                                                                                                                          The A500 could be expanded up to 7 MB though, so I don’t think it’s completely out of line.

                                                                                                                          I wonder if the CPU is actually the W65C816S, which is readily available at 14 MHz. I sent an email to Stefany and asked about it.

                                                                                                                          Edit: it is indeed the W65C816S from Western Design Center.