Threads for rpaulo

    1. 1

      Maybe I’m missing something but couldn’t we make a slow motion video?

      1. 1

        This is an interesting option. Currently it’s unrolled in time, that is, it shows processing of the whole input phrase at once. It’s possible to show the processing for predicting each word and sequence those into a video. Might try it later.

    2. 4

      Apple is looking for contractors with some networking kernel experience. More information here: https://lists.freebsd.org/archives/freebsd-jobs/2023-August/000042.html

      Sorry for not using the template but for some reason I’m running into issues when I copy it into the form.

    3. 14

      I really wish these stories would kill off this notion that these “AI” systems understand anything. They do one thing, and one thing only: try to create a statistically plausible list of words.

      If you go to ChatGPT and ask it to just repeat a single letter - say ‘A’ - for as long as possible, it will eventually start producing a pile of unrelated text, because eventually the token gap between your request and the next word it should emit becomes too long and it ceases being involved in the “predict the next word in a sentence” game.

      1. 3

        They are a calculator for words only (albeit an extremely capable and useful one).

        That said, it doesn’t need to be AGI to be disruptive.

        1. 16

          I know Simon is popularizing “a calculator for words” but I much prefer “a kaleidoscope for words”. It makes cool new patterns! But don’t try to see what’s outside like it’s a telescope.

          1. 1

            Spot on, it is a calculator and its visor is inside a kaleidoscope. Lol

      2. 2

        How could one prove that any system that outputs only text (i.e. a human at a keyboard) is anything more than that?

        1. 1

          Perhaps the wording could be improved: gpt is designed to produce statistics.

          Well, shoot, so are evolution and people. Though we have usually had longer training and better heuristics :)

        2. 1

          You will be interested in the Chinese room argument from John Searle

      3. 1

        I don’t think that example works in practice, because ChatGPT will stop producing As once it reaches a predefined character limit.

      4. 1

        because eventually the token gap between your request and the next word it should emit becomes too long

        In the standard transformer architecture there isn’t such a thing as a gap that is too long (modulo context size, but usually models are constrained to generate something shorter than the context size). Transformers are not like RNNs where memory is kept in a single state vector. When a token is generated the model can attend to all previous tokens equally, so there really is no difference between the Nth and the N-100th token (though they are distinguished through positional embeddings, otherwise the model wouldn’t have a notion of word order).

        Most generators (including ChatGPT) sample from the vocab output softmax distribution to increase variety in the output, rather than taking the top-1/argmax. What most likely happens is that at some point a piece other than the piece for A is sampled, and then the model goes off producing more nonsensical tokens. Another possibility is that e.g. position embeddings dilute the original query, but my first bet would be sampling.
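
        For what it’s worth, here is a minimal toy sketch of that sampling effect (a hypothetical four-token vocabulary with made-up probabilities, nothing from a real model):

        import numpy as np

        # Toy next-token distribution: the piece for "A" is very likely, but not certain.
        vocab = ["A", "B", "the", "\n"]
        probs = np.array([0.97, 0.01, 0.01, 0.01])

        # Greedy/argmax decoding would emit "A" forever.
        greedy = vocab[int(np.argmax(probs))]

        # Sampling (what ChatGPT-style generators do) occasionally picks something else;
        # over hundreds of tokens that becomes near-certain, and one stray token is
        # enough to derail the repetition.
        rng = np.random.default_rng(0)
        samples = [vocab[rng.choice(len(vocab), p=probs)] for _ in range(200)]
        print(greedy, sum(s != "A" for s in samples))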

        (I completely agree with your point of not really understanding language though.)

      5. -1

        I disagree, these systems do have understanding.

    4. 4

      Reading what caius is doing.

      1. 1

        I’m glad it isn’t just me who’s impressed!

      2. 1

        caius?

        1. 1

    5. 4

      Does this prevent installing packages on my home directory as well? That would be annoying.

      1. 3

        I always install in my home directory (I’m not a monster) but having packages in my home directory installed at different times, with different python versions has caused me all kinds of low level annoying problems.

      2. 2

        Use pyenv to build pythons that you control in your home directory.

      3. 1

        Yes. But if you pass ‘--break-system-packages’ you get the same behavior as before, you just need to set a scary-sounding option :-) You can set an alias if you want that.

    6. 1

      It seems like the only problem that needs to be solved is to have some CI code that checks if the Go patch version is available and, if not, skips the vulnerability checker or marks it as a known failure, since it’s not possible to correctly test it.

    7. 5

      Since 2008, there has been an IEEE 754 standard for decimal floating point values, which fixes this.

      The fundamental problem illustrated here is that we are still using binary floating point values to represent (i.e. approximate) decimal values.

      1. 6

        Yeah, and Python, Julia’s language of choice, has about the world’s only easily accessible implementation of IEEE 754 decimals. Little known fact, Python’s Decimal class is IEEE 754-compliant arithmetic!

        1. 8

          I was flabbergasted to know that Julia is not Julia’s language of choice.

        2. 1

          Cool! I didn’t realize that :)

          Ecstasy’s decimal types are all built around the IEEE 754 spec as well. Not 100% implemented at this point, though.

        3. 1

          Is this expected?

          >>> Decimal(0.1) + Decimal(0.2)
          Decimal('0.3000000000000000166533453694')
          
          1. 18

            If you want to create a literal Decimal, pass a string:

            >>> Decimal("0.1")
            Decimal('0.1')
            

            When you pass a float, you’re losing information before you do any arithmetic:

            >>> Decimal(0.1)
            Decimal('0.1000000000000000055511151231257827021181583404541015625')
            

            The problem is that 0.1 is not one tenth; it’s some other number very close to it:

            >>> 0.1 .as_integer_ratio()
            (3602879701896397, 36028797018963968)
            

            Whereas if you create a Decimal from a string, the Decimal constructor can see the actual digits and represent it correctly:

            >>> Decimal("0.1").as_integer_ratio()
            (1, 10)
            
      2. 5

        I mean it solves this loosely. The places where decimal vs. non-decimal matters - certainly where this seems to come up - are generally places where I would question the use of floating vs fixed point (of any or arbitrary precision).

        Base 10 only resolves the multiples of 1/10 that binary can’t represent, but it still can’t represent 1/3, so it seems like base 30 would be better as it can also accurately represent 1/3, 1/6, in addition to 1/2, 1/5, and 1/10. Supporting this non binary format necessarily results in slower operations.

        Interestingly, to avoid a ~20% reduction in precision, the decimal IEEE 754 format actually works in base 1000.
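
        As a quick illustration with Python’s decimal module (a sketch; the digit counts shown depend on the default context precision):

        from decimal import Decimal
        from fractions import Fraction

        # Base 10 fixes 1/10, but 1/3 is still rounded to the context precision.
        print(Decimal(1) / Decimal(10))  # Decimal('0.1')
        print(Decimal(1) / Decimal(3))   # Decimal('0.333...3'), 28 significant digits

        # Exact thirds need rationals (or a base divisible by 3, as suggested above).
        print(Fraction(1, 3) * 3 == 1)   # True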

        1. 2

          “Base 10 only resolves the multiples of 1/10 that binary can’t represent”

          That is quite convenient, since humans almost always work in decimals.

          I have yet to see a currency that is not expressed in the decimal system.

          I have yet to see an order form that does not take its quantities in the decimal system.

          In fact, if there’s any type that we do not need, it’s binary floating point, i.e. what programmers strangely call “float” and “double”.

          1. 5

            I have yet to see a currency that is not expressed in the decimal system. I have yet to see an order form that does not take its quantities in the decimal system.

            Yes, which is my point, there are lots of systems for which base 10 is good for humans, but that floating point in any base is inappropriate.

            In fact, if there’s any type that we do not need, it’s binary floating point, i.e. what programmers strangely call “float” and “double”.

            Every use case for floating point requires speed and accuracy. Every decimal floating point format is significantly more expensive to implement in hardware area, and is necessarily slower than binary floating point. The best case we have for accuracy is ieee754’s packed decimal (or compressed? I can’t recall exactly) which takes a 2.3% hit to precision, but is even slower than the basic decimal form which takes a 20% precision hit.

            For real applications the operations being performed typically cannot be exactly represented in base 10 (or 1000) or base 2, so the belief that base 10 is “better” is erroneous. It is only a very small set of cases where a result would be exactly representable in base 10 where this comes up. If the desire is simply “be correct according to my intuition” then a much better format would be base-30, which can also represent 1/(3^n) correctly. But the reality is that the average precision is necessarily lower than base-2 for every non-power of 2 base, and the performance will be slower.

            Floating point is intended for scientific and similar operations which means it needs to be as fast as possible, with as much precision as possible.

            Places where human decimal behaviour is important are almost universally places where floating point is wrong: people don’t want their bank or order systems doing maths that says x+y==x when y is not zero, which is what floating point does. That’s because people are dealing with quantities that generally have a minimum fractional quantity. Once you recognize that, your number format should become an integer count of that minimum quantity.
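
            A quick sketch of that x+y==x behaviour with ordinary 64-bit binary floats (the exact threshold depends on the magnitude of x):

            # Adding a small-but-nonzero amount to a large enough balance is
            # silently lost: the sum rounds straight back to the original value.
            x = 1e16   # e.g. a very large balance
            y = 1.0
            print(x + y == x)  # True: y vanished

            # An integer count of the minimum unit never has this problem.
            xi, yi = 10**16, 1
            print(xi + yi == xi)  # False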

          2. 2

            For currencies, you can just use integers, floats are not meant for that anyway. Binary is the most efficient to evaluate on a computer.

            1. 3

              Yes, for currencies, you can use integers. Who would want to say x * 1.05 when they could say multFixPtDec(x, 105, 2);
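
              For reference, a sketch of what a hand-rolled helper like that hypothetical multFixPtDec(x, 105, 2) might look like (the name, signature, and half-up rounding are assumptions, not from any standard library):

              def mult_fix_pt_dec(amount_cents: int, factor: int, decimals: int) -> int:
                  """Multiply an integer amount by factor / 10**decimals, rounding half up
                  (non-negative amounts assumed, for brevity)."""
                  scale = 10 ** decimals
                  return (amount_cents * factor + scale // 2) // scale

              # x * 1.05 with x = $2.99 held as 299 cents:
              print(mult_fix_pt_dec(299, 105, 2))  # 314 cents, i.e. $3.14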

              To some extent, this is why we use standards like IEEE 754. Some of us remember the bad old days, when every CPU had a different way of dealing with things. 80 bit floats for example. Packed and unpacked decimal types on x86 for example. Yay, let’s have every application solve this in its own unique way!

              Or maybe instead, let’s just use the standard IEEE 754 type that was purpose-built to hold decimal values without shitting itself 🤷‍♂️

              1. 4

                [minor edit: I just saw both my wall of text replies were to u/cpurdy which I didn’t notice. This isn’t meant to have been a series of “target cpurdy” comments]

                Yes, for currencies, you can use integers. Who would want to say x * 1.05 when they could say multFixPtDec(x, 105, 2);

                I mean, sure, if you have a piss poor language that doesn’t let you define a currency quantity it will be annoying. It sounds like a poor language choice if you’re writing something that is intended to handle money, but more importantly, using floating point for currency is going to cause much bigger problems.

                And this has nothing to do with ieee754, which is merely a specific standard detailing how the storage bits for the format work; the issue is fundamental to any floating point format: floating point is not appropriate for anything where you are expecting exact quantities to be maintained (currencies, order quantities, etc.), and it will bite you.

                Some of us remember the bad old days, when every CPU had a different way of dealing with things. 80 bit floats for example.

                So, as a heads up, assuming you’re complaining about x87’s 80-bit floats: those are ieee754 floating point, and are the reason ieee754 exists: every other manufacturer said ieee754 could not be implemented efficiently until Intel went and produced it. The only issue is that, being created before finalization of the ieee754 specification, it uses an explicit 1-bit, which turns out to be a mistake.

                Packed and unpacked decimal types on x86 for example.

                You’ll be pleased to know ieee754’s decimal variant has packed and unpacked decimal formats - unpacked taking a 20% precision hit but being implementable in software without being catastrophically slow, and packed having only a 2.3% precision hit but being pretty much hardware only (though to be clear as I’ve said elsewhere, still significantly and necessarily slower than binary floating point)

                Or maybe instead, let’s just use the standard IEEE 754 type that was purpose-built to hold decimal values without shitting itself 🤷‍♂️

                If you are hell bent on using an inappropriate format for your data then maybe decimal is better, but you went wrong when you started using a floating point representation for values that don’t have significant dynamic range and where gaining or losing value due to precision limits is not acceptable.

                1. 1

                  [minor edit: I just saw both my wall of text replies were to u/cpurdy which I didn’t notice. This isn’t meant to have been a series of “target cpurdy” comments]

                  No worries. I’m not feeling targeted.

                  I mean, sure if you have a piss poor language that doesn’t let you define a currency quantity it will be annoying.

                  C. C++. Java. JavaScript.

                  Right there we have 95% of the applications in the world. 🤷‍♂️

                  How about newer languages with no decimal support? Hmm … Go. Rust.

                  And this has nothing to do with ieee754

                  Other than it actually specifies a standard binary format, operations, and defined behaviors thereof for decimal numbers.

                  So as a heads up assuming you’re complaining about x87’s 80bit floats: those are ieee754 floating point

                  Yes, there are special carve-outs (e.g. defining “extended precision format”) in IEEE754 to allow 8087 80-bit floats to be legal. That’s not surprising, since Intel was significantly involved in writing the IEEE754 spec.

                  ieee754’s decimal variant has packed and unpacked decimal formats - unpacked taking a 20% precision hit but being implementable in software without being catastrophically slow, and packed having only a 2.3% precision hit but being pretty much hardware only

                  I’ve implemented IEEE754 decimal with both declet and binary encoding in the past. Both formats have the same ranges, so there is no “precision hit” or “precision difference”. I’m not sure what you mean by packed vs unpacked; that seems to be a reference to the ancient 8086 instruction set, which supported both packed (nibble) and unpacked (byte) decimal arithmetic. (I used both, in x86 assembly, but probably not in the last 30 years.)

                  you went wrong when you started using a floating point representation for values that don’t have significant dynamic range and where gaining or losing value due to precision limits is not acceptable

                  I really do not understand this. It is true that IEEE754 floating point is very good for large dynamic ranges, but that does not mean that it should only be used for values with a large dynamic range. In fact, quite often IEEE754 is used to deal with values limited between zero and one 🤷‍♂️

                  1. 5

                    C. C++. Java. JavaScript.

                    C++:

                    struct Currency {
                        long long cents;  // integer count of the smallest unit
                        Currency operator+(Currency o) const { return {cents + o.cents}; }
                        Currency operator-(Currency o) const { return {cents - o.cents}; }
                    };
                    

                    How about newer languages with no decimal support? Hmm … Go. Rust.

                    You can also do similar in rust. I did not say “has a built in currency type”.

                    You can also add one to python, or a variety of other languages. I’m only partially surprised that Java still doesn’t provide support for operator overloading.

                    And this has nothing to do with ieee754

                    Other than it actually specifies a standard binary format, operations, and defined behaviors thereof for decimal numbers.

                    No. It defines the operations on floating point numbers. Which is a specific numeric structure, and as I said one that is inappropriate for the common cases where people are super concerned about handling 1/(10^n) accurately.

                    I’ve implemented IEEE754 decimal with both declet and binary encoding in the past. Both formats have the same ranges, so there is no “precision hit” or “precision difference”. I’m not sure what you mean by packed vs unpacked; that seems to be a reference to the ancient 8086 instruction set, which supported both packed (nibble) and unpacked (byte) decimal arithmetic.

                    I had to go back and re-read the spec, I misunderstood the two significand encodings. derp. I assumed your reference to the packed and unpacked was those.

                    On the plus side, this means that you’re only throwing out 2% of precision for both forms.

                    I really do not understand this. It is true that IEEE754 floating point is very good for large dynamic ranges, but that does not mean that it should only be used for values with a large dynamic range.

                    No, I mean the kinds of things that people care about / need accurate representation of multiples of 1/(10^n) for do not have dynamic range; fixed-point (or no point at all) is the correct representation. So decimal optimizes the floating point format for fixed-point data, instead of for the actual use cases that have widely varying ranges (scientific computation, graphics, etc).

                    In fact, quite often IEEE754 is used to deal with values limited between zero and one 🤷‍♂️

                    There is a huge dynamic range between 0 and 1. The entire point of floating point is that all numbers can be represented as a value in [1..Base) scaled by an exponent, which is what gives the dynamic range. The point I am making is that the examples where decimal formats are valuable do not need that at all.

              2. 2

                What is the multiplication supposed to represent? Are you adding a 5% fee? You need to round the value anyway, the customer isn’t going to give you 3.1395 dollars. And what if the fee was 1/6 of the price? Decimals aren’t going to help you there.

                1. 2

                  It never ceases to amaze me how many people really work hard to avoid obvious, documented, standardized solutions to problems when random roll-your-own solutions can be tediously written, incrementally-debugged, and forever-maintained instead.

                  Help me understand why writing your own decimal support is superior to just using the standard decimal types?

                  I’m going to go out on a limb here and guess that you don’t write your own “int”, “float”, and “double”. Why is decimal any different?

                  This whole conversation seems insane to me. But I recognize that maybe I’m the one who is insane, so please explain it to me.

                  1. 1

                    No, I’m saying that you don’t need a decimal type at all. If you need to represent an integral value, use an integer. If you want to represent an approximation of a real number, use a float. What else would you want to represent?

                    1. 3

                      I would like to have a value that is a decimal value. I am not the only developer who has needed to do this. I have needed it many times in financial services applications. I have needed it many times in ecommerce applications. I have needed it many times in non-financial business applications. This really is not a crazy or rare requirement. Again, why would you want to use a type that provides an approximation of the desired value, when you could just use a type that actually holds the desired value? I’m not talking crazy, am I?

                      1. 2

                        What do you mean by “a decimal value”? That’s not an established mathematical term. If you mean any number that can be expressed as m/10ⁿ for some integers m, n, you need to explain precisely why you’d want to use that in a real application. If you mean any number that can be expressed as m/10ⁿ for some integer m and a fixed integer n, why not just use an integer?

                2. 1

                  My proposal is that we switch to a base 30 floating point format, and that could handle a 1/6th fee :D :D :D

              3. 2

                Being able to say x * 1.05 isn’t a property of the type itself, it’s just language support. If your language supports operator overloading you could use that syntax for fixed point too.

                1. 1

                  Oh, you are using a language with fixed point literals? I have (in the past). I know that C#/VB.NET has its 128-bit non-standard floating point decimal type, so you’re not talking about that. Python has some sort of fixed point decimal support (and also floating point decimal). What language are you referring to?

                  1. 2

                    Oh, you are using a language with fixed point literals?

                    You don’t need to. Strings are a good substitute

                    For Kotlin it doesn’t really even matter what the left operand is

                    fun main() {
                        println("1.05" * 3)
                    }
                    
                    operator fun String.times(right_operand: Int): FixedDecimal {
                        // Do math
                    	return FixedDecimal(); // Return placeholder
                    }
                    
                    class FixedDecimal;
                    

                    https://pl.kotl.in/7FDdqQdSo

                    1. 1

                      So your idea is to write your own custom decimal type? And that is somehow better than using an international well-established standard IEEE-754?

                      I think Kotlin is a nice language, and it’s cool that it allows you to write new classes, but being forced to build your own basic data types (”hey look ma! I invented a character string!”) seems a little crazy to me 🤷‍♂️

                      1. 2

                        The idea is that the type represents an underlying standard as well as its defined operations. You don’t need native support for a standard in order to support said standard

                        Edit:

                        but being forced to build your own basic data types

                        I was giving an example about ergonomics and language support rather than using an opaque dependency

    8. 11

      I find music theory fascinating from a philosophical perspective because:

      • our perception of pitch, and of pleasing combinations of pitches, is mathematical. Our ears/brains experience pitches logarithmically and like small-integer ratios.
      • So music theory is basically simple applied math — integer ratios, modular arithmetic, logs base 2.
      • We expect that mathematical structures will fit together perfectly, because usually they do. When you prove a=b, then a is precisely equal to b, no slop, no rounding error.
      • But when you do the very fundamental exercise described in this article, which Pythagoras was probably not the first to try, you end up with a beautiful structure of 12 notes … but it doesn’t quite fit. By all rights it ought to be a perfect 12-pointed star with all its lovely symmetries, but the damn thing doesn’t close. And yet we use this star, this circle of fifths, as a foundational structure of music in nearly every human culture.
      • This problem has been bothering musicians, and causing real problems, for about 2,000 years. You couldn’t transpose pieces to a different key or play them on certain instruments. Composers couldn’t use certain intervals or harmonies because they sound like shit. In fact no matter what you do you can’t get all harmonies to sound right. Composers were literally engaging in flame wars and nearly coming to blows in 18th-century Europe over this.
      • Our current Western tuning is a kind of hack that’s very symmetrical — 12 equal steps of the 12th root of 2 — but which makes all intervals except octaves slightly wrong. The wrongness is small enough to ignore, and it turns out if you grow up with it the true integer-ratio intervals sound weird and wrong.

      Anyway. And it’s all based on this odd coincidence that (3/2)^12 is almost equal to 2^7. If that weren’t the case, music would be indescribably different.
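
      The near-miss is easy to check numerically; a quick sketch:

      # Stack twelve perfect fifths (3/2) and compare with seven octaves (2^7).
      twelve_fifths = (3 / 2) ** 12         # ≈ 129.75
      seven_octaves = 2 ** 7                # 128
      print(twelve_fifths / seven_octaves)  # ≈ 1.0136, the Pythagorean comma

      # Equal temperament's fifth, 2**(7/12), is slightly flat of 3/2,
      # which is what lets twelve of them close the circle exactly.
      print(2 ** (7 / 12))                  # ≈ 1.4983 vs the pure 1.5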

      1. 2

        Jacob Collier does a great quick demo of how different the integer-ratio vs 12th-root-2 notes can sound: https://www.youtube.com/watch?v=XwRSS7jeo5s

      2. 2

        Rhythm is also fascinating! It is so syntactical

      3. 1

        I don’t think the circle of fifths is used outside western music. Can you explain why you say “nearly every human culture”?

        1. 3

          12 tone equal temperament (which the circle of fifths arises from) has been around for a couple thousand years and has influenced a plurality of non-westerners. Especially after the internet, our music tastes have started to converge across the world. There are of course non-western cultures that don’t use 12 tones and thus don’t have the same circle of fifths, but if you listen to the radio in Eastern Europe, South America, East Asia, etc (big first-world cultural hubs), you’ll find plenty of pop songs structured very similarly to American pop songs. Bad Bunny and Higher Brothers are some examples of converging music tastes imo.

        2. 1

          Sorry, I was being a bit lazy there. What’s universal is the use of simple pitch ratios as musical intervals. Every culture that has any sort of music has discovered & used pentatonic scales (or so I’ve read.)

    9. 2

      Drinking

      1. 1

        Hopefully not alone…

    10. 2

      It’s a long weekend in Canada and I got a cold. So I’m going to spend most of my time on the piano with a mask on, prepping for an exam next week!

      1. 1

        Which exam? I’ve been doing MTAC piano exams.

        1. 1

          Hey, that’s awesome! Is that a teacher’s cert or as a student? I didn’t know about MTAC. I’m doing my Level 6 RCM exam this Thursday. Went back to piano after a looong time away.

          1. 1

            Student level 5. I started playing piano in 2019 so I’m an adult student.

            1. 1

              That’s awesome! I’m an adult student as well! I can’t see the MTAC syllabus, but based on both RCM and MTAC having 10-11 levels, they should be very similar. I started again in Aug 2020 as well. Best decision ever!

              1. 1

                It’s not really public (you have to buy it) but they can be found on the tuning note website http://thetuningnote.com/mtac/CM/2012%20Level%205%20zRepertoire%20Requirements.pdf

    11. 10

      Eyyy, this is my project. :-) – Happy to answer any questions.

      1. 2

        This is a great project! Btw Albufeira is in Portugal. :-)

        1. 2

          This is the most random comment I have ever received. Thanks for that :-D – It took me a looooong time to figure out that you’re referring to the travel map on my blog. Hehe.

      2. 2

        Hi - this looks nice.

        What is the security story here? Is there a document that shows the flow from curl all the way to the notification on the phone?

        1. 2

          Hi. There’s no documentation page (yet) that describes architecture and flow, though just judging by how often I have ASCII-drawn it, there really should be one :-)

          From the very start, ntfy was designed as a convenience-first app (as simple as possible), which you can see by how simple the curl and POST/PUT requests are. That’s not an excuse, it’s just a conscious choice I made. Because of that, nothing is encrypted at rest (only transport encryption if TLS is used).

          Flow 1 (with Firebase):

          client (e.g. curl) ---[HTTP(S)]---> ntfy server [store in cache] ---> Firebase ---> Android app
          

          Flow 2 (without Firebase):

          client (e.g. curl) ---[HTTP(S)]---> ntfy server [store in cache] ---[HTTP(S) JSON/WS]--> Android app
          

          Flow 3 (iOS):

          client (e.g. curl) ---[HTTP(S)]---> ntfy server [store in cache] ---> Firebase ---> APNS ---> iOS
          

          Messages are stored in plaintext in a SQLite database on the server, unless the X-Cache: no header is passed. All messages are forwarded to Firebase, unless the X-Firebase: no header is passed.
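
          To make that concrete, here is a minimal sketch of such a publish request from Python using the requests library (the topic name is made up):

          import requests

          # Publish a message to a topic on the server, asking it not to keep
          # the message in the cache (the X-Cache header described above).
          requests.post(
              "https://ntfy.sh/my-backup-alerts",
              data="Backup finished successfully",
              headers={"X-Cache": "no"},
          )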

          If you want private messages, you can either wait for the E2E feature (https://github.com/binwiederhier/ntfy/issues/69), which I have already begun developing, and which sadly destroys the ease of use.

          Or you can run your own selfhosted server and add basic auth and ACLs (https://ntfy.sh/docs/config/#access-control).

          1. 3

            For more complex cases, it’s worth looking at what Signal does. They basically treat the notification services as a 1bit signal that there is something pending (I think that they may also send occasional spurious ones to make traffic correlation harder). Once the app receives the notification, it wakes up and polls the real service.

            I couldn’t see anything about efficiency though. The reason most apps use a notification service is to allow a single background service that consumes a tiny amount of RAM to have a single network connection with a very long timeout, and then wake up when a notification arrives and prod the system to either forward it to the running app or start the app and forward it if necessary. From the examples, it wasn’t clear how you achieve anything like this; it looked as if the apps were running in the foreground and received the notification directly. I guess you are doing this because, I believe, iOS doesn’t allow background apps to maintain persistent network connections.

            1. 1

              They basically treat the notification services as a 1bit signal that there is something pending

              This is what ntfy does for iOS for selfhosted servers: It sends a poll_request via APNS, and then the app will poll the original selfhosted server. There’s a description of this here: https://ntfy.sh/docs/config/#ios-instant-notifications – iOS is veeeeery limited in what you can do. Everything has to go through a central server, so selfhosted servers are technically not really possible at all. It’s quite sad.

              I couldn’t see anything about efficiency though.

              I responded a bit about this here: https://lobste.rs/s/41dq13/zero_cost_push_notifications_your_phone#c_b6qfnd – Bottom line is that for Android, it’s either Firebase (FCM) or a long-standing JSON stream or WebSocket connection. FCM consumes no battery, and the foreground service consumes 0-1% on my phone for the entire day. If used heavily obviously more

              1. 2

                I see. I was hoping that it was possible to use this standalone, but I guess that’s just not permitted on iOS. It would be nice if there were an Android service that de-Google’d devices could use as a single thing maintaining a connection and multiplexing the waiting, so that apps using an individual server can still exit and be restarted when a notification aimed at them arrives.

                1. 2

                  It would be nice if there were an Android service that de-Google’d devices could use as a single thing maintaining a connection and multiplexing the waiting, so that apps using an individual server can still exit and be restarted when a notification aimed at them arrives.

                  I think you are talking about what https://unifiedpush.org/ is trying to be. ntfy is a distributor for UnifiedPush-enabled apps.

      3. 2

        Great docs and cool concept. Well done!

        Not a question, but I assume you’re accepting compliments as well :)

        [edit]

        I actually have a suggestion for the mobile apps. Support deep links to reconfigure the notification server, e.g. https://ntfy.sh/configure?base_url=<NEW NTFY SERVER URL>. When accessed, the user is prompted to allow the app to be reconfigured with the new notify server URL. This would allow self-hosters to more easily roll out push notifications using self-hosted instances. Maybe not a priority for you as a fun open source project with no profit motive, but possibly worth considering.

        1. 1

          I always try to write the docs the way I’d want them from other projects: Lots of examples and pictures :D – Thank you for the kind words.

          [edit]

          Support deep links to reconfigure the notification server

          Surprisingly, this has been suggested recently (https://github.com/binwiederhier/ntfy/issues/440). It’s surprising to me, because I don’t quite understand the use case. If you have a self-hosted server, why would you need a shortcut to configure the app that way? Why not just go in the settings and configure it yourself. It’s a step you have to do only once, so it surely can’t be a huge hassle, right?

          I’m genuinely asking, because maybe I don’t quite understand the case. Feel free to answer or +1 the GitHub ticket.

          1. 2

            Cool, I’ll go ahead and follow up with details on the GH issue.

    12. 5

      Company: Apple

      Company site: https://www.apple.com

      Positions: Software Engineer - OS Networking

      https://jobs.apple.com/en-us/details/200370900/software-engineer-os-networking?team=SFTWR

      https://jobs.apple.com/en-us/details/200308413/software-engineer-os-networking?team=SFTWR

      https://jobs.apple.com/en-us/details/200256976/software-engineer-os-networking?team=SFTWR

      https://jobs.apple.com/en-us/details/200308432/software-engineer-os-networking?team=SFTWR

      Location: On-site. Cupertino or San Diego

      Description: We have 2 positions in Cupertino and 2 positions in San Diego for SWEs who want to work on operating system networking. This includes, but is not limited to, software that implements TCP/IP, firewalls, QUIC, network device drivers, networking APIs (https://developer.apple.com/documentation/network), networking infrastructure, VPNs, wireless/Ethernet/cellular networks, etc.

      Tech stack: C, Obj-C, C++, Swift

      Compensation: Competitive pay and great benefits. The recruiter will cover all of this.

      Contact: You can either apply through the website or you can email me (rpaulo at apple.com) and I will be glad to forward your resumé to the hiring manager / recruiter. Feel free to contact me if you have questions about these job openings.

    13. 4

      Although the keys of this initial encryption are known to observers of the connection

      I haven’t looked at the specs yet. Is that true? Isn’t that horrible?

      1. 6

        I think it’s fundamentally unavoidable. At the point that a browser initiates a connection to a server, the server doesn’t yet know which certificate to present. DH alone doesn’t authenticate that you haven’t been MITM’d.

        1. 5

          It’s not unavoidable if both parties can agree on a PSK (pre-shared key) out-of-band, or from a previous session - and IIRC, the TLS 1.3 0-RTT handshake which is now used by QUIC can negotiate PSKs or tickets for use in future sessions once key exchange is done. But for the first time connection between two unknown parties, it is certainly unavoidable when SNI is required, due to the aforementioned inability to present appropriate certificates.

        2. 2

          On the other hand, if you have been MitM’d you’ll notice it instantly (and know that the server certificate has been leaked to Mallory in the Middle). And now every connection you make is broken, including the ones they did not want to block. I see two ways of avoiding that:

          1. Don’t actually MitM.
          2. Be a certificate authority your users “trust” (install your public key in everyone’s computers, mostly).
        3. 2

          No, but DH prevents sending a key across the wire (which would make it known) and prevents passive observers from reading the ciphertext. Wouldn’t it make sense to talk to the server first?

          1. 3

            Without some form of authentication (provided by TLS certificates in this case), you have no way to know whether you’re doing key exchange with the desired endpoint or some middlebox, so you don’t really gain anything there.

            1. 3

              You gain protection against passive observers, thereby increasing the cost for attackers trying to snoop on what services people connect to. Also, when you eventually receive the certificate anyway, you could at worst retroactively verify that you weren’t snooped on, which is more than you get when you send a key that allows decryption, which still sounds odd to me.

              1. 3

                What you’re suggesting is described on https://www.ietf.org/id/draft-duke-quic-protected-initial-04.html This leverages TLS’s encrypted client hello to generate QUIC’s INITIAL keys.

          2. 1

            I don’t know how much sense it makes? Doing a DH first adds more round trips to connection start, which is the specific thing QUIC is trying to avoid, and changes the way TLS integrates with the protocol, which affects implementability, the main hurdle QUIC has had to overcome.

            1. 1

              I get that, but how does it make sense to send something encrypted when you also send the key to decrypt it? You might as well save that step; after all, the main reason to encrypt something is to prevent it from being read.

              EDIT: How that initial key is sent isn’t part of TLS, is it? It’s part of QUIC-TLS (RFC9001). Not completely sure, but doesn’t regular 0-RTT in TLSv1.3 work differently?

              1. 5

                The purpose of encrypting initial packets is to prevent ossification.

                1. 1

                  Okay, but to be fair that kind of still makes it seem like the better choice would be unauthenticated encryption that is not easily decryptable.

                  I know 0RTT is a goal but at least to me it seems like the tradeoff isn’t really worth it.

                  Anyways thanks for your explanations. It was pretty insightful.

                  I guess I’ll read through more quic and TLS on the weekend if I have time.

                  1. 1

                    The next version of QUIC has a different salt which prevents ossification. To achieve encryption without authentication, the server and the client can agree on a different salt. There’s a draft describing this approach, I think.

              2. 1

                how does it make sense to send something encrypted when you send the key to decrypt it with it?

                According to https://quic.ulfheim.net/ :

                Encrypting the Initial packets prevents certain kinds of attacks such as request forgery attacks.

      2. 2

        It’s not more horrible than the existing TLS 1.3 :-) I sent out a link to something that may be of interest to you.

      3. 0

        It’s only the public keys that are known, and if they did their job well, they only need to expose ephemeral keys (which are basically random, and thus don’t reveal anything). In the end, the only thing an eavesdropper can know is the fact you’re initiating a QUIC connection.

        If you want to hide that, you’d have to go full steganography. One step that can help you there is making sure ephemeral keys are indistinguishable from random numbers (With Curve25519, you can use Elligator). Then you embed your abnormally high-entropy traffic in cute pictures of cats, or whatever will not raise suspicion.

        1. 1

          This is incorrect, see RFC9001. As a passive observer you have all the information you need to decrypt the rest of the handshake. This is by design and is also mentioned again in the draft that rpaulo mentioned.

          The problems with this are mentioned in 9001, the mentioned draft and the article.

          1. 1

            Goodness, I’m reading section 7 of the RFC right now, and it sounds pretty bad. The thing was devised in 2012; by then we knew how to make nice handshakes that leak little information and, for heaven’s sake, authenticate everything.

            As a passive observer you have all the information you need to decrypt the rest of the handshake.

            Now I’m sure it’s not that bad. I said “It’s only the public keys that are known”. You can’t be implying we can decrypt or guess the private keys as well? And as a passive observer at that? That would effectively void encryption entirely.

    14. 1

      dscacheutil is unrelated to mDNSResponder.

    15. 2

      And so I assumed I just didn’t get classes.

      I remember that feeling. Spent a couple years in college feeling this way.

      At this point of my career, I don’t see the added complexity of classes as a problem.

    16. 1

      Very interesting. I had no idea Pom Gets Wi-Fi existed.

    17. 6

      Why another file manager? I wanted something simple and minimalistic, something to help me with faster navigation in the filesystem. A cd & ls replacement. So I built “llama”. It allows you to quickly navigate with fuzzy searching, and the cd integration is quite simple. It opens vim right from llama. That’s it. Simple and dumb as a llama.

      1. 6

        llama

        fuzzy search

        Will check it out for that alone.

        1. 0

          I think nnn also has that

          1. 1

            I meant the play on words :D

            1. 2

              Lol, took me a while to figure it out :D

      2. 5

        I think you should reconsider insulting llamas :-) They are not that dumb.

        1. 3

          Have you seen a llama’s face?)

          1. 3

            That’s not where the intelligence is stored.

            1. 0

              Yes, it’s behind it, and the face is a mirror of intelligence.

      3. 3

        And I say, “Hey, Llama, hey, how about a little something, you know, for the effort, you know.” And he says, “Oh, uh, there won’t be any money, but when you die, on your deathbed, you will receive total consciousness.” So I got that goin’ for me, which is nice.

        1. 1

          Thanks :)

    18. 2

      Interesting. I’ve had the exact same trackball for more than a year and never had any problems. I wonder how long you’ve been using it.

    19. 2

      It does sound like a debugging aid, given that different opcodes generate different frequencies.

    20. 2

      I did very similar modifications to my Silvia 6y ago. There’s a lot of information on http://www.pidsilvia.com