Threads for dpedu

  1. 2

    Back when Boot Camp was relatively new on Macs, Parallels Desktop had a similar feature where, while running in macOS, it could boot your Windows partition in a VM. I think VMware Fusion could do it as well.

    1. 1

      I think this feature is still available in Parallels; I used it back when I still had an Intel MacBook.

    1. 3

      I haven’t decided yet if I agree with the conclusion of this article. I recently set up a new bare-metal server using ZFS on Linux, and the fact that the ARC is separate from the normal Linux page cache was bugging me. I found this article while looking for information on possible implications of that issue. I do like ZFS’s unification of filesystem and volume manager (the “layering violation”), the snapshot support, and of course, the emphasis on data integrity.

      1. 6

        Not sure about Linux but on FreeBSD the buffer cache was extended to allow it to contain externally-owned pages. This means that ARC maintains a pool of pages for disk cache (and grows and shrinks based on memory pressure in the system) and the same pages are exposed to the buffer cache. Prior to this (I think this landed in FreeBSD 10?), pages needed to be copied from the ARC into the buffer cache before they could be used to service I/Os and so you ended up with things being cached twice. Now, ARC just determines which pages remain resident from the storage pool.

        1. 2

          On Linux, the free command counts the ARC under “used”, not “available” like the standard Linux page cache.

          Having read all of the comments here, I think I’ll stick with ZFS. But I still need to find out if there are any unusual high-memory-pressure situations I should watch out for.
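
          If it helps anyone else checking the same thing, here's how I've been watching it (a sketch; the arcstats path is standard for OpenZFS on Linux, though field names can vary between versions):

```shell
# Show the current ARC size and its configured floor/ceiling.
# /proc/spl/kstat/zfs/arcstats is exposed by the ZFS kernel module.
awk '$1 == "size" || $1 == "c_min" || $1 == "c_max" {print $1, $3}' \
    /proc/spl/kstat/zfs/arcstats

# Compare against `free`: the ARC shows up under "used", not "buff/cache".
free -h
```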

          1. 6

            FYI, here’s what I’ve observed regarding memory pressure after many years of running OpenZFS on Linux on multiple machines, each with multiple ZFS pools, some using hard disks and others using SSDs (and some using HDDs with SSDs as caches), and also including ZFS as the root filesystem on all systems:

            • Many years ago I ran into a situation where, under heavy memory pressure, the ARC would shrink too much. This caused the working set of ZFS metadata and/or commonly accessed data to be constantly evicted from cache, and therefore the system appeared to almost hang because it was too busy re-reading recently evicted blocks instead of making useful progress. The workaround was simple: set the zfs_arc_min parameter to force arc_c_min to be 1/16th of system memory (6.25%), rather than the default 1/32nd (3.125%). As an example, on a Raspberry Pi with 8 GiB of memory I add the zfs.zfs_arc_min=536870912 Linux kernel parameter to my boot/grub config (which sets the minimum ARC size to 512 MiB rather than the default 256 MiB), and on a server with 128 GiB I add zfs.zfs_arc_min=8589934592 (i.e. 8 GiB rather than the default 4 GiB), etc. (you get the idea). Since I did this, I have never observed this behavior again, even under the same (or different) stressful workloads.
            • On recent OpenZFS versions, on my systems, I’ve observed that the ARC growing/shrinking behavior is flawless compared to a few years ago, even under heavy memory pressure. However, the maximum ARC size is still too conservative by default (50% of memory). This meant that under low memory pressure, the ARC would never grow beyond 64 GiB on my 128 GiB server, leaving almost 64 GiB of memory completely unused at all times (instead of using it as a block cache for more infrequently used data or metadata). So I just add another kernel parameter to set the maximum ARC size to 90% of system memory instead, e.g. on my 8 GiB Raspberry Pis I add zfs.zfs_arc_max=7730941132 (i.e. 7.2 GiB) and on my 128 GiB server I add zfs.zfs_arc_max=123695058124 (i.e. 115.2 GiB). Since then, the ARC is free to grow and use otherwise unused memory as a cache.
            • Although strictly speaking not a memory issue, many years ago I also added the zfs.zfs_per_txg_dirty_frees_percent=0 parameter, which disables some throttling that was causing me problems, although I don’t remember the exact details. This may no longer be necessary (not sure).
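
            The arithmetic behind those numbers is easy to reproduce. A quick sketch (the 1/16 and 90% ratios are just the ones I settled on above, not ZFS defaults):

```shell
mem_bytes=$((8 * 1024 * 1024 * 1024))  # e.g. an 8 GiB Raspberry Pi
arc_min=$((mem_bytes / 16))            # 1/16 of RAM = 512 MiB
arc_max=$((mem_bytes * 90 / 100))      # 90% of RAM, ~7.2 GiB

# These are the values to paste into the kernel command line:
echo "zfs.zfs_arc_min=${arc_min} zfs.zfs_arc_max=${arc_max}"
```

            For 8 GiB of RAM this reproduces exactly the 536870912 and 7730941132 figures above.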

            Apart from that I haven’t observed any other issues. Although please bear in mind that I use zram swap on all my systems and I don’t use dedup, RAID-Z, or ZFS encryption (I use LUKS encryption instead), etc. So your mileage may vary depending on your hardware/software config and/or your workloads, especially if they are somewhat extreme in some way.

            1. 1

              If you find out, will you leave a link here or PM me? I too do not have a good mental model here and that slightly worries me.

          2. 1

            and the fact that the ARC is separate from the normal Linux page cache was bugging me

            How separate is it? I know that /proc/sys/vm/drop_caches will drop it in the same manner as the normal page cache.
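
            For reference, the knob in question (run as root; my understanding is that on ZFS-on-Linux the ARC registers shrinker callbacks, which is why this appears to drop it too):

```shell
sync                               # flush dirty pages first
echo 3 > /proc/sys/vm/drop_caches  # 1=pagecache, 2=dentries/inodes, 3=both
```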

          1. 3

            I like the proof-of-work idea, but it would be extremely annoying to have such a captcha without anything to fill out. Simply waiting would be too annoying; something that happens while you type, not so much (except for password manager users like me). Another thought: would it be better to use input events somehow? Isn’t it the case that trusted input events can’t be faked by scripts inside typical browsers?

            1. 1

              If you wish to experience it yourself, here’s a test showing the captcha being used as a bot deterrent in front of a media file: every time you navigate to this URL it will redirect you to a new captcha challenge w/ 5 bits of difficulty: https://picopublish.sequentialread.com/files/aniguns.png

              The difficulty is tweakable. I think I used 8 bits of difficulty and specifically waited for one that took abnormally long when I was capturing the screencast I used as the GIF on the README page.

              Isn’t it so that trusted input events can’t be faked by scripts inside the typical browsers?

              Are you referring to the ways that Facebook attempts to prevent script-kiddie bots from interacting on their platform(s)? Yes, a simple version of such a thing may work as an effective heuristic to get rid of non-browser bots and simplistic browser-automation bots without being privacy-invasive. Maybe that’s a good idea for a feature in version 2 🙂
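
              For anyone curious what “5 bits of difficulty” means concretely, here’s a hashcash-style sketch (not the project’s actual code; the function and parameter names are made up):

```python
import hashlib
import itertools

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(challenge || nonce), read as a
    256-bit integer, falls below 2**(256 - difficulty_bits) - i.e. the
    digest starts with `difficulty_bits` zero bits. Expected work:
    about 2**difficulty_bits hash attempts."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

# 5 bits => ~32 hashes on average, effectively instant in a browser;
# each extra bit of difficulty doubles the expected work.
nonce = solve_pow(b"challenge-from-server", 5)
```

              The server only has to do a single hash to verify, which is the asymmetry that makes the scheme work as a deterrent.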

              1. 2

                It looks like this is tied to IP address + browser user agent. Once I load the above page once in my browser I can hit it as many times and as fast as I’d like with curl provided that I pass the same user-agent from my browser.

                1. 1

                  Heh, did you read the code or find that out yourself? I guess I’m impressed either way :P

                  Yes, that’s how I set it up for this particular “picopublish” app, independent of the PoW Captcha project. If you want to see the bot deterrent over and over, you have to re-navigate to the original link without the random token, or else change your UA/IP. I got the idea from the way GoatCounter counts unique visits.
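
                  The GoatCounter-style dedup idea can be sketched like this (illustrative only; the names and salting scheme here are my invention, not what either project actually does):

```python
import hashlib

def visitor_token(ip: str, user_agent: str, rotating_salt: str) -> str:
    """Anonymous repeat-visit key: hash IP + User-Agent together with a
    salt that rotates periodically, so repeat hits can be recognized
    without ever storing the raw IP."""
    material = f"{rotating_salt}|{ip}|{user_agent}".encode()
    return hashlib.sha256(material).hexdigest()

# The same browser hitting again maps to the same token; change the UA
# (or wait for the salt to rotate) and it counts as a fresh visitor.
```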

                  1. 2

                    I fiddled with it in the browser. I do some web scraping, and from that I’m pretty familiar with the process of peeling away one option or feature at a time from HTTP requests until the server finally denies the request.

                2. 1

                  Ah yes, I see; the five-bit version is absolutely bearable. Not much longer than an extensive page-change animation. If this is enough to keep bots out (I guess it is), I would totally go for it.

              1. 31

                On a technical level it’s implemented very well.

                It is matching against a list, so unlike a general recognition AI, there’s very little chance of misidentification.

                The blocklist and matching process is split between client-side and server-side, so it can’t be easily extracted from the phone for nefarious purposes.

                Apple has spent considerable effort to cryptographically ensure they know nothing until multiple matches are found. The phone even sends dummy traffic to obscure how many potential matches there are.

                So as far as scanning for the intended purpose, it’s a careful well thought-out design.

                I am worried about governments putting pressure on Apple to add more kinds of unwanted images to this list. The list is opaque, and for obvious reasons, it can’t be reviewed.

                1. 6

                  This is an improvement over their existing policy of giving authoritarian governments access to iCloud keys for their users: https://www.reuters.com/article/us-china-apple-icloud-insight/apple-moves-to-store-icloud-keys-in-china-raising-human-rights-fears-idUSKCN1G8060

                  This technology will allow Apple to expose only content that governments specifically ban rather than having to give them access to everything. We should be celebrating this for both its ability to combat child abuse and that it protects Apple’s customers from over-broad privacy invasion.

                  1. 1

                    This technology will allow Apple to expose only content that governments specifically ban

                    Do governments always make fair and righteous decisions when deciding which images to ban? I see this situation as disastrous for human rights, because you know darn well countries like China will bully Apple into including whatever images they want in that database.

                    1. 1

                      But China including whatever images they want is WAY better for privacy than today when China simply has access to all of Apple’s Chinese users’ data.

                      1. 1

                        That’s not the case, unless you mean to say China bullying Apple into giving them a user’s decryption key? That scenario is possible with or without this system.

                        1. 1

                          This has been the status-quo for the past 3.5 years: https://www.reuters.com/article/us-china-apple-icloud-insight/apple-moves-to-store-icloud-keys-in-china-raising-human-rights-fears-idUSKCN1G8060

                          China demands access to user data, so many large American tech companies don’t have a significant presence there. Some American companies that are less committed to privacy comply with the conditions that China places on operating there. It’s a huge market, so it’s been a great business move for Apple.

                          Having the ability to scan users’ content on device might be a way to achieve censorship without such indiscriminate access to user data.

                          1. 1

                            The article makes many speculations, but there is nothing concrete in it regarding the Chinese government having the kind of access you described.

                            Also see this more recent article: https://www.nytimes.com/2021/05/17/technology/apple-china-censorship-data.html

                            Documents reviewed by The Times do not show that the Chinese government has gained access to the data.

                            1. 3

                              Apple user data in China is not controlled by Apple, it’s controlled by GCBD, a company owned by a Chinese regional government. Instead of using standard HSMs they use a hacked up iOS system. Apple’s security chips are vulnerable to local attacks. https://arstechnica.com/information-technology/2020/10/apples-t2-security-chip-has-an-unfixable-flaw/

                              So there’s a government owned company that controls the user data which is encrypted with keys stored in an insecure system. If user data is not being accessed that’s a choice that the Chinese government is making, not a restriction on their access.

                              1. 1

                                GCBD is the Chinese company that provides apple with datacenter type services. This is not the same as “controls the user data”.

                                1. 2

                                  From the New York Times article you linked:

                                  U.S. law has long prohibited American companies from turning over data to Chinese law enforcement. But Apple and the Chinese government have made an unusual arrangement to get around American laws.

                                  In China, Apple has ceded legal ownership of its customers’ data to Guizhou-Cloud Big Data, or GCBD, a company owned by the government of Guizhou Province, whose capital is Guiyang. Apple recently required its Chinese customers to accept new iCloud terms and conditions that list GCBD as the service provider and Apple as “an additional party.” Apple told customers the change was to “improve iCloud services in China mainland and comply with Chinese regulations.”

                                  The terms and conditions included a new provision that does not appear in other countries: “Apple and GCBD will have access to all data that you store on this service” and can share that data “between each other under applicable law.”

                                  So to get around US privacy laws and comply with Chinese surveillance laws a Chinese government owned company is the iCloud “service provider” (with Apple listed as an “additional party”) and per the ToS “will have access to all data that you store on this service”.

                                  It was a great business decision. They’re the only major western tech company making a lot of money from the huge Chinese market. I personally wouldn’t want to work there but the people who do are doing very well.

                  2. 2

                    Could such a feature be “pretty easily” fooled into triggering law enforcement against someone, as the article implies?

                    Is it plausible to assume that they scan the cached Telegram/Whatsapp/Browser images? If so, how would it behave if someone sends you a set of known infractor images? (an evil chat bot, for example)

                    1. 6

                      Apple says they scan only images in the iCloud library, so images in 3rd party apps and browsers won’t be scanned, unless you save them or screenshot them to your iCloud library. Of course, Apple devices belong to Apple, not you, so Apple could later decide to scan whatever they want.

                      With the current scheme, to cause someone trouble, you’d first have to have multiple banned images to send to them. I hope obtaining actual CSAM is not “pretty easy”.

                      My big worry was that a plaintext blocklist on the phone could be used to generate arbitrary new matching images, but fortunately Apple’s scheme protects against this — the phone doesn’t know if images match. Therefore, you can’t easily make innocent-looking images to trick someone to save them.

                      1. 3

                        Of course, Apple devices belong to Apple, not you, so Apple could later decide to scan whatever they want.

                        Is there a source for this information?

                        1. 3

                          What’s your source for the “multiple banned images” part? Skimmed through Apple’s technical PDF descriptions a bit but didn’t find that part right away.

                          1. 4
                          2. 2

                            Apple says they scan only images in the iCloud library, so images in 3rd party apps and browsers won’t be scanned, unless you save them or screenshot them to your iCloud library.

                            I believe pictures in a lot of messaging apps are automatically uploaded to iCloud. So you could just send someone some pictures over WhatsApp, email, or whatnot. Not 100% sure of this though; I’d have to check. I disabled all the iCloud stuff because it kept nagging.

                            1. 1

                              That or you can generate adversarial images that trigger known hashes. It isn’t using cryptographic hashes, it is using perceptual hashes.
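
                              A toy illustration of the difference (this is plain average-hashing; Apple’s NeuralHash is far more sophisticated, but the collision property is the point):

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is
    brighter than the image mean. Visually similar images produce
    identical or near-identical hashes - unlike a cryptographic hash,
    where flipping a single bit scrambles the whole digest."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

original = [[10, 200], [12, 198]]   # tiny grayscale "image"
tweaked = [[11, 199], [13, 197]]    # slightly re-encoded copy
# average_hash(original) == average_hash(tweaked): same perceptual hash.
```

                              It’s exactly this tolerance for small changes that makes adversarial preimages plausible against perceptual hashes in a way they aren’t against cryptographic ones.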

                              1. 1

                                No, you can’t, because the device doesn’t know if it has got a match.

                                1. 1

                                  And you think there will be no other way to get ahold of any of the perceptual hashes that are being scanned for?

                                  1. 2

                                    What I’m saying is that you can’t easily abuse Apple’s implementation for this. They’ve anticipated that problem and defended against it.

                                    If you get hold of some hashes or banned images from another source, that’s not Apple’s fault.

                        1. 1

                          OOP banter aside, what’s the point of individual examples of code? OOP is just one tool of many and you should pick the best tool for the use case.

                          I’d like to submit https://hub.spigotmc.org/javadocs/bukkit/. It’s open-source Minecraft server code, and this example is wonderfully relatable even to non-programmers.

                          1. 1

                            Do you have a link to the code handy? This seems to be generated documentation. Also, it seems it’s a framework and not an end-to-end app. I’m not strict about it, just double-checking.

                          1. 4

                            Obligatory: https://www.bay12games.com/dwarves/mantisbt/view.php?id=9195 There was a DF bug where cats were just dying randomly, a fun emergent system bug too

                            1. 2

                              There’s a twitter feed that posts some of the amusing bugs too: https://twitter.com/dwarffortbugs

                            1. 3

                              If you’re not embarrassed by your app then you’ve waited too long to release it

                              I like this too, but it made me go check how many private repos I had.

                              1. 19

                                rclone - you could call it a modern rsync or scp.

                                It seems to be well known in some circles and relatively unknown in others, so I figured I’d post it, since it’s a great tool with responsive developers. It’s an rsync-like tool that supports many other backends besides rsync or ssh. What I like about it is how highly configurable it is, especially w.r.t. the controls over parallelism, and how much more performant it is than rsync in situations where you have many tiny files.
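
                                A sketch of the parallelism knobs I mean (`--transfers` and `--checkers` are real rclone flags; the remote name `s3remote` is a placeholder you’d set up with `rclone config`):

```shell
# Sync a directory full of small files to an S3 bucket, with more
# concurrency than rclone's defaults (4 transfers / 8 checkers):
rclone sync ./photos s3remote:my-bucket/photos \
    --transfers 32 \
    --checkers 16 \
    --progress
```

                                With many tiny files, raising `--transfers` is usually where the win over rsync’s single stream comes from.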

                                1. 3

                                  Strongly seconded. rclone is rsync for cloud storage services. S3. Dropbox. Box. Google Drive. Tons more.

                                  1. 1

                                    I got excited about rclone until I saw it didn’t support iCloud (to be fair, nothing does)

                                    1. 1

                                      Would local file syncing to mounted iCloud folders not handle that need?

                                      1. 1

                                        Not if you don’t use a Mac?

                                  1. 7

                                    Also see OPNsense, a fork of pfSense that adds, among other things, an API.

                                    https://opnsense.org/

                                    1. 2

                                      Yes. I say this as a maintainer of an (in a sense) competing project: if you want a BSD-based network OS with a GUI, use OPNsense.

                                    1. 7

                                      This article is thoroughly terrible. The fact that this is not a secret has already been discussed at length, but also consider:

                                      • Microsoft runs hundreds if not thousands of software repositories for the open source world already.
                                      • Microsoft runs NTP nodes in the pool you’re almost certainly “pinging”.
                                      • The “solutions” offered beyond simply removing the .list configuration file are truly insane. Host file entries? Chattr? Just why? Stop, he’s already dead! But seriously, this kind of administration is planting land mines.
                                      • Beyond what Microsoft owns directly, they indirectly “run” many more repos owned by projects that use Azure as their cloud platform.
                                      1. 3

                                        We started decompling our object files and going over the symbols line by line to identify why our Swift code size was so much larger. We deleted unused features. Tyler had to rewrite the watchOS app back into objc. - https://twitter.com/StanTwinB/status/1336935373696462848

                                        Deleting features because your code size is too big seems like an antiquated problem and I’m surprised to see a problem like this appear in a modern language that is only a few years old. I don’t know enough about iOS development to tell if Uber was doing anything wrong to cause this. Is this a Swift problem?

                                        1. 4

                                          If I had to guess: they probably have client-side A/B tests (or other such things) that are controlled by a server-side config. This means the implementation for both sides of the A/B test are included in the binary.

                                          If they decided to end the A/B test and roll one side out to 100% of users… they don’t strictly need to get rid of the code for the other side. It’s a lower-priority cleanup task, the kind that can easily fall by the wayside, at least until something like this makes it more urgent.

                                          But because it’s controlled by a server-side config, the compiler and toolchain can’t actually tell whether that code is unused, because the only thing controlling it is something like

                                          if (serverSideConfig.useNewThing) {
                                            doNewThing();
                                          } else {
                                            doOldThing();
                                          }
                                          

                                          As far as the compiler is concerned, doOldThing() is still referenced and “used” even if the server-side always sets useNewThing for everyone.

                                        1. 11

                                          I haven’t desired an OS war debate since about 2005. I understand that this isn’t fair since I’m probably laying down kindling and not asking for a flame war.

                                           I would switch much faster if I had iTerm2 (don’t say tmux or alacritty unless they are 1:1) and a few other tools. I’ve run Gentoo as my main machine and many Linuxes in the past (all I can do for street cred, folks). Xcode, the App Store, economics, or something else apparently makes high-quality UIs possible, because it’s just not the same elsewhere: Monodraw, Alfred, Pixelmator, and a few others. There are a few I could budge on. The Mac apps are very polished, the fonts are the best, and usually the UX is good (iTerm’s options boxes are kind of insane). I keep flirting with the idea but then I make a list of stuff I’d miss.

                                           Linux has upsides too. It’s really what I want: a Unix box. I could ditch tiling-window-manager clones or near-misses and get a full-on tiled thing going (of course the browser kinda kinks the terminal-based flow, but whatever). That’s not my issue. My issue is that Linux is great on the server. Linux (Unix) excels at text, but its desktop and GUI layer has always been weird. I don’t want it to be like this. If Electron was magically as fast as Qt (etc.) and made it as easy to lay out GUIs as Xcode, maybe that would be it? I just don’t know what the issues are in the GUI space. Armchair analyst mode though: people pay for Mac software.

                                          This just continues to be true: computers suck, macs suck the least. But everything can change with time.

                                          1. 6

                                            I’m not a fan of tmux and alacritty either but found kitty to be a great cross platform alternative to iTerm. At least if it’s the panes and tabs that you want.

                                            Much more pleasant configuration too if you like to keep clean dotfiles.

                                            1. 2

                                              What does iTerm2 do that’s not in something like gnome-terminal?

                                              I use both and I’d like to know about any cool features I’m missing in iTerm2.

                                              They look the same to me from my Linux accustomed experience, what am I missing?

                                              1. 2

                                                Does anything else have “native tmux” yet? That is, tmux windows/panes are just iTerm windows/panes—you don’t need to do any tmux key commands at all. Makes persistent server sessions very nice. I believe the iTerm author implemented the protocol for this in tmux but I’m not sure if any other emulator has adopted it.
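
                                                 For reference, the integration in question is tmux’s control mode, which iTerm2 drives over the `-CC` flag:

```shell
# On the remote host: start a session in control mode, or reattach
# if one named "work" already exists (-A). iTerm2 then renders each
# tmux window/pane as a native window/pane.
tmux -CC new-session -A -s work
```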

                                                1. 1

                                                  iTerm has more customization knobs than gnome-terminal (or any other Terminal emulator I’ve used) by an order of magnitude.

                                                  1. 1

                                                    Same, I would love to know what I was missing from iTerm2. I don’t use the tmux integration, and I’m not sure about other cool features that I missed. But one thing I noticed is that it’s significantly slower than the default Terminal application.

                                                    1. 1

                                                      Good question so I’ll do my best. Most of this is taste but I hope I can explain a feeling.

                                                      1. The hotkeys are nice (to me). They are quicker than leaders and are basically the same as Chrome tabs. Cmd+T for new tab, Cmd+Alt+Arrows. And of course mac apps flash the menu item and have hints next to them. But that’s iTerm leveraging MacOS.
                                                      2. The pane splitting is easy. Moving panes is easy. Moving panes to windows or the opposite, easy-ish.
                                                      3. Broadcasting input to all tabs is neat (but rarely used). Tmux does this too.
                                                      4. The fonts look nice (because MacOS). I’m sure other terminals have 256-color and image support. iTerm was early on this (to me). Powerline fonts, all the fluff.
                                                      5. The fullscreen has native and non-native options, so it’s quick and has survived the Apple OS changes.
                                                      6. I use a global hotkey for a dev log described here.
                                                      7. You can temp fullscreen a pane with shift+Cmd+enter. It has an overlay telling you you are in this mode.
                                                       8. Like someone said, the customizations are great. Just one example: you can dim panes on unfocus to your liking. Not just graphics dimming; font-color dimming too. It’s great.
                                                      9. The tmux stuff is neat, a bit weird (root window has to stay open). Haven’t used it a lot.

                                                       I’ve tried the Windows options. PuTTY (not the same thing) hasn’t changed in decades. ConEmu or Hyper is close. Hyper is a bit slow (maybe things have changed). ConEmu is close with WSL. But I’m biased because of muscle memory!

                                                       Sorry, getting off-topic. Back to the OP: I agree with the sentiment. I’m spooked by the changes. It’s more and more consumer-facing. But I don’t know if any of these things are nails in the coffin, or whether the community will continue to work around them and adapt. There have been breaking changes on major OS versions for a long time. People who work on these machines sometimes wait to upgrade, optimizing for stability. But, OP, I hear you. 🌻

                                                    2. 2

                                                      Yeah, it’s only a matter of time. After a recent upgrade of iTerm2 my whole screen would periodically flicker wildly. Occasionally my machine (2019 Pro) would reboot. I temporarily downgraded to Terminal, and everything settled down. My text mode apps also seemed snappier. My lesson from all this: to always be on the lookout for costs when things change. Even when the change seems pleasant (timestamps on specific lines in iTerm2, tmux integration, lots of other lovely stuff). Because we suck at changing things at scale without regression.

                                                      1. 2

                                                        I agree with this. And your description of Linux feeling different from macOS, at least in terms of GUIs, reminds me of this blog entry: https://blogs.gnome.org/tbernard/2019/12/04/there-is-no-linux-platform-1 IMO, to make a Linux computer feel like a real Unix desktop you need a controlling entity to smooth over the edges. Like Android or Chrome OS or even Raspberry Pi OS. Of course, purists would say “this isn’t the GNU/Linux I know”. They’d be right. But from what I can tell, we don’t even have that option.

                                                      1. 1

                                                        Are they being “cast” in a new role, like in theater?

                                                        Are they being “cast” into a new shape, like molten metal in a mold?

                                                        I read it as one of these two. The latter implies some reorganization of the bits took place, e.g. int to float.

                                                        1. 1

                                                          I feel like this piece makes sense as an argument against monolith applications shoe-horned into Kubernetes, but not so much for an application that fits under the hand-wavy category of “cloud native”.

                                                          1. 1

                                                            Using “git cu” instead of “git clone” will create a gitlab.com/3point2/git-cu directory structure inside your current working directory.

                                                            I really like using git worktree and using it alongside this tool would feel weird. You’d have to put your worktrees somewhere else or come up with some naming convention like gitlab.com/3point2/git-cu_my-branch-name.
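
                                                             To make the mismatch concrete, here’s the kind of layout juggling I mean (the `_my-branch-name` suffix is just a hypothetical convention, not anything git-cu supports):

```shell
# git-cu clones into a path derived from the remote URL...
git cu https://gitlab.com/3point2/git-cu

# ...but a worktree then needs its own directory alongside it,
# which no longer follows the URL-derived scheme:
cd gitlab.com/3point2/git-cu
git worktree add ../git-cu_my-branch-name some-branch
```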

                                                            1. 1

                                                              I agree, it’s not a neat fit for worktrees in its current state. A friend of mine who uses worktrees a lot said the same! I’d be interested in doing something to include support for worktrees somehow, so I’ve created https://gitlab.com/3point2/git-cu/-/issues/1 - if you have any thoughts / ideas on what would feel ergonomic to you please leave a comment.

                                                            1. 23

                                                              I hesitate to post a Twitter thread in response to a submission of a Twitter thread (I wish foone would do this on a blog), but this is worth pointing out since the teardown makes it seem like the electronics version is a joke.

                                                              https://twitter.com/V_Saggiomo/status/1301809747042217984

                                                              That said, this device is incredibly wasteful and irresponsible, and the marketing around it is highly questionable.

                                                              1. 5

                                                                Thanks for sharing the rebuttal, that is a good perspective. I hesitated for a few days before posting that thread as I wasn’t sure how kosher Twitter threads were here. I always find these sorts of teardowns fascinating because it makes me realize how much “magic” we assume in things we see around us.

                                                                And, yes, I wish this was posted on a blog and not just a Twitter roll-up.

                                                                1. 10

                                                                  In general, submissions of Twitter threads are discouraged, with the occasional exception, mainly because Twitter posts tend to be low content, high impact/drama info tidbits (read: news headlines). In this case I consider it an exception since foone tends to post long threads with lots of information. Still, I wouldn’t make a habit of it.

                                                                  1. 3

                                                                    Thank you, I appreciate the explanation and will certainly be sparing.

                                                                2. 2

                                                                  The problem I see here is that the original read to me as purely technical “Oh wow, it seems unguessably complicated and in the end it’s an LED and a photo sensor” - (I myself would’ve expected measuring a change of current or resistance in the test material, but I would’ve been very wrong)

                                                                  But this “rebuttal” is more condescending, like “look at this idiot dissecting this thing when it’s COMMON KNOWLEDGE how it works”, etc.

                                                                  The post by Naomi Wu is insightful, but I don’t get how people can be mad at foone, because I didn’t see any “omg the people who buy this are so stupid”. And that it’s wasteful to be thrown away after one use is a simple fact; the debate over whether it’s worth it is something completely different.

                                                                  1. 2

                                                                    What is it about this device you think is irresponsible?

                                                                    1. 2

                                                                      That it is single use and mass produced.

                                                                      1. 0

                                                                        So is the manual test strip inside of it?

                                                                        1. 5

                                                                          Single-use electronics is much worse than a test strip, I think. Beyond being overkill, it’s polluting to produce, generates long-lasting garbage, and consumes rare materials.

                                                                          1. 1

                                                                            If it was such a waste of “rare” materials, it would be more expensive.

                                                                            1. 4

                                                                              Of course, markets are efficient and always account for externalities 🙄

                                                                              1. 3

                                                                                Ah yes, the market is perfect, of course.

                                                                    1. 2

                                                                      If a user inadvertently visited homebrew.sh, after various redirects an update for “Adobe Flash Player” would be aggressively recommended

                                                                      Heh. This must be a very effective way of convincing people to install your payload. I think I remember seeing this as early as the mid-2000s. And to think it’s still used!

                                                                      1. 26

                                                                        It is not recommended to write if __name__ == ‘__main__’

                                                                        Man, I hate to see bad advice, and that is really bad advice … there is a reason why the name guard has been a standard idiom of Python for years, and no one should be throwing about recommendations to break that. The name guard exists specifically to keep side effects from happening when the file is imported, which includes when you use the help builtin. Which you should do, a lot.

                                                                        1. 1

                                                                          I’d argue it’s not bad advice depending on the reasoning why. I can assume that, if you’re using this idiom, you’re doing it to control whether a “main” function executes and that you’re probably also using the same file as an executable.

                                                                          Setuptools’ entry_points option should be used instead. Doing this removes the need for the above idiom.
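For comparison, the entry_points route looks roughly like this in setup.cfg (the package and function names here are hypothetical):

```ini
# setup.cfg (hypothetical package "mytool")
[options.entry_points]
console_scripts =
    mytool = mytool.cli:main
```

pip then generates a `mytool` launcher script that imports `mytool.cli` and calls `main()`, so nothing needs to run at module level when the file is executed.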

                                                                          1. 2

                                                                            A blanket recommendation not to use any standard and long established language idiom is inherently bad advice, and in Python the name guard remains the idiomatic way to isolate code that is only meant to run if the individual module is executed, rather than imported.

                                                                            A command line entry point (as with setuptools) or a main function is just one such use. So is a self test or demonstration code for a single module library, or for a single module within a larger package. If you’re using unittest then the unittest.main() call at the bottom of the file should always be inside a name guard.

                                                                            Even saying that setuptools’ entry_points “should” be used instead is, arguably, bad advice… that’s only true if you’re packaging with setuptools. Sure, that’s the de facto (but still optional) toolchain for packaging, but that could change, and then you’d be left with a bunch of code that has hewn to the idiom of a dependency rather than to the idioms of the language itself. So does it remove the functional need for the name guard? Sure, for your programs publicly facing entry points… but dropping the idiom doesn’t make your code any better.
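For reference, the idiom being defended looks like this in a single-module library (the module and function names are illustrative):

```python
# greet.py -- importable library code with no side effects on import.

def greet(name):
    """Return a greeting for *name*."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    # Runs only under `python greet.py`, never under `import greet`
    # or help(greet) -- which is exactly the point of the guard.
    print(greet("world"))
```

Drop the guard and the `print` fires every time the module is imported, including when a user just asks for its documentation.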

                                                                        1. 2
                                                                          1. 3

                                                                            In the demo repo linked from this post, I was able to extract the secret using just strings(1). No decompilation required!
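The technique is trivial to reproduce. A sketch (the file name and “secret” below are invented for the demo, not the ones from the repo):

```shell
# Embed a "secret" in a blob alongside some non-printable bytes.
printf 'GARBAGE\000\001\002API_KEY=s3cr3t-demo\000MORE' > demo.bin
# strings(1) prints runs of printable characters (4+ by default),
# exposing any embedded literal.
strings demo.bin | grep API_KEY
```

Anything compiled into a binary as a plain string literal survives intact in the output, which is why baking secrets into shipped executables never really hides them.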