Threads for licorna

  1. 3

    I’m working on Piccle, a static site generator for photographers. It uses EXIF metadata embedded in your photos to build an explorable web portfolio. (You can see an example gallery on my website, though some pictures are NSFW.)

    As far as I know I’m the only user so far (hence its prerelease tag!) but I’ve been working on it for a while and it’s stable/complete enough for my main uses. Over the past month I’ve focused more on editing photos/tidying my metadata, but I still want to add some extra features soon. I’m also planning to experiment with client-side rendering support – not least because the current rendering pipeline can be drastically simplified if that turns out to be a bad idea.

    1. 3

      I really liked your pictures, great work!

      1. 1

        Thanks! I’m glad you like them. :)

      2. 1

        This is relevant to my interests - I’ve been searching for something I can self-host as an alternative to Flickr, and I like the idea of static generation.

        Many moons ago I used something similar that was supposed to generate a portfolio but it lacked the metadata integration.

      1. 15

        My experience with Bash is: “avoid it at any cost”. Unless you are writing very OS-specific stuff, you should always avoid writing Bash.

        The supposed efficiency of Bash is a fallacy; it never pays off. Bash is sticky: it will stay with you until it turns into a big black hole of tech debt. It should never be used in a real software project.

        After years of Bash dependency, we realized it was the biggest pain point for both old and new developers on the team. Right now Bash is not allowed, and any patch introducing new lines of Bash has to delete more than it adds.

        Never use Bash, never learn to write Bash. Keep away from it.

        1. 4

          What do you use instead?

          1. 8

            Python. Let me elaborate a little bit more.

            We are a Docker/Kubernetes shop. We started building containers with the usual docker build/tag/push, plus a test in between. We had one image, and one shell script did the trick.

            We added a new image, and the previous one gained a parameter that lived in a JSON file and was extracted with jq (the first dependency added). Now we had a loop with two images being built, tested, and pushed.

            We added a stage: “release”. The Docker flow was now build, tag, push, test, then tag and push again (to release). Then we added another image, the existing images gained more parameters, and something was curled from the public internet with the response piped into jq. A version docker build-arg was added to all of the images; this version was some sort of git describe output.

            Two years later, the image building and testing process was a disaster. It was impossible to maintain, all errors were caught after the images were released, and the logic to build the ~10 different image types was spread across multiple shell scripts, CI environment definitions, and docker build-args. The images required a very strict order of operations to build: first run the build script, then run script x, then tag something… etc.

            Worst of all, we had this environment almost completely replicated so we could build images both locally (on your own workstation) and remotely in the CI environment.

            Right before the collapse, I asked management for five weeks to fix this monstrosity.

            1. I captured all the logic required to build the images (mostly the parameters needed)
            2. I built a multi-stage process that would do different kinds of tasks with images (build, tag, push)
            3. I added a Dockerfile template mechanism (based on Jinja2 templates)
            4. I wrote definitions (a pipeline) of the process or lifecycle of an image (see the sketch after this list). This would allow us to say, “for image x, build it, push it into this repo” or “for this image, in this repo, copy it into this other repo”
            5. I added multiple builder implementations: the base one is Docker, but you can also use Podman, and I’m planning on adding Kaniko support soon.
            6. I added parallelized builds using multiprocessing primitives.
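
            To make the idea concrete, here is a minimal sketch of that kind of builder abstraction and data-driven pipeline. All names here (DockerBuilder, NoopBuilder, run_pipeline, the inventory layout) are hypothetical stand-ins, not the actual tool described above:

            ```python
            import subprocess

            class DockerBuilder:
                """Builds and pushes images by shelling out to the docker CLI."""

                def build(self, context, tag, build_args=None):
                    cmd = ["docker", "build", "-t", tag]
                    for key, value in (build_args or {}).items():
                        cmd += ["--build-arg", f"{key}={value}"]
                    cmd.append(context)
                    subprocess.run(cmd, check=True)

                def push(self, tag):
                    subprocess.run(["docker", "push", tag], check=True)

            class NoopBuilder:
                """Records calls instead of touching Docker; handy in tests."""

                def __init__(self):
                    self.calls = []

                def build(self, context, tag, build_args=None):
                    self.calls.append(("build", tag))

                def push(self, tag):
                    self.calls.append(("push", tag))

            def run_pipeline(inventory, builder):
                """Run the declared stages for every image in the inventory dict."""
                for image in inventory["images"]:
                    for stage in image["pipeline"]:
                        if stage == "build":
                            builder.build(image["context"], image["tag"], image.get("build_args"))
                        elif stage == "push":
                            builder.push(image["tag"])
            ```

            The point is that the image lifecycle becomes data (the inventory, e.g. loaded from YAML) while the backend (Docker, Podman, a no-op) is swappable.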

            I did this in Python 3.7 in just a few weeks. The most difficult part was migrating the old, tightly coupled shell-scripting solution to the new one. Once the migration was done we had:

            1. The logic to build each image was defined in an inventory file (YAML; not great, but not awful)
            2. If anything needs to be changed, it can be changed in a “description file”, not in shell scripts
            3. The same process can be run locally and in the CI environment, and everything can be tested
            4. I added plenty of unit tests to the Python codebase. Monkeypatching is crucial for testing when you have things like docker build in the middle, although this can also be handled by running the tests with the noop builder implementation (see the sketch after this list).
            5. I modularized the codebase: the generic parts of the image pipeline are confined to their own Python modules. Everything that’s application-dependent lives in our repo and uses the other modules we built. We expect those Python modules to be reused in future projects.
            6. Making changes is no longer intimidating; people are confident about the impact of their changes, which encourages them to make changes and improves productivity.
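
            For the testing point (item 4), here is a sketch of how monkeypatching keeps docker build out of a unit test. pytest’s monkeypatch fixture is real, but build_image below is just a hypothetical stand-in for the tool’s own code:

            ```python
            import subprocess

            def build_image(tag, context="."):
                # Hypothetical production code: shells out to the docker CLI.
                subprocess.run(["docker", "build", "-t", tag, context], check=True)

            def test_build_image_invokes_docker(monkeypatch):
                recorded = []

                def fake_run(cmd, check=False, **kwargs):
                    # Record the command instead of running it.
                    recorded.append(cmd)

                # Replace subprocess.run so no real docker build happens.
                monkeypatch.setattr(subprocess, "run", fake_run)

                build_image("myimage:latest")
                assert recorded[0][:3] == ["docker", "build", "-t"]
            ```

            The noop builder approach mentioned above achieves the same isolation one level higher: the test wires in a builder that only records calls, so nothing in the test ever reaches Docker.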

            Anyway, I’m pretty sure none of this could have been achieved with Bash.

            1. 13

              It sounds to me like your image pipeline was garbage, not the tool used to build it.

              I’ve been writing tools in Bash for decades, and all of them still run just fine. Can’t say the same for all the Python code, now that version 2 is officially EOL.

              1. 3

                bash 3 broke a load of bash 2 scripts. This was long enough ago that it’s been largely forgotten.

                1. 1

                  I agree with you: the image pipeline was garbage, and of course that was our responsibility. We could no doubt write the same garbage in Python.

                  Bash, however, definitely does not encourage proper software engineering, and it makes software impossible to maintain.

            2. 1

              I can confirm this. Roughly two years ago I had to replace a whole build system written in Bash with CMake, and Bash still contaminates many places it should not be involved in, with zero tests.

            1. 1

              I have the latest MacBook Pro 16 with 32GB of RAM, provided by work. I’m one of those Docker guys working with Kubernetes and Go. My setup is mostly configured by Nix, so it’s all set up and ready now.

              Now, my experience with this computer is worse than my experience with my previous one, which was worse than the one before it… all the way back 10 years. It seems each iteration of the MacBook was worse than the previous one, not because of hardware, but software. The hardware is pretty decent (best in class, I have to say), but software- and OS-wise it leans toward bells & whistles, homogeneity with iOS, and the “Apple” experience: “product managers over engineers”.

              I still think this is the best developer machine, and I’ve been using them for a decade. However, 10 years ago, using Linux as your main work laptop didn’t make much sense because of the set of enterprise crap you had to use back then, most of which wasn’t compatible with your OS. But now even Zoom works “fine” (it works as badly as on every other OS, so we’re fine), and besides these client apps, everything else is web apps that are compatible with Firefox.

              These days I wouldn’t mind moving to Linux (not FreeBSD though), and I know I would enjoy the ride. Maybe something to try during Christmas week?

              1. 6

                In my organization almost everything is Java these days. Everybody works with IntelliJ and depends on it for a lot of things. I can see my colleagues navigating their code, jumping to definitions and implementations very easily, clicking small green “play” buttons to start tests, and right-clicking files to “add to git” or something.

                On the other hand, I’m an Emacs person, and for a start I’m not involved in the Java codebase (thankfully). I do most of my work in Go and Python. The support I get from my environment is what I need: code completion, navigation, documentation right there. Above all, a distraction-free environment where the important asset is the code. Your environment should let you see your code, read it, and modify it.

                With Emacs and a few key combinations I can run my unit tests, run integration tests, check everything in the CI environment, and of course use all the git (magit) fanciness. On that last point, I think magit is superior to what the IntelliJ people have.

                I’m not trying to be arrogant here, and I know my limitations as a developer, but I like to see my editor (and the rest of my tooling) as what a sharp knife is to a chef, a set of quality tools to an artisan, or a well-tuned violin to a musician. For a craft that takes years to master, the only capable tool is one that lets you focus on the craft, on what you are inventing and thinking about; it can’t pull you away from that, or from the context it gives you over the text that is the body of your work. It also shapes the sensibility with which you produce that work, the fidelity of your thinking. You are producing text, so you had better have a calibrated tool to do it with.

                Of course, a full IDE can do all of this too, but that just means most of what it offers is unnecessary.

                What is there to miss about an IDE? A menu command to implement a missing method on a class? A way of renaming a function and all of its uses? They’re definitely handy, but they are not used often, and they don’t do anything to make your code more readable, more maintainable, etc.

                I’m also a musician, and I’m drawing on both experiences when I conclude that mastery is achieved with the help of simple tools.

                1. 5

                  They’re definitely handy, but they are not used often, and they don’t do anything to make your code more readable, more maintainable, etc.

                  I personally find that powerful refactoring tools help tremendously with readability and maintainability by reducing the cost (and more importantly, the risk) of uncluttering the code when it gets messy over time. Even though I don’t use those tools much when I’m writing brand-new code, their availability has a more subtle effect on the code I write: knowing that I can painlessly and safely perform certain kinds of structural changes as needed means I can keep the initial implementation simple and focused rather than having to try to anticipate and design for future changes to save future me the pain of manually shuffling things around.

                  1. 1

                    Yeah, agreed. Knowing that I can easily rename a function or a field means that I have to spend less time worrying about whether it’s really worth it to rename it; just hit my key combo for ‘rename this thing’, type in the new name, and I’m done. And you can get this in Emacs or vim just using whatever LSP integration you like.

                  2. 2

                    I’m an Emacs person who used to work at a Java shop - I had to give in and use IntelliJ; I could never figure out how to configure Emacs to be as powerful as IntelliJ when dealing with a large codebase.

                  1. 1

                    People are suggesting keeping your Gmail account “alive” for a while, but if that account is bound to something you own, like your Git commits somewhere, it means you’ll have to keep that account safe forever.

                    I have two questions:

                    • Is there a way of changing your commit history to reflect a new email address that does not belong to a centralized corporation but to you, in the form of a domain you own?
                    • Is it possible to use another identification mechanism, a signature that is not bound to an email address? An email address requires infrastructure to work, and that infrastructure could eventually belong to someone else, like the domain your email is part of.
                    1. 2

                      Is there a way of changing your commit history to reflect a new email address that does not belong to a centralized corporation but to you, in the form of a domain you own?

                      Yes in theory; however, that changes all the hashes, so no in practice.

                      1. 2

                        in my experience, just start committing with the new address and update any mailmap and authors files. can’t do anything about published history…

                        1. 1

                          You could use git filter-branch to rewrite the entire git repository to replace your old e-mail address with your new one, but that will change the hash of every commit so it will be a terrible experience for anyone who has an existing clone of your repository. I think it’s not worth it.

                          1. 1

                            Is it possible to use another identification mechanism, a signature that is not bound to an email address? An email address requires infrastructure to work, and that infrastructure could eventually belong to someone else, like the domain your email is part of.

                            On GitHub, you can choose to keep your email private and use something along the lines of username@users.noreply.github.com. See the details here.

                          1. 5

                            Interview prep, got my coding phone screen next Thursday.

                            Will spend a good amount of time cranking through LeetCode problems.

                            1. 3

                              mind sharing what company you’re interviewing for?

                              1. 3

                                Uber, Amsterdam

                              2. 1

                                Good luck! I don’t know where you are in your career path, but here’s something I wish I’d learned sooner: don’t forget that you’re evaluating them as much as they’re evaluating you :)

                              1. 3

                                Two weeks ago I posted in here and realized that it works well as a #standup exercise for the weekend and for personal time. I also realized that when I got bored I could resort to something as simple as checking my last comment here for an idea of what to do next.

                                I ended up investing some time in my personal project (learning Rust) and doing activities with the kids, while understanding that “there’s no rush”. Use your time, get some rest, don’t try to do everything; but whatever you do, make sure you are enjoying it.

                                So for this weekend:

                                • goal: get rest and enjoy my time with the family
                                • stretch goal: keep learning (and enjoying) Rust
                                1. 3

                                  Taking care of the kids, two of them: 5 and 2 years old. If I get a bit of time to myself (like now) I’ll probably read something about Rust and continue working on my pet project, written in Rust. The days are sunny but cold, so maybe a bit of bike practice with the little one.

                                  Going to town for an ice cream after lunch and a few games of chess with the kiddo before dinner.

                                  #simplelife