Threads for tout

    1. 3

      I have a few DIY services as well:

      • a baby tracker I wrote for our newborn, to track feeding/changing/pumping/pediatrician visits
      • a “home” tracker, for basic management of recurring maintenance
      • a simple HTTP server for my wife and my Keepass DBs

      I run it all off of a little Dell mini-PC I picked up from eBay for like $90.

      I’m looking to pick up a file/photo manager but haven’t settled on a good one yet (any suggestions?)

      I’ve also got a family and personal website on a tiny GCP instance.

    2. 10

      I’m very happy about this: fairly minimalist, but just enough to make it useful for a wide range of servers. It’ll make an article I wrote about different approaches to HTTP routing in Go redundant, which is excellent. :-) I was also impressed by how Jonathan (author of the proposal and proof-of-concept implementation) handled feedback – he’s not a pushover, but he made several small improvements based on the feedback (GET also matching HEAD, adding Request.SetPathValue, returning HTTP 405 where applicable, and so on).
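
      To give an idea of what the new patterns look like, here’s a minimal Go sketch – the route and handler body are just made up for illustration, but the method prefix, the {id} wildcard, and PathValue are from the proposal:

      package main

      import (
          "fmt"
          "log"
          "net/http"
      )

      func main() {
          mux := http.NewServeMux()

          // Method-prefixed pattern with a path wildcard. A plain "GET"
          // pattern also matches HEAD, and a request that matches the path
          // but not the method gets an HTTP 405 instead of a 404.
          mux.HandleFunc("GET /users/{id}", func(w http.ResponseWriter, r *http.Request) {
              fmt.Fprintf(w, "user %s\n", r.PathValue("id"))
          })

          log.Fatal(http.ListenAndServe(":8080", mux))
      }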

      1. 3

        Just want to say how often I come back (and send others to) your routing article. Thanks for it!

        1. 1

          You’re welcome! Thanks for sharing it.

    3. 3

      karabiner-elements and Hammerspoon are high on my must-have list.

      My Karabiner config is pretty minimal, but just setting up Caps -> Shift+Ctrl+Alt and some basic media shortcuts is nice.

      Hammerspoon config is fun and easy to play with especially if you’ve used AutoHotkey in the past. I have simple window management, global mute, and a few hotkeys set up. The world is your oyster here.

    4. 10

      I built a CI system a while ago; I haven’t quite finished/released it yet, but the concepts are pretty simple:

      1. GitHub sends a webhook event.
      2. Clone or update the repo.
      3. Set up a sandbox.
      4. Run a script from the repo.
      5. Report back the status to GitHub.

      And that’s pretty much it. Responsibility for almost everything else is in the repo’s script (which can be anything, as long as it’s executable). The CI just takes care of setting up the environment for the script to run in. You can build your own .tar images or pull them from DockerHub.
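
      As a very rough Go sketch of that flow – the repo URL, the ./run-ci script name, and the log-only status report below are placeholders, not the actual implementation:

      package main

      import (
          "fmt"
          "log"
          "net/http"
          "os"
          "os/exec"
      )

      // runJob walks through steps 2–4 for one webhook event: clone or update
      // the repo, then run the repo's own CI script in a working directory.
      // A real version would set up a proper sandbox (container, chroot, …).
      func runJob(repoURL, dir string) error {
          if _, err := os.Stat(dir); os.IsNotExist(err) {
              if err := exec.Command("git", "clone", repoURL, dir).Run(); err != nil {
                  return fmt.Errorf("clone: %w", err)
              }
          } else if err := exec.Command("git", "-C", dir, "pull").Run(); err != nil {
              return fmt.Errorf("update: %w", err)
          }
          cmd := exec.Command("./run-ci")
          cmd.Dir = dir
          cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
          return cmd.Run()
      }

      func main() {
          // Step 1: GitHub sends a webhook event (payload parsing omitted).
          http.HandleFunc("/webhook", func(w http.ResponseWriter, r *http.Request) {
              err := runJob("https://github.com/example/repo", "/tmp/ci/repo")
              // Step 5: report the status back to GitHub (stubbed as a log line).
              log.Printf("build finished, success=%v", err == nil)
          })
          log.Fatal(http.ListenAndServe(":8080", nil))
      }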

      Overall there’s very little “magic” involved and it actually works quite well. Aside from various minor issues, one of the big things I need to figure out is to at least have a plan for adding cross-platform support in the future.

      Perhaps this won’t cover every CI use case – and that’s just fine; not everything needs to cover every last use case – but it probably covers a wide range of common use cases. I just want something that doesn’t require having a Ph.D. in YAML with a master’s in GitHub Actions. I used Travis before but I ran out of free credits after they changed their pricing strategy a while ago; so I tried using GitHub Actions, but setting up PostgreSQL for the integration test failed for reasons I couldn’t really figure out, and debugging these kinds of things is a horrible, time-consuming experience: make a minor change, pray it works, push, wait a few minutes for the CI to kick in, deal with that awful web UI where every click takes >2 seconds, discover it failed again for no clear reason, try something else, repeat 20 times, consider giving up your career in IT to become a bus driver instead. Setting up PostgreSQL and running go test ./... really shouldn’t be this hard.

      At any rate, writing the initial version of the above literally took me less time than trying to set up GitHub actions. One of the nice things is that you could write a program to parse and run that GitHub Actions (or Travis, Circle-CI, etc.) YAML file if you want to – it’s really flexible because, as you wrote in the article, the basic idea is to just provide “remote execution as a service”.

      1. 5

        so I tried using GitHub Actions, but setting up PostgreSQL for the integration test failed for reasons I couldn’t really figure out, and debugging these kinds of things is a horrible, time-consuming experience: make a minor change, pray it works, push, wait a few minutes for the CI to kick in, deal with that awful web UI where every click takes >2 seconds, discover it failed again for no clear reason, try something else, repeat 20 times, consider giving up your career in IT to become a bus driver instead. Setting up PostgreSQL and running go test ./… really shouldn’t be this hard.

        Kinda seems like the point of the article: you should be able to run exactly what’s in the pipelines locally to troubleshoot. But at that point, why not collapse it into the local build system and unify them?

      2. 3

        How does this compare to sourcehut’s CI? They use YAML, but you can mostly avoid it. For example here are Oil’s configs, which mostly invoke shell scripts:

        https://github.com/oilshell/oil/tree/master/.builds

        I think you are describing a “remote execution service”, as the blog post calls it. That’s basically what I use sourcehut as.

        I think such services are complementary to what I described in a sibling comment. Basically a DAG model (as the OP wants) and an associated language in “user space”, not in the CI system itself. If most of the build is in user space then you can debug it on your own machine.

        1. 2

          I didn’t look too closely at sourcehut as I don’t like sourcehut for various reasons.

          I don’t think you need any sort of DAG. If you want that, then implement it in your repo’s build system/script. The entire thing is essentially just “run a binary from a git repo”.

          1. 3

            I didn’t look too closely at sourcehut as I don’t like sourcehut for various reasons.

            Hi, Martin. May I know what those reasons are?

            1. 3

              I don’t really care much for the sourcehut workflow, and I care even less for the author and his attitude. I don’t really want to expand on that here as it’s pretty off-topic, but if you really want to know you can send me a DM on Twitter or something.

              1. 1

                but if you really want to know you can send me a DM on Twitter or something.

                i will even do it before you reply :)

          2. 2

            Yes that’s compatible with what I’m doing. Both my Travis CI and sourcehut builds just run shell scripts out of the git repo. And then they upload HTML to my own server at: http://travis-ci.oilshell.org/ . So it could probably run on your CI system.

            I want parallelism, so I ported some of the build to Ninja, and I plan to port all of it. Ninja runs just fine inside the CI. So I guess we’re in agreement that the CI system itself can just be dumb.


            Although, going off on a tangent – I think it’s silly for a continuous build to re-clone the git repo every time, re-install Debian packages, PyPI packages, etc.

            So I think the CI system should have some way to keep ephemeral state. Basically I want to use an existing container image if it already exists, or build it from scratch if it doesn’t. The container image doesn’t change very often – the git repo does.

            Travis CI has a flaky cache: mechanism for this, but sourcehut has nothing as far as I can tell. That makes builds slower than they need to be.

            1. 3

              Although, going off on a tangent – I think it’s silly for a continuous build to re-clone the git repo every time, re-install Debian packages, PyPI packages, etc.

              So I think the CI system should have some way to keep ephemeral state. Basically I want to use an existing container image if it already exists, or build it from scratch if it doesn’t. The container image doesn’t change very often – the git repo does.

              Yeah, the way it works is that you’re expected to set up your own image. In my case this is just a simple script which runs xbps-install --rootdir [pkgs], frobs with a few things, and tars the result. You can also use DockerHub if you want, golang:1.16 or whatnot, which should be fine for a lot of simpler use cases.

              You can then focus on just running the build. The nice thing is that you can run ./run-ci from your desktop as well, or run it on {Debian,Ubuntu,CentOS,Fedora,macOS,…}, or use mick run . to run it in the CI.

              Setting these things up locally is so much easier as well; but it does assume you kind of know what you’re doing. I think that’s a big reason for all these YAML CI systems: a lot of devs aren’t very familiar with all of this, so some amount of abstraction makes it easier for them. “Copy/paste this in your YAML”. Unfortunately, this is a bit of a double-edged sword as it also makes things harder if you do know what you’re doing and/or if things break, like my PostgreSQL not working in GitHub (and besides, you can probably abstract all of the above too if you want, there’s no reason you can’t have an easy-image-builder program).

              Splitting out these concerns also makes a lot of sense organisationally; at my last job I set up much of the Travis integration for our Go projects, which wasn’t strictly my job as I was “just” a dev, but it was a mess before and someone had to do it. Then after the company got larger a dedicated sysadmin was hired who would take care of these kinds of things. But sysadmins aren’t necessarily familiar with your application’s build requirements, or even Go in general, so their mucking about with the build environment would regularly silently break the CI runs. Part of the problem here was that the person doing all of this was extremely hard to work with, but it’s a tricky thing as it requires expertise in two areas. I suppose that this is what “devops” is all about, but in reality I find that a lot of devops folk are either mostly dev or mostly ops, with some limited skills in the other area.

              When this is split out, the ops people just have to worry about calling run-ci and making sure it runs cleanly, and the dev people only need to worry about making sure their run-ci works for their program.

              Anyway, I should really work on finishing all of this 😅

              1. 1

                That makes sense, but can you build the image itself on the CI system?

                That’s a natural desire and natural functionality IMO. And now you have a dependency: from run-ci to the task that builds the image that run-ci runs on! :) From there it is easy to get a DAG.

                Oil has the use case discussed above too: you build on, say, a Debian image, but you want to test on an Alpine image, FreeBSD image, OS X image, etc. And those images need to be built/configured – they’re not necessarily stock images.

                That sounds like a DAG too.

                So I think there is something like an “inner platform” effect here. If you build a “simple” CI system, and lots of people use it, it will turn into a DAG job scheduler. And if you’re not careful, it might have the unfortunate property of only being debuggable in the cloud, which is bad.


                I have noticed a similar design issue with cluster managers. A lot of times people end up building a cluster manager to run their cluster manager: to distribute the binaries for it, authenticate who can do so, to run jobs that maintain the cluster itself, etc.

                So a CI is supposed to run build systems, but then it turns into a build system itself. I think a lot of the ones people are complaining about started out small, with tiny configuration (just like sourcehut), and then they grew DAGs and programming languages in YAML :-/ If sourcehut wants to satisfy a lot of use cases, it’s probably going to run into that problem.

                1. 1

                  but can you build the image itself on the CI system?

                  Sure, there’s nothing really preventing you from doing that.

                  I suppose you could see it as a DAG; your oil repo depends on oil-images which builds the images, which depends on a base alpine/freebsd image as a bootstrap. Personally I think that’s shoehorning things a little bit; it’s a very “linear” graph: oil → oil-images → alpine|freebsd|macOS, and probably not really worth thinking about in terms of a DAG IMHO.

                  At the end of the day I think that no matter what you do, if your requirements are somewhat complex then your solution will be too. There’s tons of CI systems out there, and while I feel many are a bit lost in the zeitgeist of YAML programming, most are probably built by teams which include people smarter than me and if they haven’t found a good way to solve complex use cases then I probably won’t either. So the best we (or rather, I) can do is let you solve your own complex use case without interfering too much, which will usually be easier than applying a generic solution to a complex use case.

                  1. 1

                    Yeah, I’m not sure what the right solution is, just nodding my head at the porous line between CI systems and build systems. Neither Travis CI nor sourcehut has a DAG, so I think for all practical purposes it should be kept in “user space”, outside the CI system, and in the build system.

                    I do think the “ephemeral state” problem is related and real. Travis CI has cache: but it’s flaky in practice. I’m toying around with the idea that images should be stored in something like git annex: https://news.ycombinator.com/item?id=26704946

                    So it would be cool if the CI system can pull from git annex, and then a CI job can also STORE a resulting image there, for a future job. I’m not sure if any existing systems work that way. I think they mostly have CANNED images – certainly sourcehut does, and I think Travis CI does too.

                    So in that way maybe you can build a DAG on top, without actually having the DAG in the CI system. If you can “reify” the image as part of the CI system itself.

                    1. 2

                      The way I do caching now is to mount /cache, which is shared across the repo, and you can do with that as you wish. It’s extremely simple (perhaps even simplistic), but it gives people a lot of flexibility to implement some cache system based on git annex, for example.

      3. 3

        Run a script from the repo.

        This is how I try to use GitHub Actions: by keeping the YAML minimal and only launching a single script there. Here’s a representative example: https://github.com/matklad/once_cell/blob/master/xtask/src/main.rs

        The bit where this annoyingly falls down is specifying the host environment. I can’t say, from within the script, “run this on Windows, Mac, and Linux machines”, so this bit still lives in YAML. The script, if needed, contains match current_os.

        A more complex case for this failure is if I need coordination between machines. I have a single example of that (in rust-analyzer’s CI). Release binaries for the three OSes are built by three different builders. Then, a single builder needs to collect the three results and upload a single release with multiple artifacts.

        Though the last example arguably points to a problem in a different system. Ideally, I’d just cross-compile from Linux to the three OSes, but, last time I checked, that’s not quite trivial with Rust.

        1. 3

          Ideally, I’d just cross-compile from Linux to the three OSes, but, last time I checked, that’s not quite trivial with Rust.

          Back when I maintained a bunch of FreeBSD ports I regularly had people send me patches to update something or the other, and they never bothered to actually run the program and do at least a few basic tests. Sometimes there were runtime errors or problems – sometimes the app didn’t even start.

          That it compiles doesn’t really guarantee that it also runs, much less runs correctly. Rust gives somewhat harder guarantees about this than, for example, C or Python does, but if you try to access something like C:\wrong-path-to-user-profile\my-file on startup it can still crash, and you’ll be shipping broken Windows binaries.

          For my Go projects I just set GOOS and hope for the best, but to be honest I have no idea if some of those programs work well on Windows. For example my uni program does some terminal stuff, and I wouldn’t be surprised if this was subtly broken on Windows. Ideally you really want a full Windows/macOS/etc. environment to run the tests, and you might as well build the binary in those environments anyway.
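
          For reference, the cross-build itself is just a matter of setting those environment variables, something like:

          GOOS=windows GOARCH=amd64 go build ./...
          GOOS=darwin GOARCH=amd64 go build ./...

          It’s knowing whether the result actually runs that’s the problem.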

          1. 3

            I do test in different envs. Testing is comparatively easy, as you just fire three completely independent jobs. Releases add a significant complication though.

            You now need a fourth job which depends on the previous three jobs, and you need to ship artifacts from the Linux/Mac/Windows machines to the single machine that does the actual release. This coordination adds up to a substantial amount of YAML, and it is this bit that I’d like to eliminate via cross-compilation.

            1. 1

              Can’t you use the test job to also build the binaries; as in, IF the tests succeed THEN build the binary? Or is there a part that I’m missing?

              1. 1

                Yeah, I feel like I am failing to explain something here :)

                Yes, I can, and do(*) use the same builder to test and build release artifacts for a particular platform. This is not the hard problem. The hard problem is making an actual release afterwards. That is, packaging binary artifacts for different platforms into a single thing, and calling that bundle of artifacts “a new release”. Let me give a couple of examples.

                First, here’s the “yaml overhead” for testing on the three platforms: https://github.com/matklad/once_cell/blob/064d047abd0b76df31b0d3dc88d844c37fc69dd1/.github/workflows/ci.yaml#L5. That’s a single line to specify different builders. Aesthetically, I don’t like that this is specified outside of my CI build process, but, practically, that’s not a big deal. So, if in your CI platform you add an ArpCI.toml to specify just the set of OSes to run the build on, that’d be a totally OK solution for me for cross-platform testing.

                Second, here’s the additional “yaml overhead” to do a release:

                Effectively, for each individual builder I specify “make these things downloadable”, and for the final builder that makes a release I specify “wait for all these other builders to finish & download the results”. What I think makes this situation finicky is the requirement for coordination between different builders. I sort of specify a map-reduce job here, and I need a pile of YAML for that! I don’t see how this can be specified nicely in ArpCI.toml.

                To sum up:

                • I like “CI just runs this program, no YAML” and successfully eliminated most of the YAML required for tests from my life (a common example here is that people usually write “check formatting” as a separate CI step in YAML, while it can be just a usual test instead)
                • A small bit of “irreducible” YAML is “run this on three different machines”
                • A large bit of “irreducible” YAML is “run this on three machines to produce artifacts, and then download artifacts to the fourth machine and run that”.

                Hope this helps!

                (*) A small lie to make the explanation easier. I rather use the not-rocket-science rule to ensure that code in the main branch always passes the tests, and release branches are always branched off from the main branch, so each commit being released was tested anyway.

                EDIT: having written this down, I think I now better understand what frustrates me most here. To do a release I need to solve a “communication in a distributed system” problem, but the “distributedness” is mostly accidental complexity: in an ideal (Go? :) ) world, I’d be able to just cross-build everything on a single machine.

      4. 1

        nektos/act lets you test GitHub Actions locally.

        There’s even a (quite large) Ubuntu image available that mirrors the Actions environment.

    5. 4

      I immediately thought that I’d trip up the loops and gave up on that, so my next thought was to store them in an array of ints as e.g. 0b111, 0b1001001 etc, and first & them with the position of all the Xs and then Os, which is kinda like the hashing solution. I think I did something similar during this year’s Advent of Code. Maybe I should try getting an interview at google :P

      1. 8

        Yeah, there are only 8 win states & you can “and” those with the current board state & test for equality. Job done. (You’d have to do this twice; once for each player.)

        Much faster than faffing about looping over the current board state.
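
        For anyone curious, a quick Go sketch of that check, assuming each player’s pieces live in their own 9-bit bitboard (bit i = square i, row-major):

        package main

        import "fmt"

        // The 8 winning lines as 9-bit masks (bit 0 = top-left, row-major).
        var winMasks = [8]uint16{
            0b000000111, 0b000111000, 0b111000000, // rows
            0b001001001, 0b010010010, 0b100100100, // columns
            0b100010001, 0b001010100, // diagonals
        }

        // hasWon reports whether a player's bitboard contains a complete
        // line; call it once per player.
        func hasWon(board uint16) bool {
            for _, m := range winMasks {
                if board&m == m {
                    return true
                }
            }
            return false
        }

        func main() {
            x := uint16(0b100010001) // X on squares 0, 4, 8: the main diagonal
            fmt.Println(hasWon(x))   // true
        }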

        1. 2

          How would you encode the board? My immediate naive thought was to use the low 9 bits of two 16-bit integers, one for each player. To play, you toggle a bit in your board from 0 to 1, after checking that it is not already 1 in either version. Your win check is then just 8 bitwise and operations (which, if you’re lucky, your compiler will splat the board into a 128-bit vector register and do in parallel). I wonder if there’s a better way of encoding the tri-state logic in the game though. A 9-square grid with three states per square has just over 14 bits of information in it, so in theory you can encode the entire state of the game in a single 16-bit integer with some space left over, but I don’t immediately see a way of doing so that makes testing for winning states easy.

          1. 1

            9 bits to record if a specific space is occupied, 9 bits to denote who is occupying.

            for example, lower 9 are for occupancy, upper 9 are for ownership:

            0b000000000000000000
            0│1│2
            ─┼─┼─
            3│4│5
            ─┼─┼─
            6│7│8
            
            0b100100100101101101
            O│1│X
            ─┼─┼─
            O│4│X
            ─┼─┼─
            O│7│X
            
            0b100100100110101100
            0│1│X
            ─┼─┼─
            O│4│X
            ─┼─┼─
            6│O│X
            

            https://gist.github.com/justintout/736b51e6e5dd655c87d91cbab6773c5e

          2. 1

            Personally I’d just use an array of an enum of Empty | X | O and then reduce it to those two ints. If you wanna do something more complex maybe use 18 bits for both the board and the win states and use 0b00 for Empty, 0b01 for X and 0b10 for O, and either have two win state tables or just encode the win states for X and right shift by 1 when checking for O.
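
            Roughly what that shift trick looks like in Go, assuming bit 2i is X on square i, bit 2i+1 is O, and the win masks are encoded for X:

            package main

            import "fmt"

            // The 8 winning lines as 9-bit masks, one bit per square.
            var lines = [8]uint16{
                0b000000111, 0b000111000, 0b111000000, // rows
                0b001001001, 0b010010010, 0b100100100, // columns
                0b100010001, 0b001010100, // diagonals
            }

            // spreadX turns a 9-bit line mask into an 18-bit mask that only
            // uses the X bit (bit 2i) of each square.
            func spreadX(line uint16) uint32 {
                var m uint32
                for i := uint(0); i < 9; i++ {
                    if line&(1<<i) != 0 {
                        m |= 1 << (2 * i)
                    }
                }
                return m
            }

            // won checks both players against the X-encoded masks; O is
            // checked by shifting the board right by one so its bits line
            // up with the X positions.
            func won(board uint32) (x, o bool) {
                for _, l := range lines {
                    m := spreadX(l)
                    x = x || board&m == m
                    o = o || (board>>1)&m == m
                }
                return
            }

            func main() {
                var b uint32
                for _, sq := range []uint{0, 4, 8} { // X on the main diagonal
                    b |= 1 << (2 * sq)
                }
                for _, sq := range []uint{1, 2} { // O on two squares
                    b |= 1 << (2*sq + 1)
                }
                fmt.Println(won(b)) // true false
            }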

          3. 1

            If I was going to use logical operators, then two bits per location seems reasonable. Board state fits easily into a 32-bit word, everything fits in cache, and the test takes a whole 16 machine cycles or so, since the CPU will pipeline the reads from cache.

            But, given the size of the problem, you could just use an array of chars & it wouldn’t make that much difference.

        2. 1

          Yeah, that’s the algorithm I was trying to describe. I’d just woken up with a bit of a hangover so I guess I wasn’t clear enough.

    6. 2

      I feel like we just need something that’s like Caddy v1 [1] but for VPNs, something that just works: it should have very little setup overhead and just do everything for you (e.g. generate public/private keys, certs, etc.) but still be flexible enough for larger configurations.

      This isn’t the first environment-assuming auto-install script I’ve seen for [insert generic complicated VPN software here] and I don’t want more of those; I know I can’t just ask for free software and have it be made [2], but I don’t know much crypto and rolling your own is dangerous.

      [1] Caddy v2 is bloated and doesn’t really respect v1’s simplicity IMO.

      [2] There’s dsvpn but it seems the author has stopped maintaining it and it was quite unreliable when I tried it.

      Edit: Another concern is cross-platform: only the big and bulky VPNs have mobile clients right now.

      1. 2

        there’s dsvpn

        Runs on TCP (first bullet point under features)

        Eh, no thanks. At that point I’d much rather just use openssh as a socks proxy.

        TCP over TCP is unpleasant, and UDP and similar protocols over TCP are even worse.

        It seems likely the future of VPNs will be built on WireGuard. But it needs something like zerotier.com for some “virtual secure LAN” use cases.

        Tailscale.com does a bit of the ZeroTier stuff for WireGuard, but ZeroTier has (AFAIK) smarter routing: local LAN traffic stays local and encrypted. (If you have two laptops at home and a VPS in the cloud, all on the same ZeroTier VPN, all traffic is encrypted, but traffic between the two laptops is routed locally. And things like Bonjour/mDNS work across all three machines.)

        1. 4

          FWIW, Tailscale also routes traffic intelligently, so LAN traffic will remain local (assuming the devices are able to talk to each other, of course). Tailscale does have public relay nodes as a last resort fallback, but on well-behaved networks, all traffic is p2p on the most direct path possible.

      2. 2

        Check out dsnet, which was posted here a few weeks ago: https://github.com/naggie/dsnet. It is basically a simpler UI for WireGuard, which I like so far.

      3. 2

        There’s dsvpn but it seems the author has stopped maintaining it […]

        The GitHub repo currently has 0 open issues, so I’d rather call it mature instead of unmaintained.

        […] and it was quite unreliable when I tried it.

        Maybe give it another chance now? It works perfectly for me.

      4. 2

        Seems like Streisand fills the gap of easy-but-still-configurable setup. Not entirely one-click, but it’s aimed toward a less technical crowd and holds the user’s hand decently well.

      5. 1

        This looks fantastic, thanks for putting this together. I’m particularly interested in the prospect of WireGuard support; is that waiting until it’s merged into OpenBSD proper? (If I can avoid needing any Go on my machines I’m happy.)

    7. 3

      A cloud-free IoT device framework/OS. There are so many cheap Chinese IoT devices out there that are just taking some off-the-shelf software and tossing it on lightly customized hardware. If there were some software that didn’t require a server to operate, I have to imagine some manufacturers would pick it up, and that could slowly start to change consumer IoT from a privacy & security nightmare into what it was originally supposed to be.

      Unfortunately, I managed to finagle my dream project into existence at my day job, so all of my mental energy has been going into that. (Which, coincidentally, is making a cloud-focused IoT platform a little less cloud-focused.)

      1. 1

        Have you heard of/used Homebridge? I think its main thing is HomeKit-specific (so, Apple products), which works for me, but it also has a web UI available where you can manage your IoT devices.

        I have an odd collection of Philips and Xiaomi smart devices and am able to keep them all safely off the internet and controllable through all our devices at home; it’s nice!

      2. 1

        I absolutely agree with this.

        Offline, local control is one of the big selling points for BLE, especially with the mesh spec finalized and (at least starting to) become more and more common. Getting consistent hardware/implementations/performance, on the other hand, still feels way too difficult. The same can be said for Weave: it makes a ton of sense but is genuinely not a fun thing to work with.

        I’m not sure why but I find the DIY systems (Home Assistant, openHAB) abrasive and, for me at least, flaky.

    8. 6

      Something I love to add to ~/.inputrc is the following:

      "\e[A": history-search-backward
      "\e[B": history-search-forward
      

      This binds prefix history search to your up and down arrow keys (a non-incremental complement to the Ctrl+R and Ctrl+S searches), so you can type the start of a command and quickly step through the matching history entries.

      1. 4

        This is the single greatest usability improvement to the terminal that I’ve found. It changed my entire experience when I found it a few years ago. This and the history storage changes are the first thing I get new users to add.

    9. 1

      💼: Getting pulled into mobile projects as a bit of a relief valve.

      🖥️: My friend came to me with a project that I think is a perfect candidate for event sourcing (pretty strict audit requirements), so I’m starting to learn the intricacies there. I also have another Flutter project that I need to put more work into; I feel behind on it.

      🏡: We want to mount a big wall-to-wall mirror in a nook in our dining room. I want to DIY but I’m worried about walls not being square and getting a tight enough measurement for the glass manufacturer.

    10. 17

      For me, it’s the “bundling” situation. I’ve never worked on a production project where “the webpack” wasn’t the most hated part of the stack. Webpack’s source code is a rat’s nest, so I feel very unmotivated to understand the situation or improve it from within. There’s tons of projects in this space that seek to simplify, but they never seem low-risk enough to migrate to: what if I actually do need to customize?

      So, I’ve been stuck with webpack pains for far too long. Maybe Airbnb or Facebook will open-source their Metro (FB bundler used for React Native) configs one day and we can escape…

      1. 7

        At $WORK we have swapped to Rollup for most of the browser-land stuff I work with because it’s so much less of a headache, the entire codebase is grokkable, and you can trivially extend it to do custom stuff if needed.

        I do agree bundling is problematic, but for me the worst part is Babel. I’d rather write ES5 than work with Babel and their constant breaking changes every time I upgrade something.

      2. 3

        I seldom do front-end work, but bundling is my biggest frustration. Yeah, how come I do what I think is the exact same setup for a new project, and webpack fails me? Sometimes I yarn a particular version and it works, sometimes it doesn’t. Sometimes install --dev, sometimes not. Eventually I get the new project working, but never know how. Repeatability seems elusive.

        1. 1

          Babel could be all or some of my problems as well…

      3. 2

        Parcel has been nice to work with wrt packaging

      4. 1

        Very astute observation. The whole build pipeline for non-trivial web apps is just slightly better than the world of no packages (e.g. C++).

        FB’s Metro is perhaps also not a panacea; it does not let me, for example, have source files in the directories I want (instead it is tied to Node’s source file search algorithm).

      5. 1

        From the outside, I have a bit of trouble even understanding what bundling is all about. What does Webpack do that Grunt doesn’t? And then, what does Grunt do that make doesn’t?

        1. 6

          While you can bundle with both Grunt and Webpack, they are entirely different approaches to the problem, and unless you are working on something with only trivial client-side JS, Webpack’s approach is superior.

          In short, Webpack (and other modern bundlers) understands require and import statements, so once it is given the location of your entrypoint JS, it walks the dependency tree to convert all needed dependencies into an optimized bundle for the browser. With Grunt, you can provide a list of files, but you need to manually manage the order of any cross-file dependencies, because it will fundamentally only concatenate the files together.

          1. 1

            I’ll just add that even a non-trivial client-side JS project can pull this off well enough with Grunt, but when you get to a big project with many people, most of whom have no idea what this is all about, it becomes a problem.

    11. 6

      I’m getting married!

      1. 1

        Congratulations.

    12. 5

      I placed some Bluetooth beacons around my apartment building, so I plan to build something that does something with whatever data the beacons manage to see / collect.

      1. 4

        Using Estimote beacons – they should only really pick up info from my devices, since those are registered. Yeah, I guess sorta the point is to show that one can place random tech on water pipes in common areas and no one will really give it a second look. Yes, I also wonder if I can use them to sniff for other devices that might come and go. It’s all for fun and to scare myself a bit.

      2. 3

        The last I heard about BT beacons was some startup that gave them away at a ton of hackathons, but since then I haven’t seen anything about them. Is there better hardware for this stuff now, and are there any use cases you’re considering? I could never get them to work well, nor think of anything interesting or productive to do with them.

        1. 2

          I think Estimote is all about asset tracking. I’m hoping to learn more about them this weekend. :D

          1. 2

            How did working with them go? I’m starting to scope out ideas for my next project.

            1. 1

              Got sidetracked and wasn’t able to do anything beyond setting them up in Estimote’s cloud service, and sticking them behind utility pipes.

        2. 1

          I think as companies adopt the SIG mesh spec, beacons (and low-power sensors in general) are going to come back around. Proximity is a big limitation.

      3. 2

        I’m planning to do a similar project. What equipment are you using?

      4. 2

        I’m wondering if that sort of thing needs permission. Or are you going the “asking for forgiveness is easier” route?

        1. 1

          Probably ask forgiveness if anyone ever finds them