Threads for davidbanham

  1. 1

    This seems pretty unnecessary (you could use the mute button in the conferencing app, or use the same touch controller thingy as a HID device plugged into the computer that would mute/unmute the mic’s capture device, which I think even changes the color of the light on it), but it’s nicely done nonetheless.

    1. 2

      I’d argue that a hardware mute button beats both of those options for usability, especially given how much some of us use it.

      For the first - what if you’re working in another app?

      For the second, you’d need a helper app running on the PC to manage the mute status, which adds complexity and a point of failure. It’s been a while since I last looked at the USB audio class spec, but I’m pretty confident mute status doesn’t go out on the wire, so you’d need another indicator on your button. (Not a bad thing, IMO.)

      Personally I went out of my way to add a hardware mute button to my desk mic, with a nice big red tally light to help me keep track of the mute status. It’s nice to have something that just works and always will.

      1. 1

        For stuff like this, dedicated buttons beat kludged-together software every time, for me. Yep.

        1. 1

          For the second, you’d need a helper app running on the PC to manage the mute status, which adds complexity and a point of failure.

          Sure, but not too much of one. It should be possible to make it stupid enough that it never breaks, and set it to always run if the controller is plugged in.

          It’s been a while since I last looked at the USB audio class spec, but I’m pretty confident mute status doesn’t go out on the wire, so you’d need another indicator on your button.

          It is (or it can be, anyhow): USB Audio calls out mute and volume as supported kinds of “controls”. Of course you don’t need to have any controls for a USB mic, you can let everything be done in software, but the Yeti does expose mixer controls, and I believe (I don’t have access to mine right now, so I’m going from memory) that the mute button on the device itself just sets the gain control to -∞dB and then restores it, and that doing the same thing from the host side will make the light on the mute button turn red.

          I’m not arguing too seriously here, I’m just saying… you could probably get by without opening the thing up, and given my hardware skills, it’s the way I would have gone.

          1. 2

            It should be possible to make it stupid enough that it never breaks

            Call me a stupid hardware engineer, but in my experience getting stuff like this to be completely bulletproof (eg over suspend/resume etc) can be non-trivial!

            USB Audio calls out mute and volume

            Thanks for teaching me something new today (:

        2. 1

          Yep.

          I have the same mic. I have a dedicated button on my keyboard bound to mute all mics via pulseaudio. The Blue notices this… somehow and changes the light on the hardware button. I was surprised the first time I noticed.

          It also changes an indicator on my status bar so I know whether it’s hot or muted without looking away from my screen. It mutes every mic attached to the computer, so just in case something has bound to the webcam mic by accident I’m still safe.

        1. 16

          One of the neat things about SQLite is that it can actually import CSV data directly. No need for the Go script.

          Start sqlite, then:

          .mode csv
          .import some_data_file.csv your_table_name

          It’ll automatically generate a schema for you, too. To see what it did:

          .schema your_table_name

          You can import as many more files as you like so long as the schema is compatible.

          You can also go the other direction: in csv mode, point .output (or .once) at a file and run a SELECT to dump your SQLite table out to a CSV file. It’s a real Swiss Army knife for this kind of small-to-medium data processing.

          1. 5

            Yeah I used the Go script because there are 4 separate CSV files, and I wanted to combine them into one table. I’m sure that could be done with sqlite cli too, but this method was faster for me. Plus I needed to dust off my Go chops, haven’t built anything for a while.

            1. 3

              what I always found unfortunate is that these features are only part of the sqlite-cli, not the sqlite library. It would be nice to do these things via a database driver (in python or java or whatever)
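              For what it’s worth, the CLI’s .import isn’t doing anything the library can’t. A rough driver-side sketch of the same thing in Python (the function name is made up, and it assumes the first CSV row is a header):

```python
import csv
import sqlite3

def import_csv(connection, csv_path, table_name):
    """Roughly what the sqlite3 CLI's .import does: create the table
    from the CSV header row, then bulk-insert the remaining rows."""
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        columns = ", ".join(f'"{name}"' for name in header)
        placeholders = ", ".join("?" for _ in header)
        connection.execute(f'CREATE TABLE IF NOT EXISTS "{table_name}" ({columns})')
        connection.executemany(
            f'INSERT INTO "{table_name}" VALUES ({placeholders})', reader
        )
    connection.commit()
```

              As with the CLI, everything comes in as TEXT unless you declare column types yourself.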

              1. 2

                You can easily implement yourself a function to do that from within SQLite via UDFs in the language of your choice… here’s a quick & dirty implementation in Python (probably don’t use this as-is in a “real” setting):

                import csv
                import sqlite3
                
                DATABASE_PATH = "data.db"
                
                def write_table(tablename, write_path):
                    # Runs from inside a query, so use a separate connection
                    # for the read. NB: tablename is interpolated directly,
                    # which is fine for a demo but not for untrusted input.
                    connection = sqlite3.connect(DATABASE_PATH)
                    select = f"select * from {tablename}"
                    with open(write_path, "w", newline="") as f:
                        writer = csv.writer(f)
                        for item in connection.execute(select):
                            writer.writerow(item)
                    connection.close()
                    return 1
                
                if __name__ == "__main__":
                    connection = sqlite3.connect(DATABASE_PATH)
                    connection.execute("CREATE TABLE IF NOT EXISTS test(field1, field2)")
                    connection.execute("INSERT INTO test VALUES (1, 2), (3, 4), ('testing', 'onetwothree');")
                    # Commit first, so the UDF's separate connection can see the rows.
                    connection.commit()
                    # Register the Python function so it can be called from SQL.
                    connection.create_function("write_table", 2, write_table)
                    connection.execute("select write_table('test', 'test.csv');")

                SQLite is pretty magical

                https://docs.python.org/3.8/library/sqlite3.html#sqlite3.Connection.create_function

                There’s probably a more refined way to do this using C extensions so that it can support arbitrary queries rather than just tables and views, but I’ll leave that to the reader.
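                Arbitrary queries don’t strictly need a C extension, though; the same UDF trick works if you hand it SQL text instead of a table name. A variant with the same quick & dirty caveats as above (write_query and the default path are illustrative):

```python
import csv
import sqlite3

def write_query(sql, write_path, db_path="data.db"):
    """Like write_table above, but takes a full SELECT statement."""
    # A separate connection again, so it can run from inside a query.
    connection = sqlite3.connect(db_path)
    with open(write_path, "w", newline="") as f:
        writer = csv.writer(f)
        for row in connection.execute(sql):
            writer.writerow(row)
    connection.close()
    return 1

# Registered the same way; the defaulted third parameter doesn't
# bother create_function as long as the declared arity matches the call:
# connection.create_function("write_query", 2, write_query)
```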

                1. 1

                  That’s cool! I will remember that the next time I try to do that from python.

            1. 11

              I hate how Go’s and Deno’s approach to dependencies of just pasting the URL to the web front-end of the git hosting service used by the library seems to be taking off. I think it’s extremely useful to maintain a distinction between the logical identifier for a library and the physical host you talk to over the network to download it.

              1. 4

                I like the idea of using URL fragments for importing. There’s a beautiful simplicity and universality to it. You don’t need a separate distributed package system—any remote VCS or file system protocol can work. However, it needs to be combined with import maps, so that you can hoist the location and version info out of the code, when desired. And there should be support/tools for explicitly downloading dependencies to a local cache, and for enforcing offline running. This is the approach I plan to take for Dawn.
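                For the unfamiliar, an import map is just a JSON file mapping bare specifiers to locations, so individual source files can import by name while a single file pins the URL and version. A hypothetical example (the package name and URL are made up):

```json
{
  "imports": {
    "uuid/": "https://deno.land/x/uuid@v1.2.3/"
  }
}
```

                With that in place, code can write import "uuid/mod.ts" and the location can be swapped without touching any source files.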

                1. 2

                  This strikes me as problematic as well. LibMan in .NET is the same way. npm audit may be flawed, but npm itself at least provides a mechanism for evaluating common dependency chains for vulnerabilities.

                  Ryan Dahl and Kit Kelly drew the opposite conclusion in their work on Deno. They believe that a central authority for package identity creates a false sense of security and that washing their hands of package identity altogether is the solution. Deno does at least have a registry for third party modules of sorts, but installation is still URL based.

                  1. 1

                    Think of it like this. The URI is just a slightly longer-than-usual package name. As a handy side-effect, you can also fetch the actual code from it. There’s nothing stopping you from having your build tools fetch the same package from a different host (say, an internal package cache) using that URI as the lookup key.

                    The big benefit is that instead of having to rely on a single source of truth like the npm repository, the dependency ecosystem is distributed by default. Instead of needing to jump through hoops to set up a private package repository it’s just… the git server you already have. Easy.

                    1. 4

                      The problem is that it’s precisely not just a slightly longer than usual package name. It’s a package name which refers to working web infrastructure. If you ever decide to move your code to another git host, every single source file has to be updated.

                      I have nothing against the idea of using VCS for distribution (or, well, I do have concerns there but it’s not the main point). But there has to be a mapping from logical package name to physical package location. I want my source code to refer to a logical package, and then some file (package.toml?) to map the logical names to git URIs or whatever.

                      I don’t want to have to change every single source file in a project to use a drop-in replacement library (as happened with the satori/go.uuid -> gofrs/uuid thing in the Go world), or to use a local clone of the library, or to move a library to another git host.

                      1. 1

                        It’s a package name which refers to working web infrastructure.

                        But that’s true about more classical packaging systems, like Cargo. If crates.io goes down, all dependency specifications become a pumpkin.

                        It seems to me that Deno’s scheme allows roughly the same semantics as Cargo. You don’t have to point urls directly at repos, you can point them at some kind of immutable registry. If you want to, I think you can restrict, transitively, all the deps to go only via such a registry. So Deno allows, but does not prescribe, a specific registry.

                        To me, it seems not a technical question of what is possible, but rather a social question of what such a distributed ecosystem would look like in practice.

                        1. 1

                          If you want to complain that Rust is too dependent on crates.io then I agree, of course. But nothing about a Rust package name indicates anything about crates.io; you’re not importing URLs, you’re importing modules. Those modules can be on your own filesystem, or you can let Cargo download them from crates.io for you.

                          If your import statement is the URL to “some kind of immutable registry” then your source code still contains URLs to working web infrastructure. It literally doesn’t fix anything.

                        2. 0

                          Well, Go has hard-coded mappings for common code hosting services, but as a package author you can map logical module name to repository location using HTML meta tags. Regarding forks, you can’t keep the same logical name without potentially breaking backwards compatibility.
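                          Concretely, the page served at the import path just carries a tag like this (paths hypothetical), which the go tool reads to find the real repository:

```html
<meta name="go-import" content="example.com/mylib git https://github.com/someone/mylib">
```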

                          1. 3

                            The HTML meta tag solution is so ridiculous. It doesn’t actually fix the issue. There’s a hard dependency in the source code on actual working web infrastructure, be it a web front-end for your actual repo or an HTML page with a redirect. It solves absolutely none of the issues I have with Go’s module system.

                    1. 3

                      Just have your clients reconnect every 10 minutes or so?

                      1. 2

                        Just restart whichever server has the highest load every 10 minutes. Solves memory leaks too.

                        1. 1

                          Or listen for a certain response from the server and only then reconnect. That way you only need to reconnect on demand. Still, this is a hackish way to “solve” the problem.

                        1. 3

                          I also really struggle with technical interviews. The timed take-home tests are probably my weakest area right now; there is something about that clock ticking that quite literally makes my hands start shaking. I have been writing code for 20 years, the last 10 in some kind of lead/staff role. I have recently worked my way through all the exercism puzzles, and yet… add a clock and I can’t function. These are new to me, since they weren’t around the last time I interviewed - or at least not as prevalent.

                          I don’t think it is about pressure; one area where I thrive is when there is a critical outage and the solution requires quick and decisive action. That doesn’t bother me.

                          Anyway, FWIW - I really like your suggestions here but what I have come to terms with is that if that is how they want to hire and if these are the things they care about - I probably am not going to be a good fit for the company. It does not take all of the sting out of the rejections, but it does help in reflection. I keep doing them because I do believe I can get better if I practice. But in the meantime, I just trust that the right opportunity will come along.

                          1. 3

                            That’s really interesting as many people are the exact opposite. The stress of not having a time limit causes anxiety because you’re constantly wondering how much time and energy the rest of the applicants are spending on this challenge and whether your submission will look bad in comparison.

                            I have a horse in this race, as I run what I believe to be the first time-limited technical challenge tool that allows you to use your own editor/IDE: https://takehome.io

                            I think what this underscores is the author’s point about building flexibility into your process that can allow for the different preferences of your candidates. There’s no one-size-fits-all interview technique.

                            1. 2

                              How do you, at takehome, make sure candidates don’t fiddle with the git history so they maybe look like they performed better than they did?

                              1. 1

                                What do you mean by fiddle?

                                If you just mean rebasing, moving commits around, etc. to make the commit history look Good(TM) because you’re worried that your natural workflow looks unprofessional, e.g. if it’s anything like mine:

                                git add -A && git commit -m "more stuf" && git push
                                git add -A && git commit -m "dang" && git push
                                git add -A && git commit -m "ttt" && git push
                                

                                I don’t see that as a problem. The end result is a clean commit history - whether it happened during the writing of the code, or after the fact, is irrelevant - the candidate demonstrated that they could A) write a functional POC and B) release it with (eventually) good commit history.

                                Also, the above disorganized git commits I find are almost unavoidable for “first commit” style POCs, when there is no codebase.

                                Once the codebase is established, maybe then creating clean commits is expected as a skill, when working with a team on a large codebase - but even so you can do dirty crap in a branch and then fix the commits before creating a PR.

                                1. 1

                                  My understanding of takehome is you use git (partly) to make sure assignments are done inside a desired schedule. But since you can edit git history, people can easily use three hours but edit the history to make it look like they used two.

                                  But maybe I misunderstood what takehome uses git for?

                                  1. 1

                                    Ah, I see - didn’t think of time limit cheating.

                                2. 1

                                  Takehome has a custom git server implementation. When the candidate fetches the repository from their unique URL a timer starts. When that timer expires, the git server stops accepting pushes from the candidate into that repo.

                                  While the submission window is open they’re welcome to fiddle with the history all they like. All I care about is that they didn’t spend longer than allowed writing their response.
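                                  One way to approximate that on a stock git server is a pre-receive hook that checks a deadline recorded when the candidate first fetches. A sketch, not takehome’s actual implementation (the deadline file and its format are invented for illustration):

```python
#!/usr/bin/env python3
# Sketch of a pre-receive hook: reject pushes once the submission
# window has closed. The deadline file is assumed to hold a unix
# timestamp written when the candidate first fetched the repo.
import sys
import time
from pathlib import Path

def push_allowed(now, deadline_file):
    """Allow the push only while the submission window is open."""
    if not deadline_file.exists():
        return True  # timer never started; nothing to enforce
    deadline = float(deadline_file.read_text().strip())
    return now <= deadline

if __name__ == "__main__":
    if not push_allowed(time.time(), Path("deadline.txt")):
        print("Submission window closed; push rejected.", file=sys.stderr)
        sys.exit(1)
```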

                                  1. 1

                                    That makes sense. I thought you were relying on normal git history to track time.

                            1. 8

                              It feels like all our best engineers are tied up building programs-for-programmers-to-program-programs-for-programmers.

                              Many programmers are under-employed by the ‘real world’. They actually really want to do the type of work that you do, but don’t know how to find those opportunities, or it involves a significant amount of risk. So they solve their own problems, because they are denied that from 9-5.

                              I think there’s a huge opportunity here: a lot of what we call business is not actually that hard, it’s just veiled in credentials and bullshit. If someone set out to make a course to commoditize the MBA (as it were) and teach programmers how to do market research, talk to potential users, and create their own opportunities, we could create a small, thriving culture of indie technologist-business types.

                              At the age of 37, I’m already sick of working on other people’s dreams.

                              1. 5

                                Risk is definitely a factor. Over the course of my career I’ve created space to experiment with these kinds of things. That’s a privileged position and I acknowledge that.

                                I actually did an undergrad business degree years ago. I don’t think there’s much of that in these products, though maybe it helped shape my mindset. “Market research” in this small context is just “talking to people”. It looks much less like the market research I learned in uni than it does the user interactions discussed in the agile manifesto and the original extreme programming principles.

                                Reading a few lean startup books and then giving it a go a few times is probably all the MBA you need at this level. Managing a multi-thousand-dollar company is much easier than a multi-million-dollar one.

                                1. 2

                                  Thanks for the reply. Any lean startup books you like?

                                  Meant to also say, thanks for the article. Looking forward to more entries!

                                  1. 3

                                    Thanks! Personally I haven’t read any lean-specific books. I leant on HN posts, blogs, podcasts, etc for that stuff.

                                    The early Free Agents podcasts are good.

                              1. 8

                                My main question is, how do you get to that place, and how do you find those clients? And how do you weed the promising ones from the “do me good for cheapest possible” types: is it just by setting your hourly rate high enough? I’d guess you are, or at least started as, a consultant/freelancer? I suppose this requires some particular personal traits: being ok with working in a non-9-to-5 environment, chasing clients, etc.? Also, I really respect and admire what you’re doing; I think a lot of things you gloss over are not that easy to achieve.

                                1. 4

                                  I think the way to find opportunities is to talk to lots of people and keep your eyes open. The opportunities are out there but often we look past them.

                                  The steel distributor gig was posted on the jobs channel of a Slack community I’m a part of. I got in touch and pursued the opportunity.

                                  The ski club thing came about because I offered to help out on the board of my club. They approached me because they knew I had tech skills and was personable.

                                  Just try to network as much as you’re able, to create flow, I guess.

                                  As for weeding out time wasters a high day rate definitely helps. Clients get a lot more focused about what they need when it hurts a bit. The trade-off is that you need to make sure you’re equally focused and you make every work day count. Split your down time between looking after your family/self and investing in future revenue streams, too.

                                  1. 2

                                    Without bias: that sounds like a typical consulting business, with the commensurate sales & business skills required.

                                    1. 2

                                      Hm; so, isn’t your story here basically what every freelancing/consulting programmer is doing?

                                    2. 2

                                      Same here, I would like to know how to find this type of client too.