Threads for abstract777

    1. 8

      I will use this article as my prompt for ChatGPT resume writing.

    2. 1

      I had to ask the Bing AI bot to give me a code example of how to connect to veilid and write and retrieve info. Was it all a hallucination? It looked nice and easy. I couldn’t find any basic examples anywhere on veilid’s website or GitHub. Anyone spot code and API examples? Please share a link. The project is promising at a glance. I have a use for it, as I rolled my own distributed / router system for my specific project.

      1. 3

        It’s in a very, very early stage at the moment. I’m sure better documentation is on the way.

        In fact, when I first heard of this, the documentation was pretty much nonexistent. It’s improved since then. It’s not in a usable state yet, for sure, but I have noticed progress.

        1. 1

          I see. That makes sense. I found this veilid chat app demo for Python in a fit of rage. I just couldn’t believe there weren’t any examples. The location wasn’t obvious.

          I autogenerated this possibly semi-working pseudo code from the chat example code, for writing to and reading from veilid’s keystore (a local table DB - the networked DHT API looks separate). The chat example had a little too much going on for me to parse:

          import asyncio
          import logging
          import os
          
          import veilid
          
          LOG = logging.getLogger(__name__)
          
          KEY_TABLE = "veilid-demo"
          
          # Retrieve connection details from environment variables, with defaults
          # matching a stock local veilid-server's JSON API listener.
          API_HOST = os.environ.get("VEILID_API_HOST", "localhost")
          API_PORT = int(os.environ.get("VEILID_API_PORT", "5959"))
          
          async def noop_callback(update: veilid.VeilidUpdate):
              """The API connection wants an update callback; we ignore updates here."""
          
          async def store_key(conn: veilid.json_api._JsonVeilidAPI, key: str, value: str):
              """Write a single key to the keystore."""
              tdb = await conn.open_table_db(KEY_TABLE, 1)
              async with tdb:
                  key_bytes = key.encode()
                  value_bytes = value.encode()
                  LOG.debug(f"Storing {key_bytes=}, {value_bytes=}")
                  await tdb.store(key_bytes, value_bytes)
          
          async def load_key(conn: veilid.json_api._JsonVeilidAPI, key: str) -> str | None:
              """Read a single key from the keystore."""
              tdb = await conn.open_table_db(KEY_TABLE, 1)
              async with tdb:
                  key_bytes = key.encode()
                  LOG.debug(f"Loading {key_bytes=}")
                  value = await tdb.load(key_bytes)
                  LOG.debug(f"Got {value=}")
                  return value.decode() if value is not None else None
          
          async def main():
              # Connect to a running veilid-server's JSON API.
              conn = await veilid.json_api_connect(API_HOST, API_PORT, noop_callback)
          
              while True:
                  choice = input("Choose: (w)rite, (r)ead, or (q)uit: ")
                  if choice == "w":
                      key = input("Enter the key to write: ")
                      value = input("Enter the value to write: ")
                      await store_key(conn, key, value)
                      print("Data stored successfully!")
                  elif choice == "r":
                      key = input("Enter the key to read: ")
                      value = load_key(conn, key)
                      value = await load_key(conn, key)
                      if value is not None:
                          print(f"Value: {value}")
                      else:
                          print("No data found for the given key.")
                  elif choice == "q":
                      print("Exiting.")
                      break
                  else:
                      print("Invalid choice.")
          
          if __name__ == "__main__":
              logging.basicConfig(level=logging.DEBUG)
              asyncio.run(main())
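
          To actually run this you’d need a local veilid-server going and the veilid Python bindings from the veilid repo; the bindings are asyncio-based, which is why everything is awaited. The signatures are reconstructed from the chat demo, so treat them as approximate.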
          
          1. 1

            I appreciate you doing that digging. I’ll have a look at this later. Looks surprisingly simple.

    3. 1

      This seems great, similar to Yggdrasil. I also wonder why they don’t talk much about the underlying transport. You don’t need NAT if it’s an IPv6 network. I do understand that they want to establish an essentially non-traceable overlay network, but the underlay is just as important.

      1. 3

        “if it’s an IPv6 network” - well, sometimes it isn’t, so things like tailscale or veilid have to hole punch… because they want to be used by everyday devices :)

      2. 1

        Checked it out. There aren’t any real guides or documentation with practical examples on their GitHub or webpage. Veilid, I see, promises the ability to read and write objects across the network using content IDs of some sort, so that’s a big difference, I suppose.

    4. 3

      This is very useful. Some great ideas in the blogpost, especially about embedding summaries and autogenerated questions on documents.

    5. 0

      Veilid touts all the improved features anyone could want, I think, for a non-blockchain distributed framework: a DHT, onion routing of internode messaging, and end-to-end and storage encryption. Easy to host a node.

      Had a captivating chat with Bing about Veilid. Here’s what I gathered (or didn’t):

      Quizzed Veilid’s resilience against potential disruptions like excessive node messaging or DHT overloads. Bing pointed to:

      1. Node rate limiting - seems logical. (Toy sketch of the idea after the list.)
      2. A reputation system where nodes rate each other on various interaction metrics. But I couldn’t find any tangible evidence for this in the source code.
      3. A proof-of-work algorithm for data messages (messages hopping in the onion routing scheme) - almost certainly not true, yet granted, a very compelling hallucination.
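
      For (1), my mental model is a per-peer token bucket. To be clear, this is my own toy sketch of the general idea, not anything from Veilid’s source:

      import time

      class TokenBucket:
          """Toy per-node limiter: refill `rate` tokens/sec up to `capacity`."""

          def __init__(self, rate: float, capacity: float):
              self.rate = rate
              self.capacity = capacity
              self.tokens = capacity
              self.last = time.monotonic()

          def allow(self, cost: float = 1.0) -> bool:
              now = time.monotonic()
              self.tokens = min(self.capacity,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now
              if self.tokens >= cost:
                  self.tokens -= cost
                  return True
              return False  # drop or defer this node's message

      # One bucket per peer, e.g. 5 msgs/sec with bursts up to 20.
      buckets: dict[str, TokenBucket] = {}

      def allow_message(node_id: str) -> bool:
          bucket = buckets.setdefault(node_id, TokenBucket(5.0, 20.0))
          return bucket.allow()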

      With these uncertainties, how robust is Veilid, especially its onion routing? Based on my limited experience, a central authority seems a plausible safety net. But without tangible stakes for nodes, could a powerful adversary just spin up gazillions of nodes and disrupt things? Nodes are cheap to host on Veilid, I believe. Reminds me of challenges platforms like Tor faced not too long ago. Thoughts?

      1. 4

        Who is Bing? Are they affiliated with the project?

        1. 2

          I assume they mean Bing Chat, the search engine’s chatbot.

    6. 1

      Private LLM. Enjoying the hype, but want to build my own mini expert to bounce ideas off in highly technical programming / complex areas of engineering.

      1. 5

        I submitted khoj as a story here a couple of days ago. It lets you run a local LLM that can ingest your own documents (PDF, Markdown, and a few other formats), with local indexing and chat. It also has some stuff that lets you run it on your own machine somewhere but query it from mobile devices.

        It looked interesting and it’s the first local AI assistant thing that I’ve seen. pushcx deleted the story because ‘Personal productivity is off-topic.’

        There’s an open issue on the llama.cpp repo for building a local copilot-like assistant too, which looks like it has a few people working on it. I’m really looking forward to what it produces.

        1. 1

          Checking this out now - I went down the route of llama2 (i.e. starting from scratch) - much to learn, and your project is basically exactly what I want! Gonna have a play and then take next steps (if any). Thanks for putting it together!

          1. 1

            your project is basically exactly what I want

            It’s not my project, I just saw it and thought it looked cool. I was very sad to see that it’s off-topic for lobste.rs.

            1. 1

              Yeah, not sure how it is off topic. Had a good play - I need to make some changes (no GPU usage, being able to specify a model to use) - but it’s very good. After I fed it my data it suddenly became very knowledgeable, which is what I was after!

        2. 1

          Khoj is a standout, and it’s why I’ve been nudging my FileBot users (just a few folks that I personally know) towards it. It’s eerily similar to what I imagined for FileBot – wild, right? Your mention here was the first time I heard of Khoj.

          FileBot does hold its ground, especially in producing detailed answers across multiple files and being transparent about sources. In rarer cases, I find FileBot more accurate, but it’s about neck-and-neck. But these days, I’m a Khoj-FileBot pendulum. Khoj, with its dedicated team, gets my vote for a primary tool.

          Stumbled upon Arx recently – a nifty tool for file anonymization & de-anonymization. But it’s oddly obscure; haven’t bumped into anyone using it. If it’s as slick as it says, why isn’t it part of Khoj or other easy-to-use projects?

          Speaking of which, I’m cooking up an auto-anonymization and de-anonymization layer, targeting 99.99% precision in scrubbing and restoring docs. Think: set-and-forget, integrated quietly with tools like Khoj or FileBot, using a mix of CRTD techniques and pattern recognition. I’m getting good results, and it can handle more abstract privacy concerns. I still have to combine and fine-tune my scripts into a single, easy-to-use little Python library.
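
          The scrubbing core is roughly this shape - a toy sketch with a couple of regex patterns, where the real thing layers in pattern recognition and handles far more entity types:

          import re

          # Toy scrub/restore: replace PII with stable placeholders and keep a
          # reversible mapping so LLM output can be de-anonymized afterwards.
          PATTERNS = {
              "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
              "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
          }

          def scrub(text: str) -> tuple[str, dict[str, str]]:
              mapping: dict[str, str] = {}
              counters: dict[str, int] = {}
              def replace(kind: str, match: re.Match) -> str:
                  counters[kind] = counters.get(kind, 0) + 1
                  placeholder = f"[{kind}_{counters[kind]}]"
                  mapping[placeholder] = match.group(0)
                  return placeholder
              for kind, pattern in PATTERNS.items():
                  text = pattern.sub(lambda m, k=kind: replace(k, m), text)
              return text, mapping

          def restore(text: str, mapping: dict[str, str]) -> str:
              for placeholder, original in mapping.items():
                  text = text.replace(placeholder, original)
              return text

          clean, mapping = scrub("Mail jane@example.com or call 555-123-4567.")
          # clean -> "Mail [EMAIL_1] or call [PHONE_1]."
          # restore(llm_answer, mapping) puts the real values back.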

          Here’s the kicker: It’s a privacy shield for folks using API endpoint LLMs like OpenAI’s, keeping data from straying into the wild. It’s poised to be the unseen guard in an age when big players might offer superior, cost-effective LLMs compared to small open-source options.

          Encountered Arx or a superior, easy-to-integrate alternative? Eager to swap notes.

          1. 1

            My former team at MSR (now Azure Research) is doing some work on privacy-preserving machine learning, but it turns out to be an incredibly hard problem. Adding fairly small amounts of differential privacy has a quite large impact on utility. I suspect that this is an intrinsic property. In theory, differential privacy removes things from your sample set that are not shared across the population, but if you actually knew what things were shared, then you would not need ML; you could build a rule-based system for a tiny fraction of the compute cost.
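
            To make the utility hit concrete: in the classic Laplace mechanism, the noise added to a released statistic scales with sensitivity/epsilon, so a stronger privacy guarantee (smaller epsilon) directly means noisier answers. A quick self-contained illustration (my sketch, not their work):

            import numpy as np

            rng = np.random.default_rng(0)
            ages = rng.integers(18, 90, size=1000)

            true_mean = ages.mean()
            sensitivity = (90 - 18) / len(ages)  # max shift from changing one record

            for epsilon in (10.0, 1.0, 0.1):
                noisy = true_mean + rng.laplace(0.0, sensitivity / epsilon)
                print(f"eps={epsilon:>4}: true={true_mean:.2f} noisy={noisy:.2f}")
            # Smaller epsilon (stronger privacy) -> more noise -> less utility.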

            A lot of their focus is running these models in TEEs. We recently had a paper about some work we did with Graphcore on adding TEE functionality to their IPUs. The most recent NVIDIA chips have something based on this work. This should let a cloud provider build a model with a large training set and then let you fine-tune it in a TEE with fine-grained egress policies, so that you can guarantee that none of your personal data ever leaves the device except to your endpoint. Deploying this kind of thing at scale is still a little way out, though.

            1. 1

              I see. The degradation from training on privatized data makes a lot of sense. The fine-tuning in trusted execution environments sounds promising, though. Let’s see how this experiment goes!

        3. 1

          Saved that link, looks interesting. Thanks for sharing!

    7. 2

      You may not always get what you want, but you’ll get what you honked for.

    8. 1

      Diving into a side project on one of my undercover GitHub accounts (don’t tell anyone 😉) called FileBot. It interfaces with local files using pure LLMs, without any embeddings. Currently wrangling about 100 medium-sized files - pretty good when used on a specific folder or something. Some work buddies stumbled upon it and found it nifty for sifting through docs and code. Curious about non-embedding-based document retrieval strategies, or have experience with OpenAI and LLMs for local files? Do reach out. Just another weekend in the rabbit hole, as one does.
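
      By non-embedding retrieval I mostly mean plain lexical scoring - something in the spirit of this toy TF-IDF-ish ranker (the actual FileBot logic is fancier, but it’s the same idea of picking files to stuff into the prompt):

      import math
      import re
      from pathlib import Path

      def tokenize(text: str) -> list[str]:
          return re.findall(r"[a-z0-9]+", text.lower())

      def rank_files(query: str, folder: str, top_k: int = 5):
          """Score files by TF * IDF-ish keyword overlap; no embeddings involved."""
          docs = {p: tokenize(p.read_text(errors="ignore"))
                  for p in Path(folder).glob("**/*") if p.is_file()}
          n = len(docs) or 1
          scores = []
          for path, words in docs.items():
              score = 0.0
              for term in set(tokenize(query)):
                  tf = words.count(term)
                  df = sum(1 for w in docs.values() if term in w)
                  if tf:
                      score += tf * math.log(n / df)
              scores.append((score, path))
          return sorted(scores, reverse=True)[:top_k]

      # The top-scoring files then get stuffed into the LLM prompt as context.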

    9. 3

      Following this project now. Last week I helped a scientist friend make some LaTeX tables. Looks great, but there’s a bit of a learning curve, and it took longer than I wanted. I was thinking that there should be something just like this using Markdown or something. More programmatic.
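
      In the meantime, the programmatic route I’d reach for is pandas, which can emit both Markdown and LaTeX from the same table (to_markdown needs the tabulate package installed):

      import pandas as pd

      df = pd.DataFrame(
          {"sample": ["A", "B", "C"], "n": [12, 30, 7], "mean": [0.41, 0.57, 0.33]}
      )

      print(df.to_markdown(index=False))  # pipe table for docs/READMEs
      print(df.to_latex(index=False))     # LaTeX tabular for the paper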

    10. 29

      For those looking for the lies, they’re listed at the end.

      While I don’t disagree with most of the article, I think it’s worth stating the contrapositive case: golang is a language optimized for cranking out web services at Google with moderate-to-low business-logic complexity, where correctness is generally pretty ill-defined. Tasks very similar to that are also suitably cranked out in golang. It’s not perfect, and there are certainly kinds of errors that you’d like to be able to prevent or predict (for example with integrated modeling). I think of it as a compiled python2, with the additional goal that there really should be only one obvious way to do it, and readability is achieved by explicitness even when that is very, very verbose. The further you get from that kind of work (need for rich data structures, non-networked boundary, crappy network, complicated concurrency, real-time and high-performance work, numerical work, or any other domain where expressiveness is a huge performance or productivity win), the worse golang will fit your needs.

      I don’t consider golang a joy, but I think it is largely successful at fulfilling that mission. Not optimal, certainly, but successful. And there’s a lot of code to be written that largely slots into the golang shaped hole.

      Somewhat ironically, it looks like Python is overtaking golang on the safety story with a much more expressive optional type system.

      As a practical marker, I think the bugginess of kubernetes shows that writing kube bumps up to, and slightly over, the complexity level that golang is fitted for.

      1. 16

        This is largely what my view on Go has evolved into over the years, since it fully addresses the problem domain it’s designed for: make a language that’s as simple and explicit as possible, with the goal of trivializing the individual programmer’s impact on the project. If any given Go programmer is just as useful as any other, and the language is easy to pick up in the first place, people are entirely expendable. It addresses Google’s internal needs perfectly; it just so happens that people outside of Google also use it. However, this doesn’t free the language from criticism, as it’s still rather… shoddy in the design realm, but any of those criticisms would fall on deaf ears. A much more relevant (and concerning, in my opinion) criticism is on Google’s approach to people and the use of a language to commoditize them.

        1. 10

          A much more relevant (and concerning, in my opinion) criticism is on Google’s approach to people and the use of a language to commoditize them.

          This has been a corporate fever-dream for decades, at least. They tried to do it with Java, too.

          I had a professor who taught a software engineering course. This was a dude who had worked in industry. He claimed that eventually, programming would be similar to a food service job: a designer would make some UML models and stuff, and then hand them off to high school kids who would write the code for minimum wage. Among the guy’s other ludicrous claims: eventually we’ll be writing programs in XML! I thought he was kind of a silly assclown. He did teach an excellent course on databases, however.

          1. 18

            Among the guy’s other ludicrous claims: eventually we’ll be writing programs in XML!

            We kinda do! The actual technology is XML’s easier-to-type cousin YAML, but we absolutely write programs in a data structure language.

            1. 3

              but we absolutely write programs in a data structure language.

              Yeah, and we absolutely hate it (looking at you, Ansible)

              1. 2

                Indeed we do (looking at you, CI files)

                1. 3

                  Shit, that, too. Not only is it programming with yaml, it’s programming a sort of state machine you can’t test anywhere other than in production.

                  I don’t think I have a truly love-hate relationship with anything as much as I do with CI

            2. 3

              We kinda do! The actual technology is XML’s easier-to-type cousin YAML, but we absolutely write programs in a data structure language.

              Point well taken. I’ve written my share of ansible config. In another sense, the “equivalence of code and data” sense, a data representation language is just code for a very limited kind of machine. We hope it’s limited, anyway! TCP packets are programs that run on the machine of a TCP stack.

              I don’t think that’s what he had in mind. I think he was imagining something more like C++ but with XML tags and attributes to represent the syntax.

        2. 10

          Replacing scarce and hard-to-train programmers with easy-to-learn languages or tools has been a theme for as long as I have been in the commercial software development business (late 90s). The craze for ML-assisted development is just the latest iteration of that.

        3. 3

          The opposite of this is if every programmer feels like an amazing magician and writes their own DSL in Common Lisp and uses macros everywhere.

          This can be good for the self-esteem of individual programmers, but it is horrible for hireability, teamwork, and being able to tell what a screenful of code does without extensive digging.

        4. 1

          This was always my suspicion about Go. The commoditization. On the positive end, that could be a strength for open source projects, as it lowers the bar for participation. I can only imagine how Copilot and the like will amplify this over the coming decades.

          My strong intuition is that within a generation, most ‘coders’ will only submit pull requests (or issues) for features, fixes, etc., along with some bots, and the new paradigm of social-driven automation (a hybrid of human coders and AI bots) will update the PR by generating the code and even the tests. It will be as much an art as a science to submit good issues or PRs that get the results you want. Whoever ‘fulfills’ the issue or PR (updating it to completion) will be rewarded for it, sorta like a bounty or something.

      2. 9

        web services […] with moderate to low business logic complexity where correctness is generally pretty ill defined

        For better or worse, this describes a sizable percentage of software projects.

      3. 8

        golang is a language optimized for cranking out web services at Google with moderate to low business logic complexity where correctness is generally pretty ill defined

        Sounds like the same pitch as PHP, and look where public opinion of that language went - fractal of bad design and all that.

        Sometimes it feels to me like Go is the place where people can follow all their old PHP practices at slightly better performance, legitimized by the fact that it’s “the language Google uses internally”.

        1. 3

          PHP has its reputation, and it still runs the majority of the web. For better or worse.

          I wonder if in ten years people will look at Go like we look at PHP now.

      4. 6

        I really don’t like go. For the reasons the OP lays out and more. But your contrapositive case is dead on.

        When you have to crap out web services with stringy typing, json, and lack of abstraction - go is fine.

        And for completeness - the go linker and its ability to ship static binaries trivially is fantastic.

    11. 2

      What does a technology ‘coming into its own’ mean? Like something that starts climbing or dominating the charts of the Stack Overflow annual surveys, where average corporate developers (Java coders and the like) or intended users feel pressured or motivated to try it? More bottom-up. Or like when business managers actually push the tech top-down in their organization? Also: is it end-user facing, or infrastructure (less fanfare and less prone to investor mania)?

      Nix (bottom-up) and things like ChatGPT and Copilot fit the criteria one way or the other, except that the funding environment is risk-off and infrastructure stuff generally doesn’t get as much press. So the Copilot and ChatGPT type of things, if I had to pick, but the big push by capital allocators - just funding anything that breathes and moves related to the aforementioned - is not going to happen until the financial markets begin to roar again.

      We’re at, I believe, the very beginning of a Nix super-cycle, where all sorts of technology will be implemented on top of it, along with a more user-friendly, UI-oriented approach to managing and using it. I think it will be bigger than the ChatGPT stuff in a way, but more behind the scenes, where it becomes ubiquitous but generally unknown to the public. Infrastructure. I think Nix has hit critical mass and will be rolled out, in some form or fashion, quietly and strategically by larger and larger organizations globally, with a feedback loop as more Nix-friendly dev services and tools are built.

      1. 2

        I’ve seen multiple people here mention Nix, which I’ve never heard of, and it’s a… 20 year old package manager? There are an unbelievable number of package managers, what makes Nix something that warrants mention?

        1. 8

          It has fully reproducible builds and easy mixing and matching of different package versions on the same system without polluting the global namespace, and it can even run alongside other package managers.

          For instance, when I got started at my current job I was using Nix on Debian so I could have a local dev environment for the project in which Java is available, without having to install Java system-wide (ugh).

          You can pin specific versions of packages in each environment, so you should never run into any bitrot due to system updates.

          It is essentially used for solving the same problems people use Docker for, but without needing VMs.

        2. 6

          I agree with @sjamaan, but also there is NixOS, which is an entire OS built around nix. It’s a declarative operating system, from boot and hardware configuration to users and configuration of packages.

          People talk about building cattle, and then try to use stuff like Chef, Puppet, or Ansible to configure them in a “declarative” way, but it’s not actually declarative; under the hood, a bunch of wacky magic happens to try to turn it into a declarative system. Nix does a much, much better job of making it actually declarative.

          NixOS gets us really close to the holy grail of totally declarative systems that make everything reproducible from the source to running binaries on a system.

          1. 3

            Good point re: “declarative” deployment tools. I have struggled so much just getting reproducibility working properly in Ansible. Even “simple” things like managing cron jobs would be impossible, since existing jobs wouldn’t get removed if you removed them from your recipe. That’s such a mindfuck! And getting it to work requires so many artificial, order-sensitive, stateful constructs that you might as well be doing it all from a shell script.

        3. 1

          The Nixpkgs repo is an explicit dependency graph, cryptographically. You can’t just add any old package; it must be built from other packages, all the way up. So, if I’m not mistaken, it took a while to get a lot of old critical software compiled where re-compiling was difficult, and it’s a huge Metcalfe-style network effect.

          Good things sometimes take time to build. However, it has hit critical mass, imo.

    12. 1

      Every maintainer of critical open source projects has a right to get funded.

      1. 2

        This is not a critical project. It’s a faster linker. LLD is already fast enough that linking stopped being a bottleneck that I care about years ago, so mold would not save me a measurable amount of time.

        1. 4

          While I agree with you that mold is not critical, it is still a significant productivity boost in the same sense that an IDE might be, and people willingly pay licenses for those. For any kind of iterative work where a small change means 1 compilation unit + linking (or linking of multiple libraries, in case others depend on the one being modified), it really significantly improves the development experience.

          1. 2

            The problem here is that this is so broad that any organisation will legitimately ask “why should I foot the bill and not someone else?”. And there are always 100 open places where you could make gains of that kind.

            Not taking away from your general point, but that’s something that’s hard to work through and find the people on the other side willing to help you succeed, because it needs to be a specific kind of person with a specific kind of mindset.

    13. 2

      May want to check if someone built one with https://hypercore-protocol.org/. Probably not really big if it exists.

    14. 1

      I am trying to make a reproducible development environment for both my personal and work laptops, and for other newcomers to my team (they are currently using an unmaintained Ansible playbook). The idea is still at an early stage, though. Perhaps Nixpkgs can achieve my goal?

      Looking for guidance/advice.

      1. 1

        I use a salt setup for my own machines, masterless, just running salt-call. I even manage my dotfiles and ssh config with it! Can definitely recommend either ansible or salt. I like salt because I can break stuff up into nice stateless parcels and decide at the top level which machine gets what (one repo checked out on each machine I personally use).

        I run this all the time, most weeks I’ll run it once because I want to change some setting or record a new personal alias. So it stays maintained. I had to test it when a disk died with no backups, and the onboarding story was as good as could be expected.

      2. 1

        As in a general dev environment for everything including IDE or specific project/repo?

        1. 2

          Almost everyone except me uses Ubuntu {18,20,22}.04. A few are using Windows, but we can ignore them as they are not the ones who will develop heavily.

          As in a general dev environment for everything, including IDE or specific project/repo?

          Almost every project depends on Docker. We have some customised settings of Docker and Dnsmasq, and we use a tool called dnsdock for discovery/communication among containers. We also have some internal CLI tools for bumping project versions - so I guess we can refer to this as a general dev environment.

          In terms of specific project/repo, we have already managed them with Docker (we will stick to it as eventually, they will be deployed to a Kubernetes cluster). So I will leave it alone for now.

          1. 1

            That’s basically where our team is at. We have specific environments that are reproducible (to varying degrees) using docker or nix.

            Everyone has their own general environment. Some run Linux, Mac, or Windows. We don’t want to force everyone to use Mac, Arch, or Ubuntu, or even the same IDE. I want to get a devops person in the next few months who is really good with nix-lang or willing to learn it. The idea is that they can set up general environments with NixOS - managing the flakes and other ‘formulations’, with different downstream versions for different teams’ specific needs. Get everyone to use NixOS or at least dual boot.

            Among other responsibilities, they’d be a reproducibility engineer of sorts. It’s not practical for us to expect everyone to become an ‘expert’ at nix. Hopefully the fine folks at Determinate Systems or elsewhere will continue to innovate, so we can fire the reproducibility engineer (joking - or they can drop that title and work on higher-level automation). If anyone wants to install or configure something that requires writing a flake or the like, they could get help from that engineer or ask them to do it. Anyway, that’s my vision.

            1. 1

              What you said worried me - we don’t have the resources to hire a reproducibility engineer. I don’t want to be that one either. What I am expecting is a script (glad to be the one who writes it) that can bootstrap (almost) the same dev environment every time, instead of newcomers spending several hours asking for help from senior engineers. Maybe as time goes by the script will need subtle tweaks - but our ‘general dev environment’ is quite stable, so hopefully not.

      3. 1

        My first advice would be to use Ansible… ¯\_(ツ)_/¯ Anything more detailed starts with “what OS/etc are you starting with?”

        Docker would be a solution too, but I can’t stand Docker, so.

        1. 1

          Almost everyone except me uses Ubuntu {18,20,22}.04. A few are using Windows, but we can ignore them as they are not the ones who will develop heavily. So I think you can consider this as Ubuntu-only :D

      4. 1

        Maybe better to just do something like GitHub Codespaces - they support templates, so you set it up once for the team. Since switching to M1, some old libs don’t work anymore, which forced me to use GitHub Codespaces (I clone other repos there as well to make my personal env - I’m not using it a lot, so no inconvenience so far).

        1. 1

          That would be a massive project, and I am unsure if those legacy repos would support it. Plus, I would need to ask for lots of approvals, as I am more of a dev than DevOps/infra.

    15. 3

      Correct me if I’m wrong, but doesn’t git basically use linked lists in some way to do certain things?

      1. 6

        Architecturally, git is a key-value database of objects that represent an acyclic graph of commits and a tree of directories/files. A simple case of linear commits is a linked list indeed, but that’s not the programming-language-level linked list that the post is about.
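
        You can see the key-value part directly: an object’s key is just the SHA-1 of a small header plus its content, which is easy to reproduce yourself (assuming a default SHA-1 repo):

        import hashlib

        def git_blob_oid(content: bytes) -> str:
            """Compute the same id `git hash-object` would assign this content."""
            header = f"blob {len(content)}\0".encode()
            return hashlib.sha1(header + content).hexdigest()

        print(git_blob_oid(b"hello\n"))
        # ce013625030ba8dba906f756967f9e9ca394464a -- identical to:
        #   echo 'hello' | git hash-object --stdin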

        1. 2

          Okay, that makes sense about commits. How did you learn about the inner workings of git?

          1. 3

            I’ve found the official (free) book to be an excellent source.

            https://git-scm.com/book/en/v2

            Obviously not every part is relevant to you, skip what isn’t, but I found it generally well written and useful.

            1. 3

              This is another great resource for learning how git works internally: http://aosabook.org/en/git.html

              Implementing something with libgit2 is another good way to learn the finer details. It’s thrilling to make a program that can construct a branch, tree, commit, etc., and then have git show it to you.

          2. 3

            I learned it the hard way, but these days there are a bunch of tutorials about git’s inner workings. I highly recommend learning it, because it makes git make sense.

        2. 1

          but that’s not the programming-language-level linked list that the post is about.

          The only difference I see is that it’s implemented in the file system instead of in memory.

          1. 2

            The arguments against linked lists are about memory cache locality, lack of CPU SIMD autovectorization, CPU pipeline stalls from indirection, etc., so the memory vs file system difference is important.

            Linked lists on a file system are problematic too. Disks (even SSDs) prefer sequential access over random access, so high-performance databases usually use B-trees rather than lists.

            git’s conceptually simple model gets more complicated when you look at the implementation details of pack files (e.g., recent commits may be packed together in one file to avoid performing truly random access over the whole database).

            1. 1

              Thanks for that context!

      2. 4

        Yeah, Git is similar to a Merkle tree, which shares a lot in common with a singly linked list, in that from HEAD you can traverse backwards to the dawn of time. However, it differs because merge commits cause fork/join patterns that lists aren’t supposed to have.

        1. 1

          Interesting. I was looking into how to reproduce merge commit oids from someone else’s working tree that pushes to the same bare repo (e.g. on GitHub). I was forced to calculate a sha256 to verify that the actual committed files are the same between two working trees. I know there must be a lighter, more efficient way. Probably would be a real nasty-looking one-liner, though.
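
          Update, for anyone who lands here: I believe the lighter way is that two commits containing identical files have identical tree oids, so you can compare trees instead of hashing files yourself (repoA/repoB are stand-in paths):

          import subprocess

          def tree_oid(commit: str, repo: str = ".") -> str:
              """Resolve the tree object a commit points at."""
              out = subprocess.run(
                  ["git", "-C", repo, "rev-parse", f"{commit}^{{tree}}"],
                  capture_output=True, text=True, check=True,
              )
              return out.stdout.strip()

          # Equal tree oids => identical committed content, whatever the
          # commit metadata (author, date, parents) looks like.
          same = tree_oid("HEAD", "repoA") == tree_oid("HEAD", "repoB")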

    16. 1

      We went with a linked list in a recent thing we launched. The nodes are actually referenced by cryptographic hashes - the hashes uniquely identify the data structure for a whole bunch of other subsystems.
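
      For a picture of what I mean: each node’s hash commits to its payload plus the previous node’s hash, so the head hash pins down the entire chain. Simplified sketch - ours isn’t literally this:

      import hashlib
      import json

      def node_hash(payload: dict, prev_hash: str | None) -> str:
          """A node's id commits to its payload and everything behind it."""
          body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
          return hashlib.sha256(body.encode()).hexdigest()

      h1 = node_hash({"action": "create"}, None)   # head of a new chain
      h2 = node_hash({"action": "update"}, h1)     # links back to h1
      # Any subsystem holding h2 can verify the whole chain by re-hashing.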

      1. 2

        Do you protect against a stack overflow when dropping a long chain?

        1. 1

          Good question. Each node is generated by an actual human who ultimately did something intentional in the UI to trigger it - it takes a lot of non-trivial work by them. Generally the list won’t be longer than what you can count on your fingers. We should put a cap on it just in case of a malicious user, or some weird corner case where a script someone is using goes haywire - but we’re sorta protected right now, because someone would essentially have to DDoS a popular third-party service that has an excellent rate-limit system. So there’s no risk of stack overflow in practical terms. When we create an independent system that creates linked-list nodes, it would be a concern, I suppose.

    17. 1

      Just make sure you also create a bare repo on the laptop or desktop. Then push to or pull from the bare repo from either the desktop or the laptop. Probably put the bare repo on the desktop. If you’re working on the desktop, you don’t have to keep your laptop on if you don’t want to.
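
      Scripted, the setup is roughly this (paths and the remote name are made up - adapt to taste):

      import subprocess

      def git(*args: str) -> None:
          subprocess.run(["git", *args], check=True)

      # On the desktop: a bare repo (no working tree) acts as the shared remote.
      git("init", "--bare", "/home/me/repos/project.git")

      # In each working clone (desktop and laptop):
      git("-C", "/home/me/work/project", "remote", "add", "desktop",
          "me@desktop:/home/me/repos/project.git")
      # ...then `git push desktop main` / `git pull desktop main` from either machine.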

    18. 1

      We manage one project in development with nix (node2nix), but it’s deployed with a Dockerfile on Fly.io.

    19. 1

      Does this system still allow for community projects? It seems like it is built on the idea that source code is always owned by people, which is largely bogus.

      1. 2

        I think I understand what you mean. Like there is a fear that only people with VotePower count and anyone outside is not welcome? If so that’s not what Turbosrc is.

        In the status quo without Turbosrc, a maintainer is said to “own” a project. In fact you can “transfer ownership” of a repo on Github. We know that doesn’t mean the maintainer actually “owns it” if it’s under an open source license.

        Turbosrc can democratize maintainership, so it’s more community than before.

        Anyone who doesn’t have VotePower can still make pull requests. Instead of just a maintainer deciding whether to merge, a community weighted by VotePower does it. Today, most people who make pull requests can’t merge code, so they’re not losing anything with Turbosrc. Now those people can get a piece of the control to merge as well. It only adds a layer of benefits. Democratizing.
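
        Mechanically, merging is conceptually just a weighted threshold over votes. A toy illustration of the concept (not Turbosrc’s actual code; numbers and names are made up):

        vote_power = {"alice": 400_000, "bob": 350_000, "carol": 250_000}
        QUORUM = 0.50  # fraction of total VotePower required to merge

        def pr_merges(votes: dict[str, bool]) -> bool:
            total = sum(vote_power.values())
            yes = sum(vote_power[who] for who, vote in votes.items() if vote)
            return yes / total > QUORUM

        print(pr_merges({"alice": True, "carol": True}))  # True: 65% for merge
        print(pr_merges({"bob": True}))                   # False: only 35%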

    20. 6

      In a previous blog post we outlined a vision for rewarding contributors, self-sustaining open source projects, and improving security.

      No, they didn’t. It was a three-paragraph meta-complaint with no vision or even real content.

      Still smells like a token scam, only this time they’re selling shovels.

      1. 3

        I kept thinking this too, but they make it clear that you can “swap it out” for something else.

        1. 2

          That’s correct. But just to be clear, you don’t “swap out” blockchain for non-blockchain. Turbosrc is not a blockchain. Someone would have to fork Turbosrc and make VotePower a crypto token, among other things. It’s perfectly useful as is, without Web3 capabilities. The point of Turbosrc is to let ‘stakeholders’ vote on pull requests - how VotePower is recorded (database or blockchain) is a means, not an end.

      2. 1

        It reads like the idea is to sell you on web3 VotePower, web3 being an Ethereum or whatever token. I guess it lets you buy and sell repository access?