1. 2

    Add spot instance price to my simple api: https://ec2.shop

    I'm happy to say you can now run curl -L 'ec2.shop?filter=.large' and see the price difference between on-demand and spot for large instances.

    1. 1

      Would there be a way to get spot instance pricing as well? (No idea what the API for these is)

      1. 2

        Yes, there is an unofficial JSONP API, but it changes every 5 minutes and is very dynamic. I haven't found a good way to consume it yet, but I'm definitely going to implement it.

      1. 2

        Is there a reason I should use this instead of https://ec2instances.info?

        1. 3

          ec2instances.info is great

          I made this because ec2instances.info is kind of slow and doesn't have a curl interface. It's too verbose as well. Most of the time I just want to see the instance type, CPU, network, memory, and price.

          I like to quickly do this

          curl -L 'ec2.shop?filter=m4.large,c4.large'

          Right from my terminal. Plus it also has JSON output, so I can do things like this:

          curl -L 'ec2.shop?' -H "Accept: json"
          1. 2

            Curl is nice and everything, but isn't the real hacker way to dump it all into Excel and filter?

            1. 3

              Real Hackers use a magnet and a steady hand.

              1. 1

                Real hackers use butterflies.

                Or magnetized butterknives.

        1. 1

          It’d be nice if this had multi-sort so you could e.g. find out the cheapest machine with a given number of cores, but I guess you could always implement it yourself.

          1. 1

            Ah, great idea. I can implement it. The service is a very simple Golang service, btw: https://github.com/yeo/ec2.shop
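
            A multi-key sort like the one requested above is easy to sketch in Go. The Instance struct, field names, and prices below are invented for illustration and don't match ec2.shop's actual JSON schema:

            ```go
            package main

            import (
            	"fmt"
            	"sort"
            )

            // Instance is a made-up struct for illustration; the field
            // names do not come from ec2.shop's real API.
            type Instance struct {
            	Name  string
            	VCPU  int
            	Price float64
            }

            // cheapestWith filters out instances with fewer than minCPU
            // cores, then sorts by price ascending, using core count as
            // a tie-break.
            func cheapestWith(list []Instance, minCPU int) []Instance {
            	var out []Instance
            	for _, in := range list {
            		if in.VCPU >= minCPU {
            			out = append(out, in)
            		}
            	}
            	sort.Slice(out, func(i, j int) bool {
            		if out[i].Price != out[j].Price {
            			return out[i].Price < out[j].Price
            		}
            		return out[i].VCPU < out[j].VCPU
            	})
            	return out
            }

            func main() {
            	instances := []Instance{
            		{"m4.large", 2, 0.10},
            		{"c4.xlarge", 4, 0.199},
            		{"m4.xlarge", 4, 0.20},
            	}
            	// Cheapest machines with at least 4 cores, cheapest first.
            	for _, in := range cheapestWith(instances, 4) {
            		fmt.Printf("%s\t%d vCPU\t$%.3f/hr\n", in.Name, in.VCPU, in.Price)
            	}
            }
            ```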

          1. 1

            Many databases recommend this approach, I believe, such as NFS or MongoDB.

            1. 1

              Thanks. This is so useful, since it generates code for lots of platforms.

              1. 1

                I love Authy for its syncing feature, but it's kind of slow and has sync issues, where a token I deleted on mobile still shows up on desktop.

                So I have been developing my own desktop app for this. It uses a local SQLite db, encrypted with AES-GCM using your own passphrase. You can then sync this SQLite db over Dropbox, Google Drive, or iCloud Drive and immediately have it available on another desktop.

                For mobile, you can enable the desktop app to sync to my server; it syncs the encrypted data, and I cannot access your codes. On mobile, it syncs the data back from the server, and you enter the same passphrase to decrypt it client side.

                In other words, your token is encrypted at rest and also encrypted in transit.

                I don't want to hijack this thread with my own project, so I won't post a link here, but if anyone wants a fast, native desktop app to manage your own 2FA codes, ping me.
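
                The encrypt-then-sync idea can be sketched in Go with just the standard library. This is an illustrative sketch, not the app's actual code; in particular, deriving the key with plain SHA-256 is a shortcut to stay dependency-free, and a real app should use a slow KDF (scrypt, Argon2) with a random salt:

                ```go
                package main

                import (
                	"crypto/aes"
                	"crypto/cipher"
                	"crypto/rand"
                	"crypto/sha256"
                	"fmt"
                	"io"
                )

                // deriveKey turns a passphrase into a 32-byte AES key.
                // NOTE: a real app should use scrypt or Argon2 with a
                // random salt instead of a bare hash.
                func deriveKey(passphrase string) []byte {
                	sum := sha256.Sum256([]byte(passphrase))
                	return sum[:]
                }

                // seal encrypts plaintext (e.g. the SQLite file contents)
                // with AES-GCM, prepending the random nonce.
                func seal(passphrase string, plaintext []byte) ([]byte, error) {
                	block, err := aes.NewCipher(deriveKey(passphrase))
                	if err != nil {
                		return nil, err
                	}
                	gcm, err := cipher.NewGCM(block)
                	if err != nil {
                		return nil, err
                	}
                	nonce := make([]byte, gcm.NonceSize())
                	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
                		return nil, err
                	}
                	return gcm.Seal(nonce, nonce, plaintext, nil), nil
                }

                // open reverses seal: it splits off the nonce and decrypts.
                // GCM authenticates, so a wrong passphrase returns an error.
                func open(passphrase string, sealed []byte) ([]byte, error) {
                	block, err := aes.NewCipher(deriveKey(passphrase))
                	if err != nil {
                		return nil, err
                	}
                	gcm, err := cipher.NewGCM(block)
                	if err != nil {
                		return nil, err
                	}
                	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
                	return gcm.Open(nil, nonce, ct, nil)
                }

                func main() {
                	sealed, err := seal("correct horse", []byte("JBSWY3DPEHPK3PXP")) // sample TOTP secret
                	if err != nil {
                		panic(err)
                	}
                	plain, err := open("correct horse", sealed)
                	if err != nil {
                		panic(err)
                	}
                	fmt.Printf("decrypted: %s\n", plain)
                }
                ```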

                1. 1

                  I'm working on a native, cross-platform desktop MFA app. Think of the Authy app, but running on desktop too, and with no Electron.

                  The UI is still ugly but the functionality is there. If you want to try it out, head to: https://github.com/yeo/bima

                  Or email me [this-username]@getopty.com

                  1. 1

                    I recently used https://fyne.io/

                    I went from a background of never having done GUI work before to a working app in one week, which I'm very satisfied with.

                    The best part: it works on win/linux/mac/ios/android. I haven't tried Android/Windows yet, but my simple app was able to run on mac/linux/ios just fine.

                    All dependencies are automatically downloaded; I don't have to install extra things outside it (like Qt, etc.).

                    1. 1

                      Personally, I like Time Machine a lot. I like the idea of easily looking at a file and seeing what versions I have so I can restore one. This happens a lot with my config/dotfiles, where I move things around and accidentally change/delete them… It has saved me quite a few times.

                      Plus, I can also easily browse the directory tree of a Time Machine backup and see the exact directory/file layout (of the latest backup version) of what I have.

                      I have been looking at tools like https://restic.net/, but they don't give me those two things.

                      1. 1

                        I had been using Time Machine, as well as Duplicity, on the 2013 MacBook Pro I'd been using as a daily driver. TM was fine for the kind of backups that get done without having to think about them, which is great.

                        Where having a portable backup shone for me though was when that MBP needed expensive repairs that I was no longer looking to put into a machine that was designed to be un-upgradeable. For about $6, I got an adapter to convert the SSD into a regular laptop drive, and it’s now in my Linux-powered ThinkPad.

                        Duplicity worked well enough for getting my home directory back into a new Linux install, where the Apple-specific nature of TM was never going to be anything but miserable.

                        I'm using restic now, by the way, as it is blazingly fast, with encrypted backups that can go both to remote, dumb servers (e.g. SFTP-only) and to local USB drives.

                      1. 2
                        1. 1

                          On the positive side, the article mentions a tool, WhiteSource, that seems to include watching the dependencies of dependencies for upgrades; on the negative side, it’s too costly for me, starting at $4k.

                          1. 2

                            As far as I know, the price you mentioned is for enterprise; they provide free tools for developers. To find and fix vulnerabilities, use Bolt; for updating dependencies, you can use Renovate (which is mentioned in the article). I have only used Bolt, never Renovate. https://github.com/marketplace/whitesource-bolt https://github.com/marketplace/renovate

                            1. 1

                              Dependabot is free and run by GitHub now.


                            1. 7

                              When I started using Linux, it was SysVinit. Then Ubuntu's Upstart. Then a bunch of similar things: monit, supervisor…

                              All of them have to be configured with log files, pidfiles, wrapper scripts to start your process, etc. On top of that, you have to roll your own log rotation, and if your process is too spammy, you may have to manually add a cron entry to invoke logrotate daily, plus sort out the permissions. Upstart is one step up, but still not very reliable for me; I had cases where Upstart would not pick up my config changes. Systemd is the only system I've tried that worked on the first try and bundles everything I need. It even has nice things like ProtectHome/PrivateTmp out of the box…

                              Maybe systemd sucks for advanced users, but as a backend dev, I just need a Linux box and something that auto-restarts services for me and handles logs properly, without me having to manually write log rotation and signal the process.
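
                              For comparison, the whole pidfile/wrapper/logrotate dance collapses into one short unit file. This is a generic sketch; the service name and binary path are placeholders:

                              ```ini
                              [Unit]
                              Description=Example backend service

                              [Service]
                              ExecStart=/usr/local/bin/myapp
                              Restart=always
                              # stdout/stderr go to journald; no pidfile or logrotate setup needed
                              StandardOutput=journal
                              StandardError=journal
                              # the sandboxing options mentioned above
                              ProtectHome=yes
                              PrivateTmp=yes

                              [Install]
                              WantedBy=multi-user.target
                              ```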

                              1. 1

                                I used to use Hugo, but it became too bloated, so I wrote my own tool: baja[0]. Here is a demo on my sites[1] and my company's[2]

                                1. 1

                                  Working on a UI for my personal newsletter [0].

                                  Up until now, I have been editing the YAML file manually, invoking a build command to generate the static files and email, building a Docker image, and then deploying. When it's time to fan out the email, I need to invoke yet another CLI…

                                  It's a PITA :( so I'm trying to add a UI like Mailchimp's, so my friends can help me edit my English and it can auto-send on a schedule.

                                  1. 1

                                    I love this project, since its original implementation is in Ruby, which is something I'm very familiar with.

                                    However, I don't quite understand how the code gen works. Would you be willing to put up some comments on how you generate the binary with inkoc?

                                    1. 2

                                      The parser produces an AST, which is then converted into an intermediate representation called “Typed Intermediate Representation”, TIR for short. I know, I am not very creative when it comes to naming. This IR is very close to the VM instruction set, uses registers (just like the VM), and stores the types of input/output registers. For example, 10 + 5 would be converted to something like this:

                                      SetLiteral  0, 10        # 0 is the target register
                                      SetLiteral  1, 5         # 1 is the target register
                                      SendMessage 2, 0, "+", 1 # this translates to 10.+(5)

                                      Of course the actual implementation is a bit different from this simple example. For example, literals (integers, floats, etc) are stored separately and the SetLiteral instruction refers to them by their index in the storage list.

                                      Once we have the list of instructions for a block of code, we convert this IR into a flat list of “Instruction” objects. These objects just use integers to represent the various arguments, instruction types, etc. So from the above, we’d go to something like this (using Ruby here):

                                      literals = [10, 5, "+"]
                                      Instruction.new(type: 0, arguments: [0, 0])       # SetLiteral 0, 10
                                      Instruction.new(type: 0, arguments: [1, 1])       # SetLiteral 1, 5
                                      Instruction.new(type: 1, arguments: [2, 0, 2, 1]) # SendMessage 2, 0, "+", 1

                                      This is done here.

                                      Once we have this list, we convert it to a binary format. This is done here.

                                      You can find some more information about the bytecode and instruction set here:
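
                                      A sketch of that last flattening-plus-serialization step, written in Go rather than the compiler's Ruby. The byte layout here (opcode, argument count, then arguments, all little-endian uint16) is invented purely for illustration; Inko's real bytecode format differs:

                                      ```go
                                      package main

                                      import (
                                      	"bytes"
                                      	"encoding/binary"
                                      	"fmt"
                                      )

                                      // Instruction mirrors the flat form above: an integer
                                      // opcode plus integer arguments.
                                      type Instruction struct {
                                      	Type uint16
                                      	Args []uint16
                                      }

                                      // encode serializes each instruction as opcode, argument
                                      // count, then the arguments, all little-endian uint16.
                                      // This layout is hypothetical, not Inko's actual format.
                                      func encode(ins []Instruction) []byte {
                                      	var buf bytes.Buffer
                                      	for _, i := range ins {
                                      		binary.Write(&buf, binary.LittleEndian, i.Type)
                                      		binary.Write(&buf, binary.LittleEndian, uint16(len(i.Args)))
                                      		binary.Write(&buf, binary.LittleEndian, i.Args)
                                      	}
                                      	return buf.Bytes()
                                      }

                                      func main() {
                                      	program := []Instruction{
                                      		{Type: 0, Args: []uint16{0, 0}},       // SetLiteral 0, 10
                                      		{Type: 0, Args: []uint16{1, 1}},       // SetLiteral 1, 5
                                      		{Type: 1, Args: []uint16{2, 0, 2, 1}}, // SendMessage 2, 0, "+", 1
                                      	}
                                      	fmt.Printf("% x\n", encode(program))
                                      }
                                      ```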

                                    1. 3

                                      This OnGress company keeps bringing up their “50 pages whitepaper”, but in order to get it you need to fill in a form that requires an email address and a phone number. https://info.enterprisedb.com/Performance-Benchmarks-PostgreSQL-vs-MongoDB.html

                                      When you benchmark, you have to make sure you eliminate external overhead. OnGress used a driver that didn't support connection pooling… They said, “I disagree, because this means that this is the performance that Lua users get.”… So yeah, they are benchmarking the Lua driver after all.

                                      1. 2

                                        When you benchmark, you have to make sure you eliminate external overhead. OnGress used a driver that didn't support connection pooling… They said, “I disagree, because this means that this is the performance that Lua users get.”… So yeah, they are benchmarking the Lua driver after all.

                                        This is the full paragraph.

                                        Sysbench is written in Lua. Because of this, MongoDB concludes that “any reasonable tester would have looked for an alternative benchmark”. I disagree, because this means that this is the performance that Lua users get. Moreover, it tells the world how much MongoDB actually cares about Lua users, specially if this driver (which was created by MongoDB) remains unmaintained, and no other official Lua driver has been created.

                                        It seems like they used the official MongoDB driver, which seems totally reasonable. If Mongo’s own driver sucks and skews the test results then I agree that it isn’t necessarily fair to conclude the Mongo database itself is slow, but I do think it’s reasonable to conclude that Mongo doesn’t care much about Lua users.

                                        Having said that, Mongo is slow IMO. My employer is a paying customer on their hosted platform. I’m always surprised how slow it is doing anything that isn’t a lookup by ObjectID or when an index already exists.

                                      1. 1

                                        I use this trick so you don't have to change the remote URL or anything. Create a file kgit in ~/bin:

                                        #!/usr/bin/env bash
                                        ssh -i $HOME/.ssh/[other-personal-ssh-key] "$@"  # forward every argument git passes

                                        Whenever I need to switch to my personal account:

                                        export GIT_SSH=$HOME/bin/kgit

                                        This tells git to use the right personal SSH key when cloning/pushing over SSH.

                                        You can also create a shell function and put it in your rc file:

                                        kgit () {
                                        	export GIT_SSH=$HOME/bin/kgit
                                        }

                                        Now simply run kgit in your shell.

                                        1. 2

                                          Improving my static site builder: https://github.com/yeo/baja, which I use to power some of my personal sites such as https://axcoto.com or https://getopty.com.

                                          Hoping to get it into good shape and release it to the public.

                                          1. 1

                                            I started programming in Python about two years ago, and this was one of the major pain points for me. I use Ruby with rbenv a lot, and I'm glad I will be able to achieve a similar flow in Python. Thank you for sharing!

                                            1. 2

                                              I started programming in Python about 15 years ago and this was (and still is to this day) the major pain point for me.

                                              1. 1

                                                I’m glad it helps you! I’m a Ruby dev and struggle with it quite a bit too.