I don’t get why you wouldn’t do at least a basic git setup. Init, add, and a single commit when you’re done. Sorry, it’s just one of the first things you mentioned.
Of course I could, but I have a lot of work and can't get to it, and right now I don't want to maintain a git repo. Once you have a git repo, you start getting ideas about workflows, branches, and so on; it's too much for this point in time. So maybe people in the same situation as me would find the setup useful.
Write the three-line script and commit every so often. The power of git to let you undo a mistake is great, and you don't need to go set up some GitHub repo; just use git locally, and either push to a client's git server or dump files as you do now.
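That three-line script could be as simple as the following sketch (run from the project root; the commit message format is just an example):

```shell
#!/bin/sh
# One-time init is a no-op if the repo already exists; then snapshot everything.
git init -q 2>/dev/null || true
git add -A
git commit -q -m "snapshot $(date +%F-%H%M)" || true  # no-op if nothing changed
```

Running it every so often (or from cron) gives you the undo history without any workflow overhead.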
This week I've spent time answering an SO question, and I also wrote a module called matchgrowth for estimating the growth rate of recorded resource usage. It kind of confirmed my intuition about the usefulness of symbolic calculations alongside numerical ones. I will probably invest more time in SO questions, as I think some of them deserve thought and I learn a lot by trying to answer them.
I found some errors in my openings, but sometimes I don't agree with chesscoach. An example: after 1.c4 g6 2.g3, chesscoach tells me g3 loses 1.4 points and that 2.g4 is the correct choice. I might want to play an English or English-like opening, and -1.4 seems like too much.
As a result of your feedback, we made some improvements to our website and released an update just now :)
The new features are:
Thanks again for your valuable feedback :)
Cheers! https://chesscoach.network/
I think it's a waste of time and effort. Personally, I do not appreciate this kind of trolling, sarcastic bug report. It's the same as people posting unrelated GIFs in GitHub issues and thinking it's funny.
I was being very obvious (through exaggeration), whereas a troll would hide it; I didn't. The core idea is the sad state of ads, which are making it more and more distracting and annoying to view online content.
To see a search engine company, whose primary goal it was to make content easily found, now make it harder to actually consume that content and distract from it, is disingenuous, and it conflicts with the essence of the product on a logical level. IOW: “Hi, I'm G, I'll let you find the content, but once found, I'll make it hard for you to consume it.”
But it does indeed turn out to be a waste of time.
a search engine company, whose primary goal it was to make content easily found […]
Google isn’t a search company, it’s an ad broker and delivery company.
it conflicts with the essence of the product on a logical level
No it doesn’t, it makes advertisers and Google more money.
No it doesn’t, it makes advertisers and Google more money.
Well, for me the essence of their product is that I, as a user, search for something and then read/watch the content uninterrupted. That would be ideal.
This ranking is interesting, but since I'm not an expert in competitive programming, especially at such a high level, could someone please interpret this year's results? How are they special compared to past years? Anything remarkable? Anything that catches your eye, and why?
Administrative rules are here: https://ioi2019.az/en-content-14.html (with changes compared to 2018 marked in red, e.g. strikethrough). They seem to be minor changes only.
The problems (tasks) are here: https://ioi2019.az/en-content-27.html
In terms of anything special…
Well, the team from neighboring Armenia did not attend this year. I suspect this is because of the not-very-good geopolitical relations between the host country and Armenia. They were at all the prior Olympiads.
Gennady Korotkevich, from Belarus, is still the #1 hall-of-famer: https://stats.ioinformatics.org/people/804 He also seems to be, still, the only one to have solved all problems fully correctly: https://stats.ioinformatics.org/olympiads/2011 (I only did a manual review, though, so I might be wrong).
All problems seem to be in combinatorics, dynamic programming, or graph theory. I think that is partially because solutions to those are easier to grade fairly.
It would be very interesting to see the actual coded solutions, but I have not found where they are yet.
Java often includes the algorithmic complexity, e.g. https://docs.oracle.com/javase/8/docs/api/java/util/HashMap.html
I missed that constant-time performance comment, which is very helpful, but I wish it were called out a little more explicitly. I wish the memory/runtime complexity were called out as explicitly as the load factor, which is liberally sprinkled throughout.
I think the most underdocumented area in terms of computational complexity might actually be machine learning, deep learning and the like. I’ve never seen any talk about complexity there.
I would pick one of these two options for server-side PDF generation:
using pdflatex (it comes with the LaTeX distribution called TeX Live on Ubuntu, in particular the texlive-base package)
using pandoc to convert a markup document to PDF (I think this uses pdflatex under the hood as well)
If you go with the first option, you may need to have some templates ready and fill them with data.
The second option is more convenient, since writing markup (for example, Markdown) is easier.
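For the first option, filling a template can be as simple as string substitution before invoking pdflatex. A minimal sketch (the template, placeholder names, and file paths are all made up for illustration):

```python
import shutil
import subprocess
from string import Template

# A tiny LaTeX template; $name and $total are our placeholders.
# string.Template is handy here because LaTeX is full of { } and %,
# which clash with the delimiters of many general-purpose templating engines.
TEMPLATE = Template(r"""
\documentclass{article}
\begin{document}
Invoice for $name: total $total EUR.
\end{document}
""")

def render(name: str, total: str, out_tex: str = "invoice.tex") -> None:
    with open(out_tex, "w") as f:
        f.write(TEMPLATE.substitute(name=name, total=total))
    # Only attempt the PDF step if pdflatex is actually installed.
    if shutil.which("pdflatex"):
        subprocess.run(["pdflatex", "-interaction=batchmode", out_tex], check=True)

render("ACME Corp", "1200")
```

With a handful of pre-made .tex templates, the server-side part reduces to substituting data and shelling out to pdflatex.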
++ for pdflatex! TeX stuff isn't the easiest to do initially (and some templating engines don't work well with it), but the results are consistent and the layout options are vast!
Yes, this is one of the ones I was seriously considering. It seems a reasonable effort, especially if starting from a dockerized TeX Live. Since you mentioned it, what sort of issues did you encounter with the templating engines?
It was with Jinja2; I was trying to Ansible-ize some invoice generation, and Ansible was puking when processing the TeX file.
Further in the past, I used Perl to dynamically build the TeX files (no templates).
Unless I'm missing something, YaCy comes close to what you're describing.
I looked into YaCy and Faroo before starting to think about this. Faroo is not even close. YaCy, on the other hand, is close enough but comes with a number of drawbacks, such as being very slow (due to fraud and spam protection) and not relying on AI/learning but on conventional ranking methods instead. That said, it might be a good codebase or concept to start from.
YaCy also works conventionally by building an index and then traversing that index to respond to queries, but what I have in mind (and have yet to start experimenting with) is no index at all: just a large-scale global neural network that holds the information within the network itself. This comes with a million issues, but on the other hand, since you don't fully understand the impact of your node on the search result (since it is part of the larger global network), you cannot, in theory, manipulate the search results.
PeARS is also worth looking into (recently funded by Mozilla).
About the NN, I'm not sure what that would do.
since you don’t fully understand the impact of your node on the search result you cannot in theory manipulate the search results
Not sure about how that would translate into practice.
YaCy on the other hand is close enough but comes with a number drawback such as that it is very slow
Reaching feature-parity with YaCy would require a ton of effort. But, you can always fork YaCy and try out your ideas and see how it goes.
Yes, org-mode comes with any Emacs install. Since it's plain text, it can be versioned with Git (or any other version control system). Org documents can be exported to a variety of other formats, including Word (via org->odt->doc/docx) and PDF (via org->tex->pdf).
Orgmode features code blocks (the equivalent of reStructuredText's code directives). These are extremely useful since they can be run inside the document, and their results can be included in the document: as syntax-highlighted source code, as the output of that source code (data, images, or anything else the code block generates), or both. A wide variety of languages are supported in code blocks (some of them are DSLs for drawing diagrams and sketches; for example, you can design simple UI mockups via plantuml code blocks).
So you can include diagrams about systems and UML use cases; it also has tables (fully equipped with Excel-like formulas), and you can use state diagrams to describe the test procedures you mentioned.
If you have larger documents to write, you may also benefit from the #+INCLUDE directives that org-mode provides. These help you separate your document into sections/chapters.
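For illustration, here is a minimal Org snippet combining an #+INCLUDE directive with a runnable code block (the file names and chapter layout are made up):

```org
#+TITLE: Test Plan
#+INCLUDE: "chapters/procedures.org"

#+BEGIN_SRC python :results output :exports both
print("checksum:", sum(range(10)))
#+END_SRC
```

Pressing C-c C-c inside the block executes it and inserts its output in a #+RESULTS: section below, which is then carried along into any export.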
Other use-cases include:
keeping track of upcoming events through the use of a built-in agenda
Having said this, Word and Excel remain the #1 word processor and spreadsheet programs out there. Orgmode, as complete as it may be, is tailored for technical people who can take advantage of its features.
Excellent article. I wish there were more articles like this around for different types of teams, covering the composition of and interactions between roles.
The blog post starts from a problem and describes a solution to that particular problem. Sure, other people want to optimize for something else; they are free to do that. If you believe a different problem should be solved, that's interesting to me, and I look forward to reading about your problem and your solution to it.
It is not much of an exaggeration to say that trying to find algorithmic or technical solutions to human and social problems is one of the biggest risks to humanity's mid-term future.
Someone else said it better: ‘any sufficiently complicated technical problem is a political problem first’.
This is for sure a nice exercise in algorithms for matching and automation, but dealing with humans via a blunt algorithm is a terrible mistake; you must approach human decisions with human sensitivity.
One nice feature of FIFOs is that if you write less than PIPE_BUF bytes, the write is atomic. This is not necessarily so useful in the one-producer/multiple-consumer case, but with multiple producers and one consumer it means the file lock can be removed altogether. Unfortunately, PIPE_BUF is small on OSes like FreeBSD (512 bytes, the POSIX minimum), while on Linux it is 4096 bytes.
The other thing to note is that a FIFO does not do fan-out: each read is destructive, so a message read by one consumer is gone for the others.
All in all, I think FIFOs are an underused tool in a lot of software. The kernel is pretty efficient at shuffling data around; the cost is a syscall to read/write the FIFO, but it's a nice way to connect components while maintaining process boundaries. Unless someone is doing really high-throughput/low-latency work where disrupting the CPU cache is a serious problem, using FIFOs to connect components will probably have no negative effect on performance.
Thank you for your comment. FIFOs can be a bit limited in terms of atomic writes, and they are indeed not recommended for multiple consumers. I've extended the post with more details about some of the limitations.
btw, FB has also announced a Parse-compatible API server based on Node.js and Express (BSD license). Users seem to be happy with this one as well.
Going to spend some time looking at self-hosted continuous integration solutions. I'm unhappy with Jenkins and not very interested in moving to one of the hosted options out there.
I’ve set up and used buildbot, it was a smooth experience.
I have the beginnings of a trivial CI server on GitHub. The thing that annoys me about existing solutions is that configuration usually lives outside your project.
Cloud solutions (e.g. Circle) usually require configuration inside your repo. That's how it should be.
This is an oath about things 99% of programmers have zero control over.
If this oath is meant to make us all feel bad about the current state of software “engineering”, it does a good job.
That doesn’t seem true to me at all. These, especially, seem like things I’ve tried to stick to that others ignore:
1 & 2 are simple: do no harm, insert no back doors.
5 - Don't take shortcuts that don't agree with the structure of existing systems.
6 - Team productivity is more important than your personal productivity. It's fine to just-get-shit-done, but not at the cost of others' ability to do their work.
8 - Others rely on your estimates. If you’re not sure how something will work, simply don’t make an estimate of how long it’ll take to code it up.
Hmm, it's the perceived compliance with the rules in the oath that the programmer has no control over. Actual compliance is always measured by a manager/team lead via the KPIs and rules they impose.
Why not just upload it to a secret S3 bucket? A password and an unguessable URL are pretty similar in a scenario like this.
I rarely use AWS these days (I used to, but not now); if I forget about the bucket, I get charged like one cent or whatever. Actually, it would be cool if S3 offered self-expiring buckets; not sure if it does, or maybe I forgot.
If you have an S3 bucket with zero files in it, it doesn't incur storage charges. You can put a lifecycle rule on an S3 bucket that will automatically expire files after, say, 30 days.
In the AWS console, create a new bucket for your temporary files. Open the Management tab. Click “Create lifecycle rule”. Name it “expire after 30 days”, tick the action “Expire current versions of objects”, set “Days after object creation” to 30, tick the action “Permanently delete noncurrent object versions”, set “Days after objects become noncurrent” to a small number like 7 (I can’t remember offhand but this option might not be available if you don’t turn on bucket versioning, which is fine.) Click “Create Rule”.
You can also make those rules apply only to objects with a specific prefix, so you could have separate folders like “30-days/” and “90-days/” with a different lifecycle rule applied to each if you want to have a couple of different timeouts.
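The console steps above correspond to a lifecycle configuration roughly like this (bucket name, rule ID, and prefix are placeholders), which could also be applied via `aws s3api put-bucket-lifecycle-configuration`:

```json
{
  "Rules": [
    {
      "ID": "expire-after-30-days",
      "Filter": { "Prefix": "30-days/" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 7 }
    }
  ]
}
```

Adding a second rule with a different `Prefix` and `Days` value gives you the separate "30-days/" and "90-days/" timeouts mentioned above.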