1. 1

    Blackfynn is hiring. We power neuroscience by linking all kinds of data (EEG, fMRI, pathology) and organizing it so that investigators can concentrate on doing science instead of tedious data management. We use Python & Scala, graph databases, and Postgres. We work out of our office in Philadelphia. Email me with any questions: jim@blackfynn.com

    1. 12

      I built a company on a .NET stack a long time ago; it was fine. The biggest problem was that SQL Server used to cost a lot of money and was kind of a waste in the face of Postgres. As for the programming ecosystem: C# as a language is great, but so much of the standard library and third-party libraries are old and crusty. Some were legitimately great, though, like Dapper for talking to the DB, and ASP.NET MVC was a decent web framework.

      1. 8

        Yup. Licensing cost is the elephant in the room here. SQL Server, Windows Server itself, IIS, it all adds up pretty quick, and if you’re running a startup, paying for such licenses at scale is maybe not so attractive.

        1. 10

          Exactly. It’s unclear why you’d pay for SQL Server and IIS when PostgreSQL and Apache or nginx cost nothing to license. Many people would argue that the latter are superior anyway. There are plenty of companies that will provide paid support for those products too.

      1. 3

        I use Swagger with akka-http, using annotations to automatically generate the Swagger doc from the Scala source code. It mostly works pretty well, but occasionally the Scala data structures get mangled in translation, which is pretty annoying. But then I recall what I did before, which was manually documenting stuff or, worse, nothing at all.

        1. 2

          I like how they completely ignore Tinkerpop/Gremlin.

          1. 15

            “Please stop doing X” posts are a dime a dozen and rarely offer actionable advice more prescriptive than the simple “stop doing this thing.”

            Let’s get more posts that say, “Before creating something new, here’s how to evaluate what exists for your use case.”

            1. 2

              Agreed. There’s no concrete information in this post, and a lot of uncited or made-up narrative. And it’s condescending. I expect more from posts on Lobsters.

              1. 2

                Please stop writing new serialization protocols

                also, why does the author feel the need to call people monkeys?

            1. 4

              I mean, if you’re in Python, yeah. If you’re using a functional language this is seldom the case.

              1. 2

                Yup. My scala functions are rarely more than a few lines.

              1. 2

                “‘Oh just thread your application.’ Anyone that says that is basically an idiot, not appreciating the problems.”

                Always hard. Somewhat easier with immutable data structures.
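                To sketch what “somewhat easier” means in Python (a toy example; real workloads are messier): when the shared data is immutable, worker threads can read it freely with no locks, because nothing can mutate it out from under them.

```python
from concurrent.futures import ThreadPoolExecutor

# Shared state is an immutable tuple: threads can only read it,
# so no locks are needed and no thread can see a torn update.
readings = tuple(range(1_000))

def partial_sum(bounds):
    lo, hi = bounds
    return sum(readings[lo:hi])

with ThreadPoolExecutor(max_workers=4) as pool:
    chunks = [(i, i + 250) for i in range(0, 1_000, 250)]
    total = sum(pool.map(partial_sum, chunks))

print(total)  # same answer as sum(readings), with no locking
```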

                1. 1

                  Happy Birthday Lobsters! I’m grateful for the decent and civil space where people can share interesting stuff.

                  1. 13

                    Whenever I read about people’s first hand encounters with Steve Jobs, I’m always impressed by what a jerk he was.

                    1. 1

                      I’m more impressed by the crystal clear memory of their sensitivity to control, authority, and power dynamics. If this many people can recall vivid memories of just one such interaction and the contest over power differential between peers, regardless of the fame of the individual involved in the incident, then what does that mean about our lives on a daily basis?

                      1. 2

                        “then what does that mean about our lives on a daily basis?”

                        I’d say it means more about people’s lives working at Apple or any managers trying to emulate him. People doing demos expect conflict. They have to be ready to justify something or offer a fix. Ingalls was brilliantly capable of doing that.

                        1. 1

                          I vaguely suspect that people who are sensitive to these factors are more likely to end up getting interviewed.

                      1. 3

                        I’m working on a video course (for Pluralsight) about dealing with temporal data in PostgreSQL. It’s hard going but these things always are, and the topic is fascinating to me. There’s a lot of research and I’m learning interesting things about Postgres in the process.

                        1. 2

                          I made a Udemy video course a couple of years ago. I think it was one of the hardest things I’ve done: organizing, scripting, and filming all by myself was not easy, considering I had never done anything similar. But I read somewhere that the best way to teach something is right after you learn it. I also found it a big boost to learn new things I would never have discovered otherwise.

                          1. 2

                            I actually found that it’s easier to write a book. It may be longer in terms of word count (a typical course for me would be 30K-40K words which is roughly 120-150 book pages) but it’s just writing, whereas a course is writing + recording + editing, with the additional constraints of having to match the video to the narration.

                          2. 1

                            I absolutely love PostgreSQL’s Range datatype for timeseries stuff.

                            1. 1

                              Yes, range support is awesome but one drawback of using ranges for periods is that it’s not going to be forward compatible with SQL:2011 temporal features (if/when those get implemented), because the standard is based on a pair of columns denoting the start and end of a period.

                          1. 2

                            I’ve been working on a streaming time series service. The client requests a time span, a list of channels, and some resampling parameters, and my service returns resampled waveform data for each channel sent as a stream over a websocket.

                            This week I’m incorporating neural spike events, and working to get streaming input working from a neurostimulation / sensor device.
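                            The resampling step can be surprisingly simple in the basic case. A toy Python sketch of window-averaging one channel (the real service presumably low-pass filters before decimating, and streams chunks over the websocket rather than building lists):

```python
def resample(samples, factor):
    """Downsample by averaging non-overlapping windows of `factor` samples.

    A naive stand-in for whatever the real service does; proper
    pipelines filter before decimating to avoid aliasing.
    """
    out = []
    for i in range(0, len(samples) - factor + 1, factor):
        window = samples[i:i + factor]
        out.append(sum(window) / factor)
    return out

# One fake "channel" of waveform data, resampled 4x.
channel = [float(x % 8) for x in range(16)]
print(resample(channel, 4))  # [1.5, 5.5, 1.5, 5.5]
```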

                            For fun, this weekend I sat down with my daughter and made some silly pictures with the haskell Diagrams library, which I have to say, is the most elegant, well thought out API I’ve ever seen. We made a big smiley face.

                            1. 2

                              So I taught students Python, Java, HTML, CSS, and JavaScript — real programming languages that you type, not silly blocks

                              I have a hard time taking an article seriously when the author describes HTML (on its own) as a programming language. It’s a markup language. It’s not Turing complete unless you manage to hack it with CSS.

                              1. 17

                                You’re not wrong, of course. As a formalism, “real programming language” == “Turing complete” is a useful guidepost. But for the purposes of addressing the problem of how to teach kids about software, I doubt it matters much. I’m reminded of the old linguistics adage that “a language is just a dialect with an army”.

                                My kid is almost 5, and most of her computer experiences are on the iPad, which is good and bad.

                                I remember my first interactions with a computer - a floppy disk full of games in BASIC that my dad brought home from the office. It was perfect because I could play the games, and when I got bored & curious, I could look at the source code, and hack out little modifications to let me cheat. It was a great introduction.

                                So I’m a little sad that the games that she plays are total black boxes that don’t offer the same opportunity for peering behind the curtain, as it were. The article gave me a lot to think about as I try to introduce my daughter to “real” programming ideas, for various values of “real”.

                                Mainly I’m thinking about, what’s going to be most useful to her? What’s going to be fun? Where do those things intersect, and how should I present those ideas?

                                1. 6

                                  I also have a 5yo. Scratch seems to be a good first step. It provides lots of examples which can be modified, just like gorilla.bas on my dad’s PC. However, reading skills are necessary, so it is too early at the moment.

                                  I believe Jupyter notebooks with Python might be another step. It is quite interactive and images are easy. The wealth of the Python ecosystem should make it more useful for serious stuff. Maybe simulating Pokemon fights or whatever my kids are interested in by then.
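                                  Something like the Pokemon idea can fit in a dozen lines of notebook Python (all names and stats here are made up), which is about the right size for a kid to poke at:

```python
import random

def fight(a, b):
    """a and b are (name, hp, attack) tuples; returns the winner's name."""
    (name_a, hp_a, atk_a), (name_b, hp_b, atk_b) = a, b
    while hp_a > 0 and hp_b > 0:
        hp_b -= random.randint(1, atk_a)  # a strikes first
        if hp_b <= 0:
            return name_a
        hp_a -= random.randint(1, atk_b)
    return name_b

random.seed(7)  # fixed seed so reruns match while experimenting
winner = fight(("Sparky", 20, 6), ("Rocky", 25, 5))
print(winner)
```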

                                  Also, my students made some Android games to subtly teach lambda calculus.

                              1. 5

                                Why are the slot machines purely relying on pseudorandomness? There seem to be some sources of “true” randomness that they could use to seed the PRNG without requiring additional hardware, e.g. the trailing digits of the exact timing of various events (so, for instance, pressing a button or inserting a dollar bill exactly on the hour and 0.001 seconds after the hour lead to completely different results; I highly doubt any human, even an app-assisted one, is capable of being precise enough on timing to exploit that).
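                                A minimal Python sketch of that reseeding idea (the event hook is hypothetical, and Python’s Mersenne Twister is not cryptographically safe, so this only illustrates folding event-timing jitter into the seed, not a production design):

```python
import random
import time

rng = random.Random()

def on_player_event():
    # Mix the low-order bits of a high-resolution clock into the PRNG
    # state on every physical event (button press, bill insert).
    # Nanosecond-scale jitter is beyond any human's timing precision.
    jitter = time.perf_counter_ns() & 0xFFFFF  # trailing ~1 ms of timing
    rng.seed(hash((rng.random(), jitter)))

on_player_event()
spin = rng.randrange(10)  # pick one of 10 hypothetical reel outcomes
print(spin)
```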

                                1. 3

                                  In fact, since these are dedicated machines built to generate randomness, they could just use hardware that generates true randomness.

                                  Honest question: why don’t they just do that? Is it because they can’t prove the true randomness of the source?

                                  1. 2

                                    Newer machines have TRNGs but these older ones that were being exploited do not and instead rely on PRNGs. Implementing a TRNG on old machines would not be worth the hassle according to the article.

                                  2. 1

                                    Came here to ask the same question. I suspect the answer has something to do with regulations that require verifiable, deterministic behaviour. I do wish someone with experience writing slot machine code could chime in.

                                    1. 1

                                      I’m taking a wild guess here based on how this happened elsewhere. The possible reasons I see are:

                                      1. Programmer could make games but didn’t know cryptography or care about security. Just used what they learned in a programming book on random numbers.

                                      2. Cost minimization. A simple RNG can run on a dirt-cheap CPU (even an MCU). Might add a dollar or ten to the profit of each machine if this philosophy is applied throughout its development.

                                    1. 1

                                      I’m building a graph store backed by redis and fronted by a REST API. It’s going to hold some medical/neuroscience data, so the security requirements are a bit special. Eventually we’ll be running some machine learning algorithms over it to build “annotating” edges, so this is all foundational for the fun stuff that comes next.
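                                      A sketch of the data model (a plain dict stands in for the redis client so this is self-contained; with real redis these would be SADD/SMEMBERS calls on the same keys, and the key names and labels here are made up):

```python
# Adjacency sets keyed the way you might key them in redis:
#   SADD edge:<src>:<label> <dst>
store = {}

def add_edge(src, dst, label):
    store.setdefault(f"edge:{src}:{label}", set()).add(dst)

def neighbors(src, label):
    return store.get(f"edge:{src}:{label}", set())

add_edge("patient:1", "eeg:42", "recorded")
add_edge("patient:1", "mri:7", "recorded")
print(sorted(neighbors("patient:1", "recorded")))  # ['eeg:42', 'mri:7']
```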

                                      1. 1

                                        For my “old job” I’m doing medical insurance claims ETL. To get ready for my new job, I’m learning about streaming epilepsy detection for EEG data following the results here: http://ieeexplore.ieee.org/xpl/login.jsp?reload=true&tp=&arnumber=1020545&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1020545

                                        It’ll probably just be a toy (or maybe just a proof of concept) at first since I don’t know much about EEGs (yet!)
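                                        As a first toy step, the streaming shape of the problem looks something like this in Python: a rolling window whose energy is compared against a threshold. (The linked paper’s detector is far more sophisticated, using wavelet features and a trained classifier; the window size and threshold here are arbitrary.)

```python
from collections import deque

def detect(stream, window=8, threshold=4.0):
    """Yield indices where mean signal energy over the window exceeds threshold."""
    buf = deque(maxlen=window)
    alarms = []
    for i, x in enumerate(stream):
        buf.append(x)
        if len(buf) == window:
            energy = sum(v * v for v in buf) / window
            if energy > threshold:
                alarms.append(i)
    return alarms

quiet = [0.1, -0.2, 0.1, 0.0, -0.1, 0.2, 0.1, -0.1]
burst = [3.0, -3.0, 3.0, -3.0, 3.0, -3.0, 3.0, -3.0]
alarms = detect(quiet + burst)
print(alarms)
```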

                                        1. 11

                                          No mainstream language today allows you to write a library to extend its type system

                                          Hmmm, that’s the most interesting sentence in the article. There are a lot of ways to extend type systems, so I’m not sure what the author has in mind exactly. I seem to remember somebody adding some flavor of dependent types to Haskell, which would certainly qualify.

                                          1. 9

                                            Idris type providers do not exactly extend the type system, but they allow types to be defined in arbitrary new ways. One of the motivating examples is record types for a database that it connects to at compile-time.

                                            The Common Lisp Metaobject Protocol is almost the exact opposite thing, an attempt to allow libraries to define new class semantics. One of the motivating examples is implementing multiple inheritance (in a different way than it’s already implemented) to interact better with C++ libraries.

                                            Neither of these really extends the process of type checking. For example, you couldn’t implement linear types on top of either of them. That would be a fascinating area of research, but over my head for now.

                                            1. 3

                                              Idris type providers

                                              Haskellers already do this with Template Haskell, generating domain types from the database schema. F# popularized the nomenclature and technique without offering a general facility such as you have in Haskell and Idris.

                                              1. 3

                                                I think F# did a much better job offering tooling support around it–it’s very different from Template Haskell in that regard.

                                                But given the state of tooling in Haskell, all of F#’s compute-types-on-the-fly and auto-complete goodness probably hasn’t been on the radar.

                                            2. 6

                                              I’m also not sure it’s true. Common Lisp has ASDF packages to add algebraic data types and exhaustive pattern matching [0]

                                              Maybe Lisps are the exception to the rule, here? But as you say, Haskell’s type system has also been extended.

                                              1. [Comment removed by author]

                                                1. 2

                                                  agreed. (though Clojure seems to be pretty popular)

                                              1. 3

                                                 I’m building a healthcare analytics system aggregating disease/diagnosis, prescriptions/procedures, and patient demographics over a large population of (anonymized) health insurance claims. It’s not going to un-fuck the American healthcare system, but hey, more data shouldn’t hurt either.
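                                                 The basic shape of that kind of rollup, sketched with made-up claim records (the fields are hypothetical; the ICD-10 codes are real but arbitrary):

```python
from collections import Counter

# Toy anonymized claim records: count diagnoses per age band.
claims = [
    {"age_band": "40-49", "dx": "E11"},  # type 2 diabetes
    {"age_band": "40-49", "dx": "I10"},  # essential hypertension
    {"age_band": "60-69", "dx": "I10"},
    {"age_band": "60-69", "dx": "I10"},
]

by_group = Counter((c["age_band"], c["dx"]) for c in claims)
print(by_group[("60-69", "I10")])  # 2
```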

                                                1. 1

                                                  Have you looked at the stuff Docgraph has?

                                                  1. 1

                                                    Docgraph

                                                    No, I didn’t know about that! Thanks for the link. I’ve actually used many of the same reference/source datasets, though.

                                                    1. 1

                                                      Get in touch with them–they’re pretty friendly.

                                                1. 14

                                                  I hadn’t actually thought of this as on-topic here, until I realized we have the cogsci tag… :)

                                                  I have a longstanding disagreement with this position, just to admit my bias up front. And I’m aware that the below views are controversial. I don’t intend to get into any fights about them, but I want to offer them.

                                                  Also, I’ve met people who find the brain-as-computer metaphor to be deeply upsetting, emotionally. If that’s you, fair warning that I go into some detail below, and I do advise skipping the rest of this or at least making sure you feel prepared.

                                                  Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

                                                  Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?

                                                  But then we keep seeing studies like the one described in this SciAm article, beginning to investigate exactly that symbolic representation. I can’t find them right now, but I’ve seen similar news items go by with regard to visual and spatial memory.

                                                  Certainly I’ll readily agree that “algorithm” suggests a system that works more often than it fails, and that’s not how I’d describe the brain. I’ll give this author that part of the discussion.

                                                  The brain is architecturally very different from a silicon computer. It has far more in common with a distributed system than with a single CPU core; all sorts of neurological phenomena come down to things happening out of sync with each other, and I’m not sure any have ever come down to an invalid pointer. (Glossolalia is interesting to think about in that context, but there’s no reason to suspect that there’s any analogue of pointers or addresses. But surely I don’t need to explain how suggestive the nature of neurons is as far as the possible mechanisms for structured information.)

                                                  Humanity understands the brain even less than it understands the security implications of Intel’s management engine. That present lack of understanding makes it easy to get away with claims that it’s fundamentally incomprehensible, that it does do certain things that it manifestly doesn’t, and that it doesn’t do things that it manifestly does.

                                                  Surely, processing information is the one thing we can be certain the brain actually does!

                                                  1. 7

                                                    In the Scientific American article you link, people were shown dots, and depending on the number of dots, a different part of a “numerosity representation” responded.

                                                    I think whether the neural response is indicative of a representation is the sort of thing the aeon article is questioning. On the one hand we have empirical data - a stimulus elicits a neural response. On the other hand we have a theoretical framework - the researchers call this correspondence between stimulus and brain state a representation. Certainly there is a correspondence between the dots and brain activity. But is this a representation?

                                                    As an aside, one of the main reasons we talk about representation in the first place is to explain how we make inferences. To explain this, we say we have representations of objects in our mind, we can manipulate these representations, making them interact with each other according to their properties, and in this way we can infer something about the world.

                                                    Normally, when we program, to represent something we think about a type and properties of a type. What are the properties of numbers shown by this experiment? How do these number representations interact with other representations to produce reasoning? How do they interact with each other to produce addition or multiplication and so on? How do they let us reason about larger numbers? Are these correspondences present for all interactions with all numbers, or just some?

                                                    There’s no empirical answers to these questions in the article – what’s offered on the empirical side is a raw correspondence between stimulus and brain state. To call this a representation is a theoretical move on the researchers' part to explain what they’ve found. This is okay I guess, but they aren’t actually testing whether the correspondence they’ve found is a representation in the computational sense or not.

                                                    1. 2

                                                      Sorry for taking so long to reply; this deserved an answer this morning, but work…

                                                      I think the heart of what you’re saying really comes down to the same question of “what is a representation, and why does this particular definition of it matter to the original question”, which has meanwhile been discussed insightfully elsewhere in the thread. I can see that we differ on it, but I don’t have a reply that hasn’t already been made, so I’ll let this thread stay high-signal. :)

                                                    2. 2

                                                      One problem is that we don’t have consensus among the various participants on what “processing” or “information” means.

                                                    1. 8

                                                      “the lyf so short, the craft so long to lerne.”

                                                      1. 4

                                                        With a nod to Chaucer, be grateful it’s only ten years. My glassblowing instructor passed on to me that his Venetian instructor joked it takes two lifetimes to master the craft.

                                                      1. 1

                                                  Building a monster ETL pipeline with Spark and Scala. I just learned last night how to use implicits to add custom methods on DataFrames, and it really cleaned up my code. Spark is definitely more expressive and more fun to write than Hadoop jobs, but it also seems less stable (random OOM problems if you’re not super careful).