I read the original paper on modular implicits last week and really hope it makes it to the mainline branch soon. My only dislike is having to mark modules as implicit, rather than having the compiler check against all modules in scope. I suppose this makes compilation faster, but I’m wondering if it’s a case of premature optimization.
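For anyone who hasn’t read the paper, the proposed syntax (not yet in mainline OCaml, so this won’t compile with today’s compiler) looks roughly like this:

```ocaml
(* Proposed syntax from the modular implicits paper; not valid in
   mainline OCaml today. *)
module type Show = sig
  type t
  val show : t -> string
end

(* Only modules explicitly declared "implicit" take part in resolution. *)
implicit module Show_int = struct
  type t = int
  let show = string_of_int
end

(* {S : Show} is an implicit module parameter; the compiler searches
   the implicit modules in scope for one where S.t unifies with int. *)
let show {S : Show} (x : S.t) = S.show x

let () = print_endline (show 5)  (* resolves S to Show_int *)
```

The restriction being debated is exactly the `implicit` keyword on `Show_int`: without it, the module would never be considered during resolution.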
Interesting. I think I’d like that as a default as long as excluding some modules from the lookup could be an option (explicit module?).
Hi @raph, thank you for this enlightening talk and the wonderful work you (and other contributors) have put into Xi. I am a little confused about xi-mac. From your talk, it seems like the xi-mac front-end is pretty independent of the Xi core. The xi-mac GitHub page says that it uses CoreText, but from your talk it seems like you had to work around CoreText and implement your own text rendering that is fast enough, is that correct?
If xi-mac is not just a wrapper for CoreText, would it be possible to have xi-mac (or some subset of it) forked off into a general-purpose text rendering system? I think something like that could make for a very useful re-usable component that could be used for multiple editors, terminals, document creation platforms etc.
We should update the readme. It still uses CoreText for shaping, but not for painting pixels.
I can imagine splitting off the TextPane into a separate module, depending on whether other applications would find it useful. Certainly it would be possible to use it for a terminal, but alacritty exists and iTerm2 has a new Metal renderer, so I’m not sure who exactly would use it.
Wouldn’t such a component be useful for anything that draws large amounts of text? RSS or PDF readers, Email clients, Markdown previewers, WYSIWYG document editors?
I’m curious about how prospective employers outside web-frontend/app development would evaluate someone with her skillset and experience. First, let me add the disclaimer that I am a researcher and have never had, or hired for, a “traditional” programming job.
My hypothesis is that candidates with Computer Science (or equivalent) degrees from reputable universities can be expected to have some “stock” skills: basic to moderate programming skills, knowledge of data structures and algorithms, some systems programming experience, and perhaps some more specialized knowledge in webdev or graphics or compilers or machine learning etc. depending on what higher level courses they took. As a potential employer, I can be fairly confident that such a candidate could pick up whatever the current hip technology is in a couple weeks and work on a variety of projects across my system. With the proper initial training and guidance, such a candidate could work on a front-end app, or a back-end server, or maybe even on mission critical parts of my distributed system (under the guidance of a senior engineer), depending on how good they are and what prior experience they have.
By contrast, for someone like the author, unless my project specifically involves JavaScript, Angular, or ReactJS, I wouldn’t be comfortable hiring them, or I would have to put a lot of time and money into training them. A university degree has done far more training and evaluation of a potential employee than a bootcamp has, or than I can easily glean from a GitHub resume.
There’s no reason that people can’t be self-taught at systems/data structures/algorithms topics, and while the author might not have that experience, I think that there are many people without degrees that do. That being said, showing that off in a résumé screen or interview is probably more important for candidates without CS degrees.
I can be fairly confident that such a candidate could pick up whatever the current hip technology is in a couple weeks and work on a variety of projects across my system.
I think people often underestimate the depth of knowledge that’s possible in things like this. React is extremely complicated, and while I’m sure I could throw together a webapp using it in a day or so, knowing all of the tools/patterns/idioms available, and knowing how to effectively debug and diagnose problems is a skill that I think would take a lot longer to learn. I’ve definitely made statements along these lines before towards webapp development/react/etc, but the fact is, frameworks are skills just like any other, and there’s no reason to expect that you can learn all of a large/complicated framework in a couple weeks.
I like the list of examples, but I think there are two main points that get buried which the author should have brought out. First, for many things, a command line (or at least a command line with some kind of suggestion facility) is a superior interface to a GUI.
Second, since modern operating systems aren’t designed to explicitly support command line interfaces, apps are left to do it on their own, which results in a hodgepodge of partial solutions. Take, for example, the author’s final remark about how the key bindings in various apps conflict. The issue of which keybindings trigger which actions should be disjoint from the actions themselves (and should be completely user-customizable).
It would be great if OSs supported the Emacs notion of keymaps, so that you could have layers of keybindings and a clear way to define your own, rather than letting individual apps have a free-for-all. Spacemacs is an example of a (partially) integrated system that gets a rich command-line interface right.
Indeed I personally feel that the future of social media is decentralized. I’ve been enjoying Mastodon quite a bit.
I wish I had the wherewithal to build the capability to view the toot streams of different instances into the client - that would make it PERFECT :)
Yeah, I’m all for experiments and interesting concepts, but I would really love to see FLOSS developers put more time and energy into supporting a decentralized social network like Mastodon or Diaspora. I’m sure there are lots of great and interesting new clients and interfaces we can build on top of Mastodon, rather than spawning yet another niche social network.
Interesting, I’ll give it a go. I know for a fact that people are going to gun it down because it’s electron though.
I mean… it’s a web browser running in a web browser. We’re in, “Yo dawg, I heard you like Chrome tabs, so I put a Chrome tab in your Chrome tab so you can consume memory while you consume memory.”
Fun fact: Servo is currently also like this. The official Servo binary renders only a single page, and the GUI is implemented as browser.html. But someone already made a Cocoa-based GUI :D
This is interesting. I’ve been wondering what would happen if a browser were better integrated with the operating system rather than being standalone monoliths. Personally, I don’t like the apps-in-browsers model and would prefer to see services heading back to standalone apps with the browser used mostly for browsing. It would be nice to have things like passwords, messenger accounts, etc. be handled by the operating system. The OS could handle logging into things, and then you could just fire up a single browser window to look up URLs and webpages as needed. Having a lightweight renderer that focuses on quickly rendering a single page would be great for this.
That was kind of the dream of Nautilus, wasn’t it? But if electron seems sluggish today, you can imagine how well this played out in 2001.
The dream of the browser for the web and files was realized by Windows 98. Turns out it wasn’t a great idea after all.
I guess the alternative is stripping down Chromium or Firefox’s source, or writing an interface around either of their engines. If you think of it as Chromium with a rebuilt UI, I guess Electron makes a little sense, as it’s already done the stripping and documented/exposed how to build on what’s left.
Hasn’t the problem of “high-level language that compiles to fast programs” already been solved by optimizing compilers like MLton?
I think Standard ML ought to be considered a contender. It’s also a very small language, but can be compiled to extremely efficient code. Its principal advantage is that there is a complete formal semantics for it—there is no “undefined behavior.” There isn’t a lot of passion for it these days, but I often wonder if things would be nicer if it had caught on. I really like the module and functor system.
I think most of Standard ML’s good ideas are in more widespread use in the form of OCaml. Standard ML does have a full program optimizing compiler in the form of MLton, but I don’t think there’s an equivalent for OCaml. That being said, I’m not sure OCaml is as performant as C, and it’s also started to gather some cruft as it gets older. It would be interesting to see a “fresh start” ML-style language designed for this day and age.
Not quite whole program optimization but kinda close is the flambda version of the OCaml compiler. What would you consider cruft in OCaml? In my experience the cruftiest parts are actually the oldest parts, the newer things are quite alright.
I think that the functor syntax is a little messy, especially for multi-argument functors. Relatedly, the new support for first-class modules is great, but it’s a little hard to figure out the first time around. I also really wish that there was support for first class constructors (I think Standard ML has this), and though the new ppx stuff is really cool, I think the resulting extensions can be a little ugly.
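To make that concrete, here is a sketch in standard OCaml (module names made up for illustration) showing both the multi-argument functor syntax and the annotations first-class modules require:

```ocaml
module type Ord = sig
  type t
  val compare : t -> t -> int
end

(* Multi-argument functor: each parameter needs its own
   parenthesized (Name : Sig) group, which gets verbose fast. *)
module MakePair (A : Ord) (B : Ord) = struct
  type t = A.t * B.t
  let compare (a1, b1) (a2, b2) =
    match A.compare a1 a2 with
    | 0 -> B.compare b1 b2
    | c -> c
end

module IntOrd = struct
  type t = int
  let compare = compare
end

(* First-class modules: packing and unpacking both need explicit
   (module ...) forms plus "with type" constraints, which is the
   part that tends to confuse people the first time around. *)
let packed : (module Ord with type t = int) = (module IntOrd)

let cmp (type a) (module M : Ord with type t = a) (x : a) (y : a) =
  M.compare x y

let () = assert (cmp packed 1 2 < 0)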
Hmm, now that you mention it, these are all valid points, thanks. Curiously, even OCaml’s predecessor, Caml Light, had first-class constructors. For OCaml there are a number of “solutions” to this, but the fact that there are multiple already illustrates that they are missing from the core language.
[Comment removed by author]
Maybe http://www.ats-lang.org/ ?
It allows not only functional programming. Here is the “What is ATS good for?” section:
ATS can greatly enforce precision in practical programming.
ATS can greatly facilitate refinement-based software development.
ATS allows the programmer to write efficient functional programs that directly manipulate native unboxed data representation.
ATS allows the programmer to reduce the memory footprint of a program by making use of linear types.
ATS allows the programmer to enhance the safety (and efficiency) of a program by making use of theorem-proving.
ATS allows the programmer to write safe low-level code that runs in OS kernels.
ATS can help teach type theory, demonstrating both convincingly and concretely the power and potential of types in constructing high-quality software.
Are the dependent types really that readable to people who have a hard time with C? I wouldn’t suggest it for ease of use unless I were talking to someone who had worked with theorem provers and was complaining about the speed of extracted code.
[Comment removed by author]
Probably even more so for 8-bit coders who might not imagine a world past assembly or C. Much less ATS on their hardware.
So there’s a lot of fine print on this.
Let’s compare Haskell or OCaml to C and Python. The compiled FP languages are all going to be less code than C and faster than Python. I would argue that they are likely to perform similarly to the C code, and they are likely to be less code than Python as well. I think most of the cost of FP is up-front: you have to learn it and figure out how to apply it. You can definitely make programs that outperform C, and it will usually be less work than writing them in C. The tradeoffs are already set in a good place.
I think people tend to overestimate C’s performance. First, C is not the performance ceiling of all programs. Many programs are amenable to cheap parallelization, which is difficult to do in C but easy in Go or Haskell. Second, to write C is itself a lot of work. Getting to the same level of completion with C is often more work than getting there in other languages. The time you save getting to working with OK performance is time you can spend on improving performance of a working program with other languages.
What do you do if you get to completion with C and the program still doesn’t perform? You have to tear out a substantial chunk of C and rewrite it. But if you profile your code (which you should do anyway) you will probably find that only a small chunk of it actually has performance ramifications. This is why there are programs that are ostensibly written in Python that perform so well. It’s easy to profile the code, find the critical section, move that section out to C and use it as a library from some other language. Well, you can do that from OCaml or Haskell too. And often it is that glue code that unites your critical section with the real world, that is both tedious to write in C and error prone, as well as unlikely to matter from a performance standpoint.
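A minimal Python sketch of that workflow (function names are hypothetical): `cProfile` points straight at the hot inner loop, which is the only piece worth moving to C.

```python
# A sketch of the "profile first, then offload the hot spot" workflow.
import cProfile
import io
import pstats

def dot(xs, ys):
    # Pure-Python inner loop: the likely hot spot.
    return sum(x * y for x, y in zip(xs, ys))

def simulate():
    xs = list(range(10_000))
    ys = list(range(10_000))
    total = 0
    for _ in range(100):
        total += dot(xs, ys)
    return total

profiler = cProfile.Profile()
result = profiler.runcall(simulate)

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
# The stats show `dot` dominating the cumulative time: that one small
# function is the candidate for a C (or OCaml, or Haskell) rewrite,
# while all the surrounding glue code can stay where it is.
```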
According to mre_ on chat, it’s Kefa.
This looks really great. I love software that makes us better and actually helps in our jobs, rather than getting in our way. I see there’s a desktop version ($20) for macOS and Windows. I’m seriously considering buying it, but I would love it even more if there were some way to integrate it with Emacs or Vim.
Another great service is Grammarly; it helps me a lot with my writing.
If you feel like using it, this is a referral link: http://gram.ly/x6MF
It will send you a weekly email with an interesting report about your writing.
I am not the best communicator in English text, and when I’ve used Macs in the past, Hemingway has really helped me be more clear. Now that I’m on Linux, readers of my writing suffer x_x
Maybe I should use WINE…
The author of the book is also the designer of the Triplicate typeface, which is beautiful. I cannot, however, bring myself to spend $89 for a font when Go Mono and Inconsolata are free. I know. I’m a bad person.
That being said, paying $99 for the book gets you a Triplicate font license, which I’m more willing to do. The book is worth paying for and there aren’t good free alternatives.
Triplicate is beautiful, but I don’t think I could ever use it for typesetting code; it’s almost too beautiful for that. I’ve been using Fira Code Mono for my code lately and loving it.
Triplicate is very nice but I do tend to switch my monospaced font periodically (typically when a new hotness catches my fancy), which puts me off spending any money on one. Like @basus, I’m also currently using Fira Code Mono :)
I personally find ML-style modules to be a stricter, but stronger abstraction for libraries than typeclasses.
Also, I don’t quite understand the final point in the post: needing to look up viewSomething to understand viewCity etc. Isn’t the whole point of writing modular code that you don’t have to really care about what the implementation details are? If you need to understand the implementation entirely (as you might for a rendering function), having typeclasses vs modules vs some other abstraction mechanism becomes mostly irrelevant.
Yeah, maybe I didn’t explain that last bit so well. I think my point was that you can already write things in a type-class style (by ensuring each record has a contract of record -> Html msg), but that style doesn’t give you much extra in terms of readability or boilerplate. Boilerplate is the problem most people are facing with Elm right now - and introducing typeclasses may be either harmful or overkill for that problem.
It’s been discussed, and actually effect-modules have a syntax inspired by OCaml’s parameterised modules.
It seems like we’re conflating two separate issues: (1) the value of postings that are not related to computer science and related technologies and (2) the posting of clickbait, press releases and the like instead of links to high-quality “original” sources. Given that one of the top-ten articles on the first page right now is essentially a press release about Google’s burrito delivery drones, problem (2) doesn’t seem to be contained to the ‘science’ tag. More generally, many of the original observations about why science articles seem problematic could easily apply to other tags as well.
I would personally like to see better moderation and screening of clickbait/press release style posts. If we’re going to keep art posts because “they’re simply delightful and edifying”, that should be justification enough to keep the science tag as well.
Like some other people, my definition of “spare time” is a little fuzzy. When I’m not actually working on code or papers, I try to work out 3-5 times a week and cook most of my meals at home. I’ve also been trying to be a better adult by keeping my apartment in good condition and making sure everything’s neat and tidy. I’m not sure I’d call that spare time though; I think of it as “maintenance time”.
Some of my recreational time I spend reading. I got back into graphic novels a few years ago when my roommate lent me Superman: Red Son. I also try to stay up to date with the New Yorker, though I rarely read the whole thing. I spend a decent chunk of time socializing, though it’s often at the same time as meals, or movies, or outdoors activities.
As someone who doesn’t use Macs anymore, I’m happy to see more cross-platform apps and less Mac-native ones :P
But seriously: why are these people saying that just because there was a flaw in the sandbox, the sandbox should be abandoned completely and all desktops should go back to what’s basically the X11 “security” “model”?!
You can do everything that could shoot the user in the foot without actually shooting them in the foot. It’s not hard to provide access-controlled APIs for all the things. And the Mac is already good at this. They’ve had the Accessibility API for years – manipulating windows and stuff requires the user to grant permission. XPC (if I remember the name correctly) for opening files and stuff is a great idea.
The Unix world is finally catching up, with freedesktop D-Bus Portals (currently used by Flatpak). And D-Bus can also be used – you guessed it – to request screenshots. The window system can display its own UI that makes it clear to the user that a screenshot is being taken.
I agree. I found this post hard to follow. I don’t know the details of the Mac sandbox technology, but it’s quite a jump from needing to be able to do interesting, complicated things to abandoning a proper security model altogether. I think Android (or at least OnePlus’ OxygenOS version) actually has a good example of this: apps are isolated and sandboxed (I think), but things like the global sharing menu make interaction between apps very easy and seamless. Of course, there are some apps (like Facebook) that don’t use the system menus, but there are going to be some bad actors on any platform.
I think the core point the author is making is not that sandboxes should be dropped because of this security issue, but because the overlap of “things the sandbox allows apps to do” and “things I want my apps to do” is really small, probably too small to cover many features people want to use.