1. 5

    Also, stop with the flat and “clean” design. If there’s something your users are supposed to click on, make it look like something you can click on. Make links that look like links, buttons that look like buttons, etc. Even Lobsters fails at this: there’s a menu at the top of the page, but it doesn’t look anything like a menu; it’s just a horizontal line of gray words.

    1. 3

      Um… those gray words are all just links to other pages. No hamburger menus on Lobsters!

      1. 1

        Also, the words themselves make a user think they might be menu options. Then the user hovers over them and sees the link icon appear. There is an “investigate” step there, versus links that are obviously links, which is a usability loss. I don’t think the loss is significant, though, given the nature of our community. We’re technologists and explorers. Heck, the whole point of the site was coming to look for good links. :)

        1.  

          Still, feedback as simple as “reduce opacity or add an underline on hover” would go a long way toward showing the user there’s an interaction “here”.

          1.  

            Submit a pull request? https://github.com/lobsters/lobsters

            1.  

              Didn’t know that was an option (well, I never looked into it anyway).

              I’ll keep it in mind for when I find time to do so, thanks.

        2.  

          If it changes state on the server, make it a button. Otherwise make it a link.

        1. 6
          1.  

            Well, so one of my Berlin Rust Hack & Learn regulars is porting rustc to GNU Hurd. I can switch soon; year of the desktop is 2109.

            1.  

              The fact that I can’t tell if this is a joke or a typo makes it a better joke.

              1.  

                Both. I made the typo and decided it’s too good to be fixed.

            2. 2

              QNX, Minix 3, or Genode get you more mileage. At least two have desktop environments, too. I’m not sure about Minix 3 but did find this picture.

              1.  

                If I remember correctly, Haiku also has a microkernel.

                1.  

                  I thought BeOS was a microkernel, based on what so many people said. waddlespash of Haiku countered me, saying it wasn’t. That discussion is here.

                  1.  

                    Haiku has a hybrid kernel, like Mac OS X or Windows NT.

                  2. 1

                    Don’t MacOS and iOS both use variants of the Mach microkernel?

                    1. 4

                      They’re what’s called hybrid kernels. They have too much running in kernel space to really qualify as microkernels. Using Mach was probably a mistake. It’s the microkernel whose inefficient design created the misconceptions we’ve been countering for a long time. Plus, if you have that much in the kernel, you might as well just use a well-organized, monolithic design.

                      That’s what I thought for a long time. CompSci work on both hardware and software has created many new methods that might have implications for hybrid designs. Micro vs. something in between vs. monolithic is worth rethinking hard these days.

                      1. 5

                        That narrative makes it sound like they took Mach and added BSD back in until it was ready, when in fact Mach started as an object-oriented kernel with an in-kernel BSD personality, and that was the kernel NeXT took, along with CMU developer and Mach lead Avie Tevanian.

                        That was Mach 2.5. Mach 3.0 was the first microkernel version of Mach, and that’s the one GNU Mach is based on. Some code changes were backported to the XNU and OSFMK kernels from Mach 3.0, but they were always designed and implemented as full BSD kernels with object-oriented IPC, virtual memory management and multithreading.

                        1.  

                          Yeah, I didn’t study the development of Mach. Thanks for filling in those details. That they tried to trim a bigger OS down into a microkernel makes its failure even less surprising.

                          1.  

                            I don’t follow the reasoning; what failed? They didn’t fail to make a microkernel BSD, as Mach 3 is that. They didn’t fail to get adoption, and indeed it’s easier when you’re compatible with an existing system.

                            1.  

                              They failed in many ways:

                              1. Little adoption. XNU is not Mach but incorporates it, whereas the Windows, Linux, and BSD kernels are used directly by large install bases.

                              2. So slow as a microkernel that people wanting microkernels went with other designs.

                              3. Less reliable than some alternatives under fault conditions.

                              4. Less maintainable than L4- and KeyKOS-based systems, in areas such as easy swapping of modules.

                              5. Due to its complexity, every attempt to secure it failed. Reading about Trusted Mach, DTMach, DTOS, etc. is when I first saw this. All they did was talk trash about the problems they had analyzing and verifying it vs. other systems of the time like STOP, GEMSOS, and LOCK.

                              So, it was objectively worse than competing designs, then and later, in many attributes. It was too complex, too slow, and not as reliable as competitors like QNX. It couldn’t be secured to high assurance, either ever or at least not for a long time. So, it was a failure compared to them. It was a success if the goal was to generate research papers/funding, give people ideas, and make code someone might randomly mix with other code to create a commercial product.

                              It all depends on your viewpoint on, or requirements for, the OS you’re selecting. It failed mine. Microkernels + isolated applications + user-mode Linux are currently the best fit for my combined requirements. OKL4, INTEGRITY-178B, LynxSecure, and GenodeOS are examples implementing that model.

                      2. 3

                        Yes, but with most of a BSD kernel stuck on and running in the same address space. https://en.wikipedia.org/wiki/XNU

                    1. 7

                      Bad idea; it should error or give NaN.

                      “1/0 = 0 is mathematically sound”

                      It’s not mathematically sound.

                      a/b = c should be equivalent to a = c*b

                      This fails with 1/0 = 0, because 1 is not equal to 0*0.

                      Edit: I was wrong; it is mathematically sound. You can define x/0 = f(x) for any function f of x at all. All the field axioms still hold, because they all have preconditions that ensure you never look at the result of division by zero.

                      There is a subtlety, because some people say (X) and others say (Y):

                      • (X) a/b = c should be equivalent to a = c*b when the LHS is well defined

                      • (Y) a/b = c should be equivalent to a = c*b when b is nonzero

                      If you have definition (X) in mind, it becomes unsound; if you are more formal and use definition (Y), then it stays sound.
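
                      To make the two readings concrete, here is one way to write them down (my formalization, not the post’s):

                      ```latex
                      % (X): broken by defining 1/0 = 0, since a/b is then "well defined"
                      % even for b = 0, yet 1 = 0*0 is false.
                      \[
                      \text{(X)}\qquad a/b = c \iff a = c \cdot b \quad \text{whenever } a/b \text{ is well defined}
                      \]
                      % (Y): still holds with 1/0 = 0, because the equivalence is only
                      % asserted for b \neq 0, so the value of x/0 is never consulted.
                      \[
                      \text{(Y)}\qquad a/b = c \iff a = c \cdot b \quad \text{whenever } b \neq 0
                      \]
                      ```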

                      It seems like a very bad idea to make division well defined but have the expected algebra rules not apply to it. This is the whole reason we leave it undefined or make it an error. There isn’t any value you can give it that makes algebra work with it.

                      It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.

                      1. 14

                        I really appreciate your follow-up about you being wrong. It is rare to see, and I commend you for it. Thank you.

                        1. 8

                          This is explicitly addressed in the post. Do you have any objections to the definition given in the post?

                          1. 13

                            I cover that exact objection in the post.

                            1. 4

                              “It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.”

                              That was my initial reaction too. But I don’t think Pony’s intended use case is numerical analysis; it’s for highly parallel low-latency systems, where there are other (bigger?) concerns to address. They wanted to have no runtime exceptions, so this is part of that design tradeoff. Anyway, nothing prevents the programmer from checking for zero denominators and handling them as needed. If you squint a little, it’s perhaps not that different from the various conventions on truthy/falsey values that exist in most languages, and we’ve managed to accommodate those.

                              1. 4

                                Those truthy/falsey values are a frequent source of errors.

                                I may be biased in my dislike of this “feature”, because I cannot recall a time when 1/0 = 0 would have been useful in my work, but I have no difficulty whatsoever thinking of cases where truthy/falsey caused problems.

                              2. 4

                                1/0 is integer math. NaN is available for floating-point math, not integer math.

                                1. 2

                                  “It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.”

                                  I wonder if someone making a linear math library for Pony has already faced this. There are many operations that might divide by zero, and you will want to let the user know if they divided by zero.

                                  1. 7

                                    It’s easy for a Pony user to create their own integer division operation that will be partial. Additionally, a “partial division for integers” operator has been in the works for a while and will land soon. It’s part of a set of operators that will also error if you have integer overflow or underflow. Those will be +?, /?, *?, -?.

                                    https://playground.ponylang.org/?gist=834f46a58244e981473c0677643c52ff
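
                                    For comparison, OCaml’s built-in integer division is already partial in this sense: 1 / 0 raises Division_by_zero at runtime. A total, option-returning variant along the lines discussed above might look like the sketch below (the /? name is borrowed from the Pony proposal; it is not a standard OCaml operator):

                                    ```ocaml
                                    (* Built-in (/) raises Division_by_zero on a zero denominator.
                                       A total variant surfaces the failure in the type instead: *)
                                    let ( /? ) a b = if b = 0 then None else Some (a / b)

                                    let () =
                                      match 10 /? 0 with
                                      | None -> print_endline "division by zero"
                                      | Some q -> Printf.printf "quotient: %d\n" q
                                    ```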

                                1. 2

                                  I was wondering when something “practical” would come of sandsifter! Nice find.

                                    1. 4

                                      I’m a little disappointed that the author, whose username starts with “haskell” and whose profile says “I love types and monads” didn’t make use of https://github.com/matterhorn-chat/matterhorn .

                                      1. 1

                                        I have an intuition that a Prolog program works like a SAT solver; is this an accurate view?

                                        1. 4

                                          Pretty close… the technical differences are mostly about using heuristics. Here’s a nice paper about implementing a reasonably-efficient toy SAT solver in Prolog:

                                          1. 1

                                            A while back, hwayne submitted this article on how SAT works, with code examples in Racket Scheme. It was one of the better ones I’ve seen. You might want to start with a primer on propositional logic first, though. There are lots of them to Google/DuckDuckGo for.
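
                                            For a taste of what the simplest solvers actually do, here is a minimal DPLL sketch (OCaml rather than Racket, DIMACS-style integer literals; an illustration of the classic algorithm, not code from that article):

                                            ```ocaml
                                            (* A clause is a list of nonzero ints (negative = negated variable);
                                               a formula is a list of clauses, as in the DIMACS convention. *)

                                            (* Assume [lit] is true: drop satisfied clauses, strip the opposite literal. *)
                                            let assign lit clauses =
                                              List.filter_map
                                                (fun c ->
                                                  if List.mem lit c then None
                                                  else Some (List.filter (fun l -> l <> -lit) c))
                                                clauses

                                            let rec dpll clauses =
                                              if clauses = [] then true                  (* every clause satisfied *)
                                              else if List.mem [] clauses then false     (* empty clause: conflict *)
                                              else
                                                match List.find_opt (fun c -> List.compare_length_with c 1 = 0) clauses with
                                                | Some (lit :: _) -> dpll (assign lit clauses)   (* unit propagation *)
                                                | _ ->
                                                    let lit = List.hd (List.hd clauses) in       (* branch on a literal *)
                                                    dpll (assign lit clauses) || dpll (assign (-lit) clauses)

                                            (* (x1 or x2) and (not x1 or x2) and (not x2) is unsatisfiable: *)
                                            let () = Printf.printf "%b\n" (dpll [ [1; 2]; [-1; 2]; [-2] ])
                                            ```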

                                          1. 13

                                            There’s a quote I like that I can’t remember from where:

                                            Thirty years ago “it reduces to 3SAT” meant the problem was impossible. Now it means the problem is trivial.

                                            1. 2

                                              I wrote something vaguely like that a few years ago, though I’m sure I wasn’t the first to observe it:

                                              SAT, the very first problem to be proven NP-complete, is now frequently used in AI as an almost canonical example of a fast problem

                                              1. 1

                                                Why is that? Because computers are much faster, or better algorithms?

                                                1. 3

                                                  We have faster hardware and better algorithms now, yes. But the real reason is that early theoretical results, which emphasized worst-case performance, had scared the entire field off even trying. Those results based on complexity classes are true, but misleading: as it turns out, the “average” SAT instance for many real-world problems probably is solvable. Only when this was recognized could we make progress on efficient SAT algorithms. Beware sound theories misapplied!

                                              1. 18

                                                I suppose I know why, but I hate that D is always left out of discussions like this.

                                                1. 9

                                                  And Ada; heck, D has it easy compared to Ada :)

                                                  1. 5

                                                    Don’t forget Nim!

                                                  2. 3

                                                    Yeah, me too. I really love D. Its metaprogramming alone is worth it.

                                                    For example, here is a compile-time parser generator:

                                                    https://github.com/PhilippeSigaud/Pegged

                                                    1. 4

                                                      This is a good point. I had to edit out a part about how a language without major adoption is less suitable, since it may not get the resources it needs to stay current on all platforms. You could have the perfect language, but if it somehow failed to gain momentum, it turns into somewhat of a risk anyhow.

                                                      1. 4

                                                        That’s true. If I were running a software team and were picking a language, I’d pick one that appeared to have some staying power. With all that said, though, I very much believe D has that.

                                                      2. 3

                                                        And OCaml!

                                                        1. 10

                                                          In my opinion, until OCaml gets rid of its GIL, which they are working on, I don’t think it belongs in this category. A major selling point of Go, D, and Rust is their ability to easily do concurrency.

                                                          1. 6

                                                            Both https://github.com/janestreet/async and https://github.com/ocsigen/lwt allow concurrent programming in OCaml. Parallelism is what you’re talking about, and I think there are plenty of domains where single-process parallelism is not very important.
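
                                                            For instance, a minimal Lwt sketch of single-threaded concurrency (task names and delays made up for illustration):

                                                            ```ocaml
                                                            (* Two tasks interleave on one OS thread: Lwt_unix.sleep yields to
                                                               the scheduler instead of blocking, so "b" finishes before "a". *)
                                                            open Lwt.Infix

                                                            let task name delay =
                                                              Lwt_unix.sleep delay >>= fun () ->
                                                              Lwt_io.printlf "%s done after %.1fs" name delay

                                                            let () = Lwt_main.run (Lwt.join [ task "a" 0.2; task "b" 0.1 ])
                                                            ```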

                                                            1. 2

                                                              You are right. There is Multicore OCaml, though: https://github.com/ocamllabs/ocaml-multicore

                                                          2. 1

                                                            I’ve always just written off D because of the problems with what parts of the compiler are and are not FOSS. Maybe it’s more straightforward now, but it’s not something I’m incredibly interested in investigating, and I suspect I’m not the only one.

                                                          1. 2

                                                            I still wish there was something equivalent to this on *nix. I mean, urxvt is fine and all, but…

                                                            1. 1

                                                              Alacritty is what you’re looking for: https://github.com/jwilm/alacritty

                                                              1. 2

                                                                Or if you want something more mature and featureful, less Rusty bleeding-edge, try Kitty: https://github.com/kovidgoyal/kitty

                                                              1. 1

                                                                …and csvkit.

                                                              1. 4

                                                                It boots Linux, but slowly. If you have Bluesim, a Verilog-based simulator, or an FPGA to play with, you may want to check out the tools and processor designs at https://github.com/bluespec.

                                                                1. 4

                                                                  That’s a very long-winded account of finding “goto error” in two different programs.

                                                                  1. 4

                                                                    Just another confirmation of Betteridge’s law.

                                                                    1. 4

                                                                      That’s like saying your blog posts are a time-consuming way to display certificate warnings on others’ screens. It leaves out some detail people might enjoy reading.

                                                                      The main thing that was interesting to me is how he identifies or refutes whether something was copied.

                                                                    1. 14

                                                                      I disagree, because that will only lead to a morass of incompatible software. You refuse for your software to be run by law enforcement, he refuses for his software to be run by drug dealers, I refuse for my software to be run by Yankees — where does it all end?

                                                                      It’s a profoundly illiberal attitude, and the end result will be that everyone would have to build his own software stack from scratch.

                                                                      1. 5

                                                                        Previous discussions on reddit (8 years ago) and HN (one year ago).

                                                                        1. 4

                                                                          “It’s a great way to make sure proprietary software is always well funded and has congress/parliament in their corner.” (TaylorSpokeApe)

                                                                        2. 1

                                                                          I don’t buy the slippery-slope argument. There are published codes of ethics for professional software people, e.g. from the BCS or ACM, that may make good templates for what constitutes ethical activity within which to use software.

                                                                          But by all means, if you want to give stuff to the drug dealing Yankee cop when someone else refuses to, please do so.

                                                                          1. 9

                                                                            Using one of those codes would be one angle to go for ethical consensus, but precisely because they’re attempts at ethical consensus in fairly broad populations, they mostly don’t do what many of the people wanting restrictions on types of usage would want. One of the more common desires for field-of-usage restriction is, basically, “ban the US/UK military from using my stuff”. But the ACM/BCS ethics codes, and perhaps even more their bodies’ enforcement practices, are pretty much designed so that US/UK military / DARPA / CDE activity doesn’t violate them, since it would be impossible to get broad enough consensus to pass an ACM code of ethics that banned DARPA activity (which funds many ACM members’ work).

                                                                            It seems even worse if you want an international software license. Even given the ACM or BCS text as written, you would get completely different answers about what violates it or doesn’t, if you went to five different countries with different cultures and legal traditions. The ACM code, at least, has a specific enforcement mechanism defined, which includes mainly US-based people. Is that a viable basis for a worldwide license, Americans deciding on ethics for everyone else? Or do you take the text excluding the enforcement mechanism, and let each country decide what things violate the text as written or not? Then you get very different answers in different places. Do we need some kind of international ethics court under UN auspices instead, to come up with a global verdict?

                                                                            1. -10

                                                                              I had a thought to write software so stupid no government would use it, but then I remembered Linux exists

                                                                            2. 4

                                                                              It’s not a slippery slope. The example in the OP link would make the software incompatible with just about everything other than stuff of the same license or proprietary software. An MIT project would be unable to use any of the code from a project with such a rule.

                                                                          1. 10

                                                                            Seems like the hunt for these things is regressing to the ’90s days of organisations like the IDSA ( https://cs.stanford.edu/people/eroberts/cs201/projects/copyright-infringement/emulationanti.html ), when such takedowns were a common thing.

                                                                            Now, “disneyright” seems to have zero chance of returning to anything sensible, i.e. a time-limited monopoly to give the author a fair chance to profit from his work before it is forcibly elevated into the public domain for the benefit of current and future culture.

                                                                            Meanwhile, the “corporation sanctioned alternatives” (various ‘virtual console’ stores, including the one by Nintendo) have proven to be extremely volatile and customer-unfriendly.

                                                                            I have a faint hope that developments like this instill enough ‘disobedience’ to foster new piracy tools for discovering, sharing and curating emulation and related assets (including derivative work like gameplay streaming) - without ad-parasites or the unreliability and user-unfriendliness of torrents.

                                                                            1. 14

                                                                              To me, the ironic thing here is that old ROMs were abandonware and not commercially available for quite a while before Nintendo discovered how popular they were. Only then did they start their virtual console store. (Note: most of these NES titles were never owned by Nintendo, and in many cases the entities that did hold the copyrights are long defunct. But let’s just keep talking about NES Zelda, as available on the current 3DS VC store…)

                                                                              And that’s why their “NES Classic” console is essentially a Raspberry Pi running an emulator. Very nice of the community to do all that development work for the brand owner to profit from!

                                                                              1. 11

                                                                                Even their earlier efforts, the virtual console, had iNES headers in the ROMs they were using in their “inspired” emulators. What are the chances that they preserved dumps and maintained their in-house emulators (they did have those) themselves, vs. outsourcing the job to some firm that took whatever dumps they could find and repurposed an open-source emulator?

                                                                                Sidenote: I’ve been on the preservation side of emulation since about the mid-’90s. I restore pinball and arcade machines as a hobby and have quite a big emotional attachment to the ‘culture’ from that era. As such, I happily throw in both money and code to projects like MAME and ‘the dumping union’ (procuring rare and dying arcade PCBs and dumping them for the MAME devs to take over). It pains me dearly that there is no valid documentation (3D models, …) of now dead and dying arcade cabinets and other artifacts of that era.

                                                                            1. 3

                                                                              Thanks! My remark was sort of an idle complaint; what a pleasant surprise to see a real study that more or less confirms my unscientific impression. For what it’s worth, in my experience the language pragma combinatorics don’t really contribute inordinately much to the Haskell learning curve, in comparison with all the other fancy abstractions and idioms. It’s a complex language, with or without the pragmas.

                                                                              Looking at your results, I have a follow-on question about how frequently these extensions occur together. Are there prominent clusters of extensions? I’m thinking of constructing a weighted graph with pragmas as vertices and an edge for each co-occurrence in a source file.
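
                                                                              If it helps, here is a sketch of the counting step (OCaml; it assumes files holds one list of enabled extension names per source file, and the extension names in the example are just placeholders):

                                                                              ```ocaml
                                                                              (* Edge weights for the co-occurrence graph: for each unordered pair
                                                                                 of extensions enabled in the same file, count the files sharing them. *)
                                                                              module PairMap = Map.Make (struct
                                                                                type t = string * string
                                                                                let compare = compare
                                                                              end)

                                                                              let co_occurrence (files : string list list) =
                                                                                List.fold_left
                                                                                  (fun acc exts ->
                                                                                    let exts = List.sort_uniq compare exts in
                                                                                    List.fold_left
                                                                                      (fun acc a ->
                                                                                        List.fold_left
                                                                                          (fun acc b ->
                                                                                            if a < b then
                                                                                              PairMap.update (a, b)
                                                                                                (function None -> Some 1 | Some n -> Some (n + 1))
                                                                                                acc
                                                                                            else acc)
                                                                                          acc exts)
                                                                                      acc exts)
                                                                                  PairMap.empty files

                                                                              let () =
                                                                                co_occurrence [ ["GADTs"; "TypeFamilies"]; ["GADTs"; "TypeFamilies"; "CPP"] ]
                                                                                |> PairMap.iter (fun (a, b) n -> Printf.printf "%s -- %s: %d\n" a b n)
                                                                              ```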

                                                                              1. 2

                                                                                “The important detail is that in all three of these areas [of security] we have not only been fanatical, but pretty much first.”

                                                                                Your arrogance precedes you, Sir! Nah, he was about forty years behind on doing the kinds of things he described as “first.” Even for UNIXes, several security-focused projects (1980s, maybe) and one commercial product (Trusted Xenix, 1990-1994) preceded them. On the security side, the main difference between those projects’ methods and OpenBSD’s is that the former address root causes to provably eliminate problems while the latter applies probabilistic mitigations that may or may not work. The reasons for the latter are that they take less development effort and have a lower performance cost on prevalent, insecure CPUs. This means OpenBSD trades away security for performance and increased compatibility in those cases. It will always be behind the methods in high-assurance security but probably ahead in adoption. The fact that it’s FOSS vs. closed source like most high-security vendors will help it keep the lead on that.

                                                                                “Someone on wikipedia has gone through a lot of effort to identify some of our security efforts, and there is the Exploit Mitigation Techniques paper which I have presented at a number of conferences.”

                                                                                I still think they are worth copying in high-assurance security, too. The older approach was mixing strong stuff like MAC with the regular security mechanisms in OS’s like UNIX. Just extra layers, in practice. I also like obfuscation. The thing I like most about many mitigations in OpenBSD is that they sort of combine extra layers and obfuscation together. The other benefit OpenBSD’s work could bring high-assurance security is a starting point for new, highly-secure components. The reason is they have small, carefully-built, well-documented components. That’s a prerequisite for the kind of analysis that says “all our bases are covered for X, Y, and Z kinds of problems.”

                                                                                “[T]hese are Linux developers, basically placing the community in a situation where they have to run a binary blob of unknown code from a vendor, instead of sticking to their guns about open source? I must admit, I just don’t understand some people. They must have much more flexibility to their belief systems than I have.”

                                                                                (Let me illustrate what a hypocritical asshole he’s being with that statement.)

                                                                                So, there were some OpenBSD developers trying to put an “open, secure” OS on these gigantic blobs of transistors from greedy, sneaky companies with a bad track record on quality and security. The project lead called them out on processor and firmware errata many times. He wanted them to fix the chips, firmware, and/or their documentation. Various people have tried reverse engineering the chips to figure out how they work in many situations. All that work, with and without tooling, found many problems ranging from undocumented behavior to secret leaks to code execution. The OpenBSD lead continued focusing his OS work on these CPUs from these vendors instead of the slower, open ones that occasionally turned up.

                                                                                I mean, here are vendors of “open, secure” OS’s placing the community in a situation where they have to run their secrets through tens of millions to a billion transistors of unknown behavior from a vendor instead of sticking to their guns about open, trustworthy components? I must admit, Theo, I just don’t understand some people. You must have more flexibility in your belief system to call out folks making practical choices about risk in software when you do the same thing, if not worse, with hardware. If you and the OpenBSD team believe in such principles, then it’s time for everyone in the OpenBSD project to buy some Verilog/VHDL books plus “High-Speed Digital Design.” Practice a while with simulators. Then hold another fundraiser for OpenBSD’s first ASIC based on Leon3, OpenPITON, or RISC-V. Hell, you could build some of your mitigations right into the CPU, like CompSci students and proprietary vendors have been sporadically doing from the B5000 in 1961 up to CoreGuard in the past year or two.

                                                                                “Damien Bergamini joined Jonathan toward the end and got all the bugs out of the driver. We are happy to say that it appears to be working better than the Nvidia binary blob. It is also significantly smaller, and it is very clean source code.”

                                                                                Excellent work on the software side of the untrustworthy hardware. Now, when will they tackle the hard part? Or will they make a pragmatic compromise, doing what they can with the resources they have, balancing various goals of differing priorities? Like the Linux folks were doing with different goals and priorities.

                                                                                “There are many reasons why vendors will not give information out. I believe that all their reasons are a lie to the customer.”

                                                                                This is true. My studies of hardware indicate it’s an ultra-high-cost development that’s cheaper to steal than build. There are also patents on about everything conceivable. Hiding everything you can, from the design details to whatever can be used to infer the design details (e.g. in firmware), forces opponents to have companies like ChipWorks tear it down for design details. Alternatively, to send in spies. The resources and risks these require are high enough that secrecy gives I.P. vendors more time in the market before competitors release clones, blocks some new entrants that lack the necessary experience, and lowers the number of losses from legal battles. On that last one, Samsung paid about a billion in royalties to Microsoft alone for Android, which Microsoft didn’t contribute to that I recall. It’s very worth it to them to push this as close to zero as possible.

                                                                                Changing this requires political action where a massive number of companies (lobbying) and/or citizens (voting) achieve patent reform that stops that crap. My compromise would be to limit the length of the patent to essentially the release cycle of products. Example: it would be two years for phones if companies release phones every two years. Tie it to whoever is doing it fastest with a competitive product so they don’t respond by slowing down. Then, either free or paid, open hardware can stand a better chance. Right now, just making it open increases the odds of being sued to death or losing lots of margin.

                                                                                “Of course we did not set out to create OpenSSH for the money – we purposely made it completely free so that the “telnet infrastructure” of the 1980s would die. But it sure is sad that none of these companies return even a fraction of value in kind.”

                                                                                I keep wondering if they could’ve accomplished the same thing, or close enough, by making it a dirt-cheap, shared-source, paid product. Either companies pay every year or they pay for updated versions. They can modify their copy however they want. If cheap enough (e.g. $1-10k a year), it might be such a small budget item to many companies that they’d still buy it. Then, it would’ve been a revenue source for the project. Rinse and repeat for a lot of the other superior alternatives to common tech that the OpenBSD team creates.

                                                                                “Twice we asked them to cover the travel and accommodation costs for a developer to come to their event, and they refused. Considering that their SunSSH is directly based on our code, that is just flat out insulting.”

                                                                                It’s flat out predictable. Everyone probably told them what happens when BSD licenses meet corporate greed, attitude, and apathy. It’s why I don’t advocate that license any more. They should’ve done either a paid or a copyleft offering if they wanted something from companies. That approach assumes companies will act as selfishly as possible, only benefiting these projects when they have to. Which they do in almost every case. Exceptions exist but are really rare.

                                                                                1. 2

                                                                                  “The OpenBSD lead continued focusing his OS work on these CPUs from these vendors instead of the slower, open ones that occasionally turned up.”

                                                                                  If one doesn’t take the rhetoric too seriously, it’s easy to see that everybody sits somewhere on the purity/pragmatism continuum. It’s clearly important to the OpenBSD project to support the commodity processors that are actually available. For now that means the x86 family, warts and all, with popular ARM variants in second place. However, the project also supports several other architectures that have much worse price/performance characteristics.

                                                                                  I think RISC-V is the shining hope here. FreeBSD supports the RISC-V ISA to some extent already, but as yet there are no commercially available RISC-V processors that can compete with even very cheap x86 chips on performance. When there are, I have no doubt that OpenBSD will have long since been ported. Do you have any specific suggestions about what OS devs can do to encourage the manufacture of open hardware?

                                                                                  1. 1

                                                                                    “it’s easy to see that everybody sits somewhere on the purity/pragmatism continuum”

                                                                                    That’s my view. His was smearing others for making compromises. I just applied his own logic to another area to show he was no different in terms of compromising on openness or security to achieve other goals.

                                                                                    “Do you have any specific suggestions about what OS devs can do to encourage the manufacture of open hardware?”

                                                                                    Several things they can do. One is to start businesses selling their stuff by itself or bundled with other software. This revenue can support them plus hardware projects. For instance, they might pay companies like Centaur or MCST for high-performance design. Another is targeted evangelism toward CompSci folks doing government-funded hardware to build enhancements into open CPU’s. Some already do that. Finally, they might learn hardware skills to make it themselves just like they did OS coding to solve their OS problem. They could even make other hardware for money to pay for tools to make the open chip.

                                                                                1. 1

                                                                                  To see what this CPU was about, I first ran their About Us page through Google Translate. The output is below. They’re doing embedded chips with a wide range of applications (esp. security), have plenty of foundries, partnered with Alibaba, and are developing cloud chips. Definitely worth keeping an eye on.

                                                                                  “Hangzhou Zhongtian Microsystems Co., Ltd. (“Zhongtian Microsystems”) is an integrated circuit design company dedicated to 32-bit high-performance low-power embedded CPUs with chip architecture licensing as its core business. Founded in 2001, the company is headquartered in Hangzhou High-tech Zone, with branches in Shanghai Pudong New Area and Ningbo Haishu District. For more than ten years, Zhongtian Microsystem has always adhered to the concept of independent innovation, and developed an embedded CK-CPU with international advanced level. The 32-bit C-SKY series embedded CPU core with independent intellectual property rights developed has low power consumption. High performance, high code density, and ease of use.

                                                                                  Zhongtian Microsystem has a roadmap for CPU technology development for various embedded application scenarios. Currently, it has developed 7 embedded CPUs covering high, medium and low embedded applications, which are widely used in IoT intelligent hardware, digital audio and video, and information. Security, networking and communications, industrial control, and automotive electronics. It has become the only CPU supplier in China to develop embedded CPU based on autonomous instruction architecture and achieve mass production.

                                                                                  The company builds a chip software and hardware platform around its own embedded CK-CPU, providing customers with core competitiveness, cost-effective and customized CPU IP core and related SOC design and development platform, software tool chain and integration for customers in various industry segments. Development environment. Zhongtian CK-CPU supports international mainstream foundries such as SMIC, TSMC, Huahong, Huahong Hongli (HHGrace) and Hejian Technology (HJTC). Up to now, C-SKY series CPU cores have more than 60 authorized users, and are widely used in many embedded fields such as financial IC cards, digital audio and video, information security, industrial control, security monitoring and network wireless communication. As of now, the cumulative shipments of SoC chips based on C-SKY CPUs have exceeded 700 million.

                                                                                  Hangzhou Zhongtian Microsystems Co., Ltd. cooperates with Alibaba Group to develop Yun-on-Chip architecture for IoT/IoT segments, and develops a new generation CPU, SoC platform and software under the cloud-integrated framework. Supports the environment and operating system, supporting full-link security and low-cost access from chip to cloud. Zhongtian Microsystem will support the upstream and downstream partners of the industry chain, deeply integrate Internet technology with traditional technologies, develop cloud chip products for the whole industry, and construct an application ecosystem connected with objects.”

                                                                                  1. 2

                                                                                    Very interesting. The “Yun-on-Chip” refers to YunOS, now AliOS, an incompatible Android fork on which Alibaba Cloud has spent considerable development effort.

                                                                                    1. 2

                                                                                      For those not familiar with Doug Lenat, he achieved some notoriety for his “Automated Mathematician” program in the late 1970s, and has spent most of his career on Cyc, a very ambitious and long-running project to achieve artificial general-purpose “common sense” by means of what amounts to a huge curated Prolog schema. Most academics I’ve seen willing to comment on this undertaking have basically called it a fool’s errand.