Threads for jeffhuang

  1. 16

    The story in the article is of a somewhat entitled corporate user of open source. But I’ve found that the toughest issues come from grateful but persistent users who hit a problem that is not immediately obvious how to address. They create these unfinished investigations that are possibly actual bugs, but rarely lead to a satisfying resolution. So it feels like the workload keeps growing, and there are all these requests that are fairly reasonable individually, but hard to really close the loop on.

    1. 6

      Always love to see neat hacks like this but… what is going on here? Is this an alternate SVG renderer written in JavaScript targeting canvas? If so, how could that possibly be faster than the browser’s native implementation? I believe it might be, but how and why?!

      I remember talking to Mike when he got started on Protovis, long before D3. And I was like “SVG? Does that still work? I thought it was dead”. It’s amazing how that tech has come back and been so useful.

      1. 2

        Thanks for engaging! Basically, the library directly renders the visualization with a virtual DOM, skipping the standard SVG rendering process, which is slow due to the complexities of SVG. So it’s not rendering a real SVG, but rather a visualization of what the SVG would display. It’s very easy to prove it’s faster than the browser’s implementation, because you can see the examples running with and without SSVG.

        Personally, I’m a huge fan of SVG, and I’m working with another person to showcase more capabilities of SVG, like the ability to live-draw vector-based animations and textures.

        1. 4

          the library directly renders the visualization with a virtual DOM, skipping the standard SVG rendering process which is slow due to the complexities of SVG

          Ok, but why? Is it just a deficiency in the browser’s SVG implementation? Is it that SVG is expensive to parse compared with the JS you generate? Or what?

          1. 6

            Looking at the technical limitations section of the site, it seems like this just implements a subset of SVG. In Pareto principle terms, it implements the 20% of the solution that solves 80% of the problem.

            1. 1

              That still doesn’t answer the question: why isn’t the browser just as fast for that subset?

              1. 2

                Because the built-in renderer adds every node to the real DOM, with all the (in)validation loops and performance cliffs that entails.
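                A rough sketch of the idea in TypeScript (not SSVG’s actual internals, just the general shape): “attribute updates” only touch plain objects, and one canvas redraw per frame replaces thousands of tracked DOM mutations.

                  // Hypothetical example: 10,000 "circles" kept as plain JS objects instead of real <circle> nodes.
                  interface VirtualCircle { cx: number; cy: number; r: number; fill: string; }

                  const circles: VirtualCircle[] = Array.from({ length: 10000 }, () => ({
                    cx: Math.random() * 800, cy: Math.random() * 600, r: 2, fill: "steelblue",
                  }));

                  const canvas = document.querySelector("canvas")!;
                  const ctx = canvas.getContext("2d")!;

                  function tick() {
                    // Updates never touch the DOM, so there is no per-node style/layout invalidation.
                    for (const c of circles) c.cx = (c.cx + 1) % 800;

                    // One clear + redraw per frame.
                    ctx.clearRect(0, 0, canvas.width, canvas.height);
                    for (const c of circles) {
                      ctx.fillStyle = c.fill;
                      ctx.beginPath();
                      ctx.arc(c.cx, c.cy, c.r, 0, 2 * Math.PI);
                      ctx.fill();
                    }
                    requestAnimationFrame(tick);
                  }
                  requestAnimationFrame(tick);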

          2. 2

            Thanks for the explanation!

            Have you done much testing with Firefox? A lot of the examples don’t render with SSVG turned on. Donut does render but is slower; 45 FPS with SSVG on, 90 FPS with SSVG off. Chrome is the opposite: 66 with SSVG on, 37 with it off.

            I imagine dynamic SVG performance is very complicated. Neat to see an alternate approach!

            1. 2

              You’re right, not a lot of testing on Firefox. I can see how it can be hit-or-miss. That’s something we should work on.

        1. 9

          Really, Microsoft? $10k? Is that even enough to be useful to most projects?

          1. 15

            They do $10k once a month to various projects and have been doing so since 2020. Curl was just one month of many. Microsoft sponsors open source in other ways as well, this is just one way to allow open source contributors from within Microsoft to choose specific projects to direct donations towards.

            Note: I do not work for Microsoft, I merely possess the ability to read.

            1. 3

              curl was selected in January for $10,000.00 provided one month, for ten months through GitHub Sponsors.

              It reads like it will be $10,000 a month, for ten months though, no?

              1. 12

                It’s 1k a month for ten months

                1. 1

                  The description could definitely be clearer. I’m still not sure after rereading that several times.

              2. 3

                On the one hand, if multiple companies would donate $10k, it might be a good source of income. But I think a problem for projects is that it is not really predictable. You can only leave your job or decide to work part-time if your project has a reliable income stream, and drops of money at random points in time do not really make that possible.

                Besides that, it seems Microsoft usually donates $10k every month. Given that that is probably less than the monthly salary of many Microsoft employees, it’s only a paltry amount compared to the profit that they extract from FLOSS (through Azure, WSL, etc.).

                1. 2

                  Microsoft delivers a lot of value, so I’m not saying this is directed at them; it’s the whole status quo. These sponsorships are just handouts. Maybe you’ll be lucky and your critical open source dependency will get a pittance. cURL delivers tens of millions in value, probably more. Maintainers and contributors must become stakeholders in the code they create.

                  1. 0

                    To my mind, it’s more of a PR stunt. Not all FOSS projects are in the running, just ones on Microsoft’s GitHub code forge that have opted into their Sponsors payment platform. The motive would seem to be sparking FOMO, to get projects to flock back to GitHub or stay locked in rather than leave (because every big bad news cycle there’s usually a small exodus, and a lot of the biggest projects, like Freedesktop, left long ago), all for a chance to win their lottery. GitHub keeps 10% of the transactions to orgs in the process. This feels like a marketing run to monopolize the dev payment space so they can skim off some cash, require every developer to have a Microsoft GitHub account, and feed all that data to their machine.

                    My tinfoil hat is on pretty tight, but a centralized, proprietary, closed-source platform built atop the shoulders of FOSS definitely feels like a wolf in sheep’s clothing.

                    1. 5

                      [ Disclaimer: I work for MS, but this is my perception of the programme and not based on an official statement by anyone who runs it. ]

                      I believe that the main purpose of this is actually inward focused. Microsoft is gradually undergoing a culture transition from open sourcing things only if there is a very compelling reason to do so, to open sourcing by default unless there is a very compelling business case for keeping something proprietary. Programmes like this are there to try to normalise contributing things upstream within the company. Any employee who contributes to any open-source project during a given month is eligible to vote (and to nominate projects). The project that gets the most votes gets some money, but that’s a secondary benefit: the main goal is to get folks in the company talking about the open-source projects that they contribute to and want to support. The winners are also listed in an internal newsletter, and so this increases the visibility of cURL in teams that have historically never thought about the F/OSS ecosystem as anything other than a competitor. In the long term, hopefully this will increase the number of MS employees who contribute to cURL.

                      1. 1

                        I’ll still remain skeptical that this isn’t just an Embrace. Turning over a new leaf may be possible, but you still have a publicly traded company, so I’m curious what value this is providing to the shareholders.

                        1. 3

                          Two things: First, we are a massive consumer of open source software and that’s only going to grow. We benefit from a thriving ecosystem. Second, Azure attach. We make enormous piles of money from renting computers to run Linux (and much smaller, but still noticeable, piles of money from renting computers to run FreeBSD) to a large number of customers. A lot of disruptive technologies come out of open source and we have a vested interest in ensuring that people think of Azure as the place to run them.

                          1. 1

                            If I compare it to politics, wealthy donors donating is rarely about wanting to help the candidate; it’s about asking for favors in return. Luring people to Azure seems to be the “now you scratch mine” favor here. I’m still not buying the purely-goodwill aspect.

                      2. 3

                        That’s patently false. GNOME uses self-hosted GitLab, and QEMU uses hosted GitLab, both recipients of the fund.

                        1. 1

                          https://www.gamingonlinux.com/2022/06/microsoft-chucks-gnome-10000-from-their-foss-fund/

                          This is true. This fund is separate from the $500 campaign for GitHub Sponsors.

                    1. 2

                      Careful with GCC 11. They made a change in C++: they are more strict about #includes of headers [1]. Expect a lot of existing code to break. I actually had to downgrade to get very large projects to build again.

                      [1]: https://gcc.gnu.org/gcc-11/porting_to.html#header-dep-changes

                      1. 2

                        They made a change in C++, they are more strict about #includes of headers.

                        What they did is stop including headers they don’t need. Code that relied on such (non-portable, I must add) side-effects was always broken and now has to be fixed.

                        1. 1

                          Linus Torvalds would reply: “We do not break user space!” If a Linux API works one way for 20 years, and everyone relies on it, you don’t just break everyone’s code. Why is GCC/C++ different?

                          Why is code that compiled and worked perfectly well for 20 years broken? Perhaps it is broken by some definition, but that raises the question of whether the standard is broken and the code is correct. As for your comment that the code is non-portable, I already wrote a lengthy blog post about this subject: https://mental-reverb.com/blog.php?id=24

                        2. 1

                          Could they not have added a warning for a couple of versions before implementing this? It seems like one of those things that should be easy for a compiler to check for, and automatically fix or warn about. Like “hey, you gotta add the #include for this since you’re using it, and not depend on another standard library header to include it for you…”

                          1. 1

                            Sounds like a nightmare of special cases tbh. It’s the same definition, so you have to keep track of the whole chain of includes. Actually multiple chains and their ordering, because you could first include the class through the side-effect and then explicitly. And then you have things created through defines. And you have to keep a list of the cases to warn about… :scream:

                            I think that’s one of those things that seems simple to do, but isn’t :-( gcc keeps only the first place something was defined if I remember correctly.

                            1. 1

                              It’s a simpler problem than that because it’s a standard library with a set of well-defined symbols and you need to handle the case where these identifiers are used but are not defined. You can maintain a list of function and type names and the headers that the standard says they come from and tell people what header they’re missing. Clang does this for a load of C standard functions.

                          2. 1

                            To be explicit, this is not a change with GCC, it is a change with libstdc++ and will affect any compiler using the same implementation. Historically, libstdc++ has always had a policy of minimising header includes, which is why code written against libc++ (which doesn’t) often fails to compile. The fix is always trivial: add the headers that the standard says you need to include. With modules (any decade now), all of this will go away and I can just put import std; in my C++ source files if I want to use the stdlib.

                            This is a far less annoying bug than the one in 20.04’s GCC 9.4, where the preprocessor evaluates __has_include(<sys/futex.h>) to false, in spite of the fact that #include <sys/futex.h> works fine. This has no simple workaround and is why I’ve given up supporting GCC 9.4 on Ubuntu for some projects: GCC 10, 11, or any version of clang is fine. Apparently the bug is something to do with the weird fixincludes thing that GCC does, meaning that the search paths for #include and __has_include diverge and so __has_include is not reliable.

                          1. 2

                            I’m going to start using this immediately. This is exactly what I need for my devlog. I’ve been doing TODO: and TODONE: (get it?) as text tags. But this is much better.

                            And yeah, Trello kind of imploded with the Atlassian purchase and changes. I exported and deleted my Trello account. A shame, it was cool. Very few options from the searching around I did on Hacker News etc.

                            1. 3

                              TODONE, hehehe XD

                              1. 2

                                The kanban boards in cryptpad are pretty good

                                1. 2

                                  In my opinion, a semi-manual changing of TODO -> TODONE as a way of tracking tasks is a must-have for any task tracking system. Turning a todo list into a record of what’s been done is basically like getting a free lab notebook as a side effect.

                                1. 1

                                  It’s unfortunate that peer-to-peer video calls haven’t taken off, and WebRTC adoption is slow. It seems like part of it is the difficulty with NATs and closed ports on each client, but the platforms seem to prefer centralized services to give themselves control. The lack of interoperability between iPhone and Android is annoying, and with neither one taking the market in the near term, I wish they would try to work together better.

                                  1. 1

                                    Is WebRTC not just The Way these things have to be done to get browser compatibility? So all the platforms are just using proprietary signalling servers and self-hosted TURN and/or forwarder and everything else is the same.

                                    1. 1

                                        I think there are a few parts to WebRTC. To access the camera, you use getUserMedia, but I should have clarified that the parts I was referring to were the RTC* interfaces for making the connection and communicating. But maybe you’re right that a few platforms use more of WebRTC than others.
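                                        For reference, a minimal sketch (TypeScript) of the RTC* side I mean; both peers live in the same page here, so the signalling that platforms normally run through their own proprietary servers is reduced to a local variable handoff, and getUserMedia isn’t involved at all:

                                          async function demo() {
                                            const a = new RTCPeerConnection();
                                            const b = new RTCPeerConnection();

                                            // Trickle ICE candidates straight across; a real app relays these via a signalling server.
                                            a.onicecandidate = (e) => { if (e.candidate) void b.addIceCandidate(e.candidate); };
                                            b.onicecandidate = (e) => { if (e.candidate) void a.addIceCandidate(e.candidate); };

                                            const channel = a.createDataChannel("chat");
                                            b.ondatachannel = (e) => { e.channel.onmessage = (m) => console.log(m.data); };

                                            // Offer/answer exchange, again normally relayed by a signalling server.
                                            const offer = await a.createOffer();
                                            await a.setLocalDescription(offer);
                                            await b.setRemoteDescription(offer);
                                            const answer = await b.createAnswer();
                                            await b.setLocalDescription(answer);
                                            await a.setRemoteDescription(answer);

                                            channel.onopen = () => channel.send("hello over WebRTC");
                                          }
                                          void demo();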

                                  1. 10

                                    Many things are not captured in this rubric. The most obvious thing from my university experience which is missing here is the connection to Free Software. I got to write Free Software on campus, evangelize Free Software adoption to the rest of the electrical-engineering school, and participate in a well-funded Linux User Group.

                                    Undergraduate class quality is often lackluster and must be supplemented. I could write better C than the person whose lectures I attended, because they had never written C in non-pedagogical settings. I taught teachers’ assistants facts about Python. What really matters is the community of students and professors who create learning opportunities outside of the classroom.

                                    1. 4

                                            Absolutely, I would love to capture those qualities in a ranking. This project is a meta ranker, so it combines existing ranking sources (two of the ranking sources we generate ourselves, one at considerable expense). If you know of a good ranking that captures these qualitative measures, that would be amazing. I’d even pay for a service that collects this data and provides the ranking.

                                      1. 4

                                        What really matters is the community of students and professors who create learning opportunities outside of the classroom.

                                        This was my experience too. Evangelism for applied coding is really important in these settings, even though it’s not always the focus.

                                        My anecdote here: I helped run our school hackerspace and ran our CTF team, and we always struggled getting students actively involved because no one had time outside class. It’s a shame, since having lots of students pushing each other to learn and challenge each other helps everyone hone their skills. We ended up advertising the CTF team in our security class, but got… maybe one student to join. No one would make time. So there were like four or five students involved, max.

                                              We ended up roping one of the professors in, who ordered pizza for us broke undergrads and spent a night or two staring at x86 disassembly with us. So we tried to make the best of it. Goes to show how, given the wrong structural incentives, students won’t get involved even if the professors will.

                                      1. 4

                                        I wonder what the correlation between rankings and quality of undergraduate programs is (quality of individual courses and of overall curriculum).

                                              My personal experience is at a university listed in the Top 15 on this site, yet the first two-year sequence of (arguably important) fundamental courses is carried out almost entirely by lecturers with no relation to a publication-based ranking.

                                        1. 4

                                          My experience in compsci programs at two top-ten universities on this list (undergrad + grad) is that courses were mostly taught by research faculty who had no interest or particular aptitude for teaching. Full-time lecturers were consistently better educators than research faculty, with a few notable exceptions.

                                          1. 3

                                            Strangely, it’s the exact opposite for me.

                                            The full-time lecturers were rather mediocre: their courses were essentially “let’s read a slide deck together.” The concepts taught lacked motivation and context.

                                            Whereas the professors I took (senior-level/grad) classes with had far more engaging courses. But all of the ones I sat classes with had a visible interest in teaching.

                                          2. 2

                                                I agree, the rankings are definitely skewed towards graduate program factors like publications and reputation. I think part of it is that it’s difficult to measure teaching. Maybe the factor that’s most relevant to that is the placement rank, which accounts for where faculty earned their undergraduate degrees.

                                          1. 1

                                                  The naming and lack of clear marking of the specs are terrible. But in practice, most of my use is for charging devices and gadgets, and it’s basically interchangeable for that if you’re okay with sometimes charging at a slightly slower rate. If I need fast data transfer, then I make sure to use one of the “good” cables that feels a bit more rigid.

                                                  The biggest problem I’ve encountered, which isn’t mentioned in the article, is that a lot of cheaper USB-C devices don’t properly implement even the most basic parts of the specification, so they cannot be charged with a USB-C to USB-C cable. They can only be charged using a USB-A to USB-C cable. I’ve found this happens with brands I haven’t heard of, with things like USB-C lights, toothbrushes, and various household appliances.

                                            1. 1

                                              I didn’t know about the ch unit. Seems pretty niche but interesting! Thanks for sharing.

                                              1. 1

                                                    The interesting thing about ch and, in the vertical direction, the ex unit is that they depend on the font’s metrics, so they’re useful for setting micro-level things, such as letter-spacing or text-underline-offset.
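                                                    For example (a small sketch; the selector and values are just made up for illustration):

                                                      // ch is the advance width of the font's "0" glyph; ex is its x-height.
                                                      const p = document.querySelector("p")!;
                                                      p.style.letterSpacing = "0.05ch";                      // spacing scales with the font
                                                      p.style.setProperty("text-underline-offset", "0.4ex"); // offset scales with x-height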

                                              1. 2

                                                This is really cool, but I’m having rather mediocre success with the demos. I tried the basic demo, and was able to calibrate with 85% accuracy; despite this, the ‘dot’ frequently moved outside of where I was looking, or hovered a certain distance away from my actual gaze. The Google demo kept telling me I’m looking two links above. Perhaps my camera is bad.

                                                Having an open source eye tracking library that works with any webcam, though, is fantastic. As a matter of fact, I was looking around just yesterday, and was only able to find proprietary hardware and software, some of which was sold out. Perhaps I will use this for an experiment or two!

                                                Thank you for sharing, and keep up the great work!

                                                1. 3

                                                  Thanks for the comments and trying this out. Yeah it won’t get to the word level, and does depend a bit on camera. I use a Logitech Brio and it works at about 85-90% accuracy in the calibration. I’d love it if anyone could make a clear improvement to the accuracy. It would open many more applications, like for people with motor control difficulties.

                                                1. 2

                                                  How much of this is a running log of what you did rather than what you have to do? Do you find that the tagging system is a good way to keep track of the different projects you are working on?

                                                  My PI has something similar in a 400 page document. I wonder if I should start using something like that too to keep track of what I do.

                                                  1. 2

                                                    Basically any day besides the current day is a record. For the current day, I use just a blank line to separate the done stuff from the upcoming stuff.

                                                    Tagging is fine, though it’s almost unnecessary because I tend to write about things in the same way, so I can even just search for “meet with” or “submit” to find some things. The key part is having a search tool that shows all the results in one pane, rather than hopping through them one at a time.
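                                                    If anyone wants a starting point, a tiny sketch of that kind of one-pane search (TypeScript on Node; the log.txt name and the tag-or-phrase-per-line format are just assumptions for illustration, not exactly my setup):

                                                      import { readFileSync } from "node:fs";

                                                      // Usage: ts-node search.ts "#annual"   (or "meet with", "submit", ...)
                                                      const query = (process.argv[2] ?? "#annual").toLowerCase();
                                                      const lines = readFileSync("log.txt", "utf8").split("\n");

                                                      // Print every hit with its line number, so all results show up in one pane.
                                                      lines.forEach((line, i) => {
                                                        if (line.toLowerCase().includes(query)) {
                                                          console.log(`${String(i + 1).padStart(6)}  ${line}`);
                                                        }
                                                      });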

                                                  1. 7

                                                    I’ve noticed that everyone has a different system for organizing their tasks, ideas, schedule, meeting minutes, research notes, etc.

                                                      I wrote mine up because I think it’s a fairly simple method that has worked well for me and is battle-tested over 12 years.

                                                    I would love to hear from you if you have a similar system, or have any thoughts about the way I do it.

                                                    1. 4

                                                      I use something very similar but it also serves me as a journal when I have thoughts to write.

                                                      I just lack the energy and willpower to actually do everything as planned, so I do push stuff into the next day.

                                                      Related to that, it helps me to put estimates to tasks because “5 minutes of torture can’t really hurt”.

                                                      1. 2

                                                        Since I’ve been doing this for a while, I have a pretty good estimation of how much I can get done in a day. Basically it’s like “full day of meetings plus one small thing” or “one or two meetings plus one medium sized thing”. That’s all I aim to do in a day.

                                                      2. 2

                                                        What subreddit is that? I love reading these…

                                                        1. 2
                                                        2. 1

                                                          For task tracking, I use a plain-text* system (loosely) based on the Bullet Journal system. Essentially I have

                                                          • a file for the year, split (using markdown and folding in vim) into months and days, with checklists at the month and day level
                                                          • a file for ‘future’ tasks covering the next 4 quarters. I use this like you use your calendar
                                                          • a file for repeating tasks, which I copy/paste into the main file as required
                                                          • an archive folder where previous years live

                                                          I have a couple of scripts that extract recurring checklists to CSV, which I import into LibreOffice for visualisation etc.
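                                                          Roughly, the extraction looks like this (a simplified sketch in TypeScript on Node; the 2022.md name, the “## YYYY-MM-DD” day headings, and the “- [x] task” checkbox format here are illustrative, not exactly my real layout):

                                                            import { readFileSync } from "node:fs";

                                                            const text = readFileSync("2022.md", "utf8");
                                                            let day = "";
                                                            const rows: string[] = ["date,done,task"];

                                                            for (const line of text.split("\n")) {
                                                              const heading = line.match(/^##\s+(\d{4}-\d{2}-\d{2})/);
                                                              if (heading) { day = heading[1]; continue; }

                                                              const item = line.match(/^\s*[-*]\s+\[([ xX])\]\s+(.*)/);
                                                              // Naive CSV quoting; good enough for importing into LibreOffice.
                                                              if (item) rows.push(`${day},${item[1].trim() ? 1 : 0},"${item[2]}"`);
                                                            }

                                                            console.log(rows.join("\n")); // redirect into recurring.csv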

                                                          I also have my diary in a separate set of markdown files. For some reason I decided to have one file per day (which I periodically combine and convert to HTML) though it would probably make my life easier if I just had one per year.

                                                          I’ve been really pleased with my move to plain text files for both tasks and diary.

                                                          * if you count encrypted markdown files as “plain text”

                                                        1. 3

                                                          I am relatively new to using a simple text file for productivity. For now it’s a list of what I’ve worked on in a day and how long it took. It really helps put into perspective how productive, or not, I’ve been in a day.

                                                            I’m interested in how to have structured data, but without a lot of syntax noise. It would be nice to be able to pull reports, and enable future tooling to do neat things. But I wonder how far I would go before it becomes something similar to an existing syntax, such as org-mode. For now I use simple markdown.

                                                            @jeffhuang Do you use tooling with the structured data, to do something with the tags or items in a day?

                                                          Have you encountered any challenges with the .txt file being on a remote server? I’ve had mine local before because I don’t want to rely on a network connection, but I find it more flexible to have it on a remote server.

                                                          1. 3

                                                            Hmm great question. The only structure I have is really just the tags. So if I’m looking for ideas, I just search for #idea and if I need to fill in my annual report, I just search for #annual and it has reduced that time from 4 hours to 15 minutes now. I’ve thought about doing something to automatically see my past meeting notes with the same person(s) like on a side screen in my office, but haven’t been motivated enough to do so. I could do something to count the number of items I worked on or thought about each time, in a sort of time-series way, but since I also do time tracking separately, I haven’t found the need to really process the daily lists very much.

                                                              For the .txt being on a remote server, I use a static network IP instead of DHCP so it’s very reliable. Microsoft has done such a nice job with Remote Desktop that it works well enough even on my phone. If it’s really important, I just copy the day’s list to Google Keep, which is accessible everywhere, but I find myself using Remote Desktop anyway.

                                                          1. 10

                                                            A few other suggestions:

                                                            • include a date of publication and date of the last revision;

                                                            • if you rendered the HTML after all, consider making the source documents available as well (e.g. by replacing .html with .md in URL);

                                                            • if you want to allow people to archive and redistribute your work:

                                                              • explicitly choose a license and put it in a well-visible place;

                                                              • consider generating a UUID which could be used to reference or search for your article regardless of the URL;

                                                              • consider assigning a digital signature so readers can verify the mirror hasn’t been tampered with; this assumes they already know your public key and I’m not sure how useful it really is, but it’s better than nothing (a rough sketch follows after this list);

                                                              • consider using relative links; this is a bit controversial, because they are more difficult to get right and tend to break, but once set up correctly, they allow you to migrate entire sites really easily (also, replacing URLs would break the signatures should you decide to use them);
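                                                            A rough sketch of the UUID + signature idea, as promised above (TypeScript on Node, using the built-in crypto module with Ed25519; key distribution and storage are deliberately left out):

                                                              import { generateKeyPairSync, randomUUID, sign, verify } from "node:crypto";

                                                              // Stable identifier to embed in the article and cite in bibliographies.
                                                              const articleId = randomUUID();

                                                              // The author keeps the private key; readers only ever need the public key.
                                                              const { publicKey, privateKey } = generateKeyPairSync("ed25519");

                                                              const article = Buffer.from(`<!-- article-id: ${articleId} -->\n<p>…</p>`);
                                                              const signature = sign(null, article, privateKey);

                                                              // Anyone holding the public key can check a mirror hasn't been tampered with.
                                                              console.log(verify(null, article, publicKey, signature)); // true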

                                                            1. 1

                                                              Thanks, these are pretty interesting suggestions, and I’ve thought about some of these.

                                                              If it’s not a dynamic website, doing something like alert(document.lastModified) might help the user retrieve the date of the last revision. Putting it explicitly on the website could make sense in many cases, but I also imagine some cases where the author doesn’t want it (like a restaurant website, where visitors might think, rightly or wrongly, that the restaurant is out of date if it doesn’t keep updating its page).
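                                                              If the author does want it on the page, a snippet this small is enough (document.lastModified is a string like “06/25/2022 10:15:30”, derived from the Last-Modified header when the server sends one):

                                                                const note = document.createElement("p");
                                                                note.textContent = `Last revised: ${new Date(document.lastModified).toLocaleDateString()}`;
                                                                document.body.appendChild(note);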

                                                              UUID/signature – I imagine it’d be uncommon for someone to have the UUID but not the website saved. But to both of these comments, I feel that generally you’re thinking of a use case where a website remains static and we need to preserve that copy, whereas I’m thinking of use cases where websites should be continuously updated over time. I’m okay with older content being revised and not having the previous edits (if they don’t use the quick-backup scheme I mentioned), if the upside is that there is less overhead to updating the website.

                                                              Relative links are fine, but in my opinion, putting the other content on the single page and using skip page navigation like “#references” would be a bit more maintainable.

                                                              I like the idea of having a .md version of each .html page (though as I note, I think just writing out the html/css is preferred). But an .md is a better version of “view source” if the html is generated.

                                                              1. 1

                                                                If it’s not a dynamic website, doing something like alert(document.lastModified) might help the user retrieve the date of the last revision.

                                                                I didn’t know of this, thanks. I’d still prefer to see the date on the page, though. Filesystem attributes are not always entirely reliable or semantically meaningful.

                                                                I also imagine some cases where the author doesn’t want it (like a restaurant website, where visitors might think, rightly or wrongly, that the restaurant is out of date if it doesn’t keep updating its page).

                                                                Of course. I was mostly concerned about blog posts and articles. I hate when I encounter an undated article. Sometimes, the date itself tells half the story.

                                                                If restaurants had websites that were designed to last, that wouldn’t hurt, but I would mostly appreciate it for the improved UX (incl. lower CPU usage and potentially better parsability) rather than longevity per se.

                                                                UUID/signature – I imagine it’d be uncommon for someone to have the UUID but not the website saved.

                                                                I was thinking in terms of references (bibliography). Books are referenced by ISBN. For online content, we currently only have URLs. But a URL is necessarily tied to a single server (or datacenter) which just happens to be serving that particular content. This creates centralization, and once that technical infrastructure collapses, the URL is only good for feeding into archive.org. It would be rather unfortunate to read a paper that references a book “that you can borrow from the guy with the black cape who is seen on the local market every second Tuesday from around 7 am to 9 am”.

                                                                Linking to a concrete website is fine, but providing a UUID in the bibliography would be even better, because users have the option to copy-paste it into a search engine and try to find a mirror.

                                                                In conjunction with digital signatures and hashes, readers would be able to assert that they read the same copy, or a modified copy that was at least written (signed) by the same person who wrote the original linked paper.

                                                                I’ll write about it more in the near future.

                                                                Relative links are fine, but in my opinion, putting the other content on the single page and using skip page navigation like “#references” would be a bit more maintainable.

                                                                There are also images and stylesheets. Also, I was thinking of entire blogs rather than single documents.

                                                                though as I note, I think just writing out the html/css is preferred

                                                                Well, that depends on the use-case (and also who’s sitting behind the keyboard). I hate writing HTML with a passion, because in my opinion, it’s too technical and frankly, it just looks ugly. I always tend to use the proper formatting/indentation I’d use for XML, but I end up with so many levels of indentation that it’s unbearable, so I just fall back to a mess and try to pretend it’s OK to use no indentation in the <body>. And then the “writer’s block” comes into play, and looking at “a bunch of characters that just need to be there for no obvious reason” (it’s certainly not nearly as much document-related or semantics-related as it is design- or programming-related) doesn’t help. When I decide to change a headline, I want to just press CTRL+D and type # a new headline, not think about selecting all the content between <h1></h1>. In the end, I forget the closing tag somewhere anyway.

                                                                You have some good points about writing HTML by hand, but for me, the disadvantages are too great. I can imagine doing it for certain projects, and I’ve done it in the past, but I mostly have personal blogs in mind (because I’m slowly but surely working on mine right now; I ran away from Jekyll because of its complexity, and hopefully my own toolchain will be better).

                                                            1. 3

                                                              And if you follow this advice, it’s much easier for a service like pinboard.in to store your pages. For $25 a year, every link I want will be preserved forever (well, limited to something like 30MB per link I believe, and assuming I back up my stored stuff, as pinboard itself could of course disappear at some point).

                                                              1. 1

                                                                The solution needs to be multi-faceted because no one solution can solve the entire problem. pinboard.in can’t reliably store all the pages if pages keep growing exponentially in size, as they currently are.

                                                              1. 1

                                                                Edit suggestion: under point 7, the link to “monitoring services” is broken.

                                                                1. 1

                                                                  Thank you, fixed.