Alternatively, here is an equivalent XSL stylesheet that can be applied directly to a sitemap XML document:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:sitemap="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output encoding="UTF-8" method="text" />
<xsl:template match="/"><xsl:apply-templates select="sitemap:urlset/sitemap:url/sitemap:loc" /></xsl:template>
<xsl:template match="sitemap:loc"><xsl:value-of select="text()" /><xsl:text>
</xsl:text></xsl:template>
</xsl:stylesheet>
This has the added benefit of not requiring the whitespace-processing step in the article, and could be easily modified to select elements based on attributes (e.g., language). However, it does require that the sitemap document actually be well-formed XML – the example in the article is not well-formed because it contains an unclosed <urlset> element. This is why XML parsers, validating parsers and transformers are so much more valuable than ad-hoc implementations. If the document does not conform to the specification, it will tell you.
You can use xsltproc (pre-installed on many Linux distributions, or available via a package manager):
xsltproc -o sitemap.txt sitemap-to-text.xsl sitemap.xml
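For reference, here is a minimal well-formed sitemap the stylesheet can consume (the URLs are illustrative); the transform emits each <loc> value on its own line:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
  </url>
  <url>
    <loc>https://example.com/about</loc>
  </url>
</urlset>
```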
You can also transform via XSLTC, the Java or C# DOM APIs, or one of many other XSLT-capable processors if you need to integrate with an existing codebase.
If the two sides spent time talking past one another, it should have been made clear that a proposal was due by X date. It seems like the process was unclear, or failed on that point (agree on deliverables before moving on to step Y).
“The single biggest problem in communication is the illusion that it has taken place.”
(attributed to various folks in one form or another)
EDIT: Just to be super clear: this wasn’t intended as a dig. I myself suffer from this problem continuously. It’s a constant struggle to communicate effectively.
I’ve been happy enough with LastPass – I can’t point to any reason beyond inertia, so what I’m really curious about in this thread is: are there any significant differentiators that could sway a person to switch?
To my knowledge, at least by staying mainstream there’s a team of individuals working on the product. I’ve used LastPass for years, and while there have been issues in the past … there is a large userbase and community scrutinizing it.
Going the self-hosted route negates a lot of that large community, and the trial by fire already accrued by legacy solutions like LastPass.
They also provide an export mechanism …
I’ve stuck with LastPass for a while. AFAIK, no security issues that I’ve judged to be significant. I appreciate that, compared to the other solutions that I know of, it seems to be widely compatible and simple to use on all platforms.
The only minor beef I have is that the browser plugins, or at least the Chrome one, seem to have gotten slower and a little buggier over time instead of better and faster.
I use LastPass, but am not happy with it, as in the past, it had some pretty serious security issues:
I would switch to 1Password, but it does not have Linux support (edit: it has a browser extension for Linux, which is suboptimal, but probably better than LastPass). I’ve almost talked myself into switching to KeePass, but I’ll have to find out how trustworthy the iOS version is.
Yup! … but the tour is not for another 2 weeks or so … I went there on a tour as a child, and my buddy, who is going on the tour, was an employee when the plant was being built 27 years ago …
I own a Das Keyboard 1.0.
But daily drivers are:
… I prefer standard 104-key layouts.
Is there a reason something like SQLite isn’t used to store/read the PLY files or other metadata? Surely thousands of files on disk isn’t an optimal solution … I assume you’re taking steps to partition the files, say 1000 per directory, to stay within ulimit and keep file read operations fast … Using SQLite could alleviate some of these issues.
Please pardon my ignorance, I’m not a graphics guy, although I find it fascinating. I’m a webdev/polyglot programmer …
It’s almost certainly fine. SQLite is useful when you want to query data or manipulate it in ways SQL is designed for. If you’re accessing objects by name, a filesystem is fine, already supported in your programming environment, easy to replicate across a large cluster, and on modern systems, is totally fine with tens of thousands of files per directory.
Isn’t it evaluated linearly to be deterministic …? If it were a trie data structure, what would that buy you, and would “a lookup” still be deterministic, so a sysadmin can understand the precedence defined within the file?
I suppose a big question is – is it quicker to find a plain needle in the file vs. building a trie, for a typical hosts file …
If it was a Trie datastructure what would that buy you
Lookup time proportional to the length of the hostname instead of linear in the number of entries. This allows you to have a very large hosts file without slowing down internet use.
would “a lookup” still be deterministic
Yeah, you can ensure that happens in the implementation.
I suppose a big question is – is it quicker to find a plain needle in the file vs build a Trie for a typical hosts file …
Building the trie is slower than searching through the file, but that only needs to be done when the file is edited.
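A minimal sketch of the trie idea in Go, keyed on reversed domain labels. The label-reversal scheme and the API names are my assumptions for illustration, not from any real resolver:

```go
package main

import (
	"fmt"
	"strings"
)

// node is one level of a trie keyed on domain labels, stored most
// significant first, so "ads.example.com" lives under
// ["com", "example", "ads"].
type node struct {
	children map[string]*node
	ip       string // non-empty if a hosts entry ends here
}

func newNode() *node { return &node{children: map[string]*node{}} }

// labels splits a hostname into labels, most significant first.
func labels(host string) []string {
	parts := strings.Split(host, ".")
	for i, j := 0, len(parts)-1; i < j; i, j = i+1, j-1 {
		parts[i], parts[j] = parts[j], parts[i]
	}
	return parts
}

// Add inserts one hosts entry into the trie.
func (n *node) Add(host, ip string) {
	for _, l := range labels(host) {
		child, ok := n.children[l]
		if !ok {
			child = newNode()
			n.children[l] = child
		}
		n = child
	}
	n.ip = ip
}

// Lookup walks one label at a time: the cost depends on the number
// of labels in the query, not on how many entries the trie holds,
// and the result is deterministic for a given set of entries.
func (n *node) Lookup(host string) (string, bool) {
	for _, l := range labels(host) {
		child, ok := n.children[l]
		if !ok {
			return "", false
		}
		n = child
	}
	return n.ip, n.ip != ""
}

func main() {
	root := newNode()
	root.Add("ads.example.com", "0.0.0.0")
	root.Add("localhost", "127.0.0.1")
	fmt.Println(root.Lookup("ads.example.com"))
}
```

The trie is rebuilt only when the hosts file changes, which is the trade-off discussed above: a one-time build cost in exchange for cheap lookups afterwards.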
Do you have numbers on how big it would have to be? Without knowing that, this reeks of premature optimization.
Is there any research that shows whether ORM-based applications are more riddled with these problems than applications not written with ORMs but that still use SQL? …
A good example from a project I have written: I do SELECT 1 FROM foo WHERE x = 4 rather than a count to check for the existence of something …. But I could see an entry-level programmer arguing “I counted the results – it was bigger than zero” … It’s not wrong, just an inefficient approach until you learn better.
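For what it’s worth, the same check can also be written with EXISTS, which lets the engine stop at the first matching row (the table and column names are just the ones from the example above):

```sql
-- Existence check: the engine can stop at the first matching row
SELECT EXISTS (SELECT 1 FROM foo WHERE x = 4);

-- Counting scans every matching row before the comparison
SELECT COUNT(*) > 0 FROM foo WHERE x = 4;
```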
So are ORMs helping us learn better by avoiding these mistakes?
Standups should not
Try to resolve an issue live
Don’t try to troubleshoot something or hash out details in a standup. Use the standup to report it, then grab someone after the meeting. Otherwise you are wasting everyone’s time. This one drives me nuts, as it seems to happen in my standups all the time.
Include a measure of how productive you are.
If you need that measure, there are burndowns. Also, it’s generally a good idea not to include any unnecessary management, as that tends to make things devolve into ‘make yesterday sound productive’, as @pab said.
Be longer than 1 minute per person.
You want to report your current position and where you are going. If it takes you more than 60 seconds to report this, you are probably running into the prior two bullet points.
Be longer than 10 minutes in total.
You don’t want to give time for devs to glaze over. If it’s taking longer, either brevity is suffering from loquacious individuals, or the team may be getting too big.
Happen prior to caffeination, adjacent to lunch (unless the team eats together), or near EOD.
Standups should facilitate follow-up communication between members. You want people to be alert enough to help, not at a time when they’re trying to wrap something up because they have something else to do.
Yup. I think you hit the nail on the head. I did standups years ago as a project lead. In the first 5–10 minutes of work: 1) what are you doing, 2) any foreseeable pain points, 3) connect with the peers your tasks require you to coordinate with.
Done. Everyone thought they were extremely productive (to my knowledge).
Some tools that we use to achieve these goals:
It is always OK, and preferable, for someone to say “can we/you continue this discussion after the daily?”. Not everything that is currently interesting and relevant to you is so to everyone. It is very easy to forget this when you get excited!
Keep everything short: it is OK if there is nothing peculiar happening or you don’t need input/help!
Always briefly go through every task that has been worked on since the last daily, from the end of the process pipeline to the beginning (in our current case: tasks moved to production -> tasks moved to ready -> tasks moved to review -> tasks moved to in progress -> stories taken to in progress -> stories groomed into the backlog). There are a couple of reasons for this “backwards” order. First, it gives a personal productivity measure (I deployed/did stuff that needs review/etc.). Secondly, it creates a natural pull for people to review and take new tasks/stories to work on, which removes an insane amount of that fruitless “is this the next thing, or this, or this” kind of conversation. Having a physical kanban wall makes this really easy, by the way.
The first couple of items at the tip of the backlog are in strict priority order, so when the previous story is done, one just takes the next one to work on. No need to converse about this during the daily.
Pick a time when the daily starts and be very strict about it. Making others wait is rude; no one is that important.
Currently our team is 16 people; our dailies take 5 to 15 minutes, depending on how much churn there has been.
FYI: the first known recorded standup, from the highly productive Borland Quattro Pro team, was an hour long. Standups should be mini planning meetings, not status meetings.
I own a Synology NAS.
What is the root cause of this vulnerability? The fix looks strange: it filters “bad” requests instead of fixing the vulnerable parts of the code. Maybe it’s an urgent fix made before actually fixing the underlying bugs, or it’s done so as not to disclose the actual bugs too early?
I don’t know much about the practice of fixing urgent vulnerabilities in popular projects – is it common to add fixes like this so as not to disclose the root cause early?
In Drupal, a commonly used coding convention throughout the API is to pass information and state around in arrays – notably for page rendering, but also for form processing and state.
Array keys are used to denote either data or chains of functions to call to pre/post-process the data packaged in the array. Special keys starting with the # symbol denote internal-use API keys, but any programmer is free to add/modify #keys as needed for their own business logic.
Various subsystems of Drupal take (user) input and pack it into these arrays … It appears that care was not taken to ensure malicious #key values are not added to the arrays – possibly allowing RCE.
Back to a hobby project I started long ago and restarted again recently: a time series datastore in Go using gRPC.
So far I have it up and running with a library I wrote previously for a fast ordered datatype. I watched a presentation about best practices for gRPC, since the docs and examples seemed so thin, and wound up finding out that yes, I was indeed doing it wrong. I rewrote my .proto and simplified my error handling late last week, and switched over to the delightful logrus for logging.
This week, when time allows, I will actually port over the TSDB in-memory format from Gorilla and add more structured input and aggregation queries for output. Next milestone after that is adding continuous queries & downsampling. I hope to have all that done this month if the weather stays crappy.
Ah Logrus! Slightly OT, but why did you opt for logrus over zap?
Logrus (sorry for the bad link in my post btw) is older and popular, so it’s a great starting point for most projects that need leveled logging. It is pluggable as well, so it’s easy for me to add a configuration item later that will change the log output for my app to logstash, fluentd, JSON, etc without any extra work for me. It also defaults to a nice format when a TTY is detected, so debugging with colored levels and pretty offset times is another win for me.
Zap is great; it’s very fast and I would consider it for any application that wants to output structured data quickly (not just logs!). As always, though, there’s a tradeoff in flexibility for that speed. From the docs:
The zap package itself is a relatively thin wrapper around the interfaces in go.uber.org/zap/zapcore. Extending zap to support a new encoding (e.g., BSON), a new log sink (e.g., Kafka), or something more exotic (perhaps an exception aggregation service, like Sentry or Rollbar) typically requires implementing the zapcore.Encoder, zapcore.WriteSyncer, or zapcore.Core interfaces.
That tipped the balance for me; I wanted to support a few common format options and Logrus makes that super simple. My app doesn’t write structured data often enough to warrant that ease-of-use tradeoff.
Cheers for your insights, nice to see why you opted for Logrus.
I mainly asked as I’ve been burnt by logrus before and managed to subsequently piss off half the Go community whilst doing so. Thanks!
EDIT: However it was not Logrus’ fault - but mostly Go’s lack of successful package management.
I’ve been on the pissing-off-whole-communities side before – hey, at least it helps instill change sometimes.
I am working on CI based releases.
… I haven’t had to rollback any releases yet … So yay! (In 5+ years)
Prod has Grafana graphs and internal audit reports of the live system.