I've just had my first look at the logging module. The "Warning" immediately caught my eye: "The global list of handlers is a thread var, this means that the handlers must be re-added in each thread."
If I'm writing a "library" meant to be used by others (eventually) that creates its own threads (I've already got several of those), then how is the library meant to initialise the logging in those threads according to the user's wishes? Do I have to add an "initLogging()" proc pointer as a parameter to each library "setup()" proc?
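A hypothetical sketch of that callback idea, in Java for concreteness (the `setup`/`initLogging` names are mine, not a real API): the library runs the caller-supplied callback at the start of every thread it creates, so per-thread logging state gets re-registered.

```java
import java.util.concurrent.atomic.AtomicReference;

public class LibSetupDemo {
    // Hypothetical library entry point: it spawns its own worker thread
    // and runs the caller-supplied initLogging callback first, so the
    // caller decides how logging is set up in threads it never sees.
    static Thread setup(Runnable initLogging, Runnable work) {
        return new Thread(() -> {
            initLogging.run();  // e.g. re-add handlers, in Nim terms
            work.run();
        });
    }

    public static void main(String[] args) throws Exception {
        AtomicReference<String> trace = new AtomicReference<>("");
        Thread t = setup(
            () -> trace.updateAndGet(s -> s + "logging initialised; "),
            () -> trace.updateAndGet(s -> s + "work done"));
        t.start();
        t.join();
        System.out.println(trace.get());  // logging initialised; work done
    }
}
```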
Sounds like a plan.
But libraries which have logging support are suspect to me. I have never enabled "logging support" in C#, Java, C++, or Nim's stdlib... And most libraries do not create threads either; they leave that to the application developer, for good reasons.
I think it boils down to me being used to using "binary" libraries in Java. When something goes wrong there, you can't really tell what happened without logging (unless you can reproduce it in a debugger). OTOH, Nim "libraries" are (AFAIK) mostly available in source form, so one can easily add an "echo()" where appropriate, and so I see less need for built-in logging in Nim.
The other reason I was considering adding logging to my "libraries" is that bugs in multi-threaded apps are generally hard to track down, and given the limited amount of time I have to work on them, they are likely to be buggy, so it would be useful to just crank the logging up to "debug" when needed.
I think a little utility type that works like a Java ThreadLocal, taking a proc pointer/lambda as a parameter and doing the lazy init for you, would be a nice addition to the standard library. I'll try to prototype that one day.
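For reference, Java itself already ships exactly this utility: `ThreadLocal.withInitial` takes a supplier and runs it lazily, once per thread, on the first `get()` in that thread. A minimal sketch of the behaviour a Nim equivalent would mimic:

```java
public class ThreadLocalDemo {
    // ThreadLocal.withInitial runs the supplier lazily, once per thread,
    // so every thread sees its own independently initialised copy.
    static final ThreadLocal<StringBuilder> buf =
        ThreadLocal.withInitial(() -> new StringBuilder("fresh"));

    public static void main(String[] args) throws Exception {
        buf.get().append("-main");                 // mutate main thread's copy
        StringBuilder[] other = new StringBuilder[1];
        Thread t = new Thread(() -> other[0] = buf.get());
        t.start();
        t.join();
        System.out.println(buf.get());   // fresh-main
        System.out.println(other[0]);    // fresh  (independent per-thread copy)
    }
}
```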
Logging in libraries is pretty common across different languages. It's not invasive as long as libraries do not explicitly configure loggers to create files and so on. The application is at the root of a hierarchy of loggers and can set verbosity and backends for all imported libraries.
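To make the hierarchy point concrete, here is a JDK-only sketch using `java.util.logging` (chosen for illustration, not tied to any particular Nim library): the library only emits records through a named logger, and the application, at the root of the hierarchy, raises verbosity for the whole subtree without touching library code.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class HierarchyDemo {
    // The "library" side: it emits records via a named logger and never
    // installs handlers or opens files itself.
    static final Logger libLog = Logger.getLogger("mylib.core");

    public static void main(String[] args) {
        // By default the root logger is at INFO, so the library's
        // debug-level (FINE) records are suppressed.
        System.out.println(libLog.isLoggable(Level.FINE));  // false

        // The application raises verbosity for the whole "mylib" subtree.
        Logger libParent = Logger.getLogger("mylib");  // keep a strong ref
        libParent.setLevel(Level.FINE);
        System.out.println(libLog.isLoggable(Level.FINE));  // true
    }
}
```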
(shameless plug: I'm maintaining https://github.com/FedericoCeratto/nim-morelogging )
Unfortunately, as common as this is, it is also an endless source of frustration, not least because each library chooses a different logging library, leading to a nightmare of integrating all of them. I still struggle every time I have to decide whether to use log4j-over-slf4j or slf4j-over-log4j, or the log4j2 variants, or any other combination.
Let us witness the dire situation of logging in the JVM and avoid repeating the same mistakes.
@Araq As "the guy who receives all logged errors as email" in my "day job" project, I must point out that your assumption is wrong: "... the process dies ..." is simply not the case for us (and hopefully for many other Java apps; you must get something for the cost of that gigantic heap requirement and stop-the-world GC). That might be one of the greatest strengths of Java: it only dies if you want it to. When an exception occurs, and I see literally thousands of occurrences every day (after merging repeated exceptions from the same client for the same operation), it is logged and, depending on the code location, some data might be thrown away, but the process just carries on (the exception to the rule being "corrupt Web Start caches"). This applies in particular to OOM exceptions. I spent ages making both our clients and servers "OOM safe". Most of the time, when one client connection to a server causes an OOM, the other clients don't even notice, except that the request might take longer due to a possible retry.
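A minimal sketch of that per-connection "OOM safe" pattern, with the OOM simulated by an explicit `throw` (a real server would also release buffers and log the failure through its normal channel): the `OutOfMemoryError` is caught at the request boundary, that request's data is dropped, and the process keeps serving other clients.

```java
public class OomBoundaryDemo {
    // Catch OutOfMemoryError at the per-request boundary so one failing
    // request does not take down the whole process.
    static String handleRequest(Runnable work) {
        try {
            work.run();
            return "ok";
        } catch (OutOfMemoryError e) {
            // In a real server: release references to the request's data,
            // log the failure, and possibly signal the client to retry.
            return "dropped";
        }
    }

    public static void main(String[] args) {
        System.out.println(handleRequest(() -> {}));  // ok
        System.out.println(handleRequest(() -> {
            throw new OutOfMemoryError("simulated");
        }));                                          // dropped
    }
}
```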
When I see a new exception somewhere, I get the hotline to call the clients. About half of the time, the clients were not even aware that an exception had occurred. That is in large part due to my personal contribution to the project; the corporate environment cares mostly about "client-visible" errors, which is why dev time for deep refactoring is rarely granted. Of course, it could be different at other companies.
Considering logging to a DB, I'm kind of ambivalent. The "unspecified text format" is a PITA (I'm also "the guy who has to grep errors in the logs"). But we log exclusively to local drives, mainly because local drives don't randomly get overloaded or crash, as some remote DB might. Also, we never log so much that it becomes a bottleneck, which could happen when logging to a (presumably SQL) DB. (I'll eat a broom the day we are allowed to use Kafka ...)