To me, hot loading is a game changer. By that I mean the ability to compile and load code changes at runtime without restarting the application. Especially in game development, where rapid prototyping is key, it is an incredibly useful feature to have. I'd go as far as to say that any modern language that wants to be viable should be designed with hot loading in mind.
The implementation I know of is to compile the majority of your project into a dynlib, while the main binary contains little more than the main loop and some code that checks whether the lib file has been updated; if it has, it reloads the lib and fetches the proc addresses again.
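A minimal sketch of that structure, using Nim's dynlib module; the file name "lib.dll" and the exported symbol "updateGame" are made-up placeholders, not taken from my actual project:

```nim
import dynlib, os, times

type UpdateProc = proc (): bool {.cdecl.}

var
  lib: LibHandle = nil
  update: UpdateProc
  lastWrite: Time

while true:
  let stamp = getLastModificationTime("lib.dll")
  if lib == nil or lastWrite < stamp:
    if lib != nil:
      unloadLib(lib)             # drop the old version ...
    lib = loadLib("lib.dll")     # ... and pull in the fresh one
    doAssert lib != nil, "failed to load lib.dll"
    update = cast[UpdateProc](lib.symAddr("updateGame"))
    doAssert update != nil, "updateGame not exported?"
    lastWrite = stamp
  if not update():               # run one tick of the hot-loaded code
    break
  sleep(16)
```

(On Windows a loaded DLL is locked against overwriting, so real setups typically copy the freshly built lib to a temporary name before loading it.)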
In Nim, this works fine at first, but it gets tricky once you want to do heap allocations (e.g. new, newSeq, ...) within your dynamically loaded code. Since we probably want those allocations to persist when the lib is reloaded, we store the refs in a state object that is created in the main binary and then continuously passed to the loaded code. This doesn't work at all unless you link both the main binary and the lib against nimrtl. Unfortunately, that removes support for threads and causes stack traces to be incomplete (not available in almost all cases).
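To illustrate the state-passing idea, here's a rough sketch, assuming both binaries are built with -d:useNimRtl; GameState and the file/proc names are illustrative, not my real code:

```nim
# state.nim -- imported by both the main binary and the lib
type GameState* = ref object
  frame*: int
  names*: seq[string]

# lib.nim -- built with: nim c --app:lib -d:useNimRtl lib.nim
proc updateGame*(state: GameState): bool {.cdecl, exportc, dynlib.} =
  inc state.frame
  state.names.add("entity " & $state.frame)  # heap allocation in lib code
  result = state.frame < 1000

# main.nim -- built with: nim c -d:useNimRtl main.nim
# The state is allocated here once and survives every reload; the lib
# only ever sees it as a parameter:
#   var state = GameState(names: @[])
#   ...
#   discard update(state)
```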
While these are pretty hefty losses (the missing stack traces hit especially hard), I went ahead and have been working with this setup for a couple of months now. I can toggle hot loading on/off at compile time, so when I'm not sure what's going on and need a stack trace, I can just make a normal build that imports the code the usual way instead of loading it from a lib. Then I get nice stack traces again.
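Roughly how such a compile-time toggle can look; the -d:hotload define and the module/proc names are made up for this sketch:

```nim
when defined(hotload):
  import dynlib
  # hot-loading build: resolve the proc from the dll
  let libHandle = loadLib("lib.dll")
  let update = cast[proc (): bool {.cdecl.}](libHandle.symAddr("updateGame"))
else:
  # normal build: compiled in statically, full stack traces work again
  import lib
  let update = updateGame
```

The hot-loading variant is then selected with `nim c -d:hotload main.nim`, and a plain `nim c main.nim` gives the debuggable build.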
Now sadly, there's one more problem that makes it increasingly difficult to work with hot loading. The allocations appear to be somehow "unstable", even when linking against nimrtl. I'm not sure what causes this, as I know nothing about the inner workings of the GC, but I observe that when I insert or remove a chunk of the code that goes into the lib, the application reloads the lib and then crashes with SIGSEGV: Illegal storage access. (Attempt to read from nil?) upon operations like seq.del or seq.setLen.
I've also observed that whether it crashes seems to depend on how much code I modified. With just some single-line edits, it reloads and keeps going most of the time. When I change a few more things, a crash is much more likely.
One theory is that when I call unloadLib, any allocations associated with that lib are marked as garbage by the GC even though they're still referenced elsewhere.
So I wonder: has anyone else toyed around with hot loading and hit similar barriers? Does anyone have an insight to share as to why this doesn't work the way I want it to?
Btw if there's interest, I can share a little test case that contains a minimal version of my hot loading implementation which can be used to reproduce what I described above.
Please share the results should you find the time to reproduce it on your machine :)
I get that SIGSEGV instantly; I exit via Ctrl+C after "version 1" has been printed once. I get that immediate SIGSEGV even for empty main.nim and lib.nim.
I don't know if it's related, but my DLL dependencies tool crashes when opening lib.dll (which should just use nimrtl), and only when MinGW's libgcc_s_dw2-1.dll (on which lib.dll depends) is in the tool's PATH.
Hmm, have you put nimrtl.dll next to lib.dll and main.exe?
Have you made sure that all binaries are compiled for the same architecture, i.e. all 32-bit or all 64-bit, but not a mix?
I am using MinGW-w64 and have now added a small nim.cfg to the repo that makes sure the compiler produces 64-bit binaries.
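For reference, such a nim.cfg could look something like this; I'm not reproducing the repo's actual file here, so the exact flags may differ:

```
# nim.cfg (illustrative)
cpu = "amd64"                  # make Nim target x86-64
gcc.options.always = "-m64"    # tell MinGW-w64's gcc to emit 64-bit code
```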
The Nim version I am using is 0.17.0.
I wrote this article a while back:
Hey def_pri_pub, yes, I know your article. It's been a great help to get started, thanks for that!
When I first tried to implement hot loading in Nim a couple of months ago, I based it on your article; however, I decided not to use threading and not to call the compiler from within the script. I then observed seemingly random crashes at heap allocations (using new) performed in the code of the lib file. I learned that these problems are caused by the runner binary and the lib using individual instances of the GC, and that I had to link both against nimrtl for them to share one instance (at least that's my understanding of it). That did indeed stabilize the allocations, but instead, the issue described in my initial posting popped up: crashes triggered by re-/deallocations once the lib instance has been swapped out.
With your implementation, I'd be back at the unstable allocations again, since you don't link against nimrtl. And if you did, you couldn't use threading any longer, since nimrtl still can't handle that, afaik.
If you'd like to confirm the issue, I uploaded the files I tested with:
There's a loop in the lib code that allocates stuff, and for me it always crashes on the 16th iteration. It may be different on other systems, though.
Sorry for the belated reply.
I'm a little confused about what you're trying to achieve here. Can you explain in a few words what you want? Is it something about getting dynamically allocated memory to play nice with hot-loaded code?
No worries, any kind of reply is greatly appreciated, as it looks like the number of people interested in hot loading their Nim code is rather limited (while to me it's like THE feature).
After you implemented the hot loading your article is based on, did you use it in a project and battle-test it a bit? I ask because the moment you do allocations (e.g. system.add, tables.`=`, ...) in the hot-loaded code, you should have experienced occasional SIGSEGV crashes. Apparently this is caused by the runner binary and the lib not sharing the same instance of the GC, as explained here.
And while this can be resolved by linking both binaries against nimrtl.dll, other crashes still occur for me, triggered by re-/deallocations (e.g. system.del for seqs) and seemingly related to how much of the hot-loaded code changed.