Modularising the PRC

So I’ve kind of ended up taking over the PRC Compendium, I think. A sentiment that gets expressed a lot is “make it more modular”. Does anyone know how the heck you would go about doing this in nwscript? Here’s a list of the ways I can think of to do any kind of modularisation/polymorphism:

#include - due to prototypes and implementations being in the same file (unlike C with .c and .h files) there’s no runtime linking. This means it’s effectively compile-time copy and paste. Useful for breaking up files into smaller files and creating shared code, but useless for modularisation (the dependency is still there).

ExecuteScript - can do runtime dynamic polymorphism (exactly what we want) but my reading suggests that it has significant overhead, has sharp limits on stack size (cannot be nested more than 8 calls deep) and child scripts count towards parent script execution limit. Also cannot return values without messing around with Set/GetLocalInt.

DelayScript - as above, but doesn’t count towards parent execution limit and the child script doesn’t have access to a few things. Can’t be used for a serial process.

Events - with UserEvents, these would be ideal. The PRC wouldn’t need any knowledge of its modules. However my understanding is that they’re just ExecuteScript under the hood and objects can’t have more than one handler for any given event (and all UserEvents count as one Event). Also, the API is really clunky.

If statements - Spaghetti. Still requires all code to be present when compiling. Will make things even buggier (every if means you’ve just doubled the paths through the code). Bad, bad, bad.


In summary: they all either suck or have some horrible drawback. Is there anything better I can use to do basic runtime polymorphism? The absolute ideal would be to be able to invert control so that modules could inject themselves into a core that has no knowledge of them.

Were this not nwscript, I would modularise it by implementing some kind of event/callback system. If it were C/C++, I could do this with function pointers and structs; in an object-oriented language I could use objects that implement a callback interface; and in a functional language I could just pass the PRC some functions at load time that it can call later. Unfortunately, nwscript is a procedural language with no pointers, no lambda functions and no goto. It doesn’t even have arrays - as far as I can tell it only has forks.

P.S: What I’ve written above may be wrong. I’m writing based on my understanding, and I don’t have the actual experience to back up everything I’m saying here. I’m an experienced programmer but not a very experienced NWN scripter.

I think ExecuteScript is the way to go. The limit of 8 deep is correct, but for what you are doing I think that should be fine. Modularity would mean the modules are not all called one from the other, but by a master script which calls all of the registered modules. Or just calls all of them, and the ones that aren’t there would be no-ops.
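A minimal sketch of that master-script pattern (all variable and script names here are my own invention, not anything actually in the PRC): modules register their script names on the module object at load time, and the core loops over the registry with ExecuteScript. Conveniently, calling ExecuteScript on a script that doesn’t exist is a silent no-op.

```
// Hypothetical dispatcher: the core knows nothing about the modules,
// only the naming convention of the registry it iterates over.
void RunRegisteredModules(string sEvent)
{
    object oMod = GetModule();
    int nCount = GetLocalInt(oMod, "PRC_MODS_" + sEvent + "_COUNT");
    int i;
    for (i = 0; i < nCount; i++)
    {
        string sScript = GetLocalString(oMod, "PRC_MODS_" + sEvent + "_" + IntToString(i));
        ExecuteScript(sScript, OBJECT_SELF); // missing script == no-op
    }
}

// A module injects itself at load time, e.g. from OnModuleLoad:
void RegisterModule(string sEvent, string sScript)
{
    object oMod = GetModule();
    int nCount = GetLocalInt(oMod, "PRC_MODS_" + sEvent + "_COUNT");
    SetLocalString(oMod, "PRC_MODS_" + sEvent + "_" + IntToString(nCount), sScript);
    SetLocalInt(oMod, "PRC_MODS_" + sEvent + "_COUNT", nCount + 1);
}
```

This gets you the inversion of control asked about above: the core only depends on the registry convention, never on the modules themselves.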

The overhead is not that much different than calling a function. Yes, you have to use local variables to pass arguments and return values. Not that hard…
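For what it’s worth, the calling convention is only a couple of lines on each side. A sketch, with made-up variable names:

```
// Caller: set an "argument", run the child, read the "return value".
void main()
{
    SetLocalInt(OBJECT_SELF, "PRC_ARG_SPELLID", 42);
    ExecuteScript("prc_child", OBJECT_SELF);          // runs prc_child's main()
    int nResult = GetLocalInt(OBJECT_SELF, "PRC_RETVAL");
    DeleteLocalInt(OBJECT_SELF, "PRC_RETVAL");        // tidy up after reading
}
```

```
// prc_child.nss - reads the argument, writes the return value back.
void main()
{
    int nArg = GetLocalInt(OBJECT_SELF, "PRC_ARG_SPELLID");
    SetLocalInt(OBJECT_SELF, "PRC_RETVAL", nArg * 2);
}
```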

If the scripts are independent enough you can delay them 0.0 to break the TMI count. It depends to some degree on the level you modularize. Subsystem feature not individual function for example. Case by case basis, probably. EE (and 1.69 with nwnx) allows for increased instruction count.
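The 0.0 delay trick looks something like this (the script name is hypothetical). Because the delayed action runs later in its own execution context, it gets a fresh instruction budget:

```
// Defer the child script to "now, but in a separate context" so its
// instructions don't count against this script's TMI limit. Note this
// only works for independent work - you can't read a result back in
// the same run, per the serial-process caveat above.
DelayCommand(0.0, ExecuteScript("prc_heavy_work", OBJECT_SELF));
```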

Just my 2 cents.

Also I think you will find the 2das (and tlk file…) the hardest part to modularize in PRC (but I only looked at it for a little while a few years back).


Huh. The information I’ve been given makes indirection sound stupid-expensive in nwscript. Like, milliseconds per function call expensive. I’d been working on the assumption that the script compiler just inlines #included function calls to get around this.

I really need to test this stuff for myself, it seems. Thanks for letting me know.

Probably won’t be able to use this much apart from at very high-level. One of the things that really offends me about the PRC code is how every calculation you’d ever want to extend is inside a few whopping big #included functions like GetSneakAttackDice or GetCasterLevel. Want to make a change? You gotta find your way through this huge pile of spaghetti and add a little bit more. Yuck.

I’m well aware of that. I’ve made some suggestions about breaking up the 2da files via namespacing on the Beamdog forums (e.g. instead of feat 12345 you’d have feat “prc” 12345). Someone else suggested support for UUIDs. I don’t get much of a sense there’s much interest in either of these suggestions on their forums, much to my chagrin.

For now, I’m just ignoring them - I can’t do anything about them. I’m going to work on the assumption this problem doesn’t exist; otherwise it’ll become an excuse not to break things up.

The only useful thing I can really do to make the 2da situation better for now is to make it easier for people to fork the PRC and modify the constants at the script level. I’m actively working on that while updating its tooling: build scripts are done (Windows only at present because I can’t be bothered to install on my Linux partition), the module patcher is more-or-less done and I’ve still got to update the character editor.

It’s not completely free, no. It works really well on a subsystem basis. Module event scripts can be a series of ExecuteScript calls for each component. It might affect how you modularize PRC - poison system, spellbook system, etc., rather than this class or that class. But if you look at your options, this is basically the only one that’s viable. The only other one is to compile it all together.

SquattingMonk had a base module setup that did this thing on a module-wide basis. Core something… I don’t remember. Hopefully, he’ll chime in. Then the memetic AI scripts had a pretty cool mechanism for doing essentially loadable libraries. But underneath this they were basically just doing executescript calls.

People have done a lot of cool things within the limitations.


The Core Framework is a mashup of HCR2, the Common Scripting Framework, and Memetic AI. I tore my old implementation apart and am putting it all back together again. There’s an alpha here. Not much documentation yet outside of the comments. Should have it all fixed up within the month after my classes end.

Yep. They made it more complicated than it needed to be, though. I’ve simplified their method and pulled it into my system.


I think it’s on the order of (half) a microsecond for a function call inside nwscript. Running a script has some >200µs startup delay. I think ExecuteScript() from within a script has the same issue, but it could be lower since some of the setup is already there.

So, if you have say, 4 different things that need to happen in the heartbeat, that’s 4+1 scripts executed, which is about a millisecond of staging cost. Could be a bit more, like 2ms or 3ms, but still once per 6s isn’t a huge deal.
I’m certain you could save much more just by cleaning up and optimizing the scripts. Don’t do performance at the expense of readability and maintainability…


Just a note about classes: if I recall, the PRC already sets a flag in the 2da for each new prestige class that allows it to be disabled by adding a line of code to the OnModuleLoad event for each class the builder wants to disable. Basically, it sets a variable on the module that locks out that prestige class.
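Assuming that mechanism, the builder-facing side would look roughly like this (the variable name is illustrative, not the PRC’s actual constant):

```
// OnModuleLoad: lock out a prestige class by flagging the module.
// The PRC's class-availability check reads this flag later.
void main()
{
    SetLocalInt(GetModule(), "PRC_DISABLE_CLASS_ARCHMAGE", TRUE);
}
```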

Yup, “the root of all evil is premature optimisation” and all that.

I’m concerned about this stuff because spending ages and ages modularising the PRC just to find out the fundamental approach makes it too expensive to actually use is depressing to think about. To avoid that, I need to understand the performance characteristics of the tools I have available and work with them rather than work against them. If I can’t do it in a way that is both modular and has an acceptable performance then there’s not really much point in bothering.

You know, thinking about this some more, it does occur to me that a lot of the calculations the PRC does really could be cached pretty easily. It really doesn’t have to work out your caster level EVERY time you cast a spell. I wouldn’t have to care so much about the performance hit of the architecture if the overhead is only paid when someone levels up/down or changes their equipment. It’ll probably be worth having some way of declaring a particular cache as dirty as well (so you could do things like add Sneak Attack damage from spells like Hunter’s Eye).
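That caching idea might look something like this sketch; ComputeCasterLevel stands in for the existing expensive calculation, and all the names here are hypothetical:

```
// Return the cached caster level, recomputing only when the cache has
// been invalidated (on level up/down, equipment change, or by a spell
// like Hunter's Eye calling MarkCasterLevelDirty).
int GetCachedCasterLevel(object oPC)
{
    if (!GetLocalInt(oPC, "PRC_CL_VALID"))
    {
        SetLocalInt(oPC, "PRC_CL_CACHE", ComputeCasterLevel(oPC));
        SetLocalInt(oPC, "PRC_CL_VALID", TRUE);
    }
    return GetLocalInt(oPC, "PRC_CL_CACHE");
}

// Declare the cache dirty so the next lookup recomputes it.
void MarkCasterLevelDirty(object oPC)
{
    DeleteLocalInt(oPC, "PRC_CL_VALID");
}
```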

It’s not viable for effects that only happen under certain conditions (like hitting a particular type of enemy), but even then I could just handle common conditions inside the PRC code itself (so, say, you give it a struct that says “ExecuteScript this script only when you hit an Undead”). That way the overhead of indirection is only ever paid when a condition is true. The PRC already has its hooks into just about every part of the game - adding a few extra if statements there isn’t going to kill things.
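The conditional-hook idea, sketched with hypothetical names - the ExecuteScript cost is only paid once the cheap condition check passes:

```
// Module registration (e.g. from OnModuleLoad):
//   SetLocalString(GetModule(), "PRC_ONHIT_UNDEAD", "mymod_hit_undead");

// Inside the PRC's existing OnHit handling:
void FireOnHitUndeadHook(object oAttacker, object oTarget)
{
    if (GetRacialType(oTarget) != RACIAL_TYPE_UNDEAD)
        return; // condition false: just one cheap if, no indirection

    string sScript = GetLocalString(GetModule(), "PRC_ONHIT_UNDEAD");
    if (sScript != "")
        ExecuteScript(sScript, oAttacker);
}
```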

God I wish nwscript had arrays. I could make that sort of thing so clean and simple if I could receive structs and store them in an array. Guess it’ll have to be Get/SetLocalArray* of scalars for this sort of stuff.

The direction I’d like to take the PRC in is to make it more of a framework for prcs/feats/etc and have all the stuff already in as addon haks. You know, have a core that doesn’t really do anything much besides facilitate all the other modules.

It’s probably never going to end up that way (2das and tlks kill the approach stone dead) but it just makes sense from a system design stand point. It’s so much easier to have clean, elegant, bugfree code when you have less of it (and everything doesn’t depend on everything else). Who knows, maybe one day Beamdog will fix up 2das/tlks to be extensible somehow.


If you can have an array of scalars but not an array of structs, you can emulate the structs by using parallel arrays, several arrays that all use the same index, which is how it was done before the idea of a struct came around.
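In nwscript terms, a parallel-array emulation of a struct might look like this sketch (all names made up): the int field and the string field live in two pseudo-arrays sharing an index.

```
// "struct { int nSpellId; string sScript; } hooks[nIndex]" becomes two
// parallel sets of local variables keyed by the same index.
void SetHook(object oStore, int nIndex, int nSpellId, string sScript)
{
    SetLocalInt(oStore,    "HOOK_SPELLID_" + IntToString(nIndex), nSpellId);
    SetLocalString(oStore, "HOOK_SCRIPT_"  + IntToString(nIndex), sScript);
}

int GetHookSpellId(object oStore, int nIndex)
{
    return GetLocalInt(oStore, "HOOK_SPELLID_" + IntToString(nIndex));
}

string GetHookScript(object oStore, int nIndex)
{
    return GetLocalString(oStore, "HOOK_SCRIPT_" + IntToString(nIndex));
}
```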

As far as I’m aware, you don’t really even get arrays of scalars. The Get/SetLocalArray* functions just emulate arrays by taking advantage of how local vars are basically a hash table: a call like SetLocalArrayInt(oObject, “foo”, 1, 42) gets translated into SetLocalInt(oObject, “foo1”, 42). It’s like how PHP does arrays, only it doesn’t store any ordering information and you get no constructs to help with iteration like foreach.
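i.e. the whole “array” layer is just name mangling, something like this (a sketch, not the PRC’s exact signatures):

```
// An "array element" is just a local variable whose name has the index
// appended. No bounds, no ordering, no iteration helpers.
void SetLocalArrayInt(object oObject, string sName, int nIndex, int nValue)
{
    SetLocalInt(oObject, sName + IntToString(nIndex), nValue);
}

int GetLocalArrayInt(object oObject, string sName, int nIndex)
{
    return GetLocalInt(oObject, sName + IntToString(nIndex));
}
```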

But yeah, you’re right. It’s just having arrays would save having to write a bunch of ugly glue code to serialise/deserialise your structs into local vars (also would probably be a very minor performance saving).

I don’t know anything about scripting for NWN, but I do have a BS in CS from 1989, and even after all these years, I retain some basic knowledge of coding. Would it be possible and useful to use pointers in your struct to make a linked list?

There’s no pointers in NWN, unfortunately.

I have a pseudoarray implementation in sm-utils if you’d like to use it. It maintains a count of the list so you can iterate over it with a for loop.

@Empyre65 Given how C “inspired” nwscript is, if we had pointers that would also imply function pointers. Function pointers would let me go to town with some Object Oriented Programming (function pointer + struct = method) and Functional Programming. If I had either of those, I would be having fun with lambdas and OOP design patterns instead of asking questions. :wink:

@squattingmonk Thanks for the offer but it seems there’s already an array implementation in the PRC code. I dunno how I’ve missed it so far - it must only be used in the guts of the dynamic conversation/newspellbook stuff - I haven’t really seen any mention of it in any of the spells, feats, classes etc. scripts.

Thanks everyone for your replies; it’s really given me a vague idea of how to go about things. I’ve still got to fix up the tooling before I can start playing with this, mind.

Next question: does anyone have any advice/handy tips on testing nwscripts? My current testing method is “change a thing, compile the PRC via its make script, play the game a bit to see what breaks”. It’s not exactly the most rigorous testing methodology, and the PRC already has a reputation for being a bit buggy.

FWIW I would use a tool like the Software Ideas Modeler (it’s free for non-commercial use) to block diagram what calls what so I could then start to untangle the PRC with a view to making it more modular. If you were lucky, doing so might help to find some of the bugs I’ve heard about in the PRC as well.


maybe this can help you:

this is a pseudo list that:

  • knows how many members it has
  • knows what is previous and next member
  • doesn’t need renumbering after removing a member
  • can add member anywhere into the list, head, tail, middle
  • can store classic pseudo array inside the member

and these limitations:

  • a limited number of members/elements - not suitable for caching 2das
  • TMI (this is a shared limitation of all pseudo-array solutions)
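For the curious, a list like that can be sketched as nodes stored entirely in local variables (all names here are my own, not the actual implementation):

```
// Doubly-linked pseudo-list on oStore: head/tail node ids plus per-node
// prev/next links and a value, all as name-mangled locals. Node ids
// must be non-zero, since 0 is the "no node" sentinel.
void ListPushBack(object oStore, string sList, int nNode, int nValue)
{
    int nTail = GetLocalInt(oStore, sList + "_TAIL");
    if (nTail != 0)
    {
        SetLocalInt(oStore, sList + "_NEXT_" + IntToString(nTail), nNode);
        SetLocalInt(oStore, sList + "_PREV_" + IntToString(nNode), nTail);
    }
    else
        SetLocalInt(oStore, sList + "_HEAD", nNode);

    SetLocalInt(oStore, sList + "_TAIL", nNode);
    SetLocalInt(oStore, sList + "_VAL_" + IntToString(nNode), nValue);
    SetLocalInt(oStore, sList + "_COUNT", GetLocalInt(oStore, sList + "_COUNT") + 1);
}
```

Removal is just relinking the neighbours’ prev/next locals and decrementing the count, which is why no renumbering is needed.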

I must say, that is one very cool hack.

Don’t think it’s actually gonna be too useful for me at present. All I need is iteration so arrays are a perfectly adequate data structure. A list would let me do the whole recursive head/tail iteration which would be neat but… I somehow think NWN wouldn’t like that.

Very, very cool hack though. If I wore a hat, I would tip it to you, sir. :wink:

A sentiment that gets expressed a lot is “make it more modular”. Does anyone know how the heck you would go about doing this in nwscript?

imho one of the key issues w/the prc is exactly this, but conceptually rather than informatically. for example, what do you do if you want to include the prc’s psionics system, but don’t really care for the additional [non-psionic] classes it includes ?  sorry, can’t really do that, as everything’s a must-take. this is due to how they’ve coded for these classes. deep in the bowels of the prc system in some of the core sub- sub- sub-routines whose fundamental functionality requires they be called by everything, you’ll find references to some obscure class only your brother’s girlfriend’s dog would play with, followed by exception coding for that class. and what if i think the class is a silly idea ?  well, tough, i get it anyway – along w/all the exception coding that would be nightmarish to excise if you’ve got an eye toward a really lean implementation.

frankly, i don’t know if there’s a painless way out of that Pit Of Doom. i think one good approach might be the one you’re already considering, i.e., using ExecuteScript for the special-purpose classes. it’d be wonderful to be able to choose conceptual blocks of functionality [‘i want this class, this class, and that class, but no others – and keep your psionics to yourself, tyvm’…] and allow a builder to configure things so only the desired functionality [and therefore rule changes] are included when the module runs. unfortunately, i think the mechanism that @Pstemarie alludes to [iirc] is the same one bioware use, i.e., scriptvar settings in cls_pres_*.2da. this only precludes certain classes from being selected by a player, however it’s all still ‘in there’ in all its eye-gouging messiness.

does anyone have any advice/handy tips on testing nwscripts?

one way that speeds things up for me is to make heavy use of the override directory. another way is using skywing’s standalone compiler nwnnsscomp.exe. make sure that none of the scripts you want to work on are in haks, and that they are in your override directory before you fire up nwn ; scripts must be present when the engine starts up in order for this to work. then whenever you make changes to a script, compiling it into /override will allow the engine to take changes into account immediately – no need to restart your module, just leave it running. :smiley:

oh, and as for inlining bits of code [sorry, i know i read that somewhere above, but can’t find it any more… :sheepish: ], from what i’ve seen, that isn’t the case. if you #include a file w/several routines and use only a few of them, the .ncs still includes all of them. [hope i didn’t misunderstand your question ! ].

Imo modularization is not that big an issue with PRC.

It is not 5gb unpacked monster, it does not contain 40 haks, it has no 16k limit issues and it doesn’t contain content that is partially usable (issue of missing blueprints). I think we all know what I am comparing PRC with…

And to be honest the already inbuilt modularization is why nobody wants to use it for just single class. Say I want Archmage, but I must add all module events etc. etc. while the real archmage stuff is coded in 5 spellscripts and maybe 1 event. But because of the modularity, if you want it without ripping it off completely, you need to use all the abundant scripts. Whatever you do @plok , you won’t fix this. It still won’t be an option for builders who wish only one or two classes.

Imo, to fix PRC and make it more user friendly, it needs to use the newest technologies (nwnx). Or if that’s not an option, then remove everything that needs clunky workarounds. The stuff that needs these workarounds is hardly usable - maybe in singleplayer but not in multiplayer. Also what needs to be done is to completely redo the includes and fix the circular dependency; there must be a normal way to code this stuff? It is all so needlessly complicated.

The only modularization that is required is to split psionic classes from main package (prc_psionics contains only scripts, it should contain textures, creatures, and anything else too, also the hak itself should also work standalone without other 2das, ie. duplicate the 2das into this hak but provide only psionic classes in those 2da, duplicate include files and other scripts shared with main package as well).

Most of my working life has involved trying to tame projects that break in weird and wonderful ways because they’re riddled with dependencies. You change a bit of a program to do with entering calendar entries and then the authentication system breaks. Dependencies are goooood times! :wink:

Trust me, there really isn’t an easy answer. The best way I’ve found to do it is to refactor any code you touch in the process of updating/fixing things. Eventually you stop having to refactor so much and the only bits of crappy code you’re left with are the thing that aren’t buggy and no-one wants to change.

On the subject of testing, the PRC source code has what you suggest only better. I make a change, I run the make script and it will build all the hak files and install them into NWN. This literally takes two seconds unless a script inside the include folder has been changed - when that happens I’ve got to wait 10 minutes while all of the scripts that use it are recompiled. Another compelling reason to modularise everything right there. :wink:

I agree and I’m not planning to either. The goal is to make it easier to understand, extend and test so people can do interesting things with it. It is not to make it so you can just copy and paste a script out of it and the script will somehow magically still work. The PRC is a framework not a series of code snippets.

I do want to modularise the classes as well but I will not be making it so you can just pull out the archmage class and it’ll work without the PRC. The goal of doing this is to make them easier to work with and make it easier for people to make new ones. If there’s just one file that defines a class (via a few structs or something) that’s a heck of a lot easier for people to get their head around than the current state of “lots of random bits of logic spread throughout dozens of unrelated files”.

Modularise. That’s how you cope with big sprawling bundles of mud; you break them down into lots of smaller systems with well-defined ways of communicating with each other. Ideally, they don’t communicate at all or only communicate with a core that’s responsible for orchestrating everything but this isn’t always possible.

All dependencies should be switched from being dependencies on code/implementations to dependencies on abstractions/function signatures. This lets you swap out modules without touching any of the code which uses them. It’s great for automated testing and developing modules independently. People who are into Object Oriented Programming may know this as the Dependency Inversion Principle.

I don’t know enough about the PRC or NWNX to really comment on switching out all the stuff like the newspellbook for NWNX features… but my gut instinct is to not add another dependency - certainly not until I’ve got my feet wet and understand things a bit better. However, if things are modularised better I see no reason why there can’t be NWNX versions of core modules that could be switched in when running the PRC in NWNX. It’s certainly better than a bunch of ugly if/else spaghetti.

On the subject of 2das/tlks, like I’ve said before, I’m completely ignoring them for now. I’m just going to pretend they’re not a problem. They’re a huge problem not just for the PRC but for all mods of NWN since they stop any notion of composing different mods together stone dead. I’ve made some requests on the Beamdog forums to deal with it but they don’t seem to have taken off, which is a shame.

Incidentally - and this is completely off topic - is there anyone here who’s really into their Java and feels like taking updating the character creator to EE off my hands? I’m really, really not enjoying it and I’m starting to worry I might get a bit burned out with it. I’ve gotten as far as getting it into netbeans and it’ll compile into a full working copy (git repository: ) and I’ve had a go at refactoring it a bit but it’s slow going and really unpleasant.

Full disclaimer (with apologies to people involved if they read this). The code is really bad. It’s really inflexible, it’s full of copy and pasting, everything is a global function wrapped in an object, some things throw exceptions, other things call System.exit, everything to do with bif/key files is nothing but magic numbers and the code is just plain ugly to look at.

I honestly don’t understand how the hell it even works with the GOG version of NWN; it should just crash when it can’t find patch.key or xp1patch.key but it doesn’t.
