The wiki is very helpful on the constraints that apply to EE and 1.69 respectively.
Question - CEP 2 has a number of haks which, while well within the 2Gb maximum, are in the ballpark of 500 Mb. We have choices about where we put new content. Does it give anyone a practical problem if, say, a hak grows to 750-1000 Mb?
That decision depends purely on whether you’re making hakpacks for EE only or not, and only relates to the number of files. Hakpacks have always had a 2 GB limit (not sure if that’s 2000 or 2048 MB though).
No, I use haks that are up to 1.5 GB, but past 1.5 I definitely don’t. I think the hard cap is 2 GB.
Yeah, stuff gets unreliable with haks over 1.5 GB in size - random crashes, trouble packing, etc. Mileage varies with different systems.
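If you want a quick sanity check against those thresholds, something like this works. The cut-offs are just the informal numbers from this thread (trouble reported above ~1.5 GB, hard cap around 2 GB), not anything official:

```python
import os

# Thresholds reported in this thread: trouble above ~1.5 GB, hard cap at 2 GB.
# The exact cap (2000 vs 2048 MB) is uncertain, so treat these as assumptions.
WARN_BYTES = int(1.5 * 1024**3)
HARD_CAP_BYTES = 2 * 1024**3

def classify_size(size_bytes):
    """Classify a hak size in bytes as 'ok', 'warn' (>=1.5 GB) or 'over' (>=2 GB)."""
    if size_bytes >= HARD_CAP_BYTES:
        return "over"
    if size_bytes >= WARN_BYTES:
        return "warn"
    return "ok"

def check_hak(path):
    """Classify an actual hak file on disk by its size."""
    return classify_size(os.path.getsize(path))
```

So `check_hak("my_big.hak")` returning "warn" would be the point where, per the reports above, reliability starts to vary by system.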
I just don’t see any advantage of big haks, but a lot of downsides.
Shadoow, the whole point is that having hundreds of tiny hakpacks would be a nightmare to organise/use in a module.
What are the purported downsides you claim? I mean, this is a complete non-comment.
I’ve also not seen any issue with larger hakpacks myself, although not quite got to 2GB yet in any one.
Who is talking about hundreds? I use the classic distribution and separation that CEP1 laid out, which just means there are about 10 haks with content grouped by type. I am not sure why it is a problem to have a separate hak for every tileset - those aren’t modified very often.
I didn’t mention the drawbacks because I thought they were super obvious. Well, if they are not, then I will elaborate on the topic.
- the larger the hak, the more problematic it is to send it over the internet (or even LAN) to an actual server machine
- the larger the hak / the more files in it, the slower it is to work with in nwhak.exe - this was an especially huge issue until recently, as nwhak.exe had a limit on the number of files you could add at once, so adding 15000 files to a hak had to be done in about 8 batches, and you had to separate the content manually beforehand into different folders (or is it still an issue? I am not sure actually)
- large haks are absolutely unsuitable for PWs without nwsync; once even a single file in the hak changes, players must redownload the whole thing, and nobody wants to download a 1 GB hak every time you change it (and please don’t even try arguing that players can use nwhak and update the hak themselves)
- the more content there is in a hak, the harder it is to find specific content when you want to adjust/delete it, and while it is true that module properties - check conflicts lets you find the name of the hak that content is in, opening the toolset and then the module for that is often too slow to bother with, or not an option at all (e.g. when you are working on custom content on a PC without NWN installed)
- tilesets: I am often improving or adding tiles in specific tilesets, and if I had them all in one combo hak it would be a nightmare to deal with; having each tileset in its own hak lets me just unpack the whole hak into a folder and work there, without needing to delete models/textures/other files unrelated to that tileset
- in the case of community packages like CEP, large haks don’t allow users to easily exclude content they don’t want / have no use for.
This is why I always criticized CEP2 for its core haks, which pack content together without any logic or sense.
The only reason for big haks is/was the limit on the number of haks. I never ran into it myself, and it was raised in EE anyway.
Ah, so it’s just personal preference, OK. Nothing like “it crashes” or “it won’t work”, just that at some arbitrary level the hakpack is “harder” to work with in whatever way you want to. I’d hoped it crashed the toolset or something.
I get that classifying and splitting up logically is sound - you won’t hear a complaint from me there, and even going down to the individual tileset level seems fair. However, at some stage you can’t have one hakpack per creature or whatever; there is a middle ground, and I don’t think the size in MB of the hakpack really determines it. Maybe the number of files is more pertinent, but even then, why would you have two hakpacks for, say, a new phenotype when it will fit in one?
If you want to criticise the CEP file organisation, having no external reference points for the files and the hap-hazard naming conventions (if any even exist) is a much bigger fault than the size of the hakpacks - growing from 500 MB to 750 MB isn’t a big deal.
I’d also suggest any persistent world not using nwsync is in for massive pain using any moderate number of hakpacks, regardless of how anything is organised - such is the way of tens of thousands of custom files…! There’s no easy fix there, even if you somehow decide that 100 MB is the random limit you think is “big enough”.
You haven’t read what I wrote. Will keep that in mind for the next time.
I agreed with you on the organisational points, but I can’t get behind it being harder just at some arbitrary level which you don’t even specify hahaha
From a technical standpoint there are no advantages to using small haks. There are 2 factors to consider:
- 2 GB (2048 MB) is the hard cap; after that it just goes bonkers and crashes left and right and the skies turn red
- another hard cap is 32k files (I tested this a few years back, so I can’t actually remember 100% if that’s the right number, but it’s something along those lines). Hitting 32k files behaves the same as hitting 2 GB, even if the hak is only around 800 MB.
There is no reason to keep haks under 1 GB unless you are capped on files. I would say 1.2-1.5 GB is ideal; I have used haks in that range for years and they never caused an issue except when hitting the two caps above.
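Since a hak is just an ERF archive, you can check how close you are to the file cap without opening nwhak at all. This is a sketch based on the BioWare ERF V1.0 header layout, where the entry count is (as far as I know) a 32-bit integer at offset 16 - treat the offsets as an assumption and verify against the format docs before relying on it:

```python
import struct

def hak_entry_count(header: bytes) -> int:
    """Read the EntryCount field from an ERF/HAK header.

    Assumed ERF V1.0 layout: FileType (4 bytes), FileVersion (4 bytes),
    LanguageCount (uint32), LocalizedStringSize (uint32), EntryCount (uint32),
    all little-endian. EntryCount therefore sits at byte offset 16.
    """
    if len(header) < 20:
        raise ValueError("not a valid ERF header")
    return struct.unpack_from("<I", header, 16)[0]
```

So `hak_entry_count(open("my.hak", "rb").read(32))` would tell you how many files are packed, which you can compare against the ~32k cap mentioned above.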
The file cap was raised much higher in an EE patch, as per the wiki page. The 2 GB limit remains, however.
I don’t think sending haks over the internet is an issue for most. Even the most rural users (e.g. me) have an internet connection that moves at ~1-2 Mb/s. As for sending over LAN, my network moves at 100 Mb/s because everything is wired in - more than enough speed to handle transferring haks.
A lot of the difficulties you point out with large haks can be mitigated by using NASHER with a good organizational structure:
- Keep the hak extracted so you don’t have to unpack it every time you want to work with the files in it.
- Keep the source code separated into a folder structure with a top-level folder named for the hak, then the content within separated into folders (e.g. /creatures/beholders/etc.).
- Run NASHER to build your haks. You really don’t need nwhak anymore.
This is the way Project Q is organized, and it makes the workflow 100 times easier than the old method of dumping all the contents of a hak into a jumbled mess of a /temp folder.
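A minimal sketch of that kind of source layout - the folder and hak names here are purely illustrative, not Project Q’s actual tree:

```python
import os

# One top-level folder per hak, content split by type underneath.
# These names are hypothetical examples, not the real Project Q structure.
layout = [
    "q_creatures/creatures/beholders",
    "q_creatures/creatures/dragons",
    "q_tiles_rural/tilesets/rural",
    "q_2das/2da",
]

for path in layout:
    os.makedirs(path, exist_ok=True)
```

A build tool like NASHER can then pack each top-level folder into its own hak, so editing one beholder model never means touching the tileset sources.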
Your last point about community packages is spot on. However, this can be mitigated by excluding the content at the 2da level. Of course, that assumes the user knows how to modify 2das.
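As a rough illustration of excluding content at the 2da level, here’s a sketch that blanks one row by replacing every field after the row number with ****, which the engine treats as “no value”. It assumes a simple whitespace-delimited 2da with the usual three header lines, and it deliberately doesn’t try to preserve column alignment:

```python
def blank_2da_row(text: str, row_index: int) -> str:
    """Replace all fields (except the row number) of one 2da row with ****."""
    lines = text.splitlines()
    # Assumed layout: line 0 is "2DA V2.0", line 1 the default-value line,
    # line 2 the column labels; data rows start at line 3 with their row number.
    for i, line in enumerate(lines[3:], start=3):
        fields = line.split()
        if fields and fields[0] == str(row_index):
            lines[i] = " ".join([fields[0]] + ["****"] * (len(fields) - 1))
            break
    return "\n".join(lines)
```

For example, blanking the row that references an unwanted appearance would hide it from the toolset palette without touching the hak itself.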
Agree with you totally on tilesets. Those should always be in separate haks.
What I mean is that it is just slower. There is a difference between sending a 100 MB hak and sending a 1 GB hak. Of course it depends on how often you do it; I do it pretty often. The LAN argument is probably nonsense - it is a matter of seconds, indeed, and I have no proof that it affects nwserver in any way. Sending files from my Ubuntu server outward is extremely slow for some reason, but since there is no reason to send haks out (other than to back them up), that is not relevant anyway.
I am a thrifty person, so whenever I am adjusting just one file, it pains me to send a large hak containing 5000 files.
Sure, my remarks on nwhak.exe aren’t relevant for anyone using NASHER or nim-tools or xoreos-tools or whatever. Personally I never took a liking to these console apps, so I am still using the old nwhak.exe, and those limitations apply there. If you don’t use it, then the second argument is a non-issue of course.