NIT uses the filename for an automated download operation as well as to determine whether the file has already been downloaded.
In addition, the old format allowed me to query the file URL to get the byte size of the download.
Without the old URL format I am in for a fair amount of recoding (which I have started). Even if the URL is modified as you suggested, the class that performs the download would still have to change to accommodate the new format.
If it can’t be changed, I will assume that the metadata, which seems to include the file name as part of the URL’s description text, is valid. There is also a length specification under type in the HTML, which I assume is the correct byte size. Also, the new URLs start with /sites instead of https.
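For what it’s worth, here is a rough sketch of how I would adapt to the new format: resolve the relative /sites URLs against the site root, pull the filename from the path, and get the byte size from a HEAD request instead of the old URL. The site root and file path below are hypothetical placeholders, not the real ones.

```python
from urllib.parse import urljoin, urlparse
from urllib.request import Request, urlopen
from posixpath import basename

BASE = "https://www.example.gov/"  # hypothetical site root, not the real one

def absolute_url(href: str) -> str:
    # Resolve a relative "/sites/..." href against the site root.
    return urljoin(BASE, href)

def filename_from_url(url: str) -> str:
    # The last path segment serves as the download's filename.
    return basename(urlparse(url).path)

def byte_size(url: str) -> int:
    # HEAD request: the Content-Length header gives the size without
    # downloading the file (requires network access; not called here).
    with urlopen(Request(url, method="HEAD")) as resp:
        return int(resp.headers["Content-Length"])

url = absolute_url("/sites/default/files/data.zip")  # hypothetical path
print(url)                     # https://www.example.gov/sites/default/files/data.zip
print(filename_from_url(url))  # data.zip
```

That would let the download class keep working from the filename and size even if the URL itself no longer carries them.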
Let me know what your plans are. If your suggested URL change includes additional info, such as the filename and size, it would make parsing simpler; but if it is the same info I mentioned above, I can accommodate it.
This is the problem with web page scrapers.