in discussion General discussion / ReSample » [SOLVED] .srs MemoryError Unexpected Error
Trying 'renamed' file: S:\Big Folder\Xbox 360\Extracted\to srr\0B335CBB87AD8382757573CE6513DB843245B08341
Trying to rebuild compressed file 415608CB\00000002\0B335CBB87AD8382757573CE6513DB843245B08341.
Trying 'renamed' file: S:\Big Folder\Xbox 360\Extracted\to srr\0B335CBB87AD8382757573CE6513DB843245B08341
Hi Hecti, I see 'No RAR executables found.' in your last screenshot.
This means that the rars were created using file compression (default setting).
To reconstruct these a whole lot more effort is required. Here is a tutorial: http://rescene.wikidot.com/tutorials#compressed
The DVDR rules say to not use compression: https://scenerules.org/html/2011_DVDR.html
11.5. Compression method MUST use mode 0 ("store").
It means no moderator has looked at it to confirm the srr.
Based on uploader (automated or not) I had a quick look at the release names and batch confirmed them all.
The smaller uploaders were left and those uploads were looked at a bit more closely to remove any crap uploads.
The confirm script halted at broken SRRs. Some large music releases took a while to get fixed and so no batch confirms for a long time.
Then Skalman set all SRRs as confirmed in the database.
Ideally the old state should be set back and usage of the feature resumed.
The confirmed status should not be visible to everyone, since it's even confusing for the admin that created the feature :)
Same here: uploaded and unconfirmed.
I think it needs to be verified, but I didn't find any explanation about that…
Example :
https://www.srrdb.com/release/details/Xianzai_-_Synesthesia_EP-WEB-FLAC-2021-SSR
Thank you for suggesting the method. I've tried it a bit, and recreating an original sample is almost like hitting the lottery for me.
It is most likely a missing DLL: msvcr100.dll is missing.
Even when I uninstall the Visual C++ Redistributable and reinstall it from the MS homepage, the issue still persists. I am using Windows 11.
I'm having trouble when trying to repack my iso file after I've generated the srr from the packed rars using the pyReScene GUI.
Then I unpack my release and put the iso and srr in the same folder, and when trying to reconstruct back to rars I get nothing…
Any help would be much appreciated.
What does it mean exactly when a release is "unconfirmed" ?
And how can it be "confirmed" ?
Example:
https://www.srrdb.com/release/details/8.Mile.2002.COMPLETE.UHD.BLURAY-SURCODE
Source: forum message on a torrent site
You can try restoring the m2ts sample using tsMuxeR.
This method is not 100% perfect: I tested 3 COMPLETE.BLURAY releases and only 1 succeeded. I've successfully recreated the m2ts sample from
Black.Panther.Wakanda.Forever.2022.MULTi.COMPLETE.BLURAY-AKENATON
using TSmuxerGUI v2.6.12 with settings:
Insert SEI and VUI data if absent
[check] Continually insert SPS/PPS
Blu-ray: 45KHz clock: 0
Split every 60s
Sometimes srrdb shows info from incomplete sample files - in this case you can't compare checksums.
It means that srs is not able to find the necessary data in the main video file.
The sample could be made from another rip, or srs has trouble locating the subtitles because they are handled differently in the sample vs. the mkv (packaged in one, extracted in the other).
One other case could be that the location stored in the srs is wrong, e.g. with a samplefix.
In that case the reconstruction can be attempted by adding the -m parameter to srs.exe.
You can look up the codes here:
https://github.com/srrDB/pyrescene/blob/master/resample/srs.py#L470
Btw, the "Unable to locate track signature for track 4" error only appears when using the command prompt, not in srrGUI, which is weird, as it used to display that on other releases.
Using srrGUI on windows.
I had the "Unable to locate track signature for track 4. Aborting." error (a common one) more than a few times over the last few years.
srrGUI: Process completed. srs.exe exit code = 3
Is there something I should do to fix that? I always give up.
It would be nice to have a table with all the error code messages and what each one means precisely.
In my case, error code = 3. What could possibly be wrong?
- Is the .srs file corrupted, or something equivalent?
- Anything else?
Alright, so for posterity, I tried out all the RAR versions for macOS (that I could find), starting with the most recent one and going backwards. And testing the CRCs after a few files had been generated during each attempt.
Used this release: https://www.srrdb.com/release/details/UNCHARTED_Legacy_of_Thieves_Collection-FLT
This is the command I used. I guess "-mt12" could have been lower. That poor dual core from 2013…
./rar a -m1 -mdG -v250000000b -s- -ds -mt12 -vn -o+ -ep -idcd -ma4 name.rar name.iso
Ran these versions:
Version | Result |
---|---|
rarmacos-x64-620.tar.gz | Bad CRC |
rarmacos-x64-612.tar.gz | Bad CRC |
rarmacos-x64-611.tar.gz | Bad CRC |
rarmacos-x64-610.tar.gz | Bad CRC |
rarosx-6.0.2.tar.gz | Bad CRC |
rarosx-6.0.1.tar.gz | Bad CRC |
rarosx-6.0.0.tar.gz | Bad CRC |
rarosx-5.9.1.tar.gz | Bad CRC |
rarosx-5.9.0.tar.gz | Bad CRC |
rarosx-5.8.0.tar.gz | Bad CRC |
rarosx-5.7.1.tar.gz | Bad CRC |
rarosx-5.7.0.tar.gz | Bad CRC |
rarosx-5.6.1.tar.gz | Bad CRC |
rarosx-5.6.0.tar.gz | Bad CRC |
rarosx-5.5.0.tar.gz | Didn't run (32-bit) |
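For anyone repeating this sweep, most of it can be scripted. The sketch below is my own illustration under stated assumptions: the RAR flags are copied from the command quoted above, while `try_rar`, `crc32_of`, `sfv_entries`, and all paths are hypothetical names, not anything from pyReScene. It runs one `rar` binary and compares each produced volume's CRC32 against the release's .sfv:

```python
import pathlib
import subprocess
import zlib

# Flags copied verbatim from the command used above.
RAR_FLAGS = ["a", "-m1", "-mdG", "-v250000000b", "-s-", "-ds",
             "-mt12", "-vn", "-o+", "-ep", "-idcd", "-ma4"]

def crc32_of(path: pathlib.Path) -> int:
    """CRC32 of a whole file, computed incrementally in 1 MiB chunks."""
    crc = 0
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            crc = zlib.crc32(chunk, crc)
    return crc

def sfv_entries(sfv: pathlib.Path):
    """Yield (filename, crc) pairs from an .sfv file, skipping ; comments."""
    for line in sfv.read_text(errors="replace").splitlines():
        line = line.strip()
        if line and not line.startswith(";"):
            name, crc = line.rsplit(None, 1)
            yield name, int(crc, 16)

def try_rar(rar_bin: str, iso: pathlib.Path, sfv: pathlib.Path,
            workdir: pathlib.Path) -> bool:
    """Run one rar binary and report whether every volume matches the .sfv."""
    subprocess.run([rar_bin, *RAR_FLAGS, "name.rar", str(iso)],
                   cwd=workdir, check=True)
    return all(crc32_of(workdir / name) == want
               for name, want in sfv_entries(sfv))
```

To save time one could check only the first volume or two per version and bail early, the same way it was done by hand above.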
Like I said, I might be missing some smaller versions in between, including beta versions, but looking closer at the .srr file, I don't think this release requires some weird date modifications. But I'm of course not sure about that.
So maybe I was just unlucky that this didn't work. Or there's more to it to replicate this on macOS.
I think I'm quickly reaching my limits to fully understand this and to keep testing it. Not sure how to verify that it reaches that code. I think I'm just going to be happy with the manual workaround for Razor1911 that you showed me. :) And be sad that FairLight uses macOS. :(
I do have access to an old MacBook Air though, so I decided to do some testing with it. I managed to scrounge up no less than 29 versions of RAR for macOS. I'm missing some of the really old ones and I don't have any beta versions. Not even sure that's all of them, but I have from 3.9.3 to 6.20.
I soon realized though that most of them wouldn't run on the system, since they are 32-bit and Apple dropped support for that in Catalina and onwards in 2019. Around that time I think all future macOS versions of RAR also became 64-bit. So unless the good people at FairLight are running outdated operating systems, I'm assuming they have to be using a fairly recent version of RAR, which narrows it down a lot.
That computer is really slow though, so I've only done one test and it didn't pan out. But I'm not even sure I'm doing it correctly. I'm running the same custom command as the one above (but for an FLT release of course), compressing the whole release and testing against the .sfv file. Perhaps I'll try a few more though and see if I get lucky. And perhaps I should do it on something smaller than this. :D
https://www.srrdb.com/release/details/UNCHARTED_Legacy_of_Thieves_Collection-FLT
About FILE_CRC though, I think I get it! It's the CRC of the compressed payload of the volume. That would be what you've highlighted in blue above, I take it? I.e. the header/metadata of the volume itself is not included. The .sfv, on the other hand, holds the CRC of the full file, header data and all, which is generated afterwards by a separate process. FILE_CRC is generated and stored in the header during the compression process.
So it compresses "some" data and calculates the CRC for that, then stores that in the header of that volume, moves on to the next volume, calculates that one, and so on. The exception is the last volume, which has the CRC of the whole compressed file instead of the CRC of the remaining compressed portion, as you explained.
(I wonder how it does that though, the last bit. If it can sort of "add up" the partial checksums and know what the last one will be immediately when it reaches there. Or if it's maintaining an ongoing, parallel checksum calculation for the whole file as it is progressing. Because otherwise it'd have to recalculate the whole thing when it's done, to be able to close out/finalize the archive properly, which takes time. Anyway, that doesn't matter. Just curious.)
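On the curiosity above: CRC32 can be updated incrementally, chunk by chunk (Python's `zlib.crc32` takes a running value as its second argument, and the underlying zlib C library even has `crc32_combine()` for stitching partial CRCs together), so a packer can keep a whole-file CRC running in parallel at essentially no cost, with no recalculation at the end. A sketch of the per-volume bookkeeping described above, using a hypothetical helper of my own, not pyReScene's or RAR's actual code:

```python
import zlib

def volume_file_crcs(compressed_stream: bytes, volume_payload_size: int) -> list[int]:
    """Illustration only: every volume stores the CRC32 of its own compressed
    payload, except the last one, which stores the CRC32 of the entire
    compressed file (per the behaviour described in the posts above)."""
    crcs = []
    running = 0  # whole-file CRC32, maintained incrementally in parallel
    for offset in range(0, len(compressed_stream), volume_payload_size):
        chunk = compressed_stream[offset:offset + volume_payload_size]
        running = zlib.crc32(chunk, running)  # feed the running whole-file CRC
        is_last = offset + volume_payload_size >= len(compressed_stream)
        crcs.append(running if is_last else zlib.crc32(chunk))
    return crcs
```

So the "add up partial checksums" and "ongoing parallel calculation" ideas are effectively the same thing here, and both are cheap.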
But what I'm not really understanding here is this. The resulting volume (header and all) should come out to EXACTLY 250000000 bytes. Since we're dealing with compression, it cannot know beforehand how much uncompressed data will fit in the first volume, because it can't know exactly how well the data will compress. So it has to compress more, too much if you will, and then cut it off at slightly less than 250000000, because the header data needs to fit in there as well so that the result comes out to exactly 250000000. It can know how large the header is, though.
And if the size of what you are compressing affects how it compresses the data, i.e. "later" data can influence how it chooses to compress "earlier" data, then how is it going to be able to determine enough of a sample size so that it's going to leave the "earlierly" (is that a word?) compressed data alone? So it will match what it would do if you just let it compress the whole thing?
To put it another way, 250MB of compressed data in a single volume might look different from the first of two volumes holding 500MB of compressed data, since the compressor has more to look at during the compression and might make different decisions, despite the first 250MB of uncompressed data being identical in this example.
As I'm typing this I realize this is not a problem for when you're generating the original archive. You're just compressing data in one end and out the other comes a stream of compressed data. Whatever it decides to write during this process is up to the algorithm. How far it decides to "look ahead" and when it actually writes something doesn't matter, but WHEN it does you can be sure that that's not going to change anymore during the compression process. The algorithm is satisfied with that. So then when it's output enough data for a volume (minus the header), it can just chop that off, calculate FILE_CRC, and write a file. And then go back to waiting for more data to arrive.
So if I specify "volumes of 250000000, please", how much data is pyReScene actually grabbing to test the compression so that it arrives at slightly less than 250000000 of compressed data? I guess if the sample size is sufficiently large, it will compress similarly enough to compressing the whole thing not to affect the outcome for the first volume.
Still no solution for this issue?
I have the same error!
I'm not sure I'm following regarding FILE_CRC. Why isn't it the same as the checksums in the .sfv file? Especially if we're trying to force it to use 250000000 as the size in the testing phase, which is the same as what the end rar volumes are supposed to have.
Suppose you have a release with 3 volumes:
You are talking about the pyReScene_compressed.rXX files, right? They all get this size for me still:
Yeah, with the change above these files change size according to what I put in main.py. I don't know why it's not working for you, maybe try to make some change that is easier to check and see if that gets applied (i.e. are you really running the modified code).
When you say sample here, do you mean pyReScene_data_piece.iso I see in the temp folder?
Yes
I've given it a try too with pyReScene, but I can't get it to detect the correct RAR version. I've tried various values for -v…b until I found one that results in the right value for the PACK_SIZE field [1]. With this the right amount of data should be in the first volume and the split should happen at the right place, but it still doesn't result in a good checksum when pyReScene runs RAR on the sample file. I don't have a good explanation for why this is. Maybe the actual size of the file triggers some special condition?
[1] My value was -v249999977b, but it will be different for you because I have other mods that e.g. change the name of the sample file. For you it may be -v249999989b, but YMMV. The difference is that I use a different (shorter) name for the sample. To check this I stopped pyReScene once it got "stuck" (trying the full-file compression) and looked at the pyReScene_compressed.rar in a hex editor.
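The hex-editor check can also be scripted. The sketch below assumes the RAR 4.x block layout (matching the -ma4 flag used in these experiments) and volumes under 4 GiB, so it skips the LARGE-size flag handling; PACK_SIZE, FILE_CRC, and HEAD_TYPE come from the format itself, while the function name is mine:

```python
import struct

RAR4_MARKER = b"Rar!\x1a\x07\x00"  # RAR 4.x marker block

def first_file_header(rar_bytes: bytes):
    """Return PACK_SIZE and FILE_CRC of the first file block (HEAD_TYPE 0x74)
    in a RAR 4.x volume. Assumes PACK_SIZE < 4 GiB (no LARGE flag support)."""
    pos = rar_bytes.index(RAR4_MARKER) + len(RAR4_MARKER)
    while pos + 7 <= len(rar_bytes):
        # Generic block header: HEAD_CRC(2) HEAD_TYPE(1) HEAD_FLAGS(2) HEAD_SIZE(2)
        _crc, htype, flags, hsize = struct.unpack_from("<HBHH", rar_bytes, pos)
        if htype == 0x74:
            # File header continues: PACK_SIZE(4) UNP_SIZE(4) HOST_OS(1) FILE_CRC(4)
            pack_size, _unp, _host, file_crc = struct.unpack_from(
                "<IIBI", rar_bytes, pos + 7)
            return {"PACK_SIZE": pack_size, "FILE_CRC": "%08X" % file_crc}
        # Skip this block, plus its data area if the ADD_SIZE flag is set
        add = struct.unpack_from("<I", rar_bytes, pos + 7)[0] if flags & 0x8000 else 0
        pos += hsize + add
    return None
```

For a real volume, the first few kilobytes are enough, e.g. `first_file_header(open("pyReScene_compressed.rar", "rb").read(8192))`.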
I'm not sure I'm following regarding FILE_CRC. Why isn't it the same as the checksums in the .sfv file? Especially if we're trying to force it to use 250000000 as the size in the testing phase, which is the same as what the end rar volumes are supposed to have.
You are talking about the pyReScene_compressed.rXX files, right? They all get this size for me still:
Size: 180 MB (189 417 906 bytes)
Size on disk: 180 MB (189 419 520 bytes)
I'm guessing the slight difference is due to the file system's allocation unit (cluster size): files occupy a whole number of clusters on disk, so the size on disk gets rounded up. Not the NAND cells themselves, I think, but not sure.
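For what it's worth, the two figures quoted above differ by exactly the rounding up to 4096-byte clusters, the common NTFS default allocation unit (an assumption about this particular system, checkable with `fsutil fsinfo ntfsinfo`):

```python
# Both figures are from the post above.
logical = 189_417_906                                   # "Size"
cluster = 4096                                          # assumed NTFS cluster size
on_disk = (logical + cluster - 1) // cluster * cluster  # round up to whole clusters
print(on_disk)  # 189419520, matching "Size on disk"
```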
When you say sample here, do you mean pyReScene_data_piece.iso I see in the temp folder?
And I'm still trying it out with this release, so it's a 50GB file. :)
https://www.srrdb.com/release/details/A_Plague_Tale_Requiem_v1.3.0.0-Razor1911
What checksum is that on FILE_CRC? Because it's not the checksum of the first .rar volume.
It's the checksum over the compressed data stored in that RAR File block. Except for the last volume, where it has a FILE_CRC that matches the CRC of the whole file. You can see this in the comments of function try_rar_executable().
I commented out row 1923 and added the row below that.
Your change looks good (the expected size is ~238MiB). This works for me, I can see it create temporary files with the size I set here :S
What is the size of the sample? Maybe the sample is too small or compresses too well? Although the sample-size detection should take care of that :/
Oh alright. Then I understand better how it works. Forget about my assumptions. :)
What checksum is that on FILE_CRC? Because it's not the checksum of the first .rar volume.
I normally just use the compiled executables, but I do have the Python version set up as well for use with pyAutoReScene. So I tried to modify main.py for that, but I'm not sure I'm doing it right, because it keeps spitting out temp files of those 180MB or so. Shouldn't they be 250MB, is the thinking? So that it can compare one of those of the correct size to FILE_CRC?
I commented out row 1923 and added the row below that.
-
So I think what you mean is that the creation time does not matter,
Yes!
This bit I don't quite follow. You mean that you compressed a sparse file, but when you extracted it was no longer sparse?
It doesn't matter, but, well, yes. For the second release it worked fine, even with 0x220, so it was sparse. But as you explained, when using pyReScene, it copies the header data from the .srr file, so it doesn't matter what attributes my .iso file had. My bad attributes don't go into the archive.
Haven't found a significance to that date either. Perhaps different packers in the group just use different dates over and over. :) I'm going to try 2005 first next time though.