Does an iOS daemon have a memory limit?

Questions and Answers about all things *OS (macOS, iOS, tvOS, watchOS)

Does an iOS daemon have a memory limit?

Postby Wingzero » Thu Jul 27, 2017 2:44 am

I wrote a daemon for a jailbroken iOS device that uploads files to AWS S3 via the SDK. When it tries to upload, it crashes in what looks like a memory issue. Does anyone know whether daemons have a memory limit on iOS? If so, can we increase it? Thanks!

More details:

I copied the AWS SDK from GitHub, compiled it as a static library, and linked it with my code (tweak + daemon). Uploading files under 5 MB works fine. However, when I try to upload larger files (9 MB, 20 MB), the daemon starts crashing. I read the syslog, but there is no exception clue and no crash report at all.

First I created a demo app with the same functions and AWS library to test, and the demo app is totally fine, handling large files as expected. In Charles I can see it send out multiple requests: first an initiation request, then one request per part of the file. When I watch the daemon in Charles, it sends out only the first request and then crashes.

Then I used LLDB to attach to my daemon process and ran Objective-C code directly to trigger the upload. It hangs for a second, then LLDB says the process has terminated due to a memory issue. Still no stack trace, and I can't even break on exceptions (I just googled how to set breakpoints on exceptions).
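Roughly, the LLDB session looks like this (the daemon name and the trigger expression here are placeholders for my real ones; breakpoint set -E objc is the exception breakpoint that never fires):
Code: Select all
(lldb) process attach --name MyDaemon
(lldb) breakpoint set -E objc
(lldb) expression -O -- [[MyUploader sharedUploader] startUpload]
(lldb) continue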

Now I don't know how to tackle this issue further. I read the S3 upload functions; they cut the file into parts and read each part directly, as shown below, which seems fine. That's why I wonder if there is a memory limit for a daemon, so that
Code: Select all
NSData *partData = [fileHandle readDataOfLength:dataLength];
might be the issue?

First, S3 indeed checks the file size and uses a different upload path for large files:
Code: Select all
if (fileSize > AWSS3TransferManagerMinimumPartSize) { // 5 * 1024 * 1024, 5MB
    return [weakSelf multipartUpload:uploadRequest fileSize:fileSize cacheKey:cacheKey];
} else {
    return [weakSelf putObject:uploadRequest fileSize:fileSize cacheKey:cacheKey];
}


Then the multipart path loops over the parts, reading each one with readDataOfLength: and writing it to a temp file before uploading:
Code: Select all
        for (NSUInteger i = c; i < partCount + 1; i++) {
            uploadPartsTask = [uploadPartsTask continueWithSuccessBlock:^id(AWSTask *task) {

                //Cancel this task if state is canceling
                if (uploadRequest.state == AWSS3TransferManagerRequestStateCanceling) {
                    //return a error task
                    NSDictionary *userInfo = @{NSLocalizedDescriptionKey: [NSString stringWithFormat:NSLocalizedString(@"S3 MultipartUpload has been cancelled.", nil)]};
                    return [AWSTask taskWithError:[NSError errorWithDomain:AWSS3TransferManagerErrorDomain code:AWSS3TransferManagerErrorCancelled userInfo:userInfo]];
                }
                //Pause this task if state is Paused
                if (uploadRequest.state == AWSS3TransferManagerRequestStatePaused) {

                    //return an error task
                    NSDictionary *userInfo = @{NSLocalizedDescriptionKey: [NSString stringWithFormat:NSLocalizedString(@"S3 MultipartUpload has been paused.", nil)]};
                    return [AWSTask taskWithError:[NSError errorWithDomain:AWSS3TransferManagerErrorDomain code:AWSS3TransferManagerErrorPaused userInfo:userInfo]];
                }

                NSUInteger dataLength = i == partCount ? (NSUInteger)fileSize - ((i - 1) * AWSS3TransferManagerMinimumPartSize) : AWSS3TransferManagerMinimumPartSize;

                NSFileHandle *fileHandle = [NSFileHandle fileHandleForReadingAtPath:[uploadRequest.body path]];
                [fileHandle seekToFileOffset:(i - 1) * AWSS3TransferManagerMinimumPartSize];
                NSData *partData = [fileHandle readDataOfLength:dataLength];
                NSURL *tempURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:[[NSUUID UUID] UUIDString]]];
                [partData writeToURL:tempURL atomically:YES];
                partData = nil;
                [fileHandle closeFile];
                AWSS3UploadPartRequest *uploadPartRequest = [AWSS3UploadPartRequest new];
                uploadPartRequest.bucket = uploadRequest.bucket;
                uploadPartRequest.key = uploadRequest.key;
                uploadPartRequest.partNumber = @(i);
                uploadPartRequest.body = tempURL;
                uploadPartRequest.contentLength = @(dataLength);
                uploadPartRequest.uploadId = output.uploadId?output.uploadId:uploadRequest.uploadId;
               
                //pass SSE Value
                uploadPartRequest.SSECustomerAlgorithm = uploadRequest.SSECustomerAlgorithm;
                uploadPartRequest.SSECustomerKey = uploadRequest.SSECustomerKey;
                uploadPartRequest.SSECustomerKeyMD5 = uploadRequest.SSECustomerKeyMD5;

                uploadRequest.currentUploadingPart = uploadPartRequest; //retain the current uploading parts for cancel/pause purpose

                //reprocess the progressFeed received from s3 client
                uploadPartRequest.uploadProgress = ^(int64_t bytesSent, int64_t totalBytesSent, int64_t totalBytesExpectedToSend) {

                    AWSNetworkingRequest *internalRequest = [uploadRequest valueForKey:@"internalRequest"];
                    if (internalRequest.uploadProgress) {
                        int64_t previousSentDataLengh = [[uploadRequest valueForKey:@"totalSuccessfullySentPartsDataLength"] longLongValue];
                        if (multiplePartsTotalBytesSent == 0) {
                            multiplePartsTotalBytesSent += bytesSent;
                            multiplePartsTotalBytesSent += previousSentDataLengh;
                            internalRequest.uploadProgress(bytesSent,multiplePartsTotalBytesSent,fileSize);
                        } else {
                            multiplePartsTotalBytesSent += bytesSent;
                            internalRequest.uploadProgress(bytesSent,multiplePartsTotalBytesSent,fileSize);
                        }
                    }
                };
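                // ... (rest of the SDK's multipart upload loop omitted)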

Re: Does an iOS daemon have a memory limit?

Postby morpheus » Thu Jul 27, 2017 12:54 pm

Everything has memory limits in iOS (and also in macOS, though they are more lax). Jetsam kicks in and kills you if you exceed them.

The memory limit definitions are in /System/Library/LaunchDaemons/com.apple.jetsamproperties.MODEL.plist. See attachment from the upcoming Volume I.

There are definitions for SystemXPCService, XPCService, Daemon, etc.

If you have the entitlement, you can use memorystatus_control to easily change your jetsam properties. There is also posix_spawnattr_setjetsam. For either, you need a jailbroken device. Otherwise, I can't really suggest a way around this.
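For example, a minimal sketch of a daemon raising its own limit. The prototype and the MEMORYSTATUS_CMD_SET_JETSAM_TASK_LIMIT value are from XNU's bsd/sys/kern_memorystatus.h (the syscall is private, so you declare it yourself), and the 64 MB figure is just an arbitrary example:
Code: Select all
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

// Private syscall - not in the public SDK headers, so declare it ourselves
// (prototype as in XNU's bsd/sys/kern_memorystatus.h).
extern int memorystatus_control(uint32_t command, int32_t pid, uint32_t flags,
                                void *buffer, size_t buffersize);

// Value from kern_memorystatus.h.
#define MEMORYSTATUS_CMD_SET_JETSAM_TASK_LIMIT 6

int main(void) {
    // For this command, the flags argument carries the new limit, in MB.
    if (memorystatus_control(MEMORYSTATUS_CMD_SET_JETSAM_TASK_LIMIT,
                             getpid(), 64, NULL, 0) != 0) {
        perror("memorystatus_control");
        return 1;
    }
    return 0;
}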
Attachment: Screen Shot 2017-07-27 at 8.42.29 AM.png (jetsam property definitions, from the upcoming Volume I)

Re: Does an iOS daemon have a memory limit?

Postby Wingzero » Fri Jul 28, 2017 4:01 am

Hi, thank you for the reply!

Actually, I have run more tests, and here is what I found:

I manually copied the data-reading part of the AWS S3 method into a test of my own:
Code: Select all
- (void)testReadData {
    // mimic the SDK: open the file, seek to the part offset, read one part
    NSFileHandle *fileHandle = [NSFileHandle fileHandleForReadingAtPath:@"/var/root/libAWSS3.a"];
    [fileHandle seekToFileOffset:(1 - 1) * 5 * 1024 * 1024]; // part 1, so offset 0
    NSUInteger dataLength = 3650000;
    DDLog(@"checking file handle %@ read Data length :%lu", fileHandle, (unsigned long)dataLength);
    NSData *partData = [fileHandle readDataOfLength:dataLength]; // reads the whole part into memory
    DDLog(@"read data of length %lu done, partData real length: %lu", (unsigned long)dataLength, (unsigned long)[partData length]);
    [fileHandle closeFile];
    DDLog(@"closed file %@", fileHandle);
}

I found that my daemon seems to crash around NSData *partData = [fileHandle readDataOfLength:dataLength]; however, the same code does not crash in an app. One more weird thing: I see CrashReporter in syslog saying my daemon crashed after the file handle is closed (the "closed file ..." log appears first).

I also tried reducing the part size to 2 MB, and then the S3 upload works.

So I am wondering: is it possible there is a memory leak or overflow when releasing resources in a daemon? The same code in an app is totally fine. Another question: are there any restrictions or differences between an app and a daemon in terms of memory leaks/overflows, or even in releasing resources?

The pain point is that there is no crash report, and I don't know how to debug further. Any suggestions, clues, tools, or commands? Thanks!

Re: Does an iOS daemon have a memory limit?

Postby Wingzero » Mon Jul 31, 2017 8:07 am

Hi morpheus,

I can't find my daemon com.my.MIWorker in
Code: Select all
com.apple.jetsamproperties.N42.plist


Also, I am unclear on how to modify the memory limit for my daemon. Could you give some steps to follow? It looks like I am missing a lot of knowledge here...

And what are the default memory limits for a daemon and an app if I don't explicitly specify one?

Re: Does an iOS daemon have a memory limit?

Postby Wingzero » Wed Aug 02, 2017 3:55 am

Actually, I found your article "No pressure Mom!": http://www.newosxbook.com/articles/MemoryPressure.html
The daemon indeed has a small memory limit (the Limit column below is in MB):
Code: Select all
iOS-06:~ root# mtool |grep 32167
PID: 32167   Priority: 0   User Data: 0   Limit: 6   State:0x18 Tracked,IdleExit


Using
Code: Select all
memorystatus_control
to raise the limit did the trick, and the S3 SDK upload in the daemon no longer crashes. Cheers!

BTW, the AWS S3 SDK does not release memory quickly, compared to AFNetworking, which can upload 200 MB files without any issue even in a daemon with a limit of 6. Shame on AMZ!

