<html><head></head><body>Hi Daimonion,<br><br>This looks like a bug.<br>Could you retry with the freshly released 2.2 client? We tweaked some<br>things in the chunked upload there, so the problem may already be fixed.<br><br>Beyond that, I want to stress that ownCloud is not a backup solution.<br>The sync client syncs: if a file on your computer is removed (while<br>the sync client is running), it will delete the file on the server as well (and<br>vice versa). So please use dedicated backup software to back up the data on<br>your server.<br><br>Cheers,<br>--Roeland<br><br><div><strong>
[owncloud-devel] wanted behaviour on chunking big files and upload them
<br><br><blockquote class="mori" style="margin:0 0 0 .8ex;border-left:1px solid #CCC;padding-left:1ex;">Hello! First post here on this mailing list.
<br>My brother-in-law set up a Debian server with ownCloud 8.2.4 and we are both
<br>using it as a backup system.
<br>I'm using the ownCloud Client 2.1.1 (Build 5837) and I'm syncing files over
<br>the internet to this ownCloud instance.
<br>At the moment I'm uploading a bunch of really big files (4.3/7.8 GB each, 600 GB in
<br>total), as I want to store my system backup automatically off-site.
<br>So far everything has been fine with the setup. ownCloud runs very stably and
<br>I synced many files (small ones in the KB range and also big ones of 1-2 GB) before the
<br>really big files (4.3/7.8 GB) without problems.
<br>But with these files I ran into a problem: the forced 24h disconnect of my DSL
<br>line. Every night my DSL line gets disconnected (Deutsche Telekom), which
<br>disrupts the upload. That alone is no problem for ownCloud: it logs a failure ("write not
<br>possible") and as soon as the connection is back online it resumes the upload.
<br>Let me explain with an example.
<br>The big files are split into 1600 chunks. So let's say I start an upload
<br>of a new file in the evening. Then maybe the first 1000 chunks (3 at the
<br>same time) will be uploaded before the forced disconnect cuts the
<br>internet connection. The client fails on chunks 1000, 1001 and 1002.
<br>After re-establishing the connection the client resumes the upload with chunks
<br>1000, 1001 and 1002. Fine. But when it reaches the last chunk, 1599, it starts
<br>over with chunk 0, and it gets even worse. There are 3 parallel uploads, and
<br>the first time the file was uploaded all 3 upload sockets were used by the
<br>same file. Now that it has started over from chunk 0 it uses only 1 upload socket;
<br>the 2 remaining sockets are used by other files (even ones of the same big size).
<br>With just 1 upload socket left it will take ages to upload the
<br>remaining chunks. And there is no indication of how many chunks still have to be uploaded.
<br>What is the intended behaviour on a forced interrupt during chunked uploading,
<br>and is there anything we can configure so that a forced disconnect doesn't
<br>end in uploading the same chunks again and again forever?
<br>Is there a way (server- or client-side) to see which chunks have been
<br>uploaded correctly and which chunks still have to be uploaded to complete the file
<br>on the server?
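<br>For illustration, here is a minimal sketch of the bookkeeping that question implies. It assumes a hypothetical chunk-naming scheme of <code>&lt;name&gt;-chunking-&lt;transferid&gt;-&lt;total&gt;-&lt;index&gt;</code> (an assumption for the example, not necessarily ownCloud's actual wire format): given a server-side listing of already-stored chunk names, compute which chunk indices are still missing, so the client could resume with exactly those instead of starting over at chunk 0.

```python
# Sketch only: the chunk-naming scheme below is an assumption for
# illustration, not necessarily ownCloud's actual format.

def remaining_chunks(server_listing, transfer_id, total_chunks):
    """Return the sorted chunk indices not yet present on the server."""
    marker = f"-chunking-{transfer_id}-{total_chunks}-"
    done = set()
    for name in server_listing:
        if marker in name:
            try:
                # The chunk index is the final "-"-separated segment.
                done.add(int(name.rsplit("-", 1)[1]))
            except ValueError:
                pass  # ignore names that don't end in a chunk index
    return [i for i in range(total_chunks) if i not in done]

# Example matching the scenario above: chunks 0..999 made it before the
# disconnect, so 600 chunks (1000..1599) are still missing.
listing = [f"backup.img-chunking-4711-1600-{i}" for i in range(1000)]
print(remaining_chunks(listing, 4711, 1600)[:3])  # → [1000, 1001, 1002]
```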
<br>Thanks in advance
<br>View this message in context: http://owncloud.10557.n7.nabble.com/wanted-behaviour-on-chunking-big-files-and-upload-them-tp17249.html
<br>Sent from the Developers mailing list archive at Nabble.com.