
Chunk upload of large files

Describes how to work with chunk uploads of large files when using the Optimizely Service API.

The Optimizely Service API allows for bulk operations, for example, uploading files to be imported into Optimizely Commerce.

ASP.NET has a maximum request size limit of 2 GB, and uploading large files in a single request can be time consuming, which can also be a security risk. You can therefore upload a file in smaller chunks and use the assembled file later for the import.

Example model

[Serializable]
public class UploadFile
  {
    // Identifier for the upload in progress; reuse it for each subsequent chunk.
    public Guid UploadId { get; set; }
    // Number of bytes received so far; the starting position of the next chunk.
    public long OffSet { get; set; }
    // When the uploaded chunks expire on the server.
    public DateTime Expires { get; set; }
  }

Upload file chunks

This method uploads a large file to the Service API in multiple chunks and can resume if the upload is interrupted. Each request sends one part of the file for the given upload identifier. If no uploadId is passed in, the method creates a new upload identifier; if no offset is passed in, it uses an offset of 0. Use the offset to indicate where in the file the chunk starts.

Typical uses

  • Send a PUT request to episerverapi/commerce/import/upload/chunk with the first chunk of the file without setting uploadId, and receive an uploadId in return.
  • Repeatedly PUT subsequent chunks using the uploadId to identify the upload in progress and an offset representing the number of bytes transferred so far.
  • After the last chunk, POST to episerverapi/commerce/import/upload/commit to complete the upload.
GET /episerverapi/commerce/import/upload/chunk/{uploadId}/{offset}
PUT /episerverapi/commerce/import/upload/chunk/{uploadId}/{offset}
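
The following is a minimal sketch of the complete flow described under Typical uses, assuming the UploadFile model shown earlier, Json.NET for the JSON responses, and placeholder values for the site URL, access token, file path, and chunk size (the API does not prescribe a chunk size):

using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using Newtonsoft.Json;

using (var client = new HttpClient())
  {
    client.BaseAddress = new Uri("https://mysite.com/");
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);

    const int chunkSize = 4 * 1024 * 1024; // assumed chunk size (4 MB)
    Guid uploadId = Guid.Empty;
    long offset = 0;

    using (var filestream = new FileStream(path, FileMode.Open))
      {
        var buffer = new byte[chunkSize];
        int read;
        while ((read = filestream.Read(buffer, 0, buffer.Length)) > 0)
          {
            // The first chunk omits uploadId and offset; subsequent chunks pass both in the URL.
            var url = uploadId == Guid.Empty
              ? "episerverapi/commerce/import/upload/chunk"
              : String.Format("episerverapi/commerce/import/upload/chunk/{0}/{1}", uploadId, offset);

            var content = new MultipartFormDataContent();
            content.Add(new ByteArrayContent(buffer, 0, read), "file", "Catalog.zip");

            var response = client.PutAsync(url, content).Result;
            response.EnsureSuccessStatusCode();

            // The response carries the uploadId and the offset to use for the next chunk.
            var chunkResult = JsonConvert.DeserializeObject<UploadFile>(
              response.Content.ReadAsStringAsync().Result);
            uploadId = chunkResult.UploadId;
            offset = chunkResult.OffSet;
          }
      }

    // Commit the chunks into a single file on the server (see the commit endpoint below).
    var commitResponse = client.PostAsync(
      String.Format("episerverapi/commerce/import/upload/commit/{0}", uploadId),
      new FormUrlEncodedContent(new List<KeyValuePair<String, String>>())).Result;
  }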

JSON response type

C# sample code

using (var client = new HttpClient())
  {
    client.BaseAddress = new Uri("https://mysite.com/");
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
    var content = new MultipartFormDataContent();
    using (var filestream = new FileStream(path, FileMode.Open))
      {
        content.Add(new StreamContent(filestream), "file", "Catalog.zip");
        var response = client.PutAsync("/episerverapi/commerce/import/upload/chunk", content).Result;
        if (response.StatusCode == HttpStatusCode.OK)
          {
            var returnString = response.Content.ReadAsStringAsync().Result;
            // Deserialize into the UploadFile model to get the uploadId and offset for the next chunk.
            var results = JsonConvert.DeserializeObject<UploadFile>(returnString);
          }
      }
  }
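
The UploadId and OffSet returned here identify the upload in progress and the number of bytes received so far; pass them in the URL when you PUT the next chunk, as in the sketch above.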

XML response type

C# sample code

using (var client = new HttpClient())
  {
    client.BaseAddress = new Uri("https://mysite.com/");
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("text/xml"));
    var content = new MultipartFormDataContent();
    using (var filestream = new FileStream(path, FileMode.Open))
      {
        content.Add(new StreamContent(filestream), "file", "Catalog.zip");
        var response = client.PutAsync("/episerverapi/commerce/import/upload/chunk", content).Result;
        if (response.StatusCode == HttpStatusCode.OK)
          {
            var returnString = response.Content.ReadAsStringAsync().Result;
            // Deserialize the XML response into the UploadFile model.
            UploadFile results = null;
            var serializer = new XmlSerializer(typeof(UploadFile));
            using (var reader = new StringReader(returnString))
              {
                results = (UploadFile)serializer.Deserialize(reader);
              }
          }
      }
  }

Response

<UploadFile xmlns:xsd="http://www.w3.org/2001/XMLSchema" 
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Expires>2014-09-08T10:18:12.8122808Z</Expires>
  <UploadId>aa22d215-16be-47ed-814d-0a7c96692a53</UploadId>
  <Offset>25565</Offset>
</UploadFile>

Commit uploaded chunk

This method commits the chunked uploads associated with the upload identifier and assembles them into a single file to be used in a separate import call. If a completed file already exists on the server for this upload identifier, the default value of the overwrite parameter causes it to be overwritten. The method returns the upload identifier if the upload was committed successfully.

GET /episerverapi/commerce/import/upload/commit/{uploadId}/{overwrite}
POST /episerverapi/commerce/import/upload/commit/{uploadId}/{overwrite}

JSON response type

C# code sample

using (var client = new HttpClient())
  {
    client.BaseAddress = new Uri("https://mysite.com/");
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
    var response = client.PostAsync(String.Format("episerverapi/commerce/import/upload/commit/{0}", 
      uploadId), new FormUrlEncodedContent(new List<KeyValuePair<String, String>>())).Result; 
  }

Response

"\"9e4bd26f-b263-488c-a5d3-3e4c9f87ac4f\""

Delete uploaded chunk

This method recursively deletes chunked uploads associated with the upload identifier.

If the deletion is successful, the method returns a "no content" status code. If the upload identifier does not exist, the method returns a "not found" status. Otherwise, it returns an internal server error with an exception message.

DELETE /episerverapi/commerce/import/upload/{uploadId}

JSON response type

C# code sample

using (var client = new HttpClient())
  {
    client.BaseAddress = new Uri("https://mysite.com/");
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
    var response = client.DeleteAsync(String.Format("episerverapi/commerce/import/upload/{0}", uploadId)).Result;
  }
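
Continuing from the sample above, a minimal sketch of checking the outcomes described for this endpoint:

if (response.StatusCode == HttpStatusCode.NoContent)
  {
    // The chunks for this upload identifier were deleted.
  }
else if (response.StatusCode == HttpStatusCode.NotFound)
  {
    // No upload exists for this upload identifier.
  }
else
  {
    // Internal server error; the body contains the exception message.
    var error = response.Content.ReadAsStringAsync().Result;
  }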

📘 Note

You can also delete old uploaded files and directories using the Commerce scheduled job Remove old Service API uploaded files and directories. Use this job in situations where you uploaded file chunks but forgot to call the Commit API. The scheduled job ensures that the files will eventually be removed.