
Darren Sunderland
Apr 20, 2018

Write uploaded media files to Amazon S3

We recently needed to let the end users of our website upload images as part of a competition. As the images are not part of the website content, it seemed illogical to store them on the EpiServer platform, where they would use up our content item allowance. After weighing our options, we decided to write the uploaded images to an Amazon S3 bucket.

Below are the steps we followed to get our site writing to S3 – this has been tested in EpiServer 10 and above:

Add required packages to the solution

Using NuGet, install the following packages to connect to S3:

  • AWSSDK.S3
  • AWSSDK.Core

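If you prefer to install from the Package Manager Console, the equivalent commands are (versions omitted, so NuGet will pick the latest compatible releases):

```
Install-Package AWSSDK.Core
Install-Package AWSSDK.S3
```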
Create a new class in your solution to handle the Amazon connection and writing:

using System.IO;
using Amazon;
using Amazon.Runtime;
using Amazon.S3;

namespace MyCMS
{
    public class AmazonS3Handler
    {
        private static string accessKey
        {
            get { return "ACCESS_KEY"; }
        }

        private static string secretKey
        {
            get { return "ACCESS_SECRET"; }
        }

        private static AmazonS3Client GetClient()
        {
            AmazonS3Config config = new AmazonS3Config
            {
                RegionEndpoint = RegionEndpoint.EUWest1
            };

            var creds = new BasicAWSCredentials(accessKey, secretKey);
            AmazonS3Client clientout = new AmazonS3Client(creds, config);

            return clientout;
        }

        /// <summary>
        /// This method controls the file writing to the S3 bucket
        /// </summary>
        /// <param name="bucketname">String containing the name of the S3 Bucket</param>
        /// <param name="filename">String containing the name for the artefact in S3</param>
        /// <param name="mimetype">String containing the file extension type</param>
        /// <param name="filecontent">Byte Array containing the content of the file</param>
        public static void CreateMediaFile(string bucketname, string filename, string mimetype, byte[] filecontent)
        {
            // AmazonS3Client is IDisposable, so release it once the upload completes
            using (var client = GetClient())
            using (var stream = new MemoryStream(filecontent))
            {
                var request = new Amazon.S3.Model.PutObjectRequest
                {
                    BucketName = bucketname,
                    Key = filename,
                    ContentType = mimetype,
                    InputStream = stream
                };
                client.PutObject(request);
            }
        }
    }
}
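As a quick sanity check, the handler can be exercised directly with an in-memory payload. The bucket name and object key below are placeholders, not values from the original article:

```csharp
using System.Text;

// Hypothetical example values – substitute your own bucket and key.
byte[] payload = Encoding.UTF8.GetBytes("hello from EpiServer");

MyCMS.AmazonS3Handler.CreateMediaFile(
    "my-competition-uploads",   // S3 bucket name (placeholder)
    "test/hello.txt",           // object key within the bucket
    "text/plain",               // MIME type of the content
    payload);
```

If the credentials and bucket are valid, the object should then be visible in the S3 console under the given key.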

Create a new page type to handle the user uploads

There are no specific property requirements for the new page type model.

In the new page type's view, use the HTML5 file input to allow users to select and upload a file, for example:

<form enctype="multipart/form-data" method="post">
	<label for="fileUpload">Click or Drop your receipt here to upload it</label>
	<input type="file" id="fileUpload" name="fileUpload" multiple="multiple">
	<br />
	<input type="submit" value="Upload Files" />
</form>

In the new page type's controller action method, use the following code to read the uploaded files and write them to S3:

string bucketname = "BUCKET_NAME";

for (int i = 0; i < Request.Files.Count; i++)
{
	HttpPostedFileBase file = Request.Files[i];
	int fileLen = file.ContentLength;

	if (fileLen != 0)
	{
		// Path.GetExtension includes the leading dot (e.g. ".jpg"),
		// so strip it before comparing
		string extension = Path.GetExtension(file.FileName).TrimStart('.').ToUpperInvariant();
		string mimetype;

		if (extension == "JPG" || extension == "JPEG")
		{
			mimetype = "image/jpeg";
		}
		else if (extension == "PNG")
		{
			mimetype = "image/png";
		}
		else if (extension == "GIF")
		{
			mimetype = "image/gif";
		}
		else
		{
			// Skip anything that is not one of the expected image types
			continue;
		}

		byte[] input = new byte[fileLen];
		int bytesRead = 0;

		// Stream.Read is not guaranteed to fill the buffer in one call,
		// so keep reading until the whole file has been consumed
		while (bytesRead < fileLen)
		{
			int read = file.InputStream.Read(input, bytesRead, fileLen - bytesRead);
			if (read == 0) break;
			bytesRead += read;
		}

		AmazonS3Handler.CreateMediaFile(bucketname, file.FileName, mimetype, input);
	}
}

The above code snippets provide a guide on how to write to S3. To use this in a full EpiServer solution, the following points should be considered:

  1. Amend the Access Key and Access Secret properties to read their values from a configuration source, so that different values can be used for different environments.
  2. Create a method to generate a unique file name for each uploaded file, to prevent collisions when different users upload files with the same name (e.g. image1.jpg).

Comments

Adam Finzel Apr 21, 2018 07:57 PM

Thanks for sharing. The downside of this approach is that it ties your server up dealing with the upload. You can actually get the user to upload directly to S3 from their web browser. Have a read of https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-UsingHTTPPOST.html .

valdis Apr 23, 2018 11:05 AM

it depends. sometimes you want to control what's being uploaded and then take a decision (run additional analysis, measure size of the file, etc).
