NCS - Documents
  • NIPA Cloud Space Overview
    • Welcome to NIPA Cloud Space documentation
    • NCS User Account
      • Create NIPA Cloud Space Account
      • Logging in to Nipa Cloud Space
      • Reset NCS Password
      • Activating Two-Factor Authentication
      • Deactivating Two-Factor Authentication
    • Co-working Projects
      • Create Co-working Project
      • Manage Project Member
      • Exporting Resource List
    • Billing & Wallet
      • Top Up a Project Wallet
      • Redeem Voucher
      • Insufficient Wallet Balance
  • COMPUTE
    • Compute Instance
      • What is OS Status?
      • Launching Compute Instance
      • Managing Instance
        • Stop
        • Start
        • Restart
        • Resize (Change Machine Type)
        • Take Snapshot
        • Auto Backup
        • Reset Linux "root" Password
        • Reset Windows "Administrator" Password
      • SSH to Linux Compute Instance
        • Converting Key Pair for PuTTY
        • Windows Client using Key Pair
        • Windows Client using Password
        • MacOS/Linux using Key Pairs
        • MacOS/Linux using Password
      • Remote Desktop to Windows Instance
      • Setting Docker Image Caching
      • Renaming Instance
      • Exporting Instance List
      • How to change Compute Instance hostname
      • How to install QEMU Guest Agent
      • How to Fix or Update the Kernel for RHEL
      • How to Update Rocky Linux 9
      • How to Upgrade Rocky Linux to 9.4
      • How to change RDP port on Windows
    • Compute Image
      • Create Image From a Bootable Volume
      • Importing Your Own Image
      • Export Image
      • Share Image Between Projects
      • Exporting Image List
    • Key Pair
      • Managing Key Pair(s)
      • Creating a New Key Pair
      • Importing an Existing Key Pair
    • Deployment Script
      • Managing Deployment Script
      • Create a Deployment Script
      • Clone a Deployment Script
      • Edit a Deployment Script
      • Delete a Deployment Script
  • STORAGE
    • Block Storage
      • Managing Volume(s)
        • Create From Blank
        • Create From an Image
        • Create From a Volume
        • Create From a Snapshot
        • Transferring a Volume
        • Accepting a Transferred Volume
        • Renaming a Volume
        • How to Change Volume Type
      • Managing Snapshot(s)
        • Create a Snapshot
        • Renaming a Volume Snapshot
      • Exporting Volume and Volume Snapshot List
    • Object Storage (S3)
      • Migrate Files from AWS S3 to NIPA S3
      • Create an Object Storage Bucket
      • Delete an Object Storage Bucket
      • Create an Object Storage Sub-User
      • Regenerate Sub-User's Access Key
      • Revoke Sub-User's Access Key
      • Create Bucket Policy
      • Bucket Versioning
      • Access S3 Bucket with Cyberduck
        • Upload Files to a Bucket
        • Share File via Public Link
      • Access S3 Bucket with s3cmd
        • Basic Command
        • Setting ACLs to Make Objects Public
        • Creating a Presigned URL for Temporary Access
      • Mount S3 Bucket on instances with s3fs-fuse
      • Mount an S3 Bucket on Windows
      • Access S3 buckets With AWS S3 Client SDK
        • S3Client Configuration
        • Basic Command
        • Multipart Upload
      • Access S3 buckets with internal network for NCS instance
      • Delete Lifecycle Policies
      • Move Objects Lifecycle Script
      • Configure a static website using S3 Bucket
    • NIPA Drive
      • Purchasing a Drive
  • NETWORKING
    • Networking
      • Managing VPC Network(s)
        • Create a Network
        • DHCP Setting
        • Create Port
        • Create Router
      • Managing Security Group(s)
        • Create a New Security Group
        • Create Security Group Presets
      • Managing External IP(s)
        • Create an External IP
        • Exporting External IP List
      • NAT Gateway with Ubuntu (VM) Using a Host Route
      • NAT and VPN Gateway on NCS with Pfsense-2.6.0
  • LOAD BALANCING
    • Load Balancer as a Service
      • Create Load Balancer
      • Using Network Load Balancing
      • Using Application Load Balancing
      • Renaming a Load Balancer
      • Exporting Load Balancer List
      • Monitoring Load Balancer Using Prometheus
    • SSL Certificate
      • Import SSL Certificate
  • DATABASE AS A SERVICE
    • SQL Database
      • Create SQL Database Instance
        • Create MySQL Database Instance
      • Manage SQL Database Instance
        • Reboot Database Service
        • Delete Database Instance
        • Online Extend Storage Size
        • Edit Allowed CIDR
      • Auto-Scaling SQL Database Storage
        • Enable Auto-Scaling
        • Disable Auto-Scaling
        • Edit Auto-Scaling
      • Manage SQL Database Root User
        • Enable Root User
        • Reset Root User Password
      • Manage SQL Database Schema
        • Create Database Schema
        • Delete Database Schema
      • Manage SQL Database User
        • Create Database User
        • Delete Database User
        • Reset Password
        • Edit Access
      • Manage SQL Database Backup
        • Create Backup
        • Create A New SQL Database Instance From Backup
        • Delete Backup
      • Manage SQL Database Logs
        • Enable Logs
        • Disable Logs
        • Refresh Logs
        • Load More Logs
      • Manage Monitoring User
        • Create Monitoring User
        • Delete Monitoring User
      • Monitor SQL Database with Percona Monitoring and Management (PMM)
  • SCHEDULING
    • Schedules
      • Create Schedule
    • Jobs
  • Public API
    • What is NCS Public API?
      • Download NCS Project RC File
      • Getting Started with NCS Public API
        • Using OpenStack Client Tool
        • Using REST API
        • Terraform with OpenStack
        • Auto-scaling OpenStack Instances with Senlin and Prometheus
          • Installing Prometheus
          • Installing Alertmanager
  • MIGRATION
    • Migrating Linux VM from vSphere to NCS
    • Migrating Windows VM from vSphere to NCS
  • Customer Support
    • Having a Problem Before Accessing a Project
    • Having a Problem in a Project
  • Tutorial
    • My First Website
    • Access MySQL Database With MySQL Workbench
    • Pritunl for VPN server
    • Install Rancher Server with Docker Quick Start
      • Create RKE2 Cluster via Rancher Dashboard
    • Install Odoo 18 with an External Database
    • How to Use LBaaS for MySQL Load Balancing
    • How to use Cloudflare with Nipa Cloud Space
  • Release Notes
    • v5.0.X (v5.0.0-now)
      • v5.0.0
      • v5.1.0
      • v5.2.0
      • v5.2.1
      • v5.2.2
      • v5.2.3
      • v5.2.4
      • v5.3.0
      • v5.4.0
    • v4.19.X (v4.19.0-v4.19.3)
      • v4.19.0
      • v4.19.1
      • v4.19.2
      • v4.19.3
    • v4.18.X (v4.18.0-v4.18.2)
      • v4.18.0
      • v4.18.1
      • v4.18.2
    • v4.17.X (v4.17.0-v4.17.3)
      • v4.17.0
      • v4.17.0.1
      • v4.17.1
      • v4.17.2
      • v4.17.3
    • v4.16.X (v4.16.0-v4.16.5)
      • v4.16.0
      • v4.16.1
      • v4.16.2
      • v4.16.3
      • v4.16.4
      • v4.16.5
    • v4.15.X (v4.15.0-v4.15.9)
      • v4.15.0
      • v4.15.1
      • v4.15.2
      • v4.15.3
      • v4.15.4
      • v4.15.5
      • v4.15.6
      • v4.15.7
      • v4.15.8
      • v4.15.9
    • v4.14.X (v4.14.0-v4.14.2)
      • v4.14.0
      • v4.14.1
      • v4.14.2

Last updated 10 months ago


When working with large files, you may need to split the upload into multiple parts. This can be done using multipart upload.

Multipart Upload

How It Works

  • CreateMultipartUploadCommand -> initiates the multipart upload; the response contains an upload ID, which is then used when uploading each part of the file

  • UploadPartCommand -> uploads each individual part of the file

  • CompleteMultipartUploadCommand -> once every part has been uploaded, call this command to assemble the uploaded parts into the final object, completing the upload

  • AbortMultipartUploadCommand -> if the upload fails, call this command to delete the parts that were already uploaded

Example: uploading a 5.04 GB file

import fs from 'fs';
import { Buffer } from 'node:buffer';
import {
    S3Client,
    CreateMultipartUploadCommand,
    UploadPartCommand,
    CompleteMultipartUploadCommand
} from '@aws-sdk/client-s3';

// 100 MB chunk/part size
const CHUNK_SIZE = 1024 * 1024 * 100;

// Max retries when uploading parts
const MAX_RETRIES = 3;

const multipartS3Uploader = async (filePath, options) => {
    const { ncs_region, contentType, key, bucket, ncs_credentials, ncs_endpoint } = options;

    // Get file size
    const fileSize = fs.statSync(filePath).size;

    // Calculate total parts
    const totalParts = Math.ceil(fileSize / CHUNK_SIZE);

    // Initialize the S3 client instance
    const S3 = new S3Client({ region: ncs_region, endpoint: ncs_endpoint, credentials: ncs_credentials });
    const uploadParams = { Bucket: bucket, Key: key, ContentType: contentType };

    let PartNumber = 1;
    const uploadPartResults = [];

    // Send multipart upload request to S3, this returns a UploadId for use when uploading individual parts
    // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/s3.html#createmultipartupload
    let { UploadId } = await S3.send(new CreateMultipartUploadCommand(uploadParams));

    console.log(`Initiate multipart upload, uploadId: ${UploadId}, totalParts: ${totalParts}, fileSize: ${fileSize}`);

    // Read file parts and upload them to S3; this promise resolves once every part has been uploaded
    await new Promise((resolve, reject) => {
        fs.open(filePath, 'r', async (err, fileDescriptor) => {
            if (err) return reject(err);

            try {
                // Read and upload file parts until end of file
                while (true) {

                    // Read next file chunk
                    const { buffer, bytesRead } = await readNextPart(fileDescriptor);

                    // When end-of-file is reached bytesRead is zero
                    if (bytesRead === 0) {
                        // Done reading: close the file and resolve the promise
                        fs.close(fileDescriptor, (closeErr) => closeErr ? reject(closeErr) : resolve());
                        return;
                    }

                    // Trim the buffer to the bytes actually read (the final part may be smaller than CHUNK_SIZE)
                    const data = bytesRead < CHUNK_SIZE ? buffer.subarray(0, bytesRead) : buffer;

                    // Upload data chunk to S3
                    const response = await uploadPart(S3,
                        { data, bucket, key, PartNumber, UploadId }
                    );

                    console.log(`Uploaded part ${PartNumber} of ${totalParts}`);
                    uploadPartResults.push({ PartNumber, ETag: response.ETag });
                    PartNumber++;
                }
            } catch (uploadErr) {
                // Reject so the failure propagates instead of becoming an unhandled rejection
                reject(uploadErr);
            }
        });
    });

    console.log(`Finish uploading all parts for multipart uploadId: ${UploadId}`);

    // Completes a multipart upload by assembling previously uploaded parts.
    // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/s3.html#completemultipartupload
    let completeUploadResponse = await S3.send(new CompleteMultipartUploadCommand({
        Bucket: bucket,
        Key: key,
        MultipartUpload: { Parts: uploadPartResults },
        UploadId: UploadId
    }));

    console.log('Successfully completed multipart upload');

    return completeUploadResponse;
};

const readNextPart = (fileDescriptor) => new Promise((resolve, reject) => {
    // Allocate an empty buffer to save data chunk that is read
    const buffer = Buffer.alloc(CHUNK_SIZE);

    fs.read(
        fileDescriptor,
        buffer,                             // Buffer where data will be written
        0,                                  // Start Offset on buffer while writing data
        CHUNK_SIZE,                         // Length of bytes to read
        null,                               // Position in file; null reads from the current file position and advances it
        (err, bytesRead) => {               // Callback function
            if (err) return reject(err);
            resolve({ bytesRead, buffer });
        });
});

// Upload a given part with retries
const uploadPart = async (S3, options, retry = 1) => {
    const { data, bucket, key, PartNumber, UploadId } = options;
    let response;
    try {
        // Upload part to S3
        // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/s3.html#uploadpart
        response = await S3.send(
            new UploadPartCommand({
                Body: data,
                Bucket: bucket,
                Key: key,
                PartNumber,
                UploadId
            })
        );
    } catch (err) {
        console.log(`ATTEMPT-#${retry} Failed to upload part ${PartNumber}: ${err}`);

        if (retry >= MAX_RETRIES)
            throw err;
        else
            return uploadPart(S3, options, retry + 1);
    }

    return response;
};

export default multipartS3Uploader;

// Example:

await multipartS3Uploader('file2upload.dum',
    {
        ncs_region: 'NCP-TH',
        bucket: 's3-client-buckets',
        key: 'uploaded.dum',
        ncs_endpoint: 'https://s3-bkk.nipa.cloud',
        ncs_credentials: {
            accessKeyId: 'X2EGTHMCW0xxxxxxS1B8',
            secretAccessKey: 'c1XgG0DCjPxH9RCHJByDDMxxxxxxxxxxxxU17F7m'
        }
    }
);
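
As a quick sanity check on the numbers in the example (our own arithmetic, not from the original docs): with 100 MB parts, the part count is simply `Math.ceil(fileSize / CHUNK_SIZE)`, and every part except the last is exactly `CHUNK_SIZE` bytes.

```javascript
// Part-count arithmetic for the 5.04 GB example above (illustration only).
const CHUNK_SIZE = 1024 * 1024 * 100;          // 100 MB parts, as in the uploader

// A 5.04 GB file, taking 1 GB = 1024^3 bytes
const fileSize = Math.round(5.04 * 1024 ** 3); // 5411658793 bytes

const totalParts = Math.ceil(fileSize / CHUNK_SIZE);
const lastPartSize = fileSize - (totalParts - 1) * CHUNK_SIZE;

console.log(totalParts);    // 52 parts
console.log(lastPartSize);  // 63921193 bytes, i.e. a final part of roughly 61 MB
```

S3-compatible stores generally require every part except the last to be at least 5 MB, so the 100 MB chunk size leaves plenty of margin even for the final part.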