
Introduction to vue-simple-uploader

vue-simple-uploader is a Vue upload plugin built on top of simple-uploader.js. Its advantages include, but are not limited to:

  • File, multi-file, and folder uploads; drag-and-drop upload of both files and folders
  • Pause and resume
  • Error handling
  • "Instant upload" (秒传): the server is asked whether the file already exists, and if so the upload is skipped entirely
  • Chunked uploads
  • Progress display, estimated time remaining, automatic retry on error, and re-upload
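The "instant upload" and resume features both hinge on a stable file identifier. In this article the client computes an MD5 of the file with spark-md5, slice by slice; the same chunk-by-chunk hashing idea can be sketched on the JVM with java.security.MessageDigest (the class and method names here are illustrative, not part of the project):

```java
import java.security.MessageDigest;

public class Md5Identifier {
    // Hash a byte array in fixed-size chunks, the same way the client
    // feeds file slices to spark-md5, and return the hex digest.
    public static String md5Hex(byte[] data, int chunkSize) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        for (int start = 0; start < data.length; start += chunkSize) {
            int len = Math.min(chunkSize, data.length - start);
            md.update(data, start, len);
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```

Because the digest is fed incrementally, hashing in 2-byte or 2 MB chunks yields the same identifier as hashing the whole file at once, which is what lets the client hash large files without loading them entirely into memory.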


Preview

(screenshots of the upload UI)

Building the server side

Create the UploaderController class

UploaderController exposes the three REST endpoints:
checkChunkExist - checks whether a chunk already exists; returns true if the whole file exists, otherwise returns false together with the chunks uploaded so far.
uploadChunk - uploads a single file chunk; the core upload method.
mergeChunks - merges the chunks into the final file once they have all been uploaded.

```java
package com.qingfeng.uploader.controller;

import com.qingfeng.uploader.dto.FileChunkDTO;
import com.qingfeng.uploader.dto.FileChunkResultDTO;
import com.qingfeng.uploader.response.RestApiResponse;
import com.qingfeng.uploader.service.IUploadService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

/**
 * @ProjectName UploaderController
 * @author Administrator
 * @version 1.0.0
 * @Description chunked attachment upload
 * @createTime 2022/4/13 15:58
 */
@RestController
@RequestMapping("upload")
public class UploaderController {

    @Autowired
    private IUploadService uploadService;

    /**
     * Check whether a chunk exists.
     */
    @GetMapping("chunk")
    public RestApiResponse<Object> checkChunkExist(FileChunkDTO chunkDTO) {
        FileChunkResultDTO fileChunkCheckDTO;
        try {
            fileChunkCheckDTO = uploadService.checkChunkExist(chunkDTO);
            return RestApiResponse.success(fileChunkCheckDTO);
        } catch (Exception e) {
            return RestApiResponse.error(e.getMessage());
        }
    }

    /**
     * Upload a file chunk.
     */
    @PostMapping("chunk")
    public RestApiResponse<Object> uploadChunk(FileChunkDTO chunkDTO) {
        try {
            uploadService.uploadChunk(chunkDTO);
            return RestApiResponse.success(chunkDTO.getIdentifier());
        } catch (Exception e) {
            return RestApiResponse.error(e.getMessage());
        }
    }

    /**
     * Request merging of the uploaded chunks.
     */
    @PostMapping("merge")
    public RestApiResponse<Object> mergeChunks(@RequestBody FileChunkDTO chunkDTO) {
        try {
            boolean success = uploadService.mergeChunk(chunkDTO.getIdentifier(), chunkDTO.getFilename(), chunkDTO.getTotalChunks());
            return RestApiResponse.flag(success);
        } catch (Exception e) {
            return RestApiResponse.error(e.getMessage());
        }
    }
}
```

Create the IUploadService interface

```java
package com.qingfeng.uploader.service;

import com.qingfeng.uploader.dto.FileChunkDTO;
import com.qingfeng.uploader.dto.FileChunkResultDTO;
import java.io.IOException;

/**
 * @ProjectName IUploadService
 * @author Administrator
 * @version 1.0.0
 * @Description chunked attachment upload
 * @createTime 2022/4/13 15:59
 */
public interface IUploadService {

    /**
     * Check whether the file exists. If it does, skip the upload;
     * if it does not, return the set of chunks already uploaded.
     */
    FileChunkResultDTO checkChunkExist(FileChunkDTO chunkDTO);

    /**
     * Upload a file chunk.
     */
    void uploadChunk(FileChunkDTO chunkDTO) throws IOException;

    /**
     * Merge the file chunks.
     */
    boolean mergeChunk(String identifier, String fileName, Integer totalChunks) throws IOException;
}
```

Create the UploadServiceImpl implementation class

UploadServiceImpl is the core of the chunked upload feature: it stores the chunks, merges them, and checks which ones already exist.

```java
package com.qingfeng.uploader.service.impl;

import com.qingfeng.uploader.dto.FileChunkDTO;
import com.qingfeng.uploader.dto.FileChunkResultDTO;
import com.qingfeng.uploader.service.IUploadService;
import org.apache.tomcat.util.http.fileupload.IOUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Service;
import java.io.*;
import java.util.*;

/**
 * @ProjectName UploadServiceImpl
 * @author Administrator
 * @version 1.0.0
 * @Description chunked attachment upload
 * @createTime 2022/4/13 15:59
 */
@Service
@SuppressWarnings("all")
public class UploadServiceImpl implements IUploadService {

    private Logger logger = LoggerFactory.getLogger(UploadServiceImpl.class);

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Value("${uploadFolder}")
    private String uploadFolder;

    /**
     * Check whether the file exists. If it does, skip the upload;
     * if it does not, return the set of chunks already uploaded.
     */
    @Override
    public FileChunkResultDTO checkChunkExist(FileChunkDTO chunkDTO) {
        // 1. Check whether the file has already been uploaded
        // 1.1 Check whether it exists on disk
        String fileFolderPath = getFileFolderPath(chunkDTO.getIdentifier());
        logger.info("fileFolderPath-->{}", fileFolderPath);
        String filePath = getFilePath(chunkDTO.getIdentifier(), chunkDTO.getFilename());
        File file = new File(filePath);
        boolean exists = file.exists();
        // 1.2 Check whether Redis has a record and every chunk is uploaded
        Set<Integer> uploaded = (Set<Integer>) redisTemplate.opsForHash().get(chunkDTO.getIdentifier(), "uploaded");
        if (uploaded != null && uploaded.size() == chunkDTO.getTotalChunks() && exists) {
            return new FileChunkResultDTO(true);
        }
        File fileFolder = new File(fileFolderPath);
        if (!fileFolder.exists()) {
            boolean mkdirs = fileFolder.mkdirs();
            logger.info("Preparation: created folder, fileFolderPath:{}, mkdirs:{}", fileFolderPath, mkdirs);
        }
        // Resumable upload: return the chunks already uploaded
        return new FileChunkResultDTO(false, uploaded);
    }

    /**
     * Upload a chunk.
     */
    @Override
    public void uploadChunk(FileChunkDTO chunkDTO) {
        // Directory holding the chunks
        String chunkFileFolderPath = getChunkFileFolderPath(chunkDTO.getIdentifier());
        logger.info("chunk directory -> {}", chunkFileFolderPath);
        File chunkFileFolder = new File(chunkFileFolderPath);
        if (!chunkFileFolder.exists()) {
            boolean mkdirs = chunkFileFolder.mkdirs();
            logger.info("created chunk folder: {}", mkdirs);
        }
        // Write the chunk to disk
        try (
                InputStream inputStream = chunkDTO.getFile().getInputStream();
                FileOutputStream outputStream = new FileOutputStream(new File(chunkFileFolderPath + chunkDTO.getChunkNumber()))
        ) {
            IOUtils.copy(inputStream, outputStream);
            logger.info("file identifier:{}, chunkNumber:{}", chunkDTO.getIdentifier(), chunkDTO.getChunkNumber());
            // Record the chunk in Redis
            long size = saveToRedis(chunkDTO);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public boolean mergeChunk(String identifier, String fileName, Integer totalChunks) throws IOException {
        return mergeChunks(identifier, fileName, totalChunks);
    }

    /**
     * Merge the chunks.
     */
    private boolean mergeChunks(String identifier, String filename, Integer totalChunks) {
        String chunkFileFolderPath = getChunkFileFolderPath(identifier);
        String filePath = getFilePath(identifier, filename);
        // Verify that every chunk is present
        if (checkChunks(chunkFileFolderPath, totalChunks)) {
            File chunkFileFolder = new File(chunkFileFolderPath);
            File mergeFile = new File(filePath);
            File[] chunks = chunkFileFolder.listFiles();
            // Sort the chunks by their numeric file names (1, 2, 3, ...)
            List<File> fileList = Arrays.asList(chunks);
            Collections.sort(fileList, (o1, o2) -> Integer.parseInt(o1.getName()) - Integer.parseInt(o2.getName()));
            try (RandomAccessFile randomAccessFileWriter = new RandomAccessFile(mergeFile, "rw")) {
                byte[] bytes = new byte[1024];
                for (File chunk : chunks) {
                    try (RandomAccessFile randomAccessFileReader = new RandomAccessFile(chunk, "r")) {
                        int len;
                        while ((len = randomAccessFileReader.read(bytes)) != -1) {
                            randomAccessFileWriter.write(bytes, 0, len);
                        }
                    }
                }
            } catch (Exception e) {
                return false;
            }
            return true;
        }
        return false;
    }

    /**
     * Check that every chunk exists on disk.
     * Note: the front end posts chunk.offset (the zero-based index of the
     * last chunk, i.e. actual total - 1) as totalChunks, so the loop bound
     * is totalChunks + 1 to cover chunks 1..actual total.
     */
    private boolean checkChunks(String chunkFileFolderPath, Integer totalChunks) {
        try {
            for (int i = 1; i <= totalChunks + 1; i++) {
                File file = new File(chunkFileFolderPath + File.separator + i);
                if (!file.exists()) {
                    return false;
                }
            }
        } catch (Exception e) {
            return false;
        }
        return true;
    }

    /**
     * Record a chunk in Redis.
     */
    private synchronized long saveToRedis(FileChunkDTO chunkDTO) {
        Set<Integer> uploaded = (Set<Integer>) redisTemplate.opsForHash().get(chunkDTO.getIdentifier(), "uploaded");
        if (uploaded == null) {
            uploaded = new HashSet<>(Arrays.asList(chunkDTO.getChunkNumber()));
            HashMap<String, Object> objectObjectHashMap = new HashMap<>();
            objectObjectHashMap.put("uploaded", uploaded);
            objectObjectHashMap.put("totalChunks", chunkDTO.getTotalChunks());
            objectObjectHashMap.put("totalSize", chunkDTO.getTotalSize());
            // objectObjectHashMap.put("path", getFileRelativelyPath(chunkDTO.getIdentifier(), chunkDTO.getFilename()));
            objectObjectHashMap.put("path", chunkDTO.getFilename());
            redisTemplate.opsForHash().putAll(chunkDTO.getIdentifier(), objectObjectHashMap);
        } else {
            uploaded.add(chunkDTO.getChunkNumber());
            redisTemplate.opsForHash().put(chunkDTO.getIdentifier(), "uploaded", uploaded);
        }
        return uploaded.size();
    }

    /**
     * Absolute path of the merged file.
     */
    private String getFilePath(String identifier, String filename) {
        String ext = filename.substring(filename.lastIndexOf("."));
        // return getFileFolderPath(identifier) + identifier + ext;
        return uploadFolder + filename;
    }

    /**
     * Relative path of the file.
     */
    private String getFileRelativelyPath(String identifier, String filename) {
        String ext = filename.substring(filename.lastIndexOf("."));
        return "/" + identifier.substring(0, 1) + "/" +
                identifier.substring(1, 2) + "/" +
                identifier + "/" + identifier + ext;
    }

    /**
     * Directory holding the chunks of a file.
     */
    private String getChunkFileFolderPath(String identifier) {
        return getFileFolderPath(identifier) + "chunks" + File.separator;
    }

    /**
     * Directory the file belongs to.
     */
    private String getFileFolderPath(String identifier) {
        return uploadFolder + identifier.substring(0, 1) + File.separator +
                identifier.substring(1, 2) + File.separator +
                identifier + File.separator;
        // return uploadFolder;
    }
}
```
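The path helpers above fan files out into sub-directories keyed by the first two characters of the MD5 identifier, so no single directory accumulates too many entries. A standalone sketch of the same layout logic (a hypothetical class mirroring getFileFolderPath and getChunkFileFolderPath, not part of the project):

```java
import java.io.File;

public class PathLayout {
    // Mirror getFileFolderPath: uploadFolder/<id[0]>/<id[1]>/<identifier>/
    public static String fileFolderPath(String uploadFolder, String identifier) {
        return uploadFolder + identifier.substring(0, 1) + File.separator
                + identifier.substring(1, 2) + File.separator
                + identifier + File.separator;
    }

    // Mirror getChunkFileFolderPath: chunks live in a "chunks" sub-directory
    public static String chunkFolderPath(String uploadFolder, String identifier) {
        return fileFolderPath(uploadFolder, identifier) + "chunks" + File.separator;
    }
}
```

Because the identifier is an MD5 hex string, the first two characters distribute files across at most 16 × 16 top-level buckets.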

Create the supporting classes

Create FileChunkDTO

```java
package com.qingfeng.uploader.dto;

import org.springframework.web.multipart.MultipartFile;

/**
 * @ProjectName FileChunkDTO
 * @author Administrator
 * @version 1.0.0
 * @Description chunked attachment upload
 * @createTime 2022/4/13 15:59
 */
public class FileChunkDTO {
    /**
     * MD5 of the whole file
     */
    private String identifier;
    /**
     * The chunk itself
     */
    private MultipartFile file;
    /**
     * Number of the current chunk (1-based)
     */
    private Integer chunkNumber;
    /**
     * Standard chunk size
     */
    private Long chunkSize;
    /**
     * Size of the current chunk
     */
    private Long currentChunkSize;
    /**
     * Total file size
     */
    private Long totalSize;
    /**
     * Total number of chunks
     */
    private Integer totalChunks;
    /**
     * File name
     */
    private String filename;

    public String getIdentifier() {
        return identifier;
    }

    public void setIdentifier(String identifier) {
        this.identifier = identifier;
    }

    public MultipartFile getFile() {
        return file;
    }

    public void setFile(MultipartFile file) {
        this.file = file;
    }

    public Integer getChunkNumber() {
        return chunkNumber;
    }

    public void setChunkNumber(Integer chunkNumber) {
        this.chunkNumber = chunkNumber;
    }

    public Long getChunkSize() {
        return chunkSize;
    }

    public void setChunkSize(Long chunkSize) {
        this.chunkSize = chunkSize;
    }

    public Long getCurrentChunkSize() {
        return currentChunkSize;
    }

    public void setCurrentChunkSize(Long currentChunkSize) {
        this.currentChunkSize = currentChunkSize;
    }

    public Long getTotalSize() {
        return totalSize;
    }

    public void setTotalSize(Long totalSize) {
        this.totalSize = totalSize;
    }

    public Integer getTotalChunks() {
        return totalChunks;
    }

    public void setTotalChunks(Integer totalChunks) {
        this.totalChunks = totalChunks;
    }

    public String getFilename() {
        return filename;
    }

    public void setFilename(String filename) {
        this.filename = filename;
    }

    @Override
    public String toString() {
        return "FileChunkDTO{" +
                "identifier='" + identifier + '\'' +
                ", file=" + file +
                ", chunkNumber=" + chunkNumber +
                ", chunkSize=" + chunkSize +
                ", currentChunkSize=" + currentChunkSize +
                ", totalSize=" + totalSize +
                ", totalChunks=" + totalChunks +
                ", filename='" + filename + '\'' +
                '}';
    }
}
```

Create FileChunkResultDTO

```java
package com.qingfeng.uploader.dto;

import java.util.Set;

/**
 * @ProjectName FileChunkResultDTO
 * @author Administrator
 * @version 1.0.0
 * @Description chunked attachment upload
 * @createTime 2022/4/13 15:59
 */
public class FileChunkResultDTO {
    /**
     * Whether to skip the upload
     */
    private Boolean skipUpload;
    /**
     * Set of chunks already uploaded
     */
    private Set<Integer> uploaded;

    public Boolean getSkipUpload() {
        return skipUpload;
    }

    public void setSkipUpload(Boolean skipUpload) {
        this.skipUpload = skipUpload;
    }

    public Set<Integer> getUploaded() {
        return uploaded;
    }

    public void setUploaded(Set<Integer> uploaded) {
        this.uploaded = uploaded;
    }

    public FileChunkResultDTO(Boolean skipUpload, Set<Integer> uploaded) {
        this.skipUpload = skipUpload;
        this.uploaded = uploaded;
    }

    public FileChunkResultDTO(Boolean skipUpload) {
        this.skipUpload = skipUpload;
    }
}
```

Create RestApiResponse

```java
package com.qingfeng.uploader.response;

/**
 * @ProjectName RestApiResponse
 * @author Administrator
 * @version 1.0.0
 * @Description chunked attachment upload
 * @createTime 2022/4/13 15:59
 */
public class RestApiResponse<T> {
    /**
     * Whether the call succeeded
     */
    private boolean success;
    /**
     * Response payload
     */
    private T data;

    public boolean isSuccess() {
        return success;
    }

    public void setSuccess(boolean success) {
        this.success = success;
    }

    public T getData() {
        return data;
    }

    public void setData(T data) {
        this.data = data;
    }

    public static <T> RestApiResponse<T> success(T data) {
        RestApiResponse<T> result = new RestApiResponse<>();
        result.success = true;
        result.data = data;
        return result;
    }

    public static <T> RestApiResponse<T> success() {
        RestApiResponse<T> result = new RestApiResponse<>();
        result.success = true;
        return result;
    }

    public static <T> RestApiResponse<T> error(T data) {
        RestApiResponse<T> result = new RestApiResponse<>();
        result.success = false;
        result.data = data;
        return result;
    }

    public static <T> RestApiResponse<T> flag(boolean data) {
        RestApiResponse<T> result = new RestApiResponse<>();
        result.success = data;
        return result;
    }
}
```

Analysis of the core methods

1. Checking whether the chunks exist

  • Check whether the merged file exists on disk.
  • Check whether Redis holds a record of the uploaded chunks.
  • Compare the number of uploaded chunks with the total chunk count.

If the file exists and every chunk has been uploaded, the upload is already complete and the "instant upload" path can be taken.
If the file does not exist, or not all chunks have been uploaded, false is returned together with the chunks uploaded so far.

```java
/**
 * Check whether the file exists. If it does, skip the upload;
 * if it does not, return the set of chunks already uploaded.
 */
@Override
public FileChunkResultDTO checkChunkExist(FileChunkDTO chunkDTO) {
    // 1. Check whether the file has already been uploaded
    // 1.1 Check whether it exists on disk
    String fileFolderPath = getFileFolderPath(chunkDTO.getIdentifier());
    logger.info("fileFolderPath-->{}", fileFolderPath);
    String filePath = getFilePath(chunkDTO.getIdentifier(), chunkDTO.getFilename());
    File file = new File(filePath);
    boolean exists = file.exists();
    // 1.2 Check whether Redis has a record and every chunk is uploaded
    Set<Integer> uploaded = (Set<Integer>) redisTemplate.opsForHash().get(chunkDTO.getIdentifier(), "uploaded");
    if (uploaded != null && uploaded.size() == chunkDTO.getTotalChunks() && exists) {
        return new FileChunkResultDTO(true);
    }
    File fileFolder = new File(fileFolderPath);
    if (!fileFolder.exists()) {
        boolean mkdirs = fileFolder.mkdirs();
        logger.info("Preparation: created folder, fileFolderPath:{}, mkdirs:{}", fileFolderPath, mkdirs);
    }
    // Resumable upload: return the chunks already uploaded
    return new FileChunkResultDTO(false, uploaded);
}
```
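Stripped of the file-system and Redis lookups, the decision itself is a pure function of three inputs, which makes the rule easy to see (hypothetical helper, not part of the project):

```java
import java.util.Set;

public class SkipUploadCheck {
    // Skip the upload only when the merged file exists on disk AND
    // Redis records every chunk as uploaded.
    public static boolean skipUpload(boolean fileExists, Set<Integer> uploaded, int totalChunks) {
        return fileExists && uploaded != null && uploaded.size() == totalChunks;
    }
}
```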

2. Uploading a chunk

  • Check whether the chunk directory exists; create it if not.
  • Copy the chunk into the target directory.
  • Record the chunk in Redis.

```java
/**
 * Upload a chunk.
 */
@Override
public void uploadChunk(FileChunkDTO chunkDTO) {
    // Directory holding the chunks
    String chunkFileFolderPath = getChunkFileFolderPath(chunkDTO.getIdentifier());
    logger.info("chunk directory -> {}", chunkFileFolderPath);
    File chunkFileFolder = new File(chunkFileFolderPath);
    if (!chunkFileFolder.exists()) {
        boolean mkdirs = chunkFileFolder.mkdirs();
        logger.info("created chunk folder: {}", mkdirs);
    }
    // Write the chunk to disk
    try (
            InputStream inputStream = chunkDTO.getFile().getInputStream();
            FileOutputStream outputStream = new FileOutputStream(new File(chunkFileFolderPath + chunkDTO.getChunkNumber()))
    ) {
        IOUtils.copy(inputStream, outputStream);
        logger.info("file identifier:{}, chunkNumber:{}", chunkDTO.getIdentifier(), chunkDTO.getChunkNumber());
        // Record the chunk in Redis
        long size = saveToRedis(chunkDTO);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
```

3. Merging the chunks into the final file

```java
@Override
public boolean mergeChunk(String identifier, String fileName, Integer totalChunks) throws IOException {
    return mergeChunks(identifier, fileName, totalChunks);
}

/**
 * Merge the chunks.
 */
private boolean mergeChunks(String identifier, String filename, Integer totalChunks) {
    String chunkFileFolderPath = getChunkFileFolderPath(identifier);
    String filePath = getFilePath(identifier, filename);
    // Verify that every chunk is present
    if (checkChunks(chunkFileFolderPath, totalChunks)) {
        File chunkFileFolder = new File(chunkFileFolderPath);
        File mergeFile = new File(filePath);
        File[] chunks = chunkFileFolder.listFiles();
        // Sort the chunks by their numeric file names (1, 2, 3, ...)
        List<File> fileList = Arrays.asList(chunks);
        Collections.sort(fileList, (o1, o2) -> Integer.parseInt(o1.getName()) - Integer.parseInt(o2.getName()));
        try (RandomAccessFile randomAccessFileWriter = new RandomAccessFile(mergeFile, "rw")) {
            byte[] bytes = new byte[1024];
            for (File chunk : chunks) {
                try (RandomAccessFile randomAccessFileReader = new RandomAccessFile(chunk, "r")) {
                    int len;
                    while ((len = randomAccessFileReader.read(bytes)) != -1) {
                        randomAccessFileWriter.write(bytes, 0, len);
                    }
                }
            }
        } catch (Exception e) {
            return false;
        }
        return true;
    }
    return false;
}
```

Checking that every chunk exists

```java
/**
 * Check that every chunk exists on disk. The front end posts chunk.offset
 * (the zero-based index of the last chunk, i.e. actual total - 1) as
 * totalChunks, so the loop bound is totalChunks + 1.
 */
private boolean checkChunks(String chunkFileFolderPath, Integer totalChunks) {
    try {
        for (int i = 1; i <= totalChunks + 1; i++) {
            File file = new File(chunkFileFolderPath + File.separator + i);
            if (!file.exists()) {
                return false;
            }
        }
    } catch (Exception e) {
        return false;
    }
    return true;
}
```
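A subtlety worth spelling out: in the fileSuccess handler later in this article, the client sends chunk.offset as totalChunks, and offset is the zero-based index of the last chunk (actual total − 1). The `totalChunks + 1` loop bound compensates for that, so chunks 1 through the real total are all verified. A tiny sketch reproducing just the loop bounds (hypothetical helper):

```java
import java.util.ArrayList;
import java.util.List;

public class CheckedChunks {
    // Which 1-based chunk numbers does checkChunks verify for a given
    // posted totalChunks value? Same bounds as the loop above.
    public static List<Integer> checkedNumbers(int postedTotalChunks) {
        List<Integer> numbers = new ArrayList<>();
        for (int i = 1; i <= postedTotalChunks + 1; i++) {
            numbers.add(i);
        }
        return numbers;
    }
}
```

So for a file of 3 chunks the client posts totalChunks = 2, and chunks 1, 2, and 3 are all checked.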

Reading the chunk list

```java
File[] chunks = chunkFileFolder.listFiles();
```

Sorting the chunks (1, 2, 3, ...)

```java
List<File> fileList = Arrays.asList(chunks);
Collections.sort(fileList, (o1, o2) -> Integer.parseInt(o1.getName()) - Integer.parseInt(o2.getName()));
```

Merging the chunks into the final file

```java
try (RandomAccessFile randomAccessFileWriter = new RandomAccessFile(mergeFile, "rw")) {
    byte[] bytes = new byte[1024];
    for (File chunk : chunks) {
        try (RandomAccessFile randomAccessFileReader = new RandomAccessFile(chunk, "r")) {
            int len;
            while ((len = randomAccessFileReader.read(bytes)) != -1) {
                randomAccessFileWriter.write(bytes, 0, len);
            }
        }
    }
}
```

4. Recording chunks in Redis

If no record exists for the file yet, create the base entry and save it; otherwise add the chunk number to the existing set.

```java
/**
 * Record a chunk in Redis.
 */
private synchronized long saveToRedis(FileChunkDTO chunkDTO) {
    Set<Integer> uploaded = (Set<Integer>) redisTemplate.opsForHash().get(chunkDTO.getIdentifier(), "uploaded");
    if (uploaded == null) {
        uploaded = new HashSet<>(Arrays.asList(chunkDTO.getChunkNumber()));
        HashMap<String, Object> objectObjectHashMap = new HashMap<>();
        objectObjectHashMap.put("uploaded", uploaded);
        objectObjectHashMap.put("totalChunks", chunkDTO.getTotalChunks());
        objectObjectHashMap.put("totalSize", chunkDTO.getTotalSize());
        // objectObjectHashMap.put("path", getFileRelativelyPath(chunkDTO.getIdentifier(), chunkDTO.getFilename()));
        objectObjectHashMap.put("path", chunkDTO.getFilename());
        redisTemplate.opsForHash().putAll(chunkDTO.getIdentifier(), objectObjectHashMap);
    } else {
        uploaded.add(chunkDTO.getChunkNumber());
        redisTemplate.opsForHash().put(chunkDTO.getIdentifier(), "uploaded", uploaded);
    }
    return uploaded.size();
}
```
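Stripped of Redis, saveToRedis reduces to per-file set bookkeeping. A minimal in-memory sketch (a HashMap standing in for Redis, purely illustrative) shows the invariant the merge step relies on: a file is complete exactly when its set of recorded chunk numbers reaches the total.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ChunkRegistry {
    private final Map<String, Set<Integer>> uploadedByFile = new HashMap<>();

    // Record one chunk; returns how many distinct chunks of the file
    // have been recorded so far (re-recording a chunk is a no-op).
    public synchronized long record(String identifier, int chunkNumber) {
        Set<Integer> uploaded = uploadedByFile.computeIfAbsent(identifier, k -> new HashSet<>());
        uploaded.add(chunkNumber);
        return uploaded.size();
    }

    // True once every chunk 1..totalChunks has been recorded.
    public synchronized boolean complete(String identifier, int totalChunks) {
        Set<Integer> uploaded = uploadedByFile.get(identifier);
        return uploaded != null && uploaded.size() == totalChunks;
    }
}
```

Using a set (rather than a counter) makes retried or duplicated chunk uploads harmless, which matters because simple-uploader retries failed chunks automatically.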

Building the Vue 2 front end

Install the uploader and spark-md5 dependencies

```shell
npm install --save vue-simple-uploader
npm install --save spark-md5
```

Import uploader in main.js

```javascript
import uploader from 'vue-simple-uploader'
Vue.use(uploader)
```

Create the uploader component

```vue
<template>
  <div>
    <uploader
      :autoStart="false"
      :options="options"
      :file-status-text="statusText"
      class="uploader-example"
      @file-complete="fileComplete"
      @complete="complete"
      @file-success="fileSuccess"
      @files-added="filesAdded"
    >
      <uploader-unsupport></uploader-unsupport>
      <uploader-drop>
        <p>Drop files here to upload</p>
        <uploader-btn>Select files</uploader-btn>
        <uploader-btn :attrs="attrs">Select images</uploader-btn>
        <uploader-btn :directory="true">Select a folder</uploader-btn>
      </uploader-drop>
      <!-- <uploader-list></uploader-list> -->
      <uploader-files> </uploader-files>
    </uploader>
    <br />
    <a-button @click="allStart()" :disabled="disabled">Start all</a-button>
    <a-button @click="allStop()" style="margin-left: 4px">Pause all</a-button>
    <a-button @click="allRemove()" style="margin-left: 4px">Remove all</a-button>
  </div>
</template>
<script>
import axios from "axios";
import SparkMD5 from "spark-md5";
import storage from "store";
import { ACCESS_TOKEN } from '@/store/mutation-types'
export default {
  data() {
    return {
      skip: false,
      options: {
        target: "//localhost:8888/upload/chunk",
        // Enable server-side chunk verification
        testChunks: true,
        // Localize the remaining-time text (English units -> Chinese)
        parseTimeRemaining: function (timeRemaining, parsedTimeRemaining) {
          return parsedTimeRemaining
            .replace(/\syears?/, "年")
            .replace(/\sdays?/, "天")
            .replace(/\shours?/, "小时")
            .replace(/\sminutes?/, "分钟")
            .replace(/\sseconds?/, "秒");
        },
        // Server-side chunk verification callback
        checkChunkUploadedByResponse: (chunk, message) => {
          const result = JSON.parse(message);
          if (result.data.skipUpload) {
            this.skip = true;
            return true;
          }
          return (result.data.uploaded || []).indexOf(chunk.offset + 1) >= 0;
        },
        headers: {
          // Authentication header; adapt to your own business
          "Access-Token": storage.get(ACCESS_TOKEN),
        },
      },
      attrs: {
        accept: "image/*",
      },
      statusText: {
        success: "Upload succeeded",
        error: "Upload failed",
        uploading: "Uploading...",
        paused: "Paused...",
        waiting: "Waiting...",
        cmd5: "Computing file MD5...",
      },
      fileList: [],
      disabled: true,
    };
  },
  watch: {
    fileList() {
      this.disabled = false;
    },
  },
  methods: {
    fileSuccess(rootFile, file, response, chunk) {
      const result = JSON.parse(response);
      console.log(result.success, this.skip);
      if (result.success && !this.skip) {
        axios
          .post(
            "http://127.0.0.1:8888/upload/merge",
            {
              identifier: file.uniqueIdentifier,
              filename: file.name,
              // chunk.offset is the zero-based index of the last chunk;
              // the server's checkChunks compensates with totalChunks + 1
              totalChunks: chunk.offset,
            },
            {
              headers: { "Access-Token": storage.get(ACCESS_TOKEN) }
            }
          )
          .then((res) => {
            if (res.data.success) {
              console.log("Upload complete");
            } else {
              console.log(res);
            }
          })
          .catch(function (error) {
            console.log(error);
          });
      } else {
        console.log("Upload complete, no merge needed");
      }
      if (this.skip) {
        this.skip = false;
      }
    },
    fileComplete(rootFile) {
      // A root file (or folder) finished uploading successfully.
      // console.log("fileComplete", rootFile);
    },
    complete() {
      // All uploads finished.
      // console.log("complete");
    },
    filesAdded(files, fileList, event) {
      files.forEach((e) => {
        this.fileList.push(e);
        this.computeMD5(e);
      });
    },
    computeMD5(file) {
      let fileReader = new FileReader();
      let time = new Date().getTime();
      let blobSlice =
        File.prototype.slice ||
        File.prototype.mozSlice ||
        File.prototype.webkitSlice;
      let currentChunk = 0;
      const chunkSize = 1024 * 1024;
      let chunks = Math.ceil(file.size / chunkSize);
      let spark = new SparkMD5.ArrayBuffer();
      // Mark the file as "computing MD5" and pause it until hashing is done
      file.cmd5 = true;
      file.pause();
      loadNext();
      fileReader.onload = (e) => {
        spark.append(e.target.result);
        currentChunk++;
        if (currentChunk < chunks) {
          loadNext();
          // Report MD5 progress
          console.log(
            `chunk ${currentChunk} of ${chunks} parsed, reading chunk ${currentChunk + 1}`
          );
        } else {
          let md5 = spark.end();
          console.log(
            `MD5 computed: ${file.name}\nMD5: ${md5}\nchunks: ${chunks} size: ${
              file.size
            } time: ${new Date().getTime() - time} ms`
          );
          spark.destroy(); // release the buffer
          file.uniqueIdentifier = md5; // use the MD5 as the file's unique identifier
          file.cmd5 = false; // leave the "computing MD5" state
          file.resume(); // start the upload
        }
      };
      fileReader.onerror = () => {
        console.error(`Failed to read file ${file.name}, please check it`);
        file.cancel();
      };
      function loadNext() {
        let start = currentChunk * chunkSize;
        let end =
          start + chunkSize >= file.size ? file.size : start + chunkSize;
        fileReader.readAsArrayBuffer(blobSlice.call(file.file, start, end));
      }
    },
    allStart() {
      this.fileList.map((e) => {
        if (e.paused) {
          e.resume();
        }
      });
    },
    allStop() {
      this.fileList.map((e) => {
        if (!e.paused) {
          e.pause();
        }
      });
    },
    allRemove() {
      this.fileList.map((e) => {
        e.cancel();
      });
      this.fileList = [];
    },
  },
};
</script>
<style>
.uploader-example {
  width: 100%;
  padding: 15px;
  margin: 0px auto 0;
  font-size: 12px;
  box-shadow: 0 0 10px rgba(0, 0, 0, 0.4);
}
.uploader-example .uploader-btn {
  margin-right: 4px;
}
.uploader-example .uploader-list {
  max-height: 440px;
  overflow: auto;
  overflow-x: hidden;
  overflow-y: auto;
}
</style>
```

Using the Uploader component

```vue
<template>
  <div>
    <a-button @click="uploadFile"> Upload resources </a-button>
    <a-drawer
      title="Upload resources"
      placement="right"
      width="640"
      :closable="false"
      :visible="visible"
      @close="onClose"
    >
      <upload></upload>
    </a-drawer>
  </div>
</template>
<script>
import Upload from "@/components/Upload/Index";
export default {
  name: "WelcomePage",
  data() {
    return {
      visible: false,
    };
  },
  components: {
    Upload,
  },
  methods: {
    uploadFile() {
      this.visible = true;
    },
    onClose() {
      this.visible = false;
    }
  },
};
</script>
<style lang="less" scoped>
</style>
```

Analysis of the core front-end pieces

```vue
<uploader
  :autoStart="false"
  :options="options"
  :file-status-text="statusText"
  class="uploader-example"
  @file-complete="fileComplete"
  @complete="complete"
  @file-success="fileSuccess"
  @files-added="filesAdded"
>
  <uploader-unsupport></uploader-unsupport>
  <uploader-drop>
    <p>Drop files here to upload</p>
    <uploader-btn>Select files</uploader-btn>
    <uploader-btn :attrs="attrs">Select images</uploader-btn>
    <uploader-btn :directory="true">Select a folder</uploader-btn>
  </uploader-drop>
  <!-- <uploader-list></uploader-list> -->
  <uploader-files> </uploader-files>
</uploader>
<br />
<a-button @click="allStart()" :disabled="disabled">Start all</a-button>
<a-button @click="allStop()" style="margin-left: 4px">Pause all</a-button>
<a-button @click="allRemove()" style="margin-left: 4px">Remove all</a-button>
```

Analysis of the options parameter

See the simple-uploader.js configuration for the full list. In addition, the following options are available:
parseTimeRemaining(timeRemaining, parsedTimeRemaining) {Function} - formats the remaining-time text, typically used for localization. Parameters:

  • timeRemaining {Number}: remaining time, in seconds
  • parsedTimeRemaining {String}: the default remaining-time text, which you can post-process like this:

```javascript
parseTimeRemaining: function (timeRemaining, parsedTimeRemaining) {
  return parsedTimeRemaining
    .replace(/\syears?/, '年')
    .replace(/\sdays?/, '天')
    .replace(/\shours?/, '小时')
    .replace(/\sminutes?/, '分钟')
    .replace(/\sseconds?/, '秒')
}
```

categoryMap {Object}
The file-type map. Default:

```javascript
{
  image: ['gif', 'jpg', 'jpeg', 'png', 'bmp', 'webp'],
  video: ['mp4', 'm3u8', 'rmvb', 'avi', 'swf', '3gp', 'mkv', 'flv'],
  audio: ['mp3', 'wav', 'wma', 'ogg', 'aac', 'flac'],
  document: ['doc', 'txt', 'docx', 'pages', 'epub', 'pdf', 'numbers', 'csv', 'xls', 'xlsx', 'keynote', 'ppt', 'pptx']
}
```

autoStart {Boolean}: defaults to true; whether uploads start automatically once files are selected.
fileStatusText {Object}: default:

```javascript
{
  success: 'success',
  error: 'error',
  uploading: 'uploading',
  paused: 'paused',
  waiting: 'waiting'
}
```

This object maps file upload statuses to display text.
Since version 0.6.0, fileStatusText can also be a function with the signature (status, response = null): the first argument is the status, the second the response body (null by default). Example:

```javascript
fileStatusText(status, response) {
  const statusTextMap = {
    uploading: 'uploading',
    paused: 'paused',
    waiting: 'waiting'
  }
  if (status === 'success' || status === 'error') {
    // response is only available when status is 'success' or 'error',
    // e.g. return a field from the response body:
    return response.data
  } else {
    return statusTextMap[status]
  }
}
```
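The function form is plain JavaScript and can be exercised on its own; the `{ data: ... }` response shape is an assumption about what the backend returns:

```javascript
// Same shape as the fileStatusText function above, with a null-safe response access.
function fileStatusText(status, response = null) {
  const statusTextMap = {
    uploading: "uploading",
    paused: "paused",
    waiting: "waiting",
  };
  if (status === "success" || status === "error") {
    return response && response.data; // response only exists for success/error
  }
  return statusTextMap[status];
}

console.log(fileStatusText("paused"));                    // → "paused"
console.log(fileStatusText("success", { data: "done" })); // → "done"
```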

The fileComplete method

```javascript
fileComplete(rootFile) {
  // A root file (or folder) finished uploading successfully.
  // console.log("fileComplete", rootFile);
},
```

The complete method

```javascript
complete() {
  // All uploads finished.
  // console.log("complete");
},
```

The fileSuccess method

A file has uploaded successfully; ask the server to merge its chunks.

```javascript
fileSuccess(rootFile, file, response, chunk) {
  // console.log(rootFile, file, response, chunk);
  const result = JSON.parse(response);
  console.log(result.success, this.skip);
  if (result.success && !this.skip) {
    axios
      .post(
        "http://127.0.0.1:8888/upload/merge",
        {
          identifier: file.uniqueIdentifier,
          filename: file.name,
          totalChunks: chunk.offset,
        },
        {
          headers: { "Access-Token": storage.get(ACCESS_TOKEN) },
        }
      )
      .then((res) => {
        if (res.data.success) {
          console.log("Upload succeeded");
        } else {
          console.log(res);
        }
      })
      .catch(function (error) {
        console.log(error);
      });
  } else {
    console.log("Upload succeeded, no merge needed");
  }
  if (this.skip) {
    this.skip = false;
  }
},
```
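The merge request body can be factored into a small helper, which makes the field names easy to test; `identifier`, `filename`, and `totalChunks` mirror the fields posted to `/upload/merge`, and the stub values are illustrative:

```javascript
// Builds the body POSTed to /upload/merge; as in fileSuccess,
// totalChunks is taken from the last chunk's offset.
function buildMergeBody(file, chunk) {
  return {
    identifier: file.uniqueIdentifier,
    filename: file.name,
    totalChunks: chunk.offset,
  };
}

const body = buildMergeBody(
  { uniqueIdentifier: "e10adc3949ba59ab", name: "demo.zip" }, // stub file
  { offset: 4 }                                               // stub last chunk
);
console.log(body.totalChunks); // → 4
```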

The filesAdded method

Files have been selected; compute each file's MD5 before uploading its chunks.

```javascript
filesAdded(file, fileList, event) {
  // `file` is an array of the files just added
  file.forEach((e) => {
    this.fileList.push(e);
    this.computeMD5(e);
  });
},
computeMD5(file) {
  let fileReader = new FileReader();
  let time = new Date().getTime();
  let blobSlice =
    File.prototype.slice ||
    File.prototype.mozSlice ||
    File.prototype.webkitSlice;
  let currentChunk = 0;
  const chunkSize = 1024 * 1024;
  let chunks = Math.ceil(file.size / chunkSize);
  let spark = new SparkMD5.ArrayBuffer();
  // Mark the file as "computing MD5" and pause it while hashing
  file.cmd5 = true;
  file.pause();
  loadNext();
  fileReader.onload = (e) => {
    spark.append(e.target.result);
    currentChunk++;
    if (currentChunk < chunks) {
      loadNext();
      // Report MD5 progress as we go
      console.log(
        `Chunk ${currentChunk} hashed, starting chunk ${currentChunk + 1} / ${chunks}`
      );
    } else {
      let md5 = spark.end();
      console.log(
        `MD5 done for ${file.name}\nMD5: ${md5}\nchunks: ${chunks} size: ${
          file.size
        } took: ${new Date().getTime() - time} ms`
      );
      spark.destroy(); // release the hasher's buffers
      file.uniqueIdentifier = md5; // use the MD5 as the file's unique identifier
      file.cmd5 = false; // done computing MD5
      file.resume(); // start the upload
    }
  };
  fileReader.onerror = () => {
    // arrow function keeps `this` bound to the component (a plain function
    // here would bind `this` to the FileReader instead)
    console.error(`Failed to read file ${file.name}, please check it`);
    file.cancel();
  };
  function loadNext() {
    let start = currentChunk * chunkSize;
    let end =
      start + chunkSize >= file.size ? file.size : start + chunkSize;
    fileReader.readAsArrayBuffer(blobSlice.call(file.file, start, end));
  }
},
```
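The arithmetic inside loadNext determines each chunk's byte range; it can be checked in isolation with a hypothetical file size:

```javascript
// Reproduces loadNext()'s boundary computation: chunk `index` covers [start, end).
function chunkRange(index, fileSize, chunkSize = 1024 * 1024) {
  const start = index * chunkSize;
  const end = start + chunkSize >= fileSize ? fileSize : start + chunkSize;
  return [start, end];
}

const fileSize = 2621440; // hypothetical 2.5 MB file
console.log(Math.ceil(fileSize / (1024 * 1024))); // → 3 chunks
console.log(chunkRange(0, fileSize)); // → [0, 1048576]
console.log(chunkRange(2, fileSize)); // → [2097152, 2621440] (last chunk is shorter)
```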

allStart: start all

```javascript
allStart() {
  console.log(this.fileList);
  this.fileList.map((e) => {
    if (e.paused) {
      e.resume();
    }
  });
},
```

allStop: pause all

```javascript
allStop() {
  console.log(this.fileList);
  this.fileList.map((e) => {
    if (!e.paused) {
      e.pause();
    }
  });
},
```

allRemove: remove all

```javascript
allRemove() {
  this.fileList.map((e) => {
    e.cancel();
  });
  this.fileList = [];
},
```
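These handlers only flip each file's pause state; with minimal stand-ins for uploader file objects the behaviour can be shown outside Vue:

```javascript
// Minimal stand-in for an uploader file: resume/pause just toggle `paused`.
function makeFile(paused) {
  return {
    paused,
    resume() { this.paused = false; },
    pause() { this.paused = true; },
  };
}

const fileList = [makeFile(true), makeFile(false)];

// allStart: resume every paused file
fileList.forEach((e) => { if (e.paused) e.resume(); });
console.log(fileList.every((e) => !e.paused)); // → true

// allStop: pause every file that is still running
fileList.forEach((e) => { if (!e.paused) e.pause(); });
console.log(fileList.every((e) => e.paused)); // → true
```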

File chunking

vue-simple-uploader splits files into chunks automatically; the size of each chunk is set via chunkSize in options.
For a large file the plugin therefore sends multiple requests. With testChunks set to true (the plugin's default), it first sends a verification request asking the server which chunks already exist; the first GET request below is that check, and every POST request after it uploads one chunk.
[Figure: network panel showing the GET verification request followed by the chunk-upload POST requests]
Looking at the parameters sent to the server: chunkNumber is the index of the current chunk and totalChunks is the total number of chunks; the plugin computes both from the chunkSize you configured.
[Figure 6: chunk-upload request parameters]
Note that in the final file-success event you should use a field returned by the backend to decide whether to send the backend an additional merge request.
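Putting the two counters together, the verification GET carries query parameters along these lines (the file name, size, and identifier value are illustrative):

```javascript
// Illustrative parameters for the testChunks verification GET.
const chunkSize = 1024 * 1024;
const totalSize = 5242881; // hypothetical file just over 5 MB
const params = {
  chunkNumber: 1,                                 // 1-based index of the chunk being asked about
  chunkSize,
  totalSize,
  totalChunks: Math.ceil(totalSize / chunkSize),  // computed by the plugin from chunkSize
  identifier: "e10adc3949ba59abbe56e057f20f883e", // the file's MD5
  filename: "demo.zip",
};
console.log(params.totalChunks); // → 6
```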

Computing the MD5

Resumable and instant uploads both rest on computing the file's MD5, which is its unique identifier; the server uses that MD5 to decide between an instant upload and a resumed one.
The MD5 is computed right after the file-added event. The end goal is to put the computed MD5 into the request parameters sent to the backend and then continue the upload. The detailed steps are:

  1. Set the uploader component's autoStart to false, so uploads do not start automatically after file selection.
  2. Pause the file with file.pause(), then read it through the HTML5 FileReader API.
  3. Feed the asynchronously read data into an MD5 hash. I use spark-md5 (install with npm install spark-md5 --save), but any MD5 library works.
  4. file has a uniqueIdentifier property that uniquely identifies it; assign the computed MD5 to it with file.uniqueIdentifier = md5. That accomplishes the goal.
  5. Start/resume the upload with file.resume().
```javascript
computeMD5(file) {
  let fileReader = new FileReader();
  let time = new Date().getTime();
  let blobSlice =
    File.prototype.slice ||
    File.prototype.mozSlice ||
    File.prototype.webkitSlice;
  let currentChunk = 0;
  const chunkSize = 1024 * 1024;
  let chunks = Math.ceil(file.size / chunkSize);
  let spark = new SparkMD5.ArrayBuffer();
  // Mark the file as "computing MD5" and pause it while hashing
  file.cmd5 = true;
  file.pause();
  loadNext();
  fileReader.onload = (e) => {
    spark.append(e.target.result);
    currentChunk++;
    if (currentChunk < chunks) {
      loadNext();
      // Report MD5 progress as we go
      console.log(
        `Chunk ${currentChunk} hashed, starting chunk ${currentChunk + 1} / ${chunks}`
      );
    } else {
      let md5 = spark.end();
      console.log(
        `MD5 done for ${file.name}\nMD5: ${md5}\nchunks: ${chunks} size: ${
          file.size
        } took: ${new Date().getTime() - time} ms`
      );
      spark.destroy(); // release the hasher's buffers
      file.uniqueIdentifier = md5; // use the MD5 as the file's unique identifier
      file.cmd5 = false; // done computing MD5
      file.resume(); // start the upload
    }
  };
  fileReader.onerror = () => {
    // arrow function keeps `this` bound to the component (a plain function
    // here would bind `this` to the FileReader instead)
    console.error(`Failed to read file ${file.name}, please check it`);
    file.cancel();
  };
  function loadNext() {
    let start = currentChunk * chunkSize;
    let end =
      start + chunkSize >= file.size ? file.size : start + chunkSize;
    fileReader.readAsArrayBuffer(blobSlice.call(file.file, start, end));
  }
},
```
Once file.uniqueIdentifier has been assigned, the identifier field in each request is the MD5 we computed.
[Figure 7: upload request showing the computed MD5 as the identifier parameter]

Instant upload and resumable upload

With the MD5 computed, we can talk about instant uploads (秒传) and resumable uploads (断点续传).
The server uses the MD5 sent by the frontend to decide which applies:
  • a. If the server finds the file was already fully uploaded, it returns an instant-upload flag.
  • b. If the server finds chunks were previously uploaded for this file, it returns those chunk numbers and tells the frontend to continue uploading: a resumed upload.

At the very start of each upload, vue-simple-uploader sends a GET request asking the server which chunks already exist. The response falls into a few cases:

  • a. For an instant upload the response carries a flag; here skipUpload is true and a url is returned. The server is saying "I already have this file, here is its URL, no need to upload it again": that is an instant upload.
  • b. If the backend returns chunk information, it is a resumed upload. As shown in the figure, the response contains an uploaded field listing the chunks already received; the plugin skips uploading those chunks automatically.

Figure b1: backend response in the resumed-upload case
[Figure 8: backend response for a resumed upload]
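Concretely, the two responses might look like the following; skipUpload, url, and uploaded are the field names agreed with this article's backend, and the values are illustrative:

```javascript
// Case a: instant upload — the server already has the complete file.
const instantUpload = JSON.stringify({
  skipUpload: true,
  url: "http://127.0.0.1:8888/files/demo.zip",
});

// Case b: resumed upload — chunks 1, 2 and 5 were received previously.
const resumedUpload = JSON.stringify({
  skipUpload: false,
  uploaded: [1, 2, 5],
});

console.log(JSON.parse(instantUpload).skipUpload); // → true
console.log(JSON.parse(resumedUpload).uploaded);   // → [ 1, 2, 5 ]
```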

Frontend chunk verification: checkChunkUploadedByResponse

The plugin does not decide on its own which chunks to skip. That is controlled by the checkChunkUploadedByResponse option in options: it inspects the XHR response to determine, per chunk, whether that chunk was already uploaded, and chunks reported as uploaded are skipped.
Implement that check in this function, returning true for any chunk that can be skipped.

```javascript
checkChunkUploadedByResponse: function (chunk, message) {
  let objMessage = JSON.parse(message);
  if (objMessage.skipUpload) {
    return true;
  }
  return (objMessage.uploaded || []).indexOf(chunk.offset + 1) >= 0;
},
```

Note: skipUpload and uploaded are field names agreed with my backend; use whatever field names your backend actually returns.
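Exercised against the two response shapes described above, the check behaves like this (note that chunk.offset is zero-based while the backend's uploaded list is 1-based):

```javascript
// Same logic as the checkChunkUploadedByResponse option above.
function checkChunkUploadedByResponse(chunk, message) {
  const objMessage = JSON.parse(message);
  if (objMessage.skipUpload) {
    return true; // instant upload: every chunk is skipped
  }
  return (objMessage.uploaded || []).indexOf(chunk.offset + 1) >= 0;
}

console.log(checkChunkUploadedByResponse({ offset: 0 }, '{"skipUpload":true}'));  // → true
console.log(checkChunkUploadedByResponse({ offset: 1 }, '{"uploaded":[1,2,5]}')); // → true
console.log(checkChunkUploadedByResponse({ offset: 3 }, '{"uploaded":[1,2,5]}')); // → false
```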

Optimizing the MD5 computation

The original uploader example computed the MD5 over the entire file in one go, which is memory-hungry and can crash the browser.
I changed it to compute the MD5 by reading the file chunk by chunk, which avoids the page stalls and crashes caused by loading a large file into memory at once.

Custom statuses

(I added a few custom statuses a while ago; since people keep asking where the "checking MD5" and "merging" statuses come from, here is my approach. It is crude, but it works.)
The plugin only ships with the success, error, uploading, paused, and waiting statuses.
For business reasons I added the custom statuses "checking MD5", "merging", "transcoding", and "upload failed".
Since the built-in statuses are baked into the plugin and I did not want to patch its source, I used a fairly hacky approach:
when a custom status begins, manually call a statusSet method that renders a p tag covering the original status element; when it ends, manually call statusRemove to remove that tag.

```javascript
this.statusSet(file.id, 'merging');
this.statusRemove(file.id);
```