Switching the conda mirror source

http://www.xiaochenboss.cn/article_detail/1587183252459

Write the following to your `~/.condarc` (conda's configuration file; the indentation matters because it is YAML):

```yaml
channels:
  - https://mirrors.ustc.edu.cn/anaconda/pkgs/free/
  - defaults
show_channel_urls: true
channel_alias: https://mirrors.tuna.tsinghua.edu.cn/anaconda
default_channels:
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/pro
  - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2
custom_channels:
  conda-forge: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
  msys2: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
  bioconda: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
  menpo: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
  pytorch: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
  simpleitk: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
```
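To confirm the configuration took effect, you can ask conda which channels it resolved (guarded here so the command is a harmless no-op on machines without conda):

```shell
# Show the channel list conda actually loaded from .condarc;
# falls back to a message when conda is not installed
command -v conda >/dev/null 2>&1 && conda config --show channels || echo "conda not installed"
```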

Installing third-party packages through a mirror (faster downloads)

```shell
pip install tensorflow==1.15.0 -i https://pypi.douban.com/simple
pip install numpy -i https://mirrors.aliyun.com/pypi/simple/
```
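Instead of passing `-i` on every install, the mirror can be made permanent with pip's `config` subcommand (available since pip 10; this writes a user-level pip config file):

```shell
# Persist the mirror so every subsequent pip install uses it
pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
# Verify what pip will use
pip config list
```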

Installing a third-party package

```shell
pip install pandas
```

Uninstalling a third-party package

```shell
pip uninstall pandas
```

Checking a variable's address with id()

```python
a = {1: 'one', 2: 'two', 3: 'three', 4: 'four'}
b = a          # assignment: b is another name for the same object
c = a.copy()   # shallow copy: c is a new object
print('address of a:', id(a))
print('address of b:', id(b))
print('address of c:', id(c))
# Sample output (the actual addresses vary from run to run):
# address of a: 6317872
# address of b: 6317872
# address of c: 6317952
```
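A consequence worth remembering: because `b` names the same object as `a`, a change made through either name is visible through both, while the copy `c` is independent:

```python
a = {1: 'one', 2: 'two'}
b = a          # same object, second name
c = a.copy()   # new object with the same contents

b[3] = 'three'            # mutate through b
print(a)                  # {1: 'one', 2: 'two', 3: 'three'}
print(c)                  # {1: 'one', 2: 'two'}  (copy unaffected)
print(b is a, c is a)     # True False
```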

Listing an object's attributes and methods with dir()

This needs to be run in an interactive Python session (the REPL or an IDE's Python console).

```python
>>> dir(dict)
['__class__', '__contains__', '__delattr__', '__delitem__', '__dir__', '__doc__', '__eq__', '__format__',
 '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__init_subclass__',
 '__iter__', '__le__', '__len__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__',
 '__repr__', '__reversed__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__',
 'clear', 'copy', 'fromkeys', 'get', 'items', 'keys', 'pop', 'popitem', 'setdefault', 'update', 'values']
```
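Most of that list is dunder (double-underscore) machinery; to see only the ordinary methods, filter the result of `dir()`:

```python
# Keep only names that do not start with an underscore
public = [name for name in dir(dict) if not name.startswith('_')]
print(public)
# ['clear', 'copy', 'fromkeys', 'get', 'items', 'keys', 'pop',
#  'popitem', 'setdefault', 'update', 'values']
```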

Inspecting an object's attributes with `__dict__`

```python
class CC:
    def setXY(self, x, y):
        self.x = x
        self.y = y

    def printXY(self):
        print(self.x, self.y)

dd = CC()
dd.setXY(12, 15)
print(dd.__dict__)   # instance attributes
print(CC.__dict__)   # class attributes
# Output:
# {'x': 12, 'y': 15}
# {'__module__': '__main__', 'setXY': <function CC.setXY at 0x00A58658>, 'printXY': <function CC.printXY at 0x00A58610>, '__dict__': <attribute '__dict__' of 'CC' objects>, '__weakref__': <attribute '__weakref__' of 'CC' objects>, '__doc__': None}
```
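The built-in `vars()` returns the same `__dict__`, which can read a little more naturally:

```python
class CC:
    def setXY(self, x, y):
        self.x = x
        self.y = y

dd = CC()
dd.setXY(12, 15)
print(vars(dd))                 # {'x': 12, 'y': 15}
print(vars(dd) is dd.__dict__)  # True - same underlying dict
```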

Viewing built-in documentation with help()

This also needs to be run in an interactive Python session (the REPL or an IDE's Python console).

```python
>>> help(dict)
class dict(object)
 |  dict() -> new empty dictionary
 |  dict(mapping) -> new dictionary initialized from a mapping object's
 |      (key, value) pairs
 |  dict(iterable) -> new dictionary initialized as if via:
 |      d = {}
 |      for k, v in iterable:
 |          d[k] = v
 |  dict(**kwargs) -> new dictionary initialized with the name=value pairs
 |      in the keyword argument list.  For example:  dict(one=1, two=2)
 |
 |  Methods defined here:
 |
 |  __contains__(self, key, /)
 |      True if the dictionary has the specified key, else False.
 |
 |  __delitem__(self, key, /)
 |      Delete self[key].
 |
 |  __eq__(self, value, /)
 |      Return self==value.
 |
 |  __ge__(self, value, /)
 |      Return self>=value.
 |
 |  __getattribute__(self, name, /)
 |      Return getattr(self, name).
 |
 |  __getitem__(...)
 |      x.__getitem__(y) <==> x[y]
 |
 |  __gt__(self, value, /)
 |      Return self>value.
 |
 |  __init__(self, /, *args, **kwargs)
 |      Initialize self.  See help(type(self)) for accurate signature.
 |
 |  __iter__(self, /)
 |      Implement iter(self).
 |
 |  __le__(self, value, /)
 |      Return self<=value.
 |
 |  __len__(self, /)
 |      Return len(self).
 |
 |  __lt__(self, value, /)
 |      Return self<value.
 |
 |  __ne__(self, value, /)
 |      Return self!=value.
 |
 |  __repr__(self, /)
 |      Return repr(self).
 |
 |  __reversed__(self, /)
 |      Return a reverse iterator over the dict keys.
 |
 |  __setitem__(self, key, value, /)
 |      Set self[key] to value.
 |
 |  __sizeof__(...)
 |      D.__sizeof__() -> size of D in memory, in bytes
 |
 |  clear(...)
 |      D.clear() -> None.  Remove all items from D.
 |
 |  copy(...)
 |      D.copy() -> a shallow copy of D
 |
 |  get(self, key, default=None, /)
 |      Return the value for key if key is in the dictionary, else default.
 |
 |  items(...)
 |      D.items() -> a set-like object providing a view on D's items
 |
 |  keys(...)
 |      D.keys() -> a set-like object providing a view on D's keys
 |
 |  pop(...)
 |      D.pop(k[,d]) -> v, remove specified key and return the corresponding value.
 |      If key is not found, d is returned if given, otherwise KeyError is raised
 |
 |  popitem(self, /)
 |      Remove and return a (key, value) pair as a 2-tuple.
 |
 |      Pairs are returned in LIFO (last-in, first-out) order.
 |      Raises KeyError if the dict is empty.
 |
 |  setdefault(self, key, default=None, /)
 |      Insert key with a value of default if key is not in the dictionary.
 |
 |      Return the value for key if key is in the dictionary, else default.
 |
 |  update(...)
 |      D.update([E, ]**F) -> None.  Update D from dict/iterable E and F.
 |      If E is present and has a .keys() method, then does:  for k in E: D[k] = E[k]
 |      If E is present and lacks a .keys() method, then does:  for k, v in E: D[k] = v
 |      In either case, this is followed by: for k in F:  D[k] = F[k]
 |
 |  values(...)
 |      D.values() -> an object providing a view on D's values
 |
 |  ----------------------------------------------------------------------
 |  Class methods defined here:
 |
 |  fromkeys(iterable, value=None, /) from builtins.type
 |      Create a new dictionary with keys from iterable and values set to value.
 |
 |  ----------------------------------------------------------------------
 |  Static methods defined here:
 |
 |  __new__(*args, **kwargs) from builtins.type
 |      Create and return a new object.  See help(type) for accurate signature.
 |
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |
 |  __hash__ = None
```
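help() also accepts a single method, and the text it prints comes from the object's `__doc__` attribute, which you can read directly:

```python
# The docstring help() displays for dict.get:
print(dict.get.__doc__)
# Return the value for key if key is in the dictionary, else default.
```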

Exporting the environment's dependencies

```shell
pip freeze > requirements.txt
```

Installing an environment from a requirements file

```shell
pip install -r requirements.txt
pip install -r requirements.txt -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com
```

Starting a script in the background

Change the script's file extension to **.pyw** and it will run without a console window.
Since there is no window, stop it from Task Manager when needed.

Double-click the main.pyw file to launch it.

How to check whether CUDA and the GPU are installed

After installing CUDA, add its directories to the `Path` system environment variable:

```
%CUDA_LIB_PATH%;%CUDA_BIN_PATH%;%CUDA_SDK_LIB_PATH%;%CUDA_SDK_BIN_PATH%;
D:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin
```

```shell
cd '.\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\extras\demo_suite\'
.\bandwidthTest.exe
```

Open a terminal, change into this folder, and run `.\bandwidthTest.exe`. If the last line of the printed report reads `Result = PASS`, the setup is working.

```shell
.\deviceQuery.exe
```

Run `.\deviceQuery.exe` in the same folder and check for `Result = PASS` on the last line as well.
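For a quick scripted check, the standard library can tell you whether the toolkit's compiler is reachable (a sketch; it assumes `nvcc` lands on PATH when the bin directory is added as above):

```python
import shutil
import subprocess

def cuda_toolkit_on_path():
    """Return the path to nvcc if the CUDA toolkit is on PATH, else None."""
    return shutil.which('nvcc')

nvcc = cuda_toolkit_on_path()
if nvcc:
    # nvcc --version reports the installed CUDA toolkit release
    print(subprocess.run([nvcc, '--version'],
                         capture_output=True, text=True).stdout)
else:
    print('nvcc not found on PATH - check the PATH entries above')
```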


```
> cd '.\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\extras\demo_suite\'
> .\bandwidthTest.exe
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: NVIDIA GeForce GTX 1050
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     12520.5

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     12764.8

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     71500.4

Result = PASS
```
```
> .\deviceQuery.exe
D:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\extras\demo_suite\deviceQuery.exe Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA GeForce GTX 1050"
  CUDA Driver Version / Runtime Version          11.6 / 11.6
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 3072 MBytes (3221028864 bytes)
  ( 6) Multiprocessors, (128) CUDA Cores/MP:     768 CUDA Cores
  GPU Max Clock rate:                            1442 MHz (1.44 GHz)
  Memory Clock rate:                             3504 Mhz
  Memory Bus Width:                              96-bit
  L2 Cache Size:                                 786432 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               zu bytes
  Total amount of shared memory per block:       zu bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          zu bytes
  Texture alignment:                             zu bytes
  Concurrent copy and kernel execution:          Yes with 5 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.6, CUDA Runtime Version = 11.6, NumDevs = 1, Device0 = NVIDIA GeForce GTX 1050
Result = PASS
```

vscode VS pycharm

The PyCharm editor

Simple to set up, with no need to manage environments by hand: the IDE ships with project environment management and can create a fully configured project in one click.


The VS Code editor