A running list of problems encountered day to day and at work.

Problem 1

When adding the Scala compiler in IDEA, the following problem appears:

  • Error: Could not find or load main class TestLocal scala

Solution:

  1. Project Structure -> Global Libraries -> add the Scala SDK -> rebuild the project

When extracting a file, the following error appears:

  1. tar -zxvf cloudera-manager-centos7-cm5.14.1_x86_64.tar.gz /data0/software/
  • tar: /data0/software: Not found in archive
  • tar: Exiting with failure status due to previous errors

Solution

  1. tar -zxvf cloudera-manager-centos7-cm5.14.1_x86_64.tar.gz
  2. // Extract into the current directory first, then move the files to the target directory.
  3. // (The original command fails because tar treats the trailing path as an archive member to extract; to unpack straight into an existing directory, use -C instead: tar -zxvf cloudera-manager-centos7-cm5.14.1_x86_64.tar.gz -C /data0/software/)

The following error appears when connecting to the database:

  1. java.sql.SQLException: null, message from server: "Host 'c4' is not allowed to connect to this MariaDB server"
  2. Cause: the server hosting the database does not allow remote connections from the server where the project is deployed. It is a privilege problem; updating the privileges fixes it, as follows.

Solution

  1. mysql -uroot -p<password>   // log in to the database
  2. use mysql;
  3. select host,user,password from user;
  4. // If the only root entry has host=localhost, MySQL accepts local connections only, so remote hosts and local client tools cannot connect.
  5. update user set host='%' where user='root';
  6. flush privileges;   // reload the privilege tables

The following errors appear while installing Cloudera Manager
001

  1. Message from syslogd@c4 at Mar 26 15:34:56 ...
  2. kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [gzip:64825]
  3. You have mail in /var/spool/mail/root

Solution

002

  1. While configuring the cluster with Cloudera Manager, the following error appears:
  2. Command failed to run because service HDFS has an invalid configuration. Review and correct its configuration. First error: HDFS service not configured for High Availability must have a SecondaryNameNode

Solution:

  1. Add a SecondaryNameNode node in Cloudera Manager

003

  1. error [05:08:58.204] [warning][process] UnhandledPromiseRejectionWarning: [security_exception] missing authentication token for REST request [/_cluster/settings?include_defaults=true], with { header={ WWW-Authenticate="Basic realm=\"security\" charset=\"UTF-8\"" } } :: {"path":"/_cluster/settings","query":{"include_defaults":true},"statusCode":401,"response":"{\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"missing authentication token for REST request [/_cluster/settings?include_defaults=true]\",\"header\":{\"WWW-Authenticate\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"}}],\"type\":\"security_exception\",\"reason\":\"missing authentication token for REST request [/_cluster/settings?include_defaults=true]\",\"header\":{\"WWW-Authenticate\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"}},\"status\":401}","wwwAuthenticateDirective":"Basic realm=\"security\" charset=\"UTF-8\""}

Solution:

  1. Cause: for security reasons, ES requires a username and password.
  2. Configure the username and password Kibana uses to log in to ES by uncommenting the following lines and setting valid credentials:
  3. kibana.yml
  4. #elasticsearch.username: "elastic"
  5. #elasticsearch.password: "changeme"

004

  1. When reinstalling cm-server on the same machine,
  2. the following keeps appearing:
  3. cloudera-scm-server is dead, but the pid file exists
  4. Caused by: org.hibernate.exception.GenericJDBCException: Could not open connection
  5. at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:54)
  6. at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125)
  7. at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110)
  8. at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:221)
  9. at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.getConnection(LogicalConnectionImpl.java:157)
  10. at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doBegin(JdbcTransaction.java:67)
  11. at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.begin(AbstractTransactionImpl.java:160)
  12. at org.hibernate.internal.SessionImpl.beginTransaction(SessionImpl.java:1426)
  13. at org.hibernate.ejb.TransactionImpl.begin(TransactionImpl.java:59)
  14. ... 28 more
  15. Caused by: java.sql.SQLException: Connections could not be acquired from the underlying database!
  16. at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
  17. at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:529)
  18. at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)
  19. at org.hibernate.service.jdbc.connections.internal.C3P0ConnectionProvider.getConnection(C3P0ConnectionProvider.java:84)
  20. at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:292)
  21. at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:214)
  22. ... 33 more
  23. Caused by: com.mchange.v2.resourcepool.CannotAcquireResourceException: A ResourcePool could not acquire a resource from its primary factory or source.
  24. at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1319)
  25. at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:557)
  26. at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:477)
  27. at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:525)
  28. ... 37 more
  29. [root@cdh1 cloudera-scm-server]# ls

Solution:

  1. Delete the cmdb data from the database:
  2. // drop the database
  3. show databases;
  4. drop database cmdb;
  5. Delete the scm user entries from the mysql database:
  6. use mysql;
  7. select user,host,password from user;
  8. delete from user where user='scm';
  9. Re-run the SCM database preparation script and restart:
  10. /opt/cm-5.14.1/share/cmf/schema/scm_prepare_database.sh mysql cmdb -h"cdh1" -uroot -proot --scm-host cdh1 scm scm scm

005 When writing a Java UI, the following error appears

  1. Error:java: invalid source release: 9

Solution:

  1. Open File -> Project Structure
  2. Select Modules -> Sources and change the Language level to one the JDK supports, e.g. 8 in this case

006

  1. Exception in thread "main" java.awt.IllegalComponentStateException: contentPane cannot be set to null.
  2. at javax.swing.JRootPane.setContentPane(JRootPane.java:621)
  3. at javax.swing.JFrame.setContentPane(JFrame.java:698)
  4. at BackRunner.main(BackRunner.java:12)

Solution:
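
A likely cause is that setContentPane is called with a panel that has not yet been constructed (i.e. is still null). A minimal sketch of a fix under that assumption; the class and field names below are hypothetical, not the actual BackRunner code:

  1. import javax.swing.JFrame;
  2. import javax.swing.JPanel;
  3. public class BackRunnerFixed {
  4.     // Construct the panel before handing it to the frame.
  5.     private JPanel rootPanel = new JPanel();
  6.     public static void main(String[] args) {
  7.         JFrame frame = new JFrame("demo");
  8.         // A null panel here is exactly what triggers "contentPane cannot be set to null."
  9.         frame.setContentPane(new BackRunnerFixed().rootPanel);
  10.         frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
  11.         frame.pack();
  12.         frame.setVisible(true);
  13.     }
  14. }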

007

  1. Analysis of "process information unavailable" shown by jps
  2. [root@cdh1 tmp]# jps
  3. 32072 Jps
  4. 29818 -- process information unavailable
  5. 29820 -- process information unavailable
  6. 31153 -- process information unavailable
  7. Cause analysis:
  8. Because of a user mismatch, the programs appear to be hung.
  9. Using ls -l, the hsperfdata_* directories show that these processes belong to other users.
  10. [root@cdh1 tmp]# ls -l
  11. total 356
  12. -rw------- 1 cloudera-scm cloudera-scm 20124 4 3 17:59 7kQWq33M
  13. -rw------- 1 root root 0 4 4 03:49 cmflistener-stderr---agent-1005-1554276369-LZ7Aqd.log
  14. -rw------- 1 root root 0 4 4 03:53 cmflistener-stderr---agent-1842-1554321191-uXQwWP.log
  15. -rw------- 1 root root 0 4 4 04:33 cmflistener-stderr---agent-2192-1554323613-DCz4eD.log
  16. -rw------- 1 root root 0 4 6 05:52 cmflistener-stderr---agent-3686-1554501153-7oV0rq.log
  17. -rw------- 1 root root 0 4 4 03:49 cmflistener-stdout---agent-1005-1554276369-vJ_VFG.log
  18. -rw------- 1 root root 380 4 4 04:12 cmflistener-stdout---agent-1842-1554321191-5lhlBt.log
  19. -rw------- 1 root root 210 4 4 13:20 cmflistener-stdout---agent-2192-1554323613-RvEHGS.log
  20. -rw------- 1 root root 839 4 6 10:37 cmflistener-stdout---agent-3686-1554501153-S7ZFtA.log
  21. -rw------- 1 cloudera-scm cloudera-scm 264 4 3 15:26 f7VRInqv
  22. drwxr-xr-x 3 hbase hbase 19 4 4 04:08 hbase-hbase
  23. drwxr-xr-x 2 centos centos 6 4 6 10:32 hsperfdata_centos
  24. drwxr-xr-x 2 hdfs hdfs 45 4 6 10:39 hsperfdata_hdfs
  25. drwxr-xr-x 2 root root 6 4 6 10:44 hsperfdata_root
  26. -rw------- 1 cloudera-scm cloudera-scm 20124 4 3 15:26 jEt9oiI5
  27. drwxr-x--- 4 root root 32 4 6 05:54 Jetty_0_0_0_0_7180_webapp____.3x0fy6
  28. drwxr-xr-x 4 hdfs hdfs 32 4 6 10:28 Jetty_cdh1_50070_hdfs____.y9zuof
  29. drwxr-xr-x 4 hdfs hdfs 32 4 6 10:28 Jetty_cdh1_50090_secondary____.mewnam
  30. drwxr-xr-x 4 hdfs hdfs 32 4 6 06:47 Jetty_localhost_34374_datanode____ovjauu
  31. drwxr-xr-x 4 hdfs hdfs 32 4 6 09:20 Jetty_localhost_34848_datanode____d5ckrm
  32. drwxr-xr-x 4 hdfs hdfs 32 4 6 10:28 Jetty_localhost_36121_datanode____ehjfx0
  33. drwxr-xr-x 4 hdfs hdfs 32 4 6 10:37 Jetty_localhost_39222_datanode____98jb1v
  34. drwxr-xr-x 4 hdfs hdfs 32 4 6 10:22 Jetty_localhost_40314_datanode____.av5cov
  35. drwxr-xr-x 4 hdfs hdfs 32 4 6 06:44 Jetty_localhost_45820_datanode____blh1a4
  36. -rw-r----- 1 root root 71819 4 4 04:34 jffi2836949287839359412.tmp
  37. -rw-r----- 1 root root 71819 4 6 05:54 jffi3732987555595003582.tmp
  38. -rwx------. 1 root root 836 4 3 05:43 ks-script-XmtlZB
  39. -rw------- 1 cloudera-scm cloudera-scm 264 4 3 17:59 OBdeOj2V
  40. drwx------ 4 root root 268 4 3 18:07 scm_prepare_node.0sW6WTQ0
  41. drwx------ 4 root root 283 4 4 03:46 scm_prepare_node.Xjg78weR
  42. drwx------ 4 root root 283 4 4 03:48 scm_prepare_node.YqfKp9oi
  43. drwx------ 3 root root 17 4 6 05:11 systemd-private-1455cc887c404e8d9798e166184e734c-mariadb.service-UDim9J
  44. drwx------. 3 root root 17 4 3 05:44 systemd-private-ac56878008954fbab79be9311130eee0-systemd-hostnamed.service-Ir8Xc5
  45. drwx------ 3 root root 17 4 4 04:23 systemd-private-dfad3f20de2046a7bda5772bef7f96aa-mariadb.service-YrXmRz
  46. -rw-------. 1 root root 0 4 3 05:39 yum.log
  47. -rw-------. 1 root root 69633 4 3 06:55 yum_save_tx.2019-04-03.06-55.yYyG0q.yumtx
  48. -rw-------. 1 root root 69633 4 3 07:05 yum_save_tx.2019-04-03.07-05.u7ZgJs.yumtx
  49. -rw------- 1 root root 605 4 4 03:40 yum_save_tx.2019-04-04.03-40.ePwE9J.yumtx

Solution:

  1. Use ps -ef | grep <pid> to inspect the process:
  2. [root@cdh1 tmp]# ps -ef | grep 29818
  3. For the hung-looking processes, change the ownership of these files under /tmp to root:
  4. [root@cdh1 tmp]# chown -R root:root ./*

008 When compiling the Redis source, the following error appears

  1. [root@cdh1 redis-2.8.17]# make
  2. cd src && make all
  3. make[1]: Entering directory '/root/download/redis-2.8.17/src'
  4. CC adlist.o
  5. In file included from adlist.c:34:0:
  6. zmalloc.h:50:31: fatal error: jemalloc/jemalloc.h: No such file or directory
  7. #include <jemalloc/jemalloc.h>
  8. ^
  9. compilation terminated.
  10. make[1]: *** [adlist.o] Error 1
  11. make[1]: Leaving directory '/root/download/redis-2.8.17/src'
  12. make: *** [all] Error 2

Solution:

  1. make MALLOC=libc
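  2. // The missing header usually means the bundled jemalloc under deps/ was not built (common after an interrupted build); MALLOC=libc makes Redis link against the libc allocator instead.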

009 The following error appears while connecting to MySQL

  1. java.sql.SQLNonTransientConnectionException: Cannot load connection class because of underlying exception: com.mysql.cj.exceptions.WrongArgumentException: Malformed database URL, failed to parse the connection string near ';characterEncoding=utf-8'.

Solution:

  1. Cause: the MySQL driver has been updated, so the old connection URL format has to change (a usage sketch follows after this list)
  2. Before: jdbc:mysql://localhost:3306/tree?useUnicode=true&characterEncoding=utf-8
  3. After: jdbc:mysql://localhost:3306/tree?useUnicode=true&characterEncoding=utf-8&useSSL=false&serverTimezone=GMT
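
For reference, a minimal sketch of opening a connection with the corrected URL (the tree database name comes from the URL above; the credentials are placeholders):

  1. import java.sql.Connection;
  2. import java.sql.DriverManager;
  3. public class TreeDbConnect {
  4.     public static void main(String[] args) throws Exception {
  5.         // Parameters are joined with '&'; useSSL and serverTimezone avoid warnings/errors with the newer Connector/J.
  6.         String url = "jdbc:mysql://localhost:3306/tree?useUnicode=true"
  7.                 + "&characterEncoding=utf-8&useSSL=false&serverTimezone=GMT";
  8.         try (Connection conn = DriverManager.getConnection(url, "root", "yourPassword")) {
  9.             System.out.println("Connected: " + !conn.isClosed());
  10.         }
  11.     }
  12. }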

010

  1. java.sql.SQLNonTransientConnectionException: CLIENT_PLUGIN_AUTH is required
  2. at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:110)

Solution:

  1. Downgrade the JDBC driver
  2. from 8.0.11 to 5.1.37 (i.e. change the mysql-connector-java dependency version)

011

  1. Problem description:
  2. After adding a new tag to the existing JS and re-running, the page shows no content

Solution:

  1. The browser caches the page; force a refresh so it reloads, and the content then shows up

012

  1. While rebuilding a machine, SSH connections fail; commands such as systemctl restart sshd will not bring sshd up, and journalctl -ex shows the following log:
  2. ssh-keygen: generating new host keys: DSA
  3. /usr/sbin/sshd -t -f /etc/ssh/sshd_config
  4. @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
  5. @ WARNING: UNPROTECTED PRIVATE KEY FILE! @
  6. @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
  7. Permissions 0640 for '/etc/ssh/ssh_host_rsa_key' are too open.
  8. It is required that your private key files are NOT accessible by others.
  9. This private key will be ignored.
  10. key_load_private: bad permissions
  11. Could not load host key: /etc/ssh/ssh_host_rsa_key
  12. @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
  13. @ WARNING: UNPROTECTED PRIVATE KEY FILE! @
  14. @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
  15. Permissions 0640 for '/etc/ssh/ssh_host_ecdsa_key' are too open.
  16. It is required that your private key files are NOT accessible by others.
  17. This private key will be ignored.
  18. key_load_private: bad permissions
  19. Could not load host key: /etc/ssh/ssh_host_ecdsa_key
  20. @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
  21. @ WARNING: UNPROTECTED PRIVATE KEY FILE! @
  22. @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
  23. Permissions 0640 for '/etc/ssh/ssh_host_ed25519_key' are too open.
  24. It is required that your private key files are NOT accessible by others.
  25. This private key will be ignored.
  26. key_load_private: bad permissions
  27. Could not load host key: /etc/ssh/ssh_host_ed25519_key
  28. The message "Permissions 0640 for '/etc/ssh/ssh_host_ecdsa_key' are too open" indicates a permissions problem on the host key files

Solution:

  1. Fix the following permissions:
  2. chmod 600 /etc/ssh/ssh_host_rsa_key
  3. chmod 600 /etc/ssh/ssh_host_ecdsa_key
  4. chmod 600 /etc/ssh/ssh_host_ed25519_key

013

  1. [ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'

Solution:

  1. # yum remove ceph-release
  2. Uninstall this package; its version is most likely incompatible. Verified to work.

014

  1. The following error appears while installing Ceph:
  2. unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory

Solution:

  1. The five keyring files generated on the primary node sqh0 must be placed under /etc/ceph/ on the other two nodes, sqh1 and sqh2, otherwise connecting to the cluster fails. On the primary node, the five keyrings generated in the current cluster directory also need to be copied to /etc/ceph/.
  2. [root@sqh0 cluster]# scp *.keyring sqh1:/etc/ceph/

015

  1. Spring Boot pitfall: "Configuration Annotation Processor not found in classpath"
  2. 1. The "Spring Boot Configuration Annotation Processor not found in classpath" hint appears when the @ConfigurationProperties annotation is used, so the problem lies with that annotation.
  3. 2. Following the "not found in classpath" hint and looking into how a classpath/location is specified for this annotation leads to the fact that Spring Boot 1.5+ removed the locations attribute from @ConfigurationProperties.

Solution:

  1. <dependency>
  2.     <groupId>org.springframework.boot</groupId>
  3.     <artifactId>spring-boot-configuration-processor</artifactId>
  4.     <optional>true</optional>
  5. </dependency>
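
With the processor on the classpath, @ConfigurationProperties binds prefixed properties to a plain bean. A minimal, hypothetical sketch (the my.service prefix and field are made-up examples):

  1. import org.springframework.boot.context.properties.ConfigurationProperties;
  2. import org.springframework.stereotype.Component;
  3. // Binds my.service.* entries from application.properties / application.yml.
  4. @Component
  5. @ConfigurationProperties(prefix = "my.service")
  6. public class MyServiceProperties {
  7.     private String endpoint;   // bound from my.service.endpoint
  8.     public String getEndpoint() { return endpoint; }
  9.     public void setEndpoint(String endpoint) { this.endpoint = endpoint; }
  10. }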

016

  1. RuntimeError: Failed to execute command: ceph --version
  1. 4.1 During installation this error appears: [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph --version
  2. Cause: the ceph -v command cannot be found
  3. 4.2 After re-running the install command:
  4. [ceph-node1][WARNIN] Another app is currently holding the yum lock; waiting for it to exit...
  5. [ceph-node1][WARNIN] The other application is: yum
  6. [ceph-node1][WARNIN] Memory : 115 M RSS (507 MB VSZ)
  7. [ceph-node1][WARNIN] Started: Wed Jan 3 03:01:23 2018 - 06:32 ago
  8. [ceph-node1][WARNIN] State : Sleeping, pid: 1742
  9. So the output says yum is in a sleeping state
  10. 4.3 So check the yum process:
  11. [root@ceph-node1 my-cluster]# ps -ef | grep yum
  12. root 1742 1 0 03:01 pts/0 00:00:03 /usr/bin/python /usr/bin/yum -y install ceph ceph-radosgw
  13. It turns out yum is still running in the background, i.e. ceph is still being installed
  14. The reason:
  15. During installation, the step [ceph-node1][DEBUG ] Downloading packages is downloading the ceph packages; the packages are around 200 MB (as seen at the time),
  16. and the network was poor, so the ceph-deploy tool timed out.
  17. Fix:
  18. Do not force-kill the yum process; just let yum keep installing in the background. After waiting a little while, it indeed finished.
  19. Check:
  20. [root@ceph-node1 my-cluster]# ceph -v
  21. ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
  22. [root@ceph-node1 my-cluster]# ceph
  23. ceph ceph-clsinfo ceph-deploy cephfs-data-scan ceph-objectstore-tool ceph-run
  24. ceph-authtool ceph-conf ceph-detect-init cephfs-journal-tool ceph-osd ceph-syn
  25. ceph-bluefs-tool ceph-create-keys ceph-disk cephfs-table-tool ceph-post-file
  26. ceph-brag ceph-crush-location ceph-disk-udev ceph-mds ceph-rbdnamer
  27. ceph-client-debug ceph-dencoder cephfs ceph-mon ceph-rest-api
  28. Result: ceph is already installed

017

  1. [ceph_deploy][ERROR ] RuntimeError: bootstrap-osd keyring not found; run 'gatherkeys'

018

  1. [ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb