A running list of problems encountered day to day and at work, with their fixes.
Problem 1
When adding the Scala compiler in IDEA, the following error appeared:
- Error: Could not find or load main class TestLocal scala
Solution:
Project Structure -> Global Libraries -> add the Scala SDK -> rebuild the project
The following error occurred when extracting an archive:
tar -zxvf cloudera-manager-centos7-cm5.14.1_x86_64.tar.gz /data0/software/
- tar: /data0/software: Not found in archive
- tar: Exiting with failure status due to previous errors
Solution:
tar -zxvf cloudera-manager-centos7-cm5.14.1_x86_64.tar.gz
# extract into the current directory first, then move the files to the target directory
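For reference, the missing piece was tar's -C flag: a trailing path argument is treated as a member name to extract from the archive (hence "Not found in archive"), while -C sets the destination directory. A throwaway demonstration (the original command would then be `tar -zxvf cloudera-manager-centos7-cm5.14.1_x86_64.tar.gz -C /data0/software/`):

```shell
# Build a tiny archive, then extract it into a chosen directory with -C.
# The destination directory must already exist.
mkdir -p demo_src demo_dst
echo data > demo_src/file.txt
tar -czf pkg.tar.gz -C demo_src file.txt
tar -xzf pkg.tar.gz -C demo_dst      # extracts to demo_dst/file.txt
cat demo_dst/file.txt                # prints "data"
rm -rf demo_src demo_dst pkg.tar.gz
```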
The following error occurred when connecting to the database:
java.sql.SQLException: null, message from server: "Host 'c4' is not allowed to connect to this MariaDB server"
Cause: the server running the database does not allow remote connections from the server where the project is deployed. It is a privilege problem; grant the privilege as follows.
Solution:
1. mysql -uroot -p<password>   # log in to the database
2. use mysql;
3. select host,user,password from user;
   # if user is root and host is localhost, MySQL only allows local connections, so remote clients cannot connect
4. update user set host='%' where user='root';
5. flush privileges;   # reload the privilege tables
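As an alternative to editing the user table directly, the same result can be achieved with a GRANT statement (a sketch — '<password>' is a placeholder, and exposing root to '%' is only advisable in a lab environment):

```sql
-- Run in the mysql client as root. '%' matches any remote host.
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '<password>';
FLUSH PRIVILEGES;
```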
The following error appeared while installing Cloudera Manager:
001
Message from syslogd@c4 at Mar 26 15:34:56 ...
kernel:NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [gzip:64825]
You have mail in /var/spool/mail/root
Solution:
002
While configuring the cluster with Cloudera Manager, the following error appeared:
Command failed to run because service HDFS has an invalid configuration. Review and correct its configuration. First error: HDFS service not configured for High Availability must have a SecondaryNameNode
Solution:
Add a SecondaryNameNode role to the HDFS service in Cloudera Manager.
003
error [05:08:58.204] [warning][process] UnhandledPromiseRejectionWarning: [security_exception] missing authentication token for REST request [/_cluster/settings?include_defaults=true], with { header={ WWW-Authenticate="Basic realm=\"security\" charset=\"UTF-8\"" } } :: {"path":"/_cluster/settings","query":{"include_defaults":true},"statusCode":401,"response":"{\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"missing authentication token for REST request [/_cluster/settings?include_defaults=true]\",\"header\":{\"WWW-Authenticate\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"}}],\"type\":\"security_exception\",\"reason\":\"missing authentication token for REST request [/_cluster/settings?include_defaults=true]\",\"header\":{\"WWW-Authenticate\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"}},\"status\":401}","wwwAuthenticateDirective":"Basic realm=\"security\" charset=\"UTF-8\""}
Solution:
Cause: for security reasons, Elasticsearch requires authentication, so Kibana needs a username and password to log in to ES.
Set the credentials in kibana.yml (uncomment the lines and fill in the real values):
elasticsearch.username: "elastic"
elasticsearch.password: "changeme"
004
When reinstalling cloudera-scm-server on the same machine, the following kept happening:
cloudera-scm-server is dead, but the pid file exists
Caused by: org.hibernate.exception.GenericJDBCException: Could not open connection
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:54)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110)
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:221)
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.getConnection(LogicalConnectionImpl.java:157)
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doBegin(JdbcTransaction.java:67)
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.begin(AbstractTransactionImpl.java:160)
at org.hibernate.internal.SessionImpl.beginTransaction(SessionImpl.java:1426)
at org.hibernate.ejb.TransactionImpl.begin(TransactionImpl.java:59)
... 28 more
Caused by: java.sql.SQLException: Connections could not be acquired from the underlying database!
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:529)
at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)
at org.hibernate.service.jdbc.connections.internal.C3P0ConnectionProvider.getConnection(C3P0ConnectionProvider.java:84)
at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:292)
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:214)
... 33 more
Caused by: com.mchange.v2.resourcepool.CannotAcquireResourceException: A ResourcePool could not acquire a resource from its primary factory or source.
at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1319)
at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:557)
at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:477)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:525)
... 37 more
Solution:
1. Drop the cmdb database:
show databases;
drop database cmdb;
2. Delete the scm user from the mysql.user table:
use mysql;
select user,host,password from user;
delete from user where user='scm';
3. Re-run the database preparation script:
/opt/cm-5.14.1/share/cmf/schema/scm_prepare_database.sh mysql cmdb -h"cdh1" -uroot -proot --scm-host cdh1 scm scm scm
005 While writing a Java UI, the following error appeared:
Error: java: invalid source release: 9
解决方案:
1. Open File -> Project Structure.
2. Under Modules -> Sources, change the Language level to something else, e.g. 8 in this case.
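If the project builds with Maven, the source level can also be pinned in pom.xml so the build and the IDE agree (a sketch; adjust 1.8 to match your JDK):

```xml
<properties>
    <!-- keep IDEA's language level and Maven's compiler level in sync -->
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
</properties>
```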
006
Exception in thread "main" java.awt.IllegalComponentStateException: contentPane cannot be set to null.
at javax.swing.JRootPane.setContentPane(JRootPane.java:621)
at javax.swing.JFrame.setContentPane(JFrame.java:698)
at BackRunner.main(BackRunner.java:12)
Solution:
The stack trace shows setContentPane was called with null (BackRunner.java:12); make sure the content pane component is constructed before it is passed to setContentPane.
007
Analysis of "process information unavailable" output from jps
[root@cdh1 tmp]# jps
32072 Jps
29818 -- process information unavailable
29820 -- process information unavailable
31153 -- process information unavailable
Cause analysis:
The processes look hung because of a user mismatch: ls -l shows that the corresponding hsperfdata_* directories belong to a different user.
[root@cdh1 tmp]# ls -l
总用量 356
-rw------- 1 cloudera-scm cloudera-scm 20124 4月 3 17:59 7kQWq33M
-rw------- 1 root root 0 4月 4 03:49 cmflistener-stderr---agent-1005-1554276369-LZ7Aqd.log
-rw------- 1 root root 0 4月 4 03:53 cmflistener-stderr---agent-1842-1554321191-uXQwWP.log
-rw------- 1 root root 0 4月 4 04:33 cmflistener-stderr---agent-2192-1554323613-DCz4eD.log
-rw------- 1 root root 0 4月 6 05:52 cmflistener-stderr---agent-3686-1554501153-7oV0rq.log
-rw------- 1 root root 0 4月 4 03:49 cmflistener-stdout---agent-1005-1554276369-vJ_VFG.log
-rw------- 1 root root 380 4月 4 04:12 cmflistener-stdout---agent-1842-1554321191-5lhlBt.log
-rw------- 1 root root 210 4月 4 13:20 cmflistener-stdout---agent-2192-1554323613-RvEHGS.log
-rw------- 1 root root 839 4月 6 10:37 cmflistener-stdout---agent-3686-1554501153-S7ZFtA.log
-rw------- 1 cloudera-scm cloudera-scm 264 4月 3 15:26 f7VRInqv
drwxr-xr-x 3 hbase hbase 19 4月 4 04:08 hbase-hbase
drwxr-xr-x 2 centos centos 6 4月 6 10:32 hsperfdata_centos
drwxr-xr-x 2 hdfs hdfs 45 4月 6 10:39 hsperfdata_hdfs
drwxr-xr-x 2 root root 6 4月 6 10:44 hsperfdata_root
-rw------- 1 cloudera-scm cloudera-scm 20124 4月 3 15:26 jEt9oiI5
drwxr-x--- 4 root root 32 4月 6 05:54 Jetty_0_0_0_0_7180_webapp____.3x0fy6
drwxr-xr-x 4 hdfs hdfs 32 4月 6 10:28 Jetty_cdh1_50070_hdfs____.y9zuof
drwxr-xr-x 4 hdfs hdfs 32 4月 6 10:28 Jetty_cdh1_50090_secondary____.mewnam
drwxr-xr-x 4 hdfs hdfs 32 4月 6 06:47 Jetty_localhost_34374_datanode____ovjauu
drwxr-xr-x 4 hdfs hdfs 32 4月 6 09:20 Jetty_localhost_34848_datanode____d5ckrm
drwxr-xr-x 4 hdfs hdfs 32 4月 6 10:28 Jetty_localhost_36121_datanode____ehjfx0
drwxr-xr-x 4 hdfs hdfs 32 4月 6 10:37 Jetty_localhost_39222_datanode____98jb1v
drwxr-xr-x 4 hdfs hdfs 32 4月 6 10:22 Jetty_localhost_40314_datanode____.av5cov
drwxr-xr-x 4 hdfs hdfs 32 4月 6 06:44 Jetty_localhost_45820_datanode____blh1a4
-rw-r----- 1 root root 71819 4月 4 04:34 jffi2836949287839359412.tmp
-rw-r----- 1 root root 71819 4月 6 05:54 jffi3732987555595003582.tmp
-rwx------. 1 root root 836 4月 3 05:43 ks-script-XmtlZB
-rw------- 1 cloudera-scm cloudera-scm 264 4月 3 17:59 OBdeOj2V
drwx------ 4 root root 268 4月 3 18:07 scm_prepare_node.0sW6WTQ0
drwx------ 4 root root 283 4月 4 03:46 scm_prepare_node.Xjg78weR
drwx------ 4 root root 283 4月 4 03:48 scm_prepare_node.YqfKp9oi
drwx------ 3 root root 17 4月 6 05:11 systemd-private-1455cc887c404e8d9798e166184e734c-mariadb.service-UDim9J
drwx------. 3 root root 17 4月 3 05:44 systemd-private-ac56878008954fbab79be9311130eee0-systemd-hostnamed.service-Ir8Xc5
drwx------ 3 root root 17 4月 4 04:23 systemd-private-dfad3f20de2046a7bda5772bef7f96aa-mariadb.service-YrXmRz
-rw-------. 1 root root 0 4月 3 05:39 yum.log
-rw-------. 1 root root 69633 4月 3 06:55 yum_save_tx.2019-04-03.06-55.yYyG0q.yumtx
-rw-------. 1 root root 69633 4月 3 07:05 yum_save_tx.2019-04-03.07-05.u7ZgJs.yumtx
-rw------- 1 root root 605 4月 4 03:40 yum_save_tx.2019-04-04.03-40.ePwE9J.yumtx
Solution:
Locate the process with ps -ef | grep <pid>:
[root@cdh1 tmp]# ps -ef | grep 29818
For the hung processes, change the ownership of the files to root:
[root@cdh1 tmp]# chown -R root:root ./*
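For background: jps reads each JVM's monitoring data from /tmp/hsperfdata_<username>/<pid>, which is why a pid whose file lives under another user's directory shows up as unavailable. A quick way to see who owns which JVM's data (harmless to run anywhere):

```shell
# Each running JVM writes /tmp/hsperfdata_<user>/<pid>; jps can only read
# the entries it has permission for. List the directories and their owners:
ls -ld /tmp/hsperfdata_* 2>/dev/null || echo "no hsperfdata directories"
```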
008 The following error appeared when compiling the Redis source:
[root@cdh1 redis-2.8.17]# make
cd src && make all
make[1]: Entering directory '/root/download/redis-2.8.17/src'
CC adlist.o
In file included from adlist.c:34:0:
zmalloc.h:50:31: fatal error: jemalloc/jemalloc.h: No such file or directory
#include <jemalloc/jemalloc.h>
^
compilation terminated.
make[1]: *** [adlist.o] Error 1
make[1]: Leaving directory '/root/download/redis-2.8.17/src'
make: *** [all] Error 2
Solution:
make MALLOC=libc   # build against libc malloc instead of the bundled jemalloc
009 The following error occurred while connecting to MySQL:
java.sql.SQLNonTransientConnectionException: Cannot load connection class because of underlying exception: com.mysql.cj.exceptions.WrongArgumentException: Malformed database URL, failed to parse the connection string near ';characterEncoding=utf-8'.
Solution:
Cause: the MySQL driver was updated, so the old connection string format must change.
Before: jdbc:mysql://localhost:3306/tree?useUnicode=true&characterEncoding=utf-8
After: jdbc:mysql://localhost:3306/tree?useUnicode=true&characterEncoding=utf-8&useSSL=false&serverTimezone=GMT
(Note: no spaces around serverTimezone=GMT; spaces make the URL invalid.)
010
java.sql.SQLNonTransientConnectionException: CLIENT_PLUGIN_AUTH is required
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.jav
a:110)
Solution:
Downgrade the JDBC driver: change 8.0.11 to 5.1.37.
011
Problem:
After adding a new tag to the existing JS, the page displayed no content when run.
Solution:
The browser was serving cached files; force a refresh so the page reloads, after which the content displays.
012
While reinstalling the machine, SSH could not connect and `systemctl restart sshd` failed to start the service; `journalctl -xe` showed the following log:
ssh-keygen: generating new host keys: DSA
/usr/sbin/sshd -t -f /etc/ssh/sshd_config
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0640 for '/etc/ssh/ssh_host_rsa_key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
key_load_private: bad permissions
Could not load host key: /etc/ssh/ssh_host_rsa_key
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0640 for '/etc/ssh/ssh_host_ecdsa_key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
key_load_private: bad permissions
Could not load host key: /etc/ssh/ssh_host_ecdsa_key
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0640 for '/etc/ssh/ssh_host_ed25519_key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
key_load_private: bad permissions
Could not load host key: /etc/ssh/ssh_host_ed25519_key
The cause is the reported permission problem: Permissions 0640 for '/etc/ssh/ssh_host_ecdsa_key' are too open.
Solution:
Fix the permissions:
chmod 600 /etc/ssh/ssh_host_rsa_key
chmod 600 /etc/ssh/ssh_host_ecdsa_key
chmod 600 /etc/ssh/ssh_host_ed25519_key
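The same tighten-and-verify pattern can be sanity-checked on a throwaway file before touching the real host keys (a sketch using stat to confirm the mode):

```shell
# sshd refuses host keys readable by group/others; mode 600 (owner
# read/write only) satisfies it. Demonstrate on a temporary file:
key=$(mktemp)
chmod 640 "$key"
stat -c '%a' "$key"    # prints 640 (too open for a private key)
chmod 600 "$key"
stat -c '%a' "$key"    # prints 600
rm -f "$key"
```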
013
[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'
Solution:
# yum remove ceph-release
Uninstall this package; its version is likely incompatible. Verified working first-hand.
014
The following error occurred while installing ceph:
unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
Solution:
The five keyring files generated on the primary node sqh0 must be placed under /etc/ceph/ on the other two nodes, sqh1 and sqh2, otherwise connecting to the cluster fails. On the primary node, the five keyrings generated in the cluster working directory also need to be copied to /etc/ceph/:
[root@sqh0 cluster]# scp *.keyring sqh1:/etc/ceph/
015
Spring Boot pitfall: "Configuration Annotation Processor not found in classpath"
1. The "Spring Boot Configuration Annotation Processor not found in classpath" hint appears when the @ConfigurationProperties annotation is used, so the problem lies with that annotation.
2. Following the "not found in classpath" hint and looking into how the annotation locates its classpath: in Spring Boot 1.5 and above, @ConfigurationProperties dropped the location attribute.
解决方案:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-configuration-processor</artifactId>
    <optional>true</optional>
</dependency>
016
RuntimeError: Failed to execute command: ceph --version
1. During installation, the error appeared: [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph --version
Cause: the ceph command could not be found.
2. After re-running the install command:
[ceph-node1][WARNIN] Another app is currently holding the yum lock; waiting for it to exit...
[ceph-node1][WARNIN] The other application is: yum
[ceph-node1][WARNIN] Memory : 115 M RSS (507 MB VSZ)
[ceph-node1][WARNIN] Started: Wed Jan 3 03:01:23 2018 - 06:32 ago
[ceph-node1][WARNIN] State : Sleeping, pid: 1742
So yum is asleep while holding the lock.
3. Check the yum process:
[root@ceph-node1 my-cluster]# ps -ef | grep yum
root 1742 1 0 03:01 pts/0 00:00:03 /usr/bin/python /usr/bin/yum -y install ceph ceph-radosgw
It is still running in the background, i.e. ceph is still being installed.
Cause:
During the [ceph-node1][DEBUG ] Downloading packages step, the ceph packages were still downloading; at 200+ MB, combined with a poor network, the download made the ceph-deploy tool time out.
Solution:
Do not kill the yum process; let it keep installing in the background. After waiting a while, it indeed completed.
Verify:
[root@ceph-node1 my-cluster]# ceph -v
ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
[root@ceph-node1 my-cluster]# ceph
ceph ceph-clsinfo ceph-deploy cephfs-data-scan ceph-objectstore-tool ceph-run
ceph-authtool ceph-conf ceph-detect-init cephfs-journal-tool ceph-osd ceph-syn
ceph-bluefs-tool ceph-create-keys ceph-disk cephfs-table-tool ceph-post-file
ceph-brag ceph-crush-location ceph-disk-udev ceph-mds ceph-rbdnamer
ceph-client-debug ceph-dencoder cephfs ceph-mon ceph-rest-api
Result: ceph is already installed.
017
[ceph_deploy][ERROR ] RuntimeError: bootstrap-osd keyring not found; run 'gatherkeys'
Solution:
As the error suggests, run ceph-deploy gatherkeys <monitor-node> to collect the keyrings first.
018
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb