Annual plan
2021 OKR
- Kafka
- MySQL
- Redis
- k8s
- Go 毛剑
2021 indirect learning
- xorm
- Network packet loss
- RPC / gRPC
2021 Q2 & Q3 goals
Go training camp notes
1. Go error notes
- Effective Go
- Why Go uses error values instead of exceptions
- Error types
- Handling errors: how to reduce the `if err != nil {}` boilerplate
- panic: in main, a strong dependency that fails must panic; a bad configuration value must also panic.
- Strong vs. weak dependencies (a sketch follows this list)
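A minimal Go sketch of the panic rule above: a strong dependency (e.g. the database) fails fast with panic in main, while a weak dependency (e.g. a cache) only logs and degrades. The initDB/initCache helpers and the errCacheDown sentinel are made up for illustration; they are not from these notes.

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// errCacheDown is a hypothetical sentinel error for the weak dependency.
var errCacheDown = errors.New("cache unavailable")

// initDB stands in for connecting to a strong dependency (e.g. MySQL).
func initDB(dsn string) error {
	if dsn == "" {
		// A bad config value is treated the same as a failed strong dependency.
		return fmt.Errorf("init db: empty dsn")
	}
	return nil
}

// initCache stands in for a weak dependency that may fail without killing the service.
func initCache(addr string) error {
	if addr == "" {
		// Wrap with %w so callers can still test for the sentinel with errors.Is.
		return fmt.Errorf("init cache: %w", errCacheDown)
	}
	return nil
}

func main() {
	// Strong dependency: fail fast, panic in main.
	if err := initDB("user:pass@tcp(127.0.0.1:3306)/app"); err != nil {
		panic(err)
	}
	// Weak dependency: log, degrade, keep serving.
	if err := initCache(""); err != nil {
		log.Printf("running degraded: %v", err)
	}
	log.Println("service started")
}
```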
Go learning roadmap
- Service discovery
- request
- Logging
- ORM
- Rate limiting and circuit breaking
- config -> TOML (a loading sketch follows this list)
- tools -> apollo-sdk sets
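For the "config -> TOML" item, a minimal sketch of loading a TOML file in Go. It assumes the github.com/BurntSushi/toml package and an app.toml with made-up fields; neither is specified in these notes.

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// Config mirrors a hypothetical app.toml; the field names are illustrative only.
type Config struct {
	Addr  string `toml:"addr"`
	Redis struct {
		Host string `toml:"host"`
		Port int    `toml:"port"`
	} `toml:"redis"`
}

func main() {
	var cfg Config
	// DecodeFile reads app.toml and unmarshals it into cfg.
	if _, err := toml.DecodeFile("app.toml", &cfg); err != nil {
		// Config is a strong dependency: fail fast if it cannot be loaded.
		log.Fatalf("load config: %v", err)
	}
	fmt.Printf("%+v\n", cfg)
}
```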
Go overview
- How to review Go: the comments and usage patterns of the Go standard library, and the Go memory model.
- errors package: wrap the original error with %w. Wrapping in Go 1.13 does not carry stack traces (see the errors.Is / errors.As sketch after this list). github.
- How to organize a Go project's directory structure.
- First understand the principles of the runtime, then read the source code, then search for articles.
- Why the DAO layer and the business-logic layer are separated
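A small sketch of the %w point above: fmt.Errorf("...: %w", err) keeps the original error in the chain so errors.Is / errors.As still work, but the standard-library wrap (Go 1.13+) records no stack trace. The readConfig helper and the file name are made up for illustration.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// readConfig wraps the underlying error with %w, adding context but no stack trace.
func readConfig(path string) ([]byte, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read config %s: %w", path, err)
	}
	return b, nil
}

func main() {
	_, err := readConfig("no-such-file.toml")

	// errors.Is walks the chain built by %w down to the sentinel error.
	if errors.Is(err, os.ErrNotExist) {
		fmt.Println("config missing:", err)
	}

	// errors.As extracts a concrete error type from the chain.
	var pathErr *os.PathError
	if errors.As(err, &pathErr) {
		fmt.Println("failing path:", pathErr.Path)
	}
}
```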
Go has two forms: runtime and compile time.
git
Roll back to a specified commit and push it to the remote branch:
Roll back to the specified commit: git reset --hard e377f60e28c8b84158
Force push, overwriting the remote: git push -f origin master
PHP extension location
/usr/local/etc/php/7.2/php.ini
Connection reset by peer
/usr/local/Cellar/php@7.2/7.2.34_1/bin/php -m
PHP RPC call
protoc --proto_path=./ --php_out=./../../ --grpc_out=./../../ --plugin=protoc-gen-grpc=/Users/superatom/phpextension/grpc/cmake/build/grpc_php_plugin ./userProduct.proto
php redis
/usr/local/Cellar/php@7.2/7.2.34_1/pecl/20170718/
Kafka: consuming data
kafka-console-consumer --topic tekwang_user_loan_info_change_push --from-beginning --bootstrap-server 127.0.0.1:9092
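The same topic can also be read programmatically. A minimal sketch assuming the github.com/Shopify/sarama client (not mentioned in these notes); sarama.OffsetOldest plays the role of --from-beginning, and only partition 0 is read here, unlike the console consumer, which reads all partitions.

```go
package main

import (
	"fmt"
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	// Same broker as the CLI command above.
	consumer, err := sarama.NewConsumer([]string{"127.0.0.1:9092"}, sarama.NewConfig())
	if err != nil {
		log.Fatalf("new consumer: %v", err)
	}
	defer consumer.Close()

	// OffsetOldest ~ --from-beginning; partition 0 only, for brevity.
	pc, err := consumer.ConsumePartition("tekwang_user_loan_info_change_push", 0, sarama.OffsetOldest)
	if err != nil {
		log.Fatalf("consume partition: %v", err)
	}
	defer pc.Close()

	for msg := range pc.Messages() {
		fmt.Printf("offset=%d value=%s\n", msg.Offset, msg.Value)
	}
}
```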
Kafka config
| Config | Value | Default | Description |
| --- | --- | --- | --- |
| message.downconversion.enable | true | | |
| file.delete.delay.ms | 60000 | | |
| segment.ms | 604800000 | 7 days | This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full, to ensure that retention can delete or compact old data. |
| min.compaction.lag.ms | 0 | | |
| retention.bytes | -1 | None | This configuration controls the maximum size a log can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit, only a time limit. |
| segment.index.bytes | 10485760 | 10 MB | This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink it only after log rolls. You generally should not need to change this setting. |
| cleanup.policy | delete | delete | A string that is either "delete" or "compact". This string designates the retention policy to use on old log segments. The default policy ("delete") will discard old segments when their retention time or size limit has been reached. The "compact" setting will enable log compaction on the topic. |
| follower.replication.throttled.replicas | | | |
| message.timestamp.difference.max.ms | 9223372036854775807 | | |
| segment.jitter.ms | 0 | 0 | The maximum jitter to subtract from logRollTimeMillis. |
| preallocate | false | | |
| message.timestamp.type | CreateTime | | |
| message.format.version | 2.2-IV1 | | |
| segment.bytes | 104857600 | 1 GB | This configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time, so a larger segment size means fewer files but less granular control over retention. |
| unclean.leader.election.enable | false | | |
| max.message.bytes | 1000012 | 1000000 | This is the largest message size Kafka will allow to be appended to this topic. Note that if you increase this size you must also increase your consumer's fetch size so they can fetch messages this large. |
| retention.ms | 604800000 | 7 days | This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data. |
| flush.ms | 9223372036854775807 | None | This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example, if this was set to 1000 we would fsync after 1000 ms had passed. In general we recommend you not set this and use replication for durability, allowing the operating system's background flush capabilities, as it is more efficient. |
| delete.retention.ms | 86400000 | 86400000 | The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0, to ensure that they get a valid snapshot of the final stage (otherwise delete tombstones may be collected before they complete their scan). Default is 24 hours. |
| leader.replication.throttled.replicas | | | |
| min.insync.replicas | 1 | 1 | When a producer sets acks to "all", min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This ensures that the producer raises an exception if a majority of replicas do not receive a write. |
| flush.messages | 9223372036854775807 | None | This setting allows specifying an interval at which we will force an fsync of data written to the log. For example, if this was set to 1 we would fsync after every message; if it were 5 we would fsync after every five messages. In general we recommend you not set this and use replication for durability, allowing the operating system's background flush capabilities, as it is more efficient. This setting can be overridden on a per-topic basis (see the per-topic configuration section). |
| compression.type | producer | | |
| index.interval.bytes | 4096 | 4096 | This setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index larger. You probably don't need to change this. |
| min.cleanable.dirty.ratio | 0.5 | 0.5 | This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled). By default we will avoid cleaning a log where more than 50% of the log has been compacted. This ratio bounds the maximum space wasted in the log by duplicates (at 50%, at most 50% of the log could be duplicates). A higher ratio will mean fewer, more efficient cleanings but more wasted space in the log. |
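To see min.insync.replicas and acks working together (the durability scenario described in the table), here is a hedged producer sketch, again assuming the github.com/Shopify/sarama client; the broker and topic are reused from the consumer example above.

```go
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	// acks=all: the broker only acknowledges once min.insync.replicas replicas have the write.
	cfg.Producer.RequiredAcks = sarama.WaitForAll
	cfg.Producer.Return.Successes = true // required by SyncProducer

	producer, err := sarama.NewSyncProducer([]string{"127.0.0.1:9092"}, cfg)
	if err != nil {
		log.Fatalf("new producer: %v", err)
	}
	defer producer.Close()

	partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "tekwang_user_loan_info_change_push",
		Value: sarama.StringEncoder("hello"),
	})
	if err != nil {
		// With acks=all this is where NotEnoughReplicas surfaces if the ISR is too small.
		log.Fatalf("send: %v", err)
	}
	log.Printf("written to partition %d at offset %d", partition, offset)
}
```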
Nginx configuration
HTTPS certificate configuration
server {
    listen 443;
    server_name 149.129.216.117 sta.api-pay.uangme.com;
    root /data/app/pay_order/public;
    ssl on;
    ssl_certificate ssl/sta.api-pay.uangme.com.pem;      # certificate path
    ssl_certificate_key ssl/sta.api-pay.uangme.com.key;  # private key path
    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;
    index index.php;
    access_log /data/logs/nginx/api-pay.nginx.access.log main;

    if (!-e $request_filename) {
        rewrite ^/(.*)$ /index.php last;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
        fastcgi_buffer_size 64k;
        fastcgi_buffers 4 64k;
        fastcgi_busy_buffers_size 128k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_index index.php;
        include fastcgi.conf;
    }
}