https://www.mobile01.com/newsdetail/23483/fried-chicken-nugget-blind-taste-test-2017
Python 3: pandas + matplotlib.pyplot
I've been writing code for a long time, but Python still trips me up; many of the examples online have syntax errors or are incomplete. I finally got this working thanks to a tutorial site from mainland China:
https://morvanzhou.github.io/tutorials/data-manipulation/np-pd/3-8-pd-plot/
1、Download Python 3 from the official website.
2、Create aaa.py with this content:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
print(ts)
ts.plot()
plt.show()  # this line matters most: without it, no window appears!
3、Run python aaa.py and read any errors. Usually a package isn't installed, so just pip install it:
import matplotlib.pyplot as plt # pip install matplotlib
import pandas as pd # pip install pandas
Follow the prompts on screen to finish the installation.
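If you run the script over SSH or anywhere without a display, plt.show() can't open a window; a sketch of a headless variant that writes the chart to a PNG instead (the filename aaa.png is my choice here, not from the tutorial):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend: render to files, no window needed
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Same random-walk series as above
ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
ts = ts.cumsum()
ts.plot()
plt.savefig("aaa.png")  # replaces plt.show(); writes the figure to disk
```

Note the backend must be selected before pyplot is imported.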
Windows + Docker + Go
1、Go
mkdir aaa
Create go_httpserver.go:
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Welcome to my Website!!!\n %s", r.URL.Path[1:])
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe("0.0.0.0:80", nil)) // surface bind errors instead of exiting silently
}
set GOOS=linux
set GOARCH=amd64
go build go_httpserver.go   => produces a Linux binary
2、create docker image
2.1 Install Docker Toolbox.
2.2 Run Kitematic, find DOCKER CLI at the bottom left, click it, then:
2.2.1 create directory bbb
2.2.2 create a Dockerfile:
FROM scratch
ADD go_httpserver /
ENTRYPOINT ["/go_httpserver"]
docker build -t testdockergoweb:v1 .
2.2.3 In Kitematic, use top-left +NEW -> My Images -> CREATE to create the container.
2.2.4 The Containers list now shows testdockergoweb running. Open Settings (top right)
-> Hostname / Ports -> change
DOCKER PORT 80
PUBLISHED IP:PORT port -> 3111
SAVE
then
click the blue link in the PUBLISHED IP:PORT list.
3、export the docker image
in DOCKER CLI -> docker export testdockergoweb > testdockergoweb.tar
(note: docker export dumps a container's filesystem; to archive the image itself, use docker save)
The "foresight premium" (眼光費)
http://stockresearchsociety.blogspot.tw/2017/12/1211414.html
- Before trading short-term or playing spreads, paper-trade first: write down your entry point, and when it's time to exit, write down your sell point. Run ten such experiments; only if you are right 70%+ of the time should you trade spreads for real.
- If a company is worth $20 and can earn $3 a year while the stock trades at $60, you can't say "it earns $3 a year, that's a P/E of 20, so $60 is cheap"; I find that laughable. Earning $3 a year on a company worth $20, you need 13-odd years of earnings just to reach $60, yet after one quarter earning $0.75 you start fantasizing that at this pace you break even in 13 years.
- Buffett is unbeatable for a simple reason: when a company is worth $20, he waits for a 20-30% discount, $14-16, sees that its fundamentals have improved and that it is very likely heading in the right direction, and then buys heavily at that price, trading time for returns. That is the safest kind of investing: nobody walks off with 15 or 20 years of your future profits as a "foresight premium".
- Why does investing so often lose money? Because you can't value a company, have no patience, and make no long-term plan for your investment, so it's easy to lose your footing in the moment.
- The foresight premium
Adding back currency effects, XXX's net value is about $19.7, and right now it looks able to earn $1-1.2 a year. I invest in XXX at $20 and wait: after one year XXX earns $1-1.2; after two years, $2-2.4. Once XXX's earnings stabilize in two or three years, the market will apply its usual foresight premium to XXX. Then everyone picks a moment when the mood is good, say a strong earnings report or a strong overall market, and the return is easily large: beyond the dividends you are sure to collect from the company's own earnings, one day someone will hand you 10-15 years of profits as a foresight premium. To maximize your return on XXX, you absolutely do not sell at $20 or $21 feeling pleased with yourself; the maximum comes on some day one or two years out, and as long as XXX keeps improving, that day is sure to arrive. Only then do you capture the biggest return.
Azure Function
headers: {
    'Content-Type': 'application/json'
},
body: {
    // x-forwarded-for arrives as "client-ip:port", so keep only the address
    ip: (
        req.headers['x-forwarded-for'].split(":")[0]
        // other fallbacks seen in Node examples:
        // req.connection.remoteAddress ||
        // req.socket.remoteAddress ||
        // req.connection.socket.remoteAddress
    )
}
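The header value arrives as "ip:port" (e.g. 203.0.113.7:51234), which is why the split is needed. The same parsing sketched in Python, splitting from the right so an IPv6 address (which itself contains colons) survives:

```python
def client_ip(xff: str) -> str:
    # x-forwarded-for arrives as "address:port"; split from the right
    # so the colons inside an IPv6 address are left intact.
    return xff.rsplit(":", 1)[0]

print(client_ip("203.0.113.7:51234"))   # 203.0.113.7
print(client_ip("2001:db8::1:8080"))    # 2001:db8::1
```

A plain split(":")[0], as in the snippet above, would return "2001" for the IPv6 case.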
[Repost] Getting a static website onto GCP Storage on the first try
https://coder.tw/?p=7626
1. install gcloud
2. gcloud auth login `email`   # log in with this email to get permission to change public access
2.1 gcloud auth revoke `email` # remove the authorized email
https://cloud.google.com/sdk/gcloud/reference/auth/login
Make newly uploaded files in a bucket public by default (can only be set per bucket; not retroactive):
gsutil defacl set public-read gs://s.ntust.me
Make every file and directory under a folder (or bucket) public:
gsutil acl set -r public-read gs://s.ntust.me/iatp
Stocks
- Quarterly reports
- Q4: the annual report isn't published until next March, so from the Q3 release until March, nearly half a year, there is nothing to go on. A good Q3 naturally carries the stock to March; a bad one weighs on it until March as well. Barring special circumstances, nobody looks at Q4.
- Foreign institutional investors
- The big shakeout, washing retail investors out: foreign institutions that like a stock push the price down to increase their profit, scaring retail holders into leaving voluntarily, buying the shares at low cost, growing their position, and waiting for the final payoff.
- During this period they cannot let their cost basis rise; if it rises anyway they do everything to push it back down, because if the price won't come down, their holding cost won't either, and their profit can't grow.
- Ordinary retail investors can't take the pressure and simply exit, because this plays out over a long time; it is not a short-term operation.
- Add to a position only when the overall market is also weak and falling hard.
nginx proxy pass [ best practices ] 2: sysctl TCP tuning
.https://blogs.dropbox.com/tech/2017/09/optimizing-web-servers-for-high-throughput-and-low-latency/
http://www.queryadmin.com/1654/tuning-linux-kernel-tcp-parameters-sysctl/
Don't USE:
net.ipv4.tcp_tw_recycle=1
(it was already broken for users behind NAT, and if you upgrade your kernel it is broken for everyone)
net.ipv4.tcp_timestamps=0
(don't disable timestamps unless you know all the side effects and accept them; one non-obvious side effect is that you lose window scaling and SACK options on syncookies)
https://read01.com/zh-tw/KBgmj7.html
Don't USE:
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_reuse=1  # only if you fully understand your traffic; sometimes acceptable
***********sysctl**************
net.ipv4.tcp_mtu_probing=1
# Increase number of max open-files
fs.file-max = 150000
# Increase max number of PIDs
kernel.pid_max = 4194303
# Increase range of ports that can be used
net.ipv4.ip_local_port_range = 1024 65535
# https://tweaked.io/guide/kernel/
# Forking servers, like PostgreSQL or Apache, scale to much higher levels of concurrent connections if this is made larger
kernel.sched_migration_cost_ns=5000000
# https://tweaked.io/guide/kernel/
# Various PostgreSQL users have reported (on the postgresql performance mailing list) gains up to 30% on highly concurrent workloads on multi-core systems
kernel.sched_autogroup_enabled = 0
# https://github.com/ton31337/tools/wiki/tcp_slow_start_after_idle---tcp_no_metrics_save-performance
# Avoid falling back to slow start after a connection goes idle
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save=0
# https://github.com/ton31337/tools/wiki/Is-net.ipv4.tcp_abort_on_overflow-good-or-not%3F
net.ipv4.tcp_abort_on_overflow=0
# Enable TCP window scaling (enabled by default)
# https://en.wikipedia.org/wiki/TCP_window_scale_option
net.ipv4.tcp_window_scaling=1
# Enables fast recycling of TIME_WAIT sockets.
# (Use with caution according to the kernel documentation!)
#net.ipv4.tcp_tw_recycle = 1
# Allow reuse of sockets in TIME_WAIT state for new connections
# only when it is safe from the network stack’s perspective.
#net.ipv4.tcp_tw_reuse = 1
# Turn on SYN-flood protections
net.ipv4.tcp_syncookies=1
# Only retry creating TCP connections twice
# Minimize the time it takes for a connection attempt to fail
net.ipv4.tcp_syn_retries=2
net.ipv4.tcp_synack_retries=2
net.ipv4.tcp_orphan_retries=2
# How many retries TCP makes on data segments (default 15)
# Some guides suggest to reduce this value
net.ipv4.tcp_retries2=8
# Optimize connection queues
# https://www.linode.com/docs/web-servers/nginx/configure-nginx-for-optimized-performance
# Increase the number of packets that can be queued
net.core.netdev_max_backlog = 3240000
# Max number of "backlogged sockets" (connection requests that can be queued for any given listening socket)
net.core.somaxconn = 50000
# Increase max number of sockets allowed in TIME_WAIT
net.ipv4.tcp_max_tw_buckets = 1440000
# Number of packets to keep in the backlog before the kernel starts dropping them
# A sane value is net.ipv4.tcp_max_syn_backlog = 3240000
net.ipv4.tcp_max_syn_backlog = 3240000
# TCP memory tuning
# View memory TCP actually uses with: cat /proc/net/sockstat
# *** These values are auto-created based on your server specs ***
# *** Edit these parameters with caution because they will use more RAM ***
# Changes suggested by IBM on https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Welcome%20to%20High%20Performance%20Computing%20%28HPC%29%20Central/page/Linux%20System%20Tuning%20Recommendations
# Increase the default socket buffer read size (rmem_default) and write size (wmem_default)
# *** Maybe recommended only for high-RAM servers? ***
net.core.rmem_default=16777216
net.core.wmem_default=16777216
# Increase the max socket buffer size (optmem_max), max socket buffer read size (rmem_max), max socket buffer write size (wmem_max)
# 16MB per socket - which sounds like a lot, but will virtually never consume that much
# rmem_max over-rides tcp_rmem param, wmem_max over-rides tcp_wmem param and optmem_max over-rides tcp_mem param
net.core.optmem_max=16777216
net.core.rmem_max=16777216
net.core.wmem_max=16777216
# Configure the Min, Pressure, Max values (units are in page size)
# Useful mostly for very high-traffic websites that have a lot of RAM
# Consider that we already set the *_max values to 16777216
# So you may eventually comment these three lines
net.ipv4.tcp_mem=16777216 16777216 16777216
net.ipv4.tcp_wmem=4096 87380 16777216
net.ipv4.tcp_rmem=4096 87380 16777216
# Keepalive optimizations
# By default, the keepalive routines wait for two hours (7200 secs) before sending the first keepalive probe,
# and then resend it every 75 seconds. If no ACK response is received for 9 consecutive times, the connection is marked as broken.
# The default values are: tcp_keepalive_time = 7200, tcp_keepalive_intvl = 75, tcp_keepalive_probes = 9
# We would decrease the default values for tcp_keepalive_* params as follow:
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 10
net.ipv4.tcp_keepalive_probes = 9
# The TCP FIN timeout sets how long an orphaned connection stays in FIN-WAIT-2 before the kernel reclaims it.
# The default is often 60 seconds, but it can normally be safely reduced to 30 or even 15 seconds.
# https://www.linode.com/docs/web-servers/nginx/configure-nginx-for-optimized-performance
net.ipv4.tcp_fin_timeout = 7
***********sysctl**************
==PS==
.net.ipv4.tcp_slow_start_after_idle & net.ipv4.tcp_no_metrics_save
https://github.com/ton31337/tools/wiki/tcp_slow_start_after_idle---tcp_no_metrics_save-performance
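Before applying a block like the one above, I find it useful to sanity-check the fragment for the known-bad keys; a small sketch (the blacklist reflects only the warnings above):

```python
BAD_KEYS = {
    "net.ipv4.tcp_tw_recycle": "broken behind NAT; removed in newer kernels",
    "net.ipv4.tcp_timestamps": "disabling loses window scaling/SACK on syncookies",
}

def check_sysctl(text: str) -> list[str]:
    """Return warnings for risky settings in a sysctl.conf-style fragment."""
    warnings = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if key == "net.ipv4.tcp_tw_recycle" and value == "1":
            warnings.append(f"{key}=1: {BAD_KEYS[key]}")
        if key == "net.ipv4.tcp_timestamps" and value == "0":
            warnings.append(f"{key}=0: {BAD_KEYS[key]}")
    return warnings

print(check_sysctl("net.ipv4.tcp_tw_recycle=1\nnet.ipv4.tcp_syncookies=1"))
```

Run it over /etc/sysctl.conf before `sysctl -p` to catch the two footguns called out by the Dropbox post.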
BBR (kernel 4.9.51)
AWS Linux (official)
https://aws.amazon.com/tw/amazon-linux-ami/2017.09-release-notes/
ubuntu
https://segmentfault.com/a/1190000008395823
https://farer.org/2017/05/18/build-kernel-with-bbr-on-ec2-amazon-linux/
Edit /etc/sysctl.conf and add the following two lines:
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
Verify:
cat /proc/sys/net/ipv4/tcp_congestion_control
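A quick way to confirm both settings took effect is to compare what the kernel reports against the expected values; a sketch:

```python
def bbr_enabled(qdisc: str, cc: str) -> bool:
    """True if the fq qdisc and BBR congestion control are both active."""
    return qdisc.strip() == "fq" and cc.strip() == "bbr"

# On a live machine, feed it the same files `cat` reads above:
#   from pathlib import Path
#   bbr_enabled(Path("/proc/sys/net/core/default_qdisc").read_text(),
#               Path("/proc/sys/net/ipv4/tcp_congestion_control").read_text())
print(bbr_enabled("fq\n", "bbr\n"))  # True
```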
Move to Ionic 3
1、Update everything for Ionic 3: uninstall, install, update npm, etc.
2、With Ionic 3, create a new project and run it; make sure it runs on Android.
3、Copy the old config.xml settings into the new config.xml.
4、Rename the new project's src to src1, then copy in the old project's src.
4.1、Wherever the code uses ionic-native, change the import to @ionic-native/xxxxxxxxx
4.2、src/app/app.module.ts
.add import { BrowserModule } from '@angular/platform-browser'; at the top
.update imports and providers the same way as in 4.1
4.3、Copy the new project's src/index.html and src/service-worker.js
then try: ionic cordova run android --emulator
[Repost] Nearly ten years going back and forth to mainland China, five of them living there; last year I moved back to Taiwan.
https://www.mobile01.com/topicdetail.php?f=651&t=5310046
https://www.mobile01.com/topicdetail.php?f=651&t=5310046#10
https://www.mobile01.com/topicdetail.php?f=651&t=5310046&p=2#11
https://www.mobile01.com/topicdetail.php?f=651&t=5310046&p=2#14
https://www.mobile01.com/topicdetail.php?f=651&t=5310046&p=2#15
https://www.mobile01.com/topicdetail.php?f=651&t=5310046&p=3#26
https://www.mobile01.com/topicdetail.php?f=651&t=5310046&p=4#40
https://www.mobile01.com/topicdetail.php?f=651&t=5310046&p=18#176
nginx proxy pass [ best practices ]
1、/etc/nginx/nginx.conf
(optional) build a slimmed-down nginx first, dropping unused modules:
make clean; ./configure --with-file-aio --without-http_autoindex_module --without-http_browser_module --without-http_geo_module --without-http_empty_gif_module --without-http_map_module --without-http_memcached_module --without-http_userid_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --without-http_split_clients_module --without-http_uwsgi_module --without-http_scgi_module --without-http_referer_module --without-http_upstream_ip_hash_module && make && make install
worker_processes 1; #auto;
events {
worker_connections 3000; #786;
# multi_accept on;
}
http {
server_tokens off; # enable this line (hides the nginx version)
resolver 8.8.8.8 8.8.4.4 valid=300s; # DNS resolver (needed when proxy_pass uses variables)
proxy_cache_path /var/cache/proxy-nginx levels=1:2 keys_zone=proxy-cache:10m max_size=3g inactive=1d use_temp_path=off;
add_header X-Cache $upstream_cache_status; # expose cache status in a header: HIT / MISS / BYPASS
proxy_headers_hash_max_size 51200; # add this line
proxy_headers_hash_bucket_size 6400; # add this line
log_format main '$remote_addr $status $request $body_bytes_sent [$time_local] $http_user_agent $http_referer $http_x_forwarded_for $upstream_addr $upstream_status $upstream_cache_status $upstream_response_time';
access_log /var/log/nginx/access.log main buffer=1m; # or comment out to save disk space
log_format cache_status '[$time_local] "$request" $upstream_cache_status';
access_log /var/log/nginx/cache_access.log cache_status;
client_max_body_size 100M;
gzip_proxied any; # enable this line because responses pass through a CDN
2、/etc/nginx/proxy_params: put all of the following here (or check the docs for current best practices)
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
#proxy_set_header X-Forwarded-For $remote_addr; # superseded by the next line
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
client_max_body_size 100M;
client_body_buffer_size 1m;
proxy_intercept_errors on;
proxy_buffering on;
proxy_buffer_size 128k;
proxy_buffers 256 16k;
proxy_busy_buffers_size 256k;
proxy_temp_file_write_size 256k;
proxy_max_temp_file_size 0;
proxy_read_timeout 300;
------------------
#slice 1m; # for slice_range
proxy_cache_key $scheme$host$proxy_host$request_uri; # $slice_range
#proxy_cache_key "$scheme://$host$request_uri";
#proxy_cache_key $host:$server_port$uri$is_args$args; # defines the value hashed into the cache KEY
#proxy_cache_valid 15m;
proxy_cache_valid 200 301 302 304 1h; #206 -> slice_range
proxy_cache_valid 404 1m;
proxy_cache_valid any 1m;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
#proxy_set_header Range $slice_range; #for slice_range
proxy_cache_revalidate on;
# Set some good timeouts
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
send_timeout 300;
#proxy_cache_min_uses 3; # start caching a URL only after it has been requested 3 times (before the cached file is purged)
proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment; # if any of these is non-empty and non-zero, nginx skips the cache and proxies directly
------------------
aio threads;
aio_write on;
------------------
open_file_cache max=10000;
open_file_cache_min_uses 2;
open_file_cache_errors on;
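nginx names each cache file by the MD5 of proxy_cache_key and builds directories from the tail of that hash according to the levels= setting. A sketch for levels=1:2 as configured above (the example key components are made up):

```python
import hashlib

def cache_file_path(root: str, key: str) -> str:
    """Predict the on-disk cache path for proxy_cache_path ... levels=1:2."""
    h = hashlib.md5(key.encode()).hexdigest()
    # levels=1:2 -> the last hex char is the first directory,
    # the two chars before it are the second directory.
    return f"{root}/{h[-1]}/{h[-3:-1]}/{h}"

# proxy_cache_key is $scheme$host$proxy_host$request_uri; hypothetical values:
key = "https" + "example.com" + "backend" + "/index.html"
print(cache_file_path("/var/cache/proxy-nginx", key))
```

Handy when you want to verify by hand that a given URL landed in the cache, or to purge a single entry by deleting its file.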
2.1、SSL
./etc/nginx/snippets/ssl-example.com.conf
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; #crt
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; #key
./etc/nginx/snippets/ssl-params.conf
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
#resolver 8.8.8.8 8.8.4.4 valid=300s; #move to nginx.conf http
resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
3、/etc/nginx/sites-available/default
# Default server configuration
#server1
server {
set $ds_host_ip 'xxx.xxx.xxx.xxx'; #destination host ip
set $ds_hostname 'ooo.ooo.ooo.ooo'; #destination hostname
listen 80 reuseport;
#listen [::]:80 default_server;
#root /var/www/html;
#index index.html index.htm index.nginx-debian.html;
#server_name _;
location / {
proxy_pass http://$ds_host_ip:$server_port;
#proxy_pass http://$ds_hostname:$server_port; # alternative: proxy by hostname instead of IP (pick one)
include /etc/nginx/proxy_params;
#try_files $uri $uri/ =404;
}
location /nginx_status {
stub_status on;
access_log off;
}
}
server {
set $ds_host_ip 'xxx.xxx.xxx.xxx';
set $ds_hostname 'ooo.ooo.ooo.ooo';
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com www.example.com;
location / {
proxy_pass http://$ds_host_ip:$server_port;
#proxy_pass http://$ds_hostname:$server_port; # alternative: proxy by hostname instead of IP (pick one)
include /etc/nginx/proxy_params;
}
location /nginx_status {
stub_status on;
access_log off;
}
include snippets/ssl-example.com.conf;
include snippets/ssl-params.conf;
}
#server2
server {
set $ds_host_ip 'xxx.xxx.xxx.xxx';
listen 8881 reuseport;
location / {
proxy_pass http://$ds_host_ip:$server_port;
include /etc/nginx/proxy_params;
}
}
#server3
server {
set $ds_host_ip 'xxx.xxx.xxx.xxx';
listen 3333 reuseport;
location / {
proxy_pass http://$ds_host_ip:$server_port;
include /etc/nginx/proxy_params;
}
}
server {
set $ds_host_ip 'xxx.xxx.xxx.xxx';
listen 81 reuseport;
location / {
proxy_pass http://$ds_host_ip:$server_port;
include /etc/nginx/proxy_params;
}
}
server {
set $ds_host_ip 'xxx.xxx.xxx.xxx';
listen 8080 reuseport;
location / {
proxy_pass http://$ds_host_ip:$server_port;
include /etc/nginx/proxy_params;
}
}
===== =====
/etc/security/limits.conf
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
echo "net.core.somaxconn=1024" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
echo "net.ipv4.ip_forward=0" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
===== =====
iptables: limit TCP connection count and rate per IP
#Allow each IP at most 20 new connections within 60 seconds
-A INPUT -i eth0 -p tcp -m tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 --name DEFAULT --rsource -j DROP
-A INPUT -i eth0 -p tcp -m tcp --dport 80 -m state --state NEW -m recent --set --name DEFAULT --rsource
#Cap concurrent connections from a single IP at 20
-I INPUT -p tcp --dport 80 -m connlimit --connlimit-above 20 -j REJECT
#At most 20 initial (SYN) connections per IP
-A INPUT -p tcp --syn -m connlimit --connlimit-above 20 -j DROP
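The first pair of rules implements "20 new connections per IP within 60 seconds" via the `recent` match. The same bookkeeping, sketched as a simplified sliding-window model in Python (the real --update rule also refreshes timestamps on every check, which is stricter; timestamps are injected here for determinism):

```python
from collections import defaultdict, deque

class SlidingWindowLimit:
    """Rough model of the iptables 'recent' match: allow at most `hits`
    new connections per source IP within any `seconds`-long window."""
    def __init__(self, hits: int = 20, seconds: int = 60):
        self.hits, self.seconds = hits, seconds
        self.seen = defaultdict(deque)  # ip -> timestamps of recent connections

    def allow(self, ip: str, now: float) -> bool:
        q = self.seen[ip]
        while q and now - q[0] >= self.seconds:
            q.popleft()  # drop entries that fell out of the window
        if len(q) >= self.hits:
            return False  # over the limit: DROP
        q.append(now)
        return True

limit = SlidingWindowLimit(hits=20, seconds=60)
results = [limit.allow("198.51.100.9", t) for t in range(25)]
print(results.count(True))  # 20: attempts 21-25 are dropped
```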
http://seanlook.com/2015/05/17/nginx-location-rewrite/
http://xyz.cinc.biz/2016/06/nginx-if-and-host-get-variable.html
http://siwei.me/blog/posts/nginx-built-in-variables
https://www.52os.net/articles/nginx-anti-ddos-setting.html
https://www.52os.net/articles/nginx-anti-ddos-setting-2.html
https://gagor.pl/2016/01/optimize-nginx-for-performance/
------------------
https://gryzli.info/2017/05/09/nginx-configuring-reverse-proxy-caching/
https://www.nginx.com/blog/nginx-high-performance-caching/
https://guides.wp-bullet.com/how-to-configure-nginx-reverse-proxy-wordpress-cache-apache/
https://tweaked.io/guide/nginx-proxying/
http://www.jianshu.com/p/625c2b15dad5
http://phl.iteye.com/blog/2256857
https://gist.github.com/regadas/7381125
https://calomel.org/nginx.html
Building the Nginx Reverse Proxy example
----------------------------------------------
.https://blogs.dropbox.com/tech/2017/09/optimizing-web-servers-for-high-throughput-and-low-latency/
Deploying Brotli for static content: cloudflare/ngx_brotli_module https://github.com/cloudflare/ngx_brotli_module
https://www.mobile01.com/topicdetail.php?f=506&t=5147355
--auto nginx mod CENTMIN MOD
https://centminmod.com/
.https://blogs.dropbox.com/tech/2017/09/optimizing-web-servers-for-high-throughput-and-low-latency/
https://cipherli.st/
For ssl_session_tickets, Dropbox and cipherli.st recommend different approaches; cipherli.st's is probably the safer choice.
TLS
#ssl_session_tickets on;
#ssl_session_timeout 1h;
#ssl_session_ticket_key /run/nginx-ephemeral/nginx_session_ticket_curr;
#ssl_session_ticket_key /run/nginx-ephemeral/nginx_session_ticket_prev;
#ssl_session_ticket_key /run/nginx-ephemeral/nginx_session_ticket_next;
http://fangpeishi.com/optimizing-tls-record-size.html
.https://blogs.dropbox.com/tech/2017/09/optimizing-web-servers-for-high-throughput-and-low-latency/
ssl_ciphers '[ECDHE-ECDSA-AES128-GCM-SHA256|ECDHE-ECDSA-CHACHA20-POLY1305|ECDHE-RSA-AES128-GCM-SHA256|ECDHE-RSA-CHACHA20-POLY1305]:ECDHE+AES128:RSA+AES128:ECDHE+AES256:RSA+AES256:ECDHE+3DES:RSA+3DES';
ssl_prefer_server_ciphers on;
AIO
aio threads;
aio_write on;
http://www.infoq.com/cn/articles/thread-pools-boost-performance-9x
.https://blogs.dropbox.com/tech/2017/09/optimizing-web-servers-for-high-throughput-and-low-latency/
Open file cache
open_file_cache max=10000;
open_file_cache_min_uses 2;
open_file_cache_errors on;
http://blog.justwd.net/snippets/nginx/nginx-open-file-cache/
======clean cache=======
https://leokongwq.github.io/2016/11/25/nginx-cache.html
Port Forwarding Gateway via iptables on Linux On AWS
A better option is Amazon Linux with Enhanced Networking (C3, C4, M4 instance types):
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sriov-networking.html
This should give better performance.
Port Forwarding Gateway via iptables on Linux
1、
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html#NATSG
1.1、Create an instance from the AMI ami-vpc-nat-hvm (choose the newest one)
***NAT AMI***
1.2、
Check: IPv4 forwarding is enabled and ICMP redirects are disabled in /etc/sysctl.d/10-nat-settings.conf
2、
https://holtstrom.com/michael/blog/post/400/Port-Forwarding-Gateway-via-iptables-on-Linux.html
eth0 10.0.0.219 52.78.165.129
eth1 10.0.1.149
web server 10.0.1.249
iptables -vxnL --line-numbers
iptables -t nat -vxnL --line-numbers
watch -n 1 sudo iptables -vxnL --line-numbers
watch -n 1 sudo iptables -t nat -vxnL --line-numbers
===Start===
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -A PREROUTING -i eth0 -p tcp -d 10.0.0.219 --dport 888 \
-j DNAT --to-destination 10.0.1.249:80
☆ iptables -t nat -A POSTROUTING -j MASQUERADE   # key point: do not add "-o eth0" here
===Capture packets===
tcpdump -n -v -i eth0 tcp and port 888
===Delete rules===
iptables -D INPUT 2
iptables -t nat -D PREROUTING 2
iptables -t nat -D POSTROUTING 2
===Not needed===
iptables -A FORWARD -p tcp -m conntrack --ctstate RELATED,ESTABLISHED -d 10.0.1.249 -j ACCEPT
iptables -A FORWARD -d 10.0.1.249 -p tcp --dport 80 -j ACCEPT
iptables -t nat -A POSTROUTING -j SNAT --to-source 10.0.0.219
===Not needed===
===Not needed; this line fixes telnet localhost 888===
iptables -t nat -A OUTPUT -p tcp -o lo --dport 888 -j DNAT --to 10.0.1.249:80
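Stripped of the experiments, the whole gateway is just two nat-table rules; as an iptables-restore fragment (same example addresses as above):

```text
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
# External port 888 -> web server port 80
-A PREROUTING -i eth0 -p tcp -d 10.0.0.219 --dport 888 -j DNAT --to-destination 10.0.1.249:80
# Rewrite the source so replies come back through this gateway (again, no "-o eth0")
-A POSTROUTING -j MASQUERADE
COMMIT
```

Remember net.ipv4.ip_forward=1 must be set for forwarding to work at all.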
Notes on the NAT AMI (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html#NATSG):
Check: IPv4 forwarding is enabled (net.ipv4.ip_forward=1) and ICMP redirects are disabled in /etc/sysctl.d/10-nat-settings.conf.
A script at /usr/sbin/configure-pat.sh runs at startup and configures iptables IP masquerading. Its rule conflicts with the setup above, so delete that POSTROUTING entry:
sudo iptables -t nat -D POSTROUTING 1
JSMpeg – MPEG1 Video & MP2 Audio Decoder in JavaScript for ios stream
First, confirm this demo plays on iOS:
http://jsmpeg.com/
Second:
https://github.com/phoboslab/jsmpeg#encoding-videoaudio-for-jsmpeg
view-stream.html (served by node.js; should be viewable at localhost:8080)
websocket-relay.js runs under node.js and provides the WebSocket relay server
Important!!
ffmpeg -re -i rtmp://xxx.xxx.xxx.xxx/live/ooo -f mpegts -codec:v mpeg1video -bf 0 -codec:a mp2 -s 640x360 -r 30 -q 4 -muxdelay 0.001 http://localhost:8081/password
-re must be included: it reads the input at its native frame rate. Without it, ffmpeg pushes data as fast as it can and live playback runs far too fast.
-muxdelay 0.001 reduces lag (possibly no effect)
-q 4 needs testing
You're setting -q 1, which is the best possible quality. Try setting it to 4 or 8 instead.
compression_level (-q)
Set algorithm quality. Valid arguments are integers in the 0-9 range, with 0 meaning highest quality but slowest, and 9 meaning fastest while producing the worst quality. Values above 9 are also accepted.
-f mpegts: I hit an error at first; a comment in the issue below suggested using mpegts as the format.
Don't use mpeg1video as the -f format.
https://github.com/phoboslab/jsmpeg/issues/149
localhost:8081/password: the "node websocket-relay.js" startup output shows the exact URL to use; just follow it
Example: node websocket-relay.js password 8081 8080
https://github.com/phoboslab/jsmpeg#example-setup-for-streaming-raspberry-pi-live-webcam
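The whole pipeline above (relay plus encoder) can be sketched as a script. The password, ports and RTMP source are the example's placeholders, and this sketch only prints the commands instead of running them:

```shell
#!/bin/sh
# Placeholders carried over from the example above
SECRET="password"
RTMP_IN="rtmp://xxx.xxx.xxx.xxx/live/ooo"
HTTP_PORT=8081   # ffmpeg pushes MPEG-TS here
WS_PORT=8080     # browsers fetch the WebSocket stream here

# 1) relay:  node websocket-relay.js <secret> <http-port> <ws-port>
CMD_RELAY="node websocket-relay.js $SECRET $HTTP_PORT $WS_PORT"

# 2) encoder: -re keeps the source frame rate; mpeg1video + mp2 inside an
#    MPEG-TS container is exactly what jsmpeg can decode
CMD_FFMPEG="ffmpeg -re -i $RTMP_IN -f mpegts -codec:v mpeg1video -bf 0 \
-codec:a mp2 -s 640x360 -r 30 -q 4 -muxdelay 0.001 \
http://localhost:$HTTP_PORT/$SECRET"

echo "$CMD_RELAY"     # run these for real instead of echoing
echo "$CMD_FFMPEG"
```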
mariadb galera cluster ubuntu 16 diy
https://www.howtoforge.com/tutorial/how-to-install-and-configure-galera-cluster-on-ubuntu-1604/
nginx proxy pass
see http://sueboy.blogspot.tw/2017/11/nginx-proxy-pass-best-practices.html
http://sueboy.blogspot.tw/2017/11/nginx-proxy-pass-best-practices-2-for.html
server {
listen 80;
server_name aaaa.bbbbb.com;
location /
{
proxy_http_version 1.1;
#proxy_set_header Upgrade $http_upgrade;
#proxy_set_header Connection "upgrade";
proxy_pass http://xxx.xxx.xxx.xxx:80;
}
}
proxy_redirect off;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header REMOTE-HOST $remote_addr;
proxy_set_header X-Original-Host $http_host;
proxy_set_header X-Original-Scheme $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
client_max_body_size 10m;
client_body_buffer_size 128k;
# client_body_temp_path /var/nginx/client_body_temp;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
#proxy_send_lowat 12000;
proxy_buffer_size 32k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
# proxy_temp_path /var/nginx/proxy_temp;
proxy_ignore_client_abort on;
proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
proxy_max_temp_file_size 128m;
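Putting the two pieces together: the directives above go inside the location block shown earlier. A trimmed-down sketch (server_name and the upstream address are placeholders):

```nginx
server {
    listen 80;
    server_name aaaa.bbbbb.com;               # placeholder
    location / {
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 90;
        proxy_pass http://xxx.xxx.xxx.xxx:80; # placeholder backend
    }
}
```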
Mainland China VPS speed test
https://raw.githubusercontent.com/oooldking/script/master/superspeed.sh
#!/usr/bin/env bash
#
# Description: Test your server's network with Speedtest to China
#
# Copyright (C) 2017 - 2017 Oldking
#
# URL: https://www.oldking.net/305.html
#
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[0;33m'
SKYBLUE='\033[0;36m'
PLAIN='\033[0m'
# check root
[[ $EUID -ne 0 ]] && echo -e "${RED}Error:${PLAIN} This script must be run as root!" && exit 1
# check python
if [ ! -e '/usr/bin/python' ]; then
echo -e
read -p "${RED}Error:${PLAIN} python is not installed. You must install python first.\nDo you want to install? [y/n]" is_install
if [[ ${is_install} == "y" || ${is_install} == "Y" ]]; then
if [ "${release}" == "centos" ]; then
yum -y install python
else
apt-get -y install python
fi
else
exit
fi
fi
# check wget
if [ ! -e '/usr/bin/wget' ]; then
echo -e
read -p "${RED}Error:${PLAIN} wget is not installed. You must install wget first.\nDo you want to install? [y/n]" is_install
if [[ ${is_install} == "y" || ${is_install} == "Y" ]]; then
if [ "${release}" == "centos" ]; then
yum -y install wget
else
apt-get -y install wget
fi
else
exit
fi
fi
clear
echo "#############################################################"
echo "# Description: Test your server's network with Speedtest #"
echo "# Intro: https://www.oldking.net/305.html #"
echo "# Author: Oldking#"
echo "# Github: https://github.com/oooldking #"
echo "#############################################################"
echo
echo "测试服务器到"
echo -ne "1.中国电信 2.中国联通 3.中国移动 4.本地默认 5.全面测速"
while :; do echo
read -p "请输入数字选择: " telecom
if [[ ! $telecom =~ ^[1-5]$ ]]; then
echo "输入错误! 请输入正确的数字!"
else
break
fi
done
if [[ ${telecom} == 1 ]]; then
telecomName="电信"
echo -e "\n选择最靠近你的方位"
echo -ne "1.北方 2.南方"
while :; do echo
read -p "请输入数字选择: " pos
if [[ ! $pos =~ ^[1-2]$ ]]; then
echo "输入错误! 请输入正确的数字!"
else
break
fi
done
echo -e "\n选择最靠近你的城市"
if [[ ${pos} == 1 ]]; then
echo -ne "1.郑州 2.襄阳"
while :; do echo
read -p "请输入数字选择: " city
if [[ ! $city =~ ^[1-2]$ ]]; then
echo "输入错误! 请输入正确的数字!"
else
break
fi
done
if [[ ${city} == 1 ]]; then
num=4595
cityName="郑州"
fi
if [[ ${city} == 2 ]]; then
num=12637
cityName="襄阳"
fi
fi
if [[ ${pos} == 2 ]]; then
echo -ne "1.上海 2.杭州 3.南宁 4.南昌 5.长沙 6.深圳 7.重庆 8.成都"
while :; do echo
read -p "请输入数字选择: " city
if [[ ! $city =~ ^[1-8]$ ]]; then
echo "输入错误! 请输入正确的数字!"
else
break
fi
done
if [[ ${city} == 1 ]]; then
num=3633
cityName="上海"
fi
if [[ ${city} == 2 ]]; then
num=7509
cityName="杭州"
fi
if [[ ${city} == 3 ]]; then
num=10305
cityName="南宁"
fi
if [[ ${city} == 4 ]]; then
num=7230
cityName="南昌"
fi
if [[ ${city} == 5 ]]; then
num=6132
cityName="长沙"
fi
if [[ ${city} == 6 ]]; then
num=5081
cityName="深圳"
fi
if [[ ${city} == 7 ]]; then
num=6592
cityName="重庆"
fi
if [[ ${city} == 8 ]]; then
num=4624
cityName="成都"
fi
fi
fi
if [[ ${telecom} == 2 ]]; then
telecomName="联通"
echo -ne "\n1.北方 2.南方"
while :; do echo
read -p "请输入数字选择: " pos
if [[ ! $pos =~ ^[1-2]$ ]]; then
echo "输入错误! 请输入正确的数字!"
else
break
fi
done
echo -e "\n选择最靠近你的城市"
if [[ ${pos} == 1 ]]; then
echo -ne "1.沈阳 2.长春 3.哈尔滨 4.天津 5.济南 6.北京 7.郑州 8.西安 9.太原 10.宁夏 11.兰州 12.西宁"
while :; do echo
read -p "请输入数字选择: " city
if [[ ! $city =~ ^(([1-9])|(1([0-2]{1})))$ ]]; then
echo "输入错误! 请输入正确的数字!"
else
break
fi
done
if [[ ${city} == 1 ]]; then
num=5017
cityName="沈阳"
fi
if [[ ${city} == 2 ]]; then
num=9484
cityName="长春"
fi
if [[ ${city} == 3 ]]; then
num=5460
cityName="哈尔滨"
fi
if [[ ${city} == 4 ]]; then
num=5475
cityName="天津"
fi
if [[ ${city} == 5 ]]; then
num=5039
cityName="济南"
fi
if [[ ${city} == 6 ]]; then
num=5145
cityName="北京"
fi
if [[ ${city} == 7 ]]; then
num=5131
cityName="郑州"
fi
if [[ ${city} == 8 ]]; then
num=4863
cityName="西安"
fi
if [[ ${city} == 9 ]]; then
num=12868
cityName="太原"
fi
if [[ ${city} == 10 ]]; then
num=5509
cityName="宁夏"
fi
if [[ ${city} == 11 ]]; then
num=4690
cityName="兰州"
fi
if [[ ${city} == 12 ]]; then
num=5992
cityName="西宁"
fi
fi
if [[ ${pos} == 2 ]]; then
echo -ne "1.上海 2.杭州 3.南宁 4.合肥 5.南昌 6.长沙 7.深圳 8.广州 9.重庆 10.昆明 11.成都"
while :; do echo
read -p "请输入数字选择: " city
if [[ ! $city =~ ^(([1-9])|(1([0-1]{1})))$ ]]; then
echo "输入错误! 请输入正确的数字!"
else
break
fi
done
if [[ ${city} == 1 ]]; then
num=5083
cityName="上海"
fi
if [[ ${city} == 2 ]]; then
num=5300
cityName="杭州"
fi
if [[ ${city} == 3 ]]; then
num=5674
cityName="南宁"
fi
if [[ ${city} == 4 ]]; then
num=5724
cityName="合肥"
fi
if [[ ${city} == 5 ]]; then
num=5079
cityName="南昌"
fi
if [[ ${city} == 6 ]]; then
num=4870
cityName="长沙"
fi
if [[ ${city} == 7 ]]; then
num=10201
cityName="深圳"
fi
if [[ ${city} == 8 ]]; then
num=3891
cityName="广州"
fi
if [[ ${city} == 9 ]]; then
num=5726
cityName="重庆"
fi
if [[ ${city} == 10 ]]; then
num=5103
cityName="昆明"
fi
if [[ ${city} == 11 ]]; then
num=2461
cityName="成都"
fi
fi
fi
if [[ ${telecom} == 3 ]]; then
telecomName="移动"
echo -ne "\n1.北方 2.南方"
while :; do echo
read -p "请输入数字选择: " pos
if [[ ! $pos =~ ^[1-2]$ ]]; then
echo "输入错误! 请输入正确的数字!"
else
break
fi
done
echo -e "\n选择最靠近你的城市"
if [[ ${pos} == 1 ]]; then
echo -ne "1.西安"
while :; do echo
read -p "请输入数字选择: " city
if [[ ! $city =~ ^[1]$ ]]; then
echo "输入错误! 请输入正确的数字!"
else
break
fi
done
if [[ ${city} == 1 ]]; then
num=5292
fi
fi
if [[ ${pos} == 2 ]]; then
echo -ne "1.上海 2.宁波 3.无锡 4.杭州 5.合肥 6.成都"
while :; do echo
read -p "请输入数字选择: " city
if [[ ! $city =~ ^[1-6]$ ]]; then
echo "输入错误! 请输入正确的数字!"
else
break
fi
done
if [[ ${city} == 1 ]]; then
num=4665
cityName="上海"
fi
if [[ ${city} == 2 ]]; then
num=6715
cityName="宁波"
fi
if [[ ${city} == 3 ]]; then
num=5122
cityName="无锡"
fi
if [[ ${city} == 4 ]]; then
num=4647
cityName="杭州"
fi
if [[ ${city} == 5 ]]; then
num=4377
cityName="合肥"
fi
if [[ ${city} == 6 ]]; then
num=4575
cityName="成都"
fi
fi
fi
# install speedtest
if [ ! -e './speedtest.py' ]; then
wget https://raw.github.com/sivel/speedtest-cli/master/speedtest.py > /dev/null 2>&1
fi
chmod a+rx speedtest.py
result() {
download=`cat speed.log | awk -F ':' '/Download/{print $2}'`
upload=`cat speed.log | awk -F ':' '/Upload/{print $2}'`
hostby=`cat speed.log | awk -F ':' '/Hosted/{print $1}'`
latency=`cat speed.log | awk -F ':' '/Hosted/{print $2}'`
clear
echo "$hostby"
echo "延迟 : $latency"
echo "上传 : $upload"
echo "下载 : $download"
echo -ne "\n当前时间: "
echo $(date +%Y-%m-%d" "%H:%M:%S)
}
speed_test(){
temp=$(python speedtest.py --server $1 --share 2>&1)
is_down=$(echo "$temp" | grep 'Download')
if [[ ${is_down} ]]; then
local REDownload=$(echo "$temp" | awk -F ':' '/Download/{print $2}')
local reupload=$(echo "$temp" | awk -F ':' '/Upload/{print $2}')
local relatency=$(echo "$temp" | awk -F ':' '/Hosted/{print $2}')
temp=$(echo "$relatency" | awk -F '.' '{print $1}')
if [[ ${temp} -gt 1000 ]]; then
relatency=" 000.000 ms"
fi
local nodeName=$2
printf "${YELLOW}%-17s${GREEN}%-18s${RED}%-20s${SKYBLUE}%-12s${PLAIN}\n" "${nodeName}" "${reupload}" "${REDownload}" "${relatency}"
else
local cerror="ERROR"
fi
}
if [[ ${telecom} =~ ^[1-3]$ ]]; then
python speedtest.py --server ${num} --share 2>/dev/null | tee speed.log 2>/dev/null
is_down=$(cat speed.log | grep 'Download')
if [[ ${is_down} ]]; then
result
echo "测试到 ${cityName}${telecomName} 完成!"
rm -rf speedtest.py
rm -rf speed.log
else
echo -e "\n${RED}ERROR:${PLAIN} 当前节点不可用,请更换其他节点,或换个时间段再测试。"
fi
fi
if [[ ${telecom} == 4 ]]; then
python speedtest.py | tee speed.log
result
echo "本地测试完成!"
rm -rf speedtest.py
rm -rf speed.log
fi
if [[ ${telecom} == 5 ]]; then
echo ""
printf "%-14s%-18s%-20s%-12s\n" "Node Name" "Upload Speed" "Download Speed" "Latency"
start=$(date +%s)
speed_test '12637' '襄阳电信'
speed_test '5081' '深圳电信'
speed_test '3633' '上海电信'
speed_test '4624' '成都电信'
speed_test '5017' '沈阳联通'
speed_test '4863' '西安联通'
speed_test '5083' '上海联通'
speed_test '5726' '重庆联通'
speed_test '5292' '西安移动'
speed_test '4665' '上海移动'
speed_test '6715' '宁波移动'
speed_test '4575' '成都移动'
end=$(date +%s)
rm -rf speedtest.py
echo ""
time=$(( $end - $start ))
if [[ $time -gt 60 ]]; then
min=$(expr $time / 60)
sec=$(expr $time % 60)
echo -ne "花费时间:${min} 分 ${sec} 秒"
else
echo -ne "花费时间:${time} 秒"
fi
echo -ne "\n当前时间: "
echo $(date +%Y-%m-%d" "%H:%M:%S)
echo "全面测试完成!"
fi
[Repost] Linux server hacked and held to ransom: a 3-hour emergency counterattack!
http://www.yunweipai.com/archives/22780.html
# Check whether setuid files were added or modified by an attacker
find / -type f -perm 4000
# Verify these binaries against the package database for non-system replacements
rpm -Vf /bin/ls
rpm -Vf /usr/sbin/sshd
rpm -Vf /sbin/ifconfig
rpm -Vf /usr/sbin/lsof
# Check whether any ELF binaries on the system were replaced
# Run inside the web directory
grep -r "getRuntime" ./   # check for webshells
find . -type f -name "*.jsp" | xargs grep -i "getRuntime"      # runs commands when the page is invoked
find . -type f -name "*.jsp" | xargs grep -i "getHostAddress"  # returns an IP address string
find . -type f -name "*.jsp" | xargs grep -i "wscript.shell"   # WshShell objects can run programs, touch the registry, create shortcuts, access system folders, manage environment variables
find . -type f -name "*.jsp" | xargs grep -i "gethostbyname"   # gethostbyname() returns a hostent structure (name and address info) for a given hostname
find . -type f -name "*.jsp" | xargs grep -i "bash"            # privilege escalation via system commands
find . -type f -name "*.jsp" | xargs grep -i "jspspy"          # default name of the JspSpy webshell
find . -type f -name "*.jsp" | xargs grep -i "getParameter"
# Check the admin logs for unauthorized access
# (run these inside the middleware's log directory)
fgrep -R "admin_index.jsp" 20120702.log > log.txt
fgrep -R "and1=1" *.log > log.txt
fgrep -R "select " *.log > log.txt
fgrep -R "union " *.log > log.txt
fgrep -R "../../" *.log > log.txt
fgrep -R "Runtime" *.log > log.txt
fgrep -R "passwd" *.log > log.txt
# See whether matching records show up (traces of a shell attack)
fgrep -R "uname -a" *.log > log.txt
fgrep -R "id" *.log > log.txt
fgrep -R "ifconfig" *.log > log.txt
fgrep -R "ls -l" *.log > log.txt
# Run as root: check for unauthorized logins
cat /var/log/secure
tail -n 10 /var/log/secure
last
cat /var/log/wtmp
cat /var/log/sulog   # check for unauthorized su
cat /var/log/cron    # check that scheduled jobs look normal
tail -n 100 ~/.bash_history | more
# Check temp directories for leftovers from the intrusion
ls -la /tmp
ls -la /var/tmp
# Investigate any .c/.py/.sh files or binary ELF files found there.
vi /etc/motd revealed the intrusion note.
Shell history and the relevant access logs had been deleted; tracks were wiped.
Install chkrootkit to check for rootkits:
mkdir chrootkit
cd chrootkit/
wget ftp://ftp.pangeia.com.br/pub/seg/pac/chkrootkit.tar.gz
tar zxvf chkrootkit.tar.gz
cd chkrootkit-0.50/
ls
yum install -y glibc-static
make sense
./chkrootkit
[Repost] Notes on OVS (Open vSwitch), by 楊金龍
https://www.facebook.com/dominic16y/posts/1378435818935569
I went to an OVS (Open vSwitch) talk in Taichung today and got a lot out of it (well worth the train fare). Studying this on my own could have taken months without necessarily finding the right direction or the key points; in one afternoon the speaker covered every important question, such as the one everyone asks: if linux bridge works fine for you, why move to OVS at all?
Key points:
1. linux bridge performs well; with no special requirements there is no need to move to OVS.
2. linux bridge's limit: the kernel cannot fill a 10G link (roughly 7 Gb/s). If you are not on 10G networking, linux bridge works great.
3. OVS out of the box performs worse than linux bridge.
4. OVS needs DPDK added to fill a 10G link (roughly 15-20 Gb/s).
5. A single OVS machine buys you nothing; it does the same job as linux bridge.
6. OVS needs multiple machines plus a controller to show its real value.
7. Recommended controllers: ONOS, OVN.
8. OVS supports all L2 features (and static-route L3). Once it runs under a controller, STP is no longer needed; all links can be fully used, with no broadcast storms from loops.
srs rtmp forward (push) out || pull in push out
master 192.168.105.20 srs.conf
listen 1935;
max_connections 1000;
srs_log_tank console; #file; #console;
srs_log_file ./objs/srs.log;
daemon off; #on or delete this line
http_api {
enabled on;
listen 1985;
}
http_server {
enabled on;
listen 8080;
dir ./objs/nginx/html;
}
stats {
network 0;
disk sda sdb xvda xvdb;
}
vhost __defaultVhost__ {
ingest livestream {
enabled on;
input {
type stream;
url rtmp://xxx.ooo.xxx.ooo/live/nna1;
}
ffmpeg ./objs/ffmpeg; # if not built locally, install ffmpeg and symlink it here (ln -s)
engine {
enabled off;
output rtmp://127.0.0.1:[port]/live?vhost=[vhost]/nna2; # this pushes back to this server
}
}
forward 192.168.105.21:1935;
gop_cache off;
queue_length 10;
min_latency on;
mr {
enabled off;
}
mw_latency 100;
tcp_nodelay on;
}
slave 192.168.105.21 srs.conf
listen 1935;
max_connections 1000;
srs_log_tank file; #console;
srs_log_file ./objs/srs.log;
#daemon off;
#http_api {
# enabled on;
# listen 1985;
#}
#http_server {
# enabled on;
# listen 8080;
# dir ./objs/nginx/html;
#}
#stats {
# network 0;
# disk sda sdb xvda xvdb;
#}
vhost __defaultVhost__ {
}
==========finish===========for test======
obs push video to master rtmp://192.168.105.20/live/obs1
master player rtmp://192.168.105.20/live/obs1
slave player rtmp://192.168.105.21/live/obs1
***srs pull rtmp nna1 then push localhost nna2 ***
check rtmp://xxx.ooo.xxx.ooo/live/nna1 first
master player rtmp://192.168.105.20/live/nna2
slave player rtmp://192.168.105.21/live/nna2
PS:
1、To debug:
a. daemon off
b. srs_log_tank console;
then watch the console for error messages (some lines are informational, like ffmpeg restarting).
2、If the slave can't play, specify port :1935 explicitly in the URL.
============
go-oryx
linux usb boot windows
https://www.pendrivelinux.com/yumi-multiboot-usb-creator/
https://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/
[Repost] Using Let's Encrypt with Nginx on Ubuntu
https://blog.technologyofkevin.com/?p=591
see http://sueboy.blogspot.tw/2017/11/nginx-proxy-pass-best-practices.html
http://sueboy.blogspot.tw/2017/11/nginx-proxy-pass-best-practices-2-for.html
docker db Volumes
http://dockone.io/article/128
http://container42.com/2014/11/18/data-only-container-madness/
http://dockone.io/article/129
Volumes are created during docker run's preparation phase (create), while entrypoint.sh executes during the start phase
clustercontrol install finished
ssh-copy-id: better to run it as root
ssh-copy-id done
Reboot the clustercontrol server
======sysbench======
[sysbench]
select
sysbench --num-threads=128 --report-interval=3 --max-requests=0 --max-time=300 --test=/usr/share/sysbench/tests/include/oltp_legacy/select.lua --mysql-table-engine=innodb --oltp-table-size=5000000 --mysql-user=root --mysql-password="E<8r)^t-E<8r)^t-" --oltp-tables-count=1 --db-driver=mysql --mysql-host=192.168.xxx.xxx --mysql-port=3306 prepare
sysbench --num-threads=128 --report-interval=3 --max-requests=0 --max-time=300 --test=/usr/share/sysbench/tests/include/oltp_legacy/select.lua --mysql-table-engine=innodb --oltp-table-size=5000000 --mysql-user=root --mysql-password="E<8r)^t-E<8r)^t-" --oltp-tables-count=1 --db-driver=mysql --mysql-host=192.168.xxx.xxx --mysql-port=3306 run
Oltp read/write
* Generate data
sysbench --num-threads=128 --report-interval=3 --max-requests=0 --max-time=300 --test=/usr/share/sysbench/tests/include/oltp_legacy/oltp.lua --mysql-table-engine=innodb --oltp-table-size=5000000 --mysql-user=root --mysql-password="E<8r)^t-E<8r)^t-" --oltp-tables-count=1 --db-driver=mysql --mysql-host=192.168.xxx.xxx --mysql-port=3306 prepare
* Run
sysbench --num-threads=128 --report-interval=3 --max-requests=0 --max-time=300 --test=/usr/share/sysbench/tests/include/oltp_legacy/oltp.lua --mysql-table-engine=innodb --oltp-table-size=5000000 --mysql-user=root --mysql-password="E<8r)^t-E<8r)^t-" --oltp-tables-count=1 --db-driver=mysql --mysql-host=192.168.xxx.xxx --mysql-port=3306 run
* Clean up: change the final argument to cleanup
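The prepare/run/cleanup cycle shares all of its options, so a small wrapper avoids repeating them. Host, password and the script path are placeholders from the examples, and this sketch only prints the commands it would run:

```shell
#!/bin/sh
# Shared options from the commands above; host and password are placeholders
COMMON="--num-threads=128 --report-interval=3 --max-requests=0 --max-time=300 \
--test=/usr/share/sysbench/tests/include/oltp_legacy/oltp.lua \
--mysql-table-engine=innodb --oltp-table-size=5000000 \
--mysql-user=root --mysql-password=secret --oltp-tables-count=1 \
--db-driver=mysql --mysql-host=192.168.0.10 --mysql-port=3306"

# One full pass: create the test tables, run the workload, drop the tables
CMDS=""
for phase in prepare run cleanup; do
    CMDS="$CMDS
sysbench $COMMON $phase"
done
echo "$CMDS"    # drop the echo/variable and call sysbench directly to execute
```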
Operating System User clustercontrol
https://severalnines.com/docs/requirements.html#operating-system-user
firebase hosts cloudflare
ERR_TOO_MANY_REDIRECTS
Or
Error 526
Fix: in Cloudflare, change the SSL mode to Full or Full (strict)
just try both and see which works
go gae telegram bot telegrambot
https://blog.sean.taipei/2017/05/telegram-bot
https://github.com/cortinico/telebotgae
Use this token to access the HTTP API:
445067055:AAG8Lz1EL---------------------------------
.https://api.telegram.org/bot<Token>/<Method Name>
https://api.telegram.org/bot445067055:AAG8Lz1EL---------------------------------/getMe
https://api.telegram.org/bot445067055:AAG8Lz1EL---------------------------------/getUpdates
.https://api.telegram.org/bot<Token>/sendMessage?chat_id=<Chat_id>&text=<Message>
chat_id: after a user sends a message to the telegram bot, call getUpdates; the chat id appears in the response
https://api.telegram.org/bot445067055:AAG8Lz1EL---------------------------------/sendMessage?chat_id=265----------&text=Hello+World
.https://api.telegram.org/bot<Token>/setWebhook?url=<Url>
Use deleteWebhook to remove it
Use getWebhookInfo to inspect the current webhook
https://api.telegram.org/bot[API_KEY]/setWebhook?url=https://[PROJECT-ID].appspot.com
https://api.telegram.org/bot445067055:AAG8Lz1EL---------------------------------/setWebhook?url=https://xxx---bot-17------.appspot.com
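The URL pattern is the same for every Bot API method, which a tiny helper makes explicit. The token below is an obvious dummy, not a real one:

```shell
#!/bin/sh
# Build a Bot API URL: https://api.telegram.org/bot<token>/<method>
tg_url() {
    echo "https://api.telegram.org/bot$1/$2"
}

TOKEN="123456:DUMMY-TOKEN"             # dummy token
URL=$(tg_url "$TOKEN" "sendMessage")
echo "$URL"
# then e.g.: curl "$URL" -d chat_id=<chat_id> -d text="Hello World"
```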
Incapsula cdn ddos free plans
https://www.incapsula.com/pricing-and-plans.html
I only recently learned that Incapsula is quite well known, and there is a free plan
The free quota is:
bandwidth 5Mb/s
up to 2TB per month
ip firewall filiter raw input forward
https://www.mobile01.com/topicdetail.php?f=110&t=3205444&p=574#5738
ip firewall filter acts after NAT
ip firewall raw acts before NAT
After a packet comes in through NAT it takes one of two paths: 1. for the router's own use, 2. for computers on the LAN.
To target the router itself, use chain=input.
To target the LAN computers, always use chain=forward.
Virus protection mostly concerns the desktops on the LAN, so filter in firewall filter with chain=forward.
The RouterOS smb server is disabled by default, so if you haven't enabled it there is little need to add a chain=input block rule.
/ip firewall raw:
"Before NAT" means the packet is judged hostile before it even enters the router.
So a rule here drops the packet no matter whether it was ultimately for the router or for a LAN computer.
input/forward happen after NAT, so those two chains don't appear in firewall raw.
prerouting happens before NAT, so that chain doesn't appear in firewall filter.
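A sketch of how the two tables divide the work, in RouterOS CLI (the address-list name and port are placeholders):

```text
# Before NAT: drop known-bad sources, whether aimed at the router or the LAN
/ip firewall raw add chain=prerouting src-address-list=blacklist action=drop
# After NAT: protect the router itself ...
/ip firewall filter add chain=input protocol=tcp dst-port=445 action=drop
# ... and the desktops behind it
/ip firewall filter add chain=forward protocol=tcp dst-port=445 action=drop
```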
go gae mux static html css js images
1、Install the official gcloud tool
2、Install python 2.7
3、Go to the SDK directory ..........\Local\Google\Cloud SDK\google-cloud-sdk
3-1、Set PATH // run install.bat, it explains this
source
Run the commands below in the directory of your existing Go project; if a package is reported as missing, just go get github.com/xxxx
dev_appserver.py app.yaml // careful: this local test can fail because of init(); the flexible environment doesn't hit this, but it needs a different app.yaml
gcloud app deploy
gcloud app browse
======
gcloud init // reset project id
Note: GAE flexible environment vs standard environment
https://cloud.google.com/appengine/docs/the-appengine-environments
So use the standard environment
====================
https://www.slideshare.net/takuyaueda967/gaegoweb
elk Elasticsearch Logstash and Kibana fortigate ubuntu
https://www.rosehosting.com/blog/install-and-configure-the-elk-stack-on-ubuntu-16-04/
https://www.elastic.co/guide/en/logstash/current/configuration.html
https://dotblogs.com.tw/supershowwei/2016/05/25/185741
After installation:
1、Put your logstash confs in /etc/logstash/conf.d/
2、On ubuntu logstash may fail to listen on the port, so nano /etc/logstash/startup.options
LS_USER=root
3、Run /usr/share/logstash/bin/system-install to regenerate the service config with the new LS_USER
Note:
mutate {
  add_field => {
    "logTime" => "%{+YYYY-MM-dd} %{time}"
  }
}
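In context, that mutate sits inside a filter block. A sketch of a full pipeline; the syslog input, the kv parse and the "time" field are my assumptions about the FortiGate setup, not values from the original config:

```text
input {
  udp { port => 514 type => "fortigate" }    # assumed syslog input
}
filter {
  kv { }                                      # FortiGate logs are key=value pairs
  mutate {
    add_field => {
      "logTime" => "%{+YYYY-MM-dd} %{time}"   # "time" is assumed to come from the kv fields
    }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```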
aws linux partition resize
http://docs.aws.amazon.com/zh_cn/AWSEC2/latest/UserGuide/ebs-expand-volume.html#recognize-expanded-volume-linux
1、Resize the volume in the AWS EC2 console
2、Log in to the EC2 instance
2.1、lsblk: check the disk size; did the resize take effect?
2.2、resize2fs /dev/xvda1
If 2.2 completes but the resize still failed, follow 2.3
2.3、parted /dev/xvda
2.3.1 parted> print all -> note the disk's real size
2.3.2 parted> resizepart
2.3.3 parted> 1
2.3.4 parted End?> enter the size from 2.3.1
2.3.5 parted> quit
Then run 2.2 again; if it works, check with 2.1 again.
PS:
2.3.4
End?> -1
-1 means use the maximum available disk space
go control docker
https://blog.kowalczyk.info/article/w4re/using-mysql-in-docker-for-local-testing-in-go.html
https://github.com/kjk/go-cookbook/tree/master/start-mysql-in-docker-go
[Repost] Who ate my Linux memory?
http://colobu.com/2017/03/07/what-is-in-linux-cached/#more
slabtop -s c
pcstat
======================
https://github.com/tobert/pcstat
#!/bin/bash
# you have to install pcstat
if [ ! -f /data0/brokerproxy/pcstat ]
then
    echo "You haven't installed pcstat yet"
    echo "run \"go get github.com/tobert/pcstat\" to install"
    exit
fi

# find the top 10 processes' cached files
ps -e -o pid,rss | sort -nk2 -r | head -10 | awk '{print $1}' > /tmp/cache.pids
# find all the processes' cached files
#ps -e -o pid > /tmp/cache.pids

if [ -f /tmp/cache.files ]
then
    echo "the cache.files is exist, removing now "
    rm -f /tmp/cache.files
fi

while read line
do
    lsof -p $line 2>/dev/null | awk '{print $9}' >> /tmp/cache.files
done < /tmp/cache.pids

if [ -f /tmp/cache.pcstat ]
then
    echo "the cache.pcstat is exist, removing now"
    rm -f /tmp/cache.pcstat
fi

for i in `cat /tmp/cache.files`
do
    if [ -f $i ]
    then
        echo $i >> /tmp/cache.pcstat
    fi
done

/data0/brokerproxy/pcstat `cat /tmp/cache.pcstat`
rm -f /tmp/cache.{pids,files,pcstat}