EExcel 丞燕快速查詢2
https://sandk.ffbizs.com/

docker iptables part 2

http://sueboy.blogspot.com/2018/11/docker-iptables_12.html


sudo iptables -vnL


======NAT=====

sudo iptables -t nat -vnL


Show the rule line numbers:
sudo iptables -t nat -L POSTROUTING  -n --line-numbers


-D deletes a rule by its line number:
sudo iptables -t nat -D POSTROUTING x     # x is the line number to delete


Confirm the Docker subnet and add the MASQUERADE rule back:
sudo iptables -t nat -A POSTROUTING -s 172.19.0.0/16 ! -o docker0 -j MASQUERADE
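
If you are not sure which subnet to put in the rule, here is a quick check (a sketch; "bridge" is the default network name, and a docker-compose project normally creates its own network, so substitute yours):

docker network ls
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'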

docker volume or created directory gets access denied

https://www.centos.bz/2018/02/%E5%AE%9A%E5%88%B6entrypoint%E8%87%AA%E5%8A%A8%E4%BF%AE%E6%94%B9docker%E4%B8%ADvolume%E7%9A%84%E6%9D%83%E9%99%90/


..........

Another way


1、Make a shell script and run it before running docker-compose.


mkdir ./data
sudo chown docker ./data
#sudo chown user01 ./data
sudo chmod g+rwx ./data
sudo chgrp 994 ./data

Now the directory exists with the correct access. Whether the group is 994, 50, or 1000 depends on your system; check /etc/passwd or /etc/group.
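
Instead of hard-coding 994, you can look the group id up (a hedged example; the group may be called docker or dockerroot depending on how Docker was installed):

getent group docker
# e.g. docker:x:994:user01   ->  use 994 as the chgrp value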



2、But sometimes Docker was installed on the OS in a way you don't know about;
there may be a user dockerroot, a group dockerroot, or only a group docker.

You may have already run   sudo usermod -a -G docker $(whoami)   so you are definitely in the docker group.


A. In this case, chmod 777 ./data so that docker-compose can run.

B. ls -al ./data   to see which user created the directory or files.

C. Take that user, e.g. "user01", and run chown user01 ./data.

D. Run docker-compose again; now the access permissions are correct.

E. chmod 774 ./data   (a combined sketch follows this list)
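
Putting steps A-E together as a rough shell sketch (user01 is only the example name from step C; use whatever owner ls -al actually shows):

chmod 777 ./data                   # A: open permissions so docker-compose can run
docker-compose up -d               # let the container create its files
ls -al ./data                      # B: see which user owns the created files
sudo chown user01 ./data           # C: user01 is an example, use the owner shown above
docker-compose up -d               # D: access permissions are now correct
chmod 774 ./data                   # E: tighten permissions again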

[Repost] Redux state management: pain points, analysis, and improvement

https://segmentfault.com/a/1190000009540007


Hidden risks in the WebApp scenario

Take an example I mentioned in another blog post: a business flow has three pages A/B/C, which users normally visit in order, submitting some information at each step. If that information is stored in the Store, then without a refresh page C can directly read the data that pages A/B put into the Store; but once the user refreshes page C, that data no longer exists, and using it will very likely cause the program to misbehave.




It alleviates the problems above in two ways:


  1. Abstraction problem: each Page creates its own store, which solves the one-to-many problem of the state tree. One state tree (store) corresponds to one component tree (page), so when designing a page's store you only need to serve the current page, without considering other pages. Of course, since Reducers still have to be declared independently of the component tree, the abstraction problem is not completely cured; abstractions built around domain data and App state remain more natural than ones built around UI state. It merely gives you more freedom: you no longer have to worry about how to design a finite state tree to satisfy a nearly infinite UI state.
  2. Refresh/share risk: the Store created by each Page is a completely different object that exists only within the current Page's lifecycle, and other Pages cannot possibly access it, which eliminates cross-page store access at the root. This means that any data you can read from the store is guaranteed to be reliable.



2017.08.27 update

In practice this approach still ran into some problems, the most important of which is the problem of cross-page actions after the store has been replaced.




To deal with this problem, I considered several options:


  1. Go back to a single application-wide store: the pageReducer behavior is achieved through store.replaceReducer. Per-page stores were originally created to isolate state completely, but after replaceReducer, if two pages share the same reducer its state is not reset, which is a concern. A side effect is also giving up the ability to customize middleware per page.
  2. Build a queue for these cross-page actions: the previous page pushes the action into the queue, and the next page pops it and executes it. This only treats the symptom; it solves the current case but is rather powerless against similar problems such as websockets.
  3. Customize the thunk middleware and obtain the latest store through a closure.



After weighing the generality of each option, how easy it is to understand, and other factors, the first one has been chosen for now.

kibana geo_point How to Part 3

Now check again....


1、template_filebeat.json

It only needs to contain:


{
  "index_patterns": ["filebeat*"],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "doc": {
      "properties": {
        "geoip.location": {
          "type": "geo_point"
        },
        "geoip.coordinates": {
          "type": "geo_point"
        }
      }
    }
  }
}


Important: here "location" alone is wrong; it must be "geoip.location".

But sometimes this still doesn't work: because of the way I insert the index-pattern, there is no geoip.location field, and it always gets overwritten by
geoip.location.lat and geoip.location.lon.

See section 2.

2、index-pattern  index-pattern-export.json

One way is to just put

{\"name\":\"geoip.location\",\"type\":\"geo_point\",\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true}

and remove geoip.location.lat and geoip.location.lon.


After putting the template: deleting the index in Kibana's Index Patterns page is of NO use. It must be deleted in Dev Tools:

DELETE filebeat-6.4.2-2018.11.19

GET _cat/indices?v
GET _cat/indices?v&s=index

Check whether it still exists. Then import the data again; the index is recreated and the template is applied.
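
A quick way to confirm the template actually took effect after the index is recreated (a sketch in Dev Tools; the index name is the example used above):

GET _template/template_filebeat
GET filebeat-6.4.2-2018.11.19/_mapping/field/geoip.location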








elk ingest plugin pipeline


Filebeat + Elasticsearch + Kibana: a lightweight log collection and display system

https://wzyboy.im/post/1111.html?utm_source=tuicool&utm_medium=referral



It mentions that

beat -> logstash -> elk

can become

beat -> elk ingest plugin ( Elasticsearch Ingest Node )


Elasticsearch Ingest Node is a feature added in Elasticsearch 5.0. Before Ingest Node appeared, people usually placed a Logstash Indexer in front of ES to preprocess data. With Ingest Node, most of the Logstash Indexer's functionality can be replaced by it; processors familiar to Logstash users, such as grok and geoip, are also available in Ingest Node. For ES users with small data volumes, saving the cost of a Logstash machine is naturally pleasing; for ES users with large data volumes, Ingest Node, like the Master Node and Data Node, can be given dedicated nodes and scaled horizontally, so there is no need to worry about performance bottlenecks.

Ingest Node currently supports dozens of processors, among which the script processor offers the greatest flexibility.

Similar to /_template, the Ingest API lives under /_ingest. After the user submits a pipeline definition, Beats can be configured to use that pipeline as the data preprocessor.





FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.2

These plugins are already built in:
https://www.elastic.co/guide/en/elasticsearch/plugins/6.5/ingest-geoip.html
https://www.elastic.co/guide/en/elasticsearch/plugins/6.5/ingest-user-agent.html





===============

.filebeat

filebeat.yml

Add something like this example:


output.elasticsearch:

  hosts: ["http://localhost:9200/"]

  pipelines:
    - pipeline: nginx.access
      when.equals:
        fields.type: nginx.access
    - pipeline: nginx.error
      when.equals:
        fields.type: nginx.error

OK, use the method below to create the pipelines.


.pipeline

https://www.elastic.co/guide/en/elasticsearch/reference/current/simulate-pipeline-api.html
https://qbox.io/blog/indexing-elastic-stack-5-0-ingest-apis
https://dev.classmethod.jp/server-side/elasticsearch/elasticsearch-ingest-node/
https://qbox.io/blog/how-to-index-geographical-location-of-ip-addresses-to-elasticsearch-5-0-1

Get a pipeline

GET _ingest/pipeline/geoippipeline


write a pipeline

PUT _ingest/pipeline/geoippipeline
{
  "description" : "Add geoip information to the given IP address",
  "processors": [
    {
      "geoip" :  {
        "field" : "ip",
        "ignore_missing": true
      }
    },
    {
      "geoip" :  {
        "field" : "src_ip",
        "ignore_missing": true
      }
    },
    {
      "geoip" :  {
        "field" : "clientip",
        "ignore_missing": true
      }
    },
    {
      "set" : {
        "field" : "location",
        "value" : "{{geoip.location.lon}}, {{geoip.location.lat}}"
      }
    }
  ]
}


Actually run the pipeline with test data to check that it is OK:

POST _ingest/pipeline/geoippipeline/_simulate
{
  "docs":[
    {
      "_source": {
        "ip": "8.8.0.0",
        "src_ip": "8.8.0.0",
        "clientip": "8.8.0.0"
      }
    }
  ]
}



Developer test


POST _ingest/pipeline/_simulate
{
  "pipeline": {
  "description" : "parse multiple patterns",
  "processors": [
    {
      "geoip" :  {
        "field" : "ip",
        "ignore_missing": true
      }
    },
    {
      "geoip" :  {
        "field" : "src_ip",
        "ignore_missing": true
      }
    },
    {
      "geoip" :  {
        "field" : "clientip",
        "ignore_missing": true
      }
    },
    {
      "set" : {
        "field" : "location",
        "value" : "{{geoip.location.lon}}, {{geoip.location.lat}}"
      }
    }
  ]
},
"docs":[
  {
    "_source": {
      "ip": "8.8.0.0",
      "src_ip": "8.8.0.0",
      "clientip": "8.8.0.0"
    }
  }
  ]
}
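
Besides _simulate, you can index a real test document through the pipeline and read it back to confirm the geoip fields are written (a sketch; the index name geoip-test is made up):

POST geoip-test/doc?pipeline=geoippipeline
{
  "clientip": "8.8.0.0"
}

GET geoip-test/_search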






logstash kibana ssh log

1、filebeat ships /var/log/secure

2、logstash filter:



filter {
  grok {
    #type => "syslog"
    match => ["message", "%{SYSLOGBASE} Failed password for (invalid user |)%{USERNAME:username} from %{IP:src_ip} port %{BASE10NUM:port} ssh2"]
    add_tag => "ssh_brute_force_attack"
  }
  grok {
    #type => "syslog"
    match => ["message", "%{SYSLOGBASE} Accepted password for %{USERNAME:username} from %{IP:src_ip} port %{BASE10NUM:port} ssh2"]
    add_tag => "ssh_sucessful_login"
  }

  geoip {
    source => "src_ip"
    target => "geoip"
    add_tag => [ "ssh-geoip" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    add_field => [ "geoipflag", "true" ]
  }

}
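
One thing commonly paired with a geoip block like the one above (it is not in the original filter, so treat it as an optional, assumed addition): converting the coordinates strings to float so that the float/geo_point mapping in the template can apply.

  mutate {
    convert => { "[geoip][coordinates]" => "float" }
  }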

kibana geo_point How to Part 2

Steps:

.Change the Kibana & elk order. Now elk imports template_filebeat first, then waits for logstash to push logs into elk, so elk gets indices such as filebeat-6.4.2-2018.11.19 and filebeat-6.4.2-2018.11.20.
Then Kibana imports the index-pattern and sets it as default.



#!/bin/bash

echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/main' >> /etc/apk/repositories
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/community' >> /etc/apk/repositories
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/testing' >> /etc/apk/repositories
apk --no-cache upgrade
apk --no-cache add curl

echo "=====Elk config ========"
until echo | nc -z -v elasticsearch 9200; do
    echo "Waiting Elk Kibana to start..."
    sleep 2
done

code="400"
until [ "$code" != "400" ]; do
    echo "=====Elk importing mappings json ======="
    curl -v -XPUT elasticsearch:9200/_template/template_filebeat -H 'Content-Type: application/json' -d @/usr/share/elkconfig/config/template_filebeat.json 2>/dev/null | head -n 1 | cut -d ':' -f2|cut -d ',' -f1 > code.txt
    code=`cat code.txt`
    sleep 2
done


#reload index for geo_point
echo "=====Get kibana idnex lists ======="
indexlists=()
while [ ${#indexlists[@]} -eq 0 ]
do
    sleep 2
    indexlists=($(curl -s elasticsearch:9200/_aliases?pretty=true | awk -F\" '!/aliases/ && $2 != "" {print $2}' | grep filebeat-))
done

sleep 10


#========kibana=========
id="f1836c20-e880-11e8-8d66-7d7b4c3a5906"

echo "=====Kibana default index-pattern ========"
until echo | nc -z -v kibana 5601; do
    echo "Waiting for Kibana to start..."
    sleep 2
done

code="400"
until [ "$code" != "400" ]; do
    echo "=====kibana importing json ======="
    curl -v -XPOST kibana:5601/api/kibana/dashboards/import?force=true -H "kbn-xsrf:true" -H "Content-type:application/json" -d @/usr/share/elkconfig/config/index-pattern-export.json 2>/dev/null | head -n 1 | cut -d ':' -f2|cut -d ',' -f1 > code.txt
    code=`cat code.txt`
    sleep 2
done

code="400"
until [ "$code" != "400" ]; do
    curl -v -XPOST kibana:5601/api/kibana/settings/defaultIndex -H "kbn-xsrf:true"  -H "Content-Type: application/json" -d "{\"value\": \"$id\"}"  2>/dev/null | head -n 1 | cut -d ':' -f2|cut -d ',' -f1 > code.txt
    code=`cat code.txt`
    sleep 2
done

tail -f /dev/null




template_filebeat template_filebeat.json

* template_filebeat.json  is from

GET _cat/indices?v
You can see some indices, for example the one below. (You can also sort them: GET _cat/indices?v&s=index)



GET filebeat-6.4.2-2018.11.19



OK, replace this template's mappings with your own mappings:




{
  "index_patterns": ["filebeat*"],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "doc": {
      "properties": {
        "@timestamp": {
          "type": "date"
        },

  ...

}


Only replace the mappings. The official website has an example:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html

And change

"coordinates": {
  "type": "float"      =>  "type": "geo_point"
},

Save the file as template_filebeat.json.

Usually a new docker elk logstash stack already has geoip; add_field (as in the picture) and mutate add some items. Here the type is changed via the template.



So this step means you must first let logstash send logs to elk, and use the resulting fields to build the template.



index-pattern  index-pattern-export.json

See this url to learn how to do it:
https://sueboy.blogspot.com/2018/11/kibana-default-index-pattern.html

Important: after doing this you must refresh, then export the json so that the file is correct.




Now you can 100% see the map.

[Repost] Is there a cheap 4G Wi-Fi sharing device?

https://www.mobile01.com/topicdetail.php?f=18&t=5638571&p=1





E8372h-153


https://shopee.tw/-%E7%8F%BE%E8%B2%A8-%E5%8F%AF%E5%9B%9E%E5%BE%A9%E5%87%BA%E5%BB%A0%E5%80%BC-%E4%BF%9D%E5%9B%BA%E4%B8%80%E5%B9%B4%EF%BC%BD%E8%8F%AF%E7%82%BA-E8372h-153-4G-Wifi%E5%88%86%E4%BA%AB-E8372-E8372-153-i.24086409.308705863

[Failed again!!] kibana geo_point How to

Fxxx kibana elk. Trying again, but still can't get geo_point....
reindex seems to be no use.

These are no use:
POST /_refresh
POST /_flush/synced
POST /_cache/clear

Only the approach described below makes it apply.


Waste of time. Fxxx system.
..................

Very bad documentation, very bad version changes............ Everything is BAD with elk kibana.



1、Every time you see these "PUT, GET or DELETE" commands, where do you use them???
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-get.html


Use them in Kibana (Dev Tools).



AND the other question is: what about curl?
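
The two forms are equivalent; a small illustration (localhost:9200 is an assumption, use your Elasticsearch address):

# In Kibana Dev Tools:
GET _cat/indices?v

# The same request with curl:
curl -s 'http://localhost:9200/_cat/indices?v'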




2、Please read the 6.5 documentation, not an old version.


You may see many documents on the Internet; check their version first.



3、Before geo_point

Keep these commands handy (or search the Internet to learn what they mean):

GET _cat/
GET _cat/indices?v
GET _cat/indices?v&s=index

GET /_settings

GET filebeat*

GET /_template

PUT _template/template_filebeat

POST _reindex


=================Begin================

First, you must already have a default index.




If you want to automate this, see  http://sueboy.blogspot.com/2018/11/kibana-default-index-pattern.html


Second



#!/bin/bash

echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/main' >> /etc/apk/repositories
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/community' >> /etc/apk/repositories
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/testing' >> /etc/apk/repositories
apk --no-cache upgrade
apk --no-cache add curl

echo "=====Elk config ========"
until echo | nc -z -v elasticsearch 9200; do
    echo "Waiting Elk Kibana to start..."
    sleep 2
done

code="400"
until [ "$code" != "400" ]; do
    echo "=====Elk importing mappings json ======="
    curl -v -XPUT elasticsearch:9200/_template/template_filebeat -H 'Content-Type: application/json' -d @/usr/share/elkconfig/config/template_filebeat.json 2>/dev/null | head -n 1 | cut -d ':' -f2|cut -d ',' -f1 > code.txt
    code=`cat code.txt`
    sleep 2
done

#reload index for geo_point
echo "=====Get kibana idnex lists ======="
indexlists=()
while [ ${#indexlists[@]} -eq 0 ]
do
    sleep 2
    indexlists=($(curl -s elasticsearch:9200/_aliases?pretty=true | awk -F\" '!/aliases/ && $2 != "" {print $2}' | grep filebeat-))
done

for i in "${indexlists[@]}"
do
    echo "=====reindex filebeat for geo_point ======="
    curl -v -XPOST "http://elasticsearch:9200/_reindex" -H 'Content-Type: application/json' -d'{ "source": { "index": "'$i'" }, "dest": { "index": "'$i-reindex'" } }'
done
    
#curl -XDELETE "http://elasticsearch:9200/filebeat-*"
#curl -XPUT "http://elasticsearch:9200/filebeat"

tail -f /dev/null




* template_filebeat.json  is from

GET _cat/indices?v
You can see some indices, for example:



GET filebeat-6.4.2-2018.11.19



OK, replace this template's mappings with your own mappings:




{
  "index_patterns": ["filebeat*"],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "doc": {
      "properties": {
        "@timestamp": {
          "type": "date"
        },

  ...

}


Only replace the mappings. The official website has an example:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html

And change

"coordinates": {
  "type": "float"      =>  "type": "geo_point"
},

Save the file as template_filebeat.json.

Usually a new docker elk logstash stack already has geoip; add_field (as in the picture) and mutate add some items. Here the type is changed via the template.



Back in the shell, move to  =====Get kibana index lists=====

This gets the list of indices that already exist and are in use, to be used in the next step.

Then reindex


Why do this? Because reindexing lets the geo_point be rebuilt. Inside the existing index, the coordinates type is already float.




If you try to change the type in place, you usually get an error, or it may appear to succeed, but the success is fake.





So the only thing that works is reindex.
https://medium.com/@sami.jan/fundamentals-of-elasticsearch-cbb273160f60
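
The reindex that the shell script does with curl, written as a Dev Tools request (a sketch; the index name is the earlier example):

POST _reindex
{
  "source": { "index": "filebeat-6.4.2-2018.11.19" },
  "dest":   { "index": "filebeat-6.4.2-2018.11.19-reindex" }
}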


I think people using docker elk logstash kibana want something they can use quickly. The config should have sensible defaults, and config changes should only use what the docker image offers. So if the docker image does not offer it and you do not want to change the docker image, you can only use the API. But the API cannot do everything that the config can.

All steps

1、elk put template for geo_point

"coordinates": {
   "type": "geo_point"
},

2、Get the indices already in use

3、reindex  a -> a_reindex

4、Visualize -> Create a visualization -> Coordinate Map -> choose the filter "filebeat-*" (your name may differ, depending on the default index)

-> Buckets -> Geo Coordinates -> Aggregation -> Geohash -> Field -> geoip.coordinates (geo_point) -> RUN



Now you can 100% see the map.



[Repost] Smart investors read the financial reports and the earnings calls; to me, financial reports and earnings calls come first, because they are the results given by professionals, and such results are highly credible.

https://www.mobile01.com/topicdetail.php?f=291&t=5107288&p=1039#10381


Smart investors all read the financial reports and listen to the earnings calls. Would what ordinary forum users say be more accurate than the earnings calls and the financial reports? Personally I don't believe so, so when I track a stock, the financial reports and earnings calls come first in my mind. They are the results given by professionals, and such results are highly credible.


docker alpine

docker & docker-compose: a pile of pitfalls




FROM alpine

RUN apk --no-cache upgrade
RUN apk update && \
    apk add bash

docker-compose always writes files as root.

Use chown 1000 xxxooo

xxxooo is the file name
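
The 1000 is usually the first non-root user's UID on the host; a quick check (your UID may differ):

id -u $(whoami)    # often prints 1000; use whatever this prints in the chown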






  logtest:
    build:
      context: logtest/
    volumes:
      - ./logtest/logs:./logs:rw
    networks:
      - elk
    command: |
      /bin/sh -c '/bin/sh -s << EOF
      echo "Start filebeat...."
      filebeat run -c ./filebeat.yml -v &
      sleep 2
      while [ ! -f ./logs/filebeat ]
      do
        sleep 2
      done
      chown 1000 ./logs/filebeat
      tail -f /dev/null
      EOF'


[Repost] [Pitfall] ELK 6.0 has removed the filebeat document_type setting



http://blog.51cto.com/kexiaoke/2092029




The solution is to add a fields section in filebeat; service : GameStatis is entirely self-defined. After defining it, use a Logstash if condition: if [fields][service] == "GameStatis".
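
A minimal sketch of what the post describes, assuming a filebeat 6.x input and a matching Logstash conditional; only service : GameStatis comes from the post, the log path and the rest of the layout are assumptions.

# filebeat.yml (older 6.x uses filebeat.prospectors instead of filebeat.inputs)
filebeat.inputs:
- type: log
  paths:
    - /var/log/gamestatis/*.log   # hypothetical path
  fields:
    service: GameStatis

# logstash filter
filter {
  if [fields][service] == "GameStatis" {
    # GameStatis-specific grok / processing goes here
  }
}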