https://www.mobile01.com/topicdetail.php?f=291&t=5107288&p=1103#11022
So rather than trying to understand the principle behind Merrill Lynch's algorithm, you would do better to observe Merrill's cost basis as it moves in and out of Acer. I remember tracking this before, but it was long ago and most people have forgotten. The point of researching, observing, and recording is to find out who calls the shots in a given stock. If you are a small player, you have to figure out what the big players in the stocks you care about are thinking: what they believe, and which financial reports and news they act on. Many people like to apply one single method to every stock; as I have said, that is useless. Only by finding out what the winners in a given stock are thinking, and following their lead, does that method become the rule.
https://www.mobile01.com/topicdetail.php?f=291&t=5107288&p=1103#11024
Answer: After all this time on this board, you still don't get this? Anyone can use a different viewpoint to estimate a fair price for a stock; the point is whether that person has enough clout. Say Merrill Lynch has bought 200,000 lots over the past two or three years, and today it uses a net-cash comparison to value the stock at 16. If nobody believes it, Merrill can dump those 200,000 lots until the price reaches 16, and then you will marvel at how accurate Merrill's estimate was. If at that point Morgan Stanley Taiwan thinks Acer's book value is 19, that it earns 1 a year, and is willing to pay a premium of three years' earnings, so it is worth 22, then Merrill and Morgan Stanley Taiwan get to see who is the bigger player. Merrill dumps 10,000 lots, Morgan Stanley Taiwan absorbs 10,000, and the price does not fall. Merrill dumps another 50,000, Morgan Stanley Taiwan absorbs another 50,000, and the price still does not fall. Merrill goes all out with the full 200,000 lots; Morgan Stanley Taiwan is spooked and does not have that much capital on hand. Merrill wins, and from then on Merrill calls the shots.
Operate Vault with Golang
https://christina04.hatenablog.com/entry/vault-login-golang
Get a token with username/password
https://www.vaultproject.io/docs/auth/userpass.html
Set the token
https://www.vaultproject.io/docs/auth/token.html
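The two pages above describe the HTTP endpoints. A minimal Go sketch of the same flow with the official github.com/hashicorp/vault/api client might look like this; the address, the user name "demo", and the password are placeholder assumptions:

package main

import (
    "fmt"
    "log"

    vault "github.com/hashicorp/vault/api"
)

func main() {
    // Placeholder address; point this at your Vault server.
    cfg := vault.DefaultConfig()
    cfg.Address = "http://127.0.0.1:8200"

    client, err := vault.NewClient(cfg)
    if err != nil {
        log.Fatal(err)
    }

    // userpass login: write the password to auth/userpass/login/<username>.
    secret, err := client.Logical().Write(
        "auth/userpass/login/demo", // "demo" is a placeholder user
        map[string]interface{}{"password": "demo-password"},
    )
    if err != nil {
        log.Fatal(err)
    }

    // Set the returned client token on the client for all later requests.
    client.SetToken(secret.Auth.ClientToken)
    fmt.Println("token:", secret.Auth.ClientToken)
}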
elk kibana search geth ethereum
https://blog.csdn.net/qq_38486203/article/details/80817037
Search minedNumber
Get minedNumber
Search minedNumber
GET /filebeat-6.*-geth*/_search?q=geth_ip:xxx.xxx.xxx.xxx
{
  "_source": ["name", "minedNumber", "gethdate"],
  "sort": [
    {
      "gethdate": {
        "order": "desc"
      }
    }
  ],
  "from": 1,
  "size": 1
}
Note: from is zero-based, so from: 1 returns the second-newest document; use from: 0 for the newest.
Get minedNumber
curl -XGET "http://xxx.xxx.xxx.xxx:9200/filebeat-6.*-geth*/_search?q=geth_ip:xxx.xxx.xxx.xxx" -H 'Content-Type: application/json' -d'
{
  "_source": ["name", "minedNumber", "gethdate"],
  "sort": [
    {
      "gethdate": {
        "order": "desc"
      }
    }
  ],
  "from": 1,
  "size": 1
}' | jq ".hits.hits[]._source.minedNumber"
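For scripting the same lookup without curl and jq, here is a minimal Go sketch under the same assumptions (placeholder host and geth_ip, same query body as above); it sends the search over HTTP and decodes minedNumber from the hits:

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "strings"
)

func main() {
    // Same query body as the Dev Tools / curl examples above.
    body := `{
      "_source": ["name", "minedNumber", "gethdate"],
      "sort": [{ "gethdate": { "order": "desc" } }],
      "from": 1,
      "size": 1
    }`

    // Placeholder host and geth_ip, as in the curl example.
    url := "http://xxx.xxx.xxx.xxx:9200/filebeat-6.*-geth*/_search?q=geth_ip:xxx.xxx.xxx.xxx"
    req, err := http.NewRequest(http.MethodGet, url, strings.NewReader(body))
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // Decode only the part of the response we care about.
    var r struct {
        Hits struct {
            Hits []struct {
                Source struct {
                    MinedNumber json.RawMessage `json:"minedNumber"`
                } `json:"_source"`
            } `json:"hits"`
        } `json:"hits"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
        log.Fatal(err)
    }
    for _, h := range r.Hits.Hits {
        fmt.Printf("minedNumber: %s\n", h.Source.MinedNumber)
    }
}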
ethereum-etl ethereumetl docker part2 use .env
docker-compose.yml
version: '3.3'
services:
  ethereum_etl:
    build:
      context: .
    env_file: .env
    volumes:
      - /var/log/hardblue/etl/:/ethereum-etl/output:rw
      #- /root/go/src/github.com/ethereum/go-ethereum/build/bin/data:/ethereum-etl/ipc
    #restart: unless-stopped
    networks:
      - etl
networks:
  etl:
    driver: bridge
.env
STARTBLOCK=00000000
# ENDBLOCK is also read by the Dockerfile CMD below
ENDBLOCK=99999999
Dockerfile
FROM python:3.6-alpine
MAINTAINER Eric Lim
ENV PROJECT_DIR=ethereum-etl
# Fetch the develop branch of ethereum-etl and unpack it.
RUN apk add unzip
RUN wget https://github.com/blockchain-etl/ethereum-etl/archive/develop.zip \
    && unzip develop.zip && rm develop.zip
RUN mv ethereum-etl-develop /$PROJECT_DIR
WORKDIR /$PROJECT_DIR
RUN apk add --no-cache gcc musl-dev # C toolchain for native dependencies
RUN pip install --upgrade pip && pip install -e /$PROJECT_DIR/
#CMD ["export_all", "-s", "01990000", "-e", "99999999", "-p", "http://xxx.xxx.xxx.xxx:8545", "-o", "output"]
#CMD ["sh","-c", "echo startblock=$STARTBLOCK endblock=$ENDBLOCK"]
# sh -c expands $STARTBLOCK and $ENDBLOCK from the .env file at run time.
CMD ["sh","-c","python ethereumetl export_all -s $STARTBLOCK -e $ENDBLOCK -p http://xxx.xxx.xxx.xxx:8545 -o output"]
crontab -e
#!/bin/sh
# Wrapper script invoked from cron; cd to the directory that holds docker-compose.yml.
cd "/docker-compose directory/ethereum_etl"
docker-compose up
PS: Run docker-compose build first. Then use docker-compose up, not docker-compose start: start only restarts the existing container, so it does not pick up STARTBLOCK from .env.
kibana geo_point How to Part 6
Kibana Dev Tools
GET _cat
GET _cat/indices?v
GET _cat/indices?v&s=index
GET _cat/segments?v
GET /_settings
GET /_stats
GET /_template
GET _cluster/health
GET filebeat-6.5.1-2019.01.01
POST filebeat-6.5.1-2019.01.01
PUT filebeat-6.5.1-2019.01.01
DELETE filebeat-6.5.1-2019.01.01
GET filebeat-6.5.1-2019.01.*
POST filebeat-6.5.1-2019.01.*
PUT filebeat-6.5.1-2019.01.*
DELETE filebeat-6.5.1-2019.01.*
GET filebeat-6.5.1-2019.01.01/_stats
GET filebeat-6.5.1-2019.01.01/_mapping
POST /_refresh
POST /_cache/clear
POST /_flush/synced
?v : show the column names
Segments Merge
https://my.oschina.net/fufangchun/blog/1541156
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-forcemerge.html#forcemerge-multi-index
GET _cat/segments?v
POST /filebeat-6.5.1-2019.01.01/_forcemerge?max_num_segments=1&flush=true
https://my.oschina.net/weiweiblog/blog/2989931
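The same force merge can also be triggered from code; a minimal Go sketch, assuming the placeholder host and the same index and parameters as the Dev Tools command above:

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // Placeholder host; _forcemerge is a POST with no request body.
    url := "http://xxx.xxx.xxx.xxx:9200/filebeat-6.5.1-2019.01.01/_forcemerge?max_num_segments=1&flush=true"

    resp, err := http.Post(url, "application/json", nil)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // Print the status and the raw JSON reply from Elasticsearch.
    b, err := io.ReadAll(resp.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(resp.Status, string(b))
}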
ethereum-etl ethereumetl elk logstash kibana part2
filter {
  if [etltype] == "blocks" { #[fields][srctype]
    csv {
      columns => [
        "number", "hash", "parent_hash", "nonce", "sha3_uncles", "logs_bloom", "transactions_root",
        "state_root", "receipts_root", "miner", "difficulty", "total_difficulty", "size", "extra_data",
        "gas_limit", "gas_used", "timestamp", "transaction_count"
      ]
      separator => ","
      remove_field => ["message"]
      skip_empty_columns => true
      skip_empty_rows => true
    }
  } else if [etltype] == "contracts" { #[fields][srctype]
    csv {
      columns => [
        "address", "bytecode", "function_sighashes", "is_erc20", "is_erc721"
      ]
      separator => ","
      remove_field => ["message"]
      skip_empty_columns => true
      skip_empty_rows => true
    }
  } else if [etltype] == "logs" { #[fields][srctype]
    csv {
      columns => [
        "log_index", "transaction_hash", "transaction_index", "block_hash", "block_number",
        "address", "data", "topics"
      ]
      separator => ","
      remove_field => ["message"]
      skip_empty_columns => true
      skip_empty_rows => true
    }
  } else if [etltype] == "receipts" { #[fields][srctype]
    csv {
      columns => [
        "transaction_hash", "transaction_index", "block_hash", "block_number", "cumulative_gas_used",
        "gas_used", "contract_address", "root", "status"
      ]
      separator => ","
      remove_field => ["message"]
      skip_empty_columns => true
      skip_empty_rows => true
    }
  } else if [etltype] == "token_transfers" { #[fields][srctype]
    csv {
      columns => [
        ""
      ]
      separator => ","
      remove_field => ["message"]
      skip_empty_columns => true
      skip_empty_rows => true
    }
  } else if [etltype] == "tokens" { #[fields][srctype]
    csv {
      columns => [
        ""
      ]
      separator => ","
      remove_field => ["message"]
      skip_empty_columns => true
      skip_empty_rows => true
    }
  } else if [etltype] == "transactions" { #[fields][srctype]
    csv {
      columns => [
        "hash", "nonce", "block_hash", "block_number", "transaction_index", "from_address",
        "to_address", "value", "gas", "gas_price", "inputcontext"
      ]
      separator => ","
      remove_field => ["message"]
      skip_empty_columns => true
      skip_empty_rows => true
    }
  }
}
output {
  if [etltype] == "blocks" {
    elasticsearch {
      hosts => "xxx.xxx.xxx.xxx:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-blocks-%{+YYYY.MM.dd}"
      document_id => "%{[hash]}"
    }
  } else if [etltype] == "logs" {
    elasticsearch {
      hosts => "xxx.xxx.xxx.xxx:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-logs-%{+YYYY.MM.dd}"
    }
  } else if [etltype] == "transactions" {
    elasticsearch {
      hosts => "xxx.xxx.xxx.xxx:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-transactions-%{+YYYY.MM.dd}"
      document_id => "%{[hash]}"
    }
  } else if [etltype] == "contracts" {
    elasticsearch {
      hosts => "xxx.xxx.xxx.xxx:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-contracts-%{+YYYY.MM.dd}"
      document_id => "%{[address]}"
    }
  } else {
    elasticsearch {
      hosts => "xxx.xxx.xxx.xxx:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
  stdout { codec => rubydebug }
}
The transactions CSV fields are:
hash,nonce,block_hash,block_number,transaction_index,from_address,to_address,value,gas,gas_price,input
The input field must be renamed to something else, e.g. inputcontext, like this:
hash,nonce,block_hash,block_number,transaction_index,from_address,to_address,value,gas,gas_price,inputcontext
Without the rename the import fails even though Logstash parses the line correctly (likely because Filebeat 6.x already emits its own input field, so the names collide). On some fresh docker-compose ELK stacks the import happens to succeed anyway, but renaming the field is the simpler, reliable fix.
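If you prefer to fix the files themselves rather than only the columns list above, here is a minimal Go sketch that rewrites the header row of the transactions CSV, renaming input to inputcontext; the file names are placeholder assumptions:

package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
    "strings"
)

func main() {
    in, err := os.Open("transactions.csv") // placeholder input file
    if err != nil {
        log.Fatal(err)
    }
    defer in.Close()

    out, err := os.Create("transactions_renamed.csv") // placeholder output file
    if err != nil {
        log.Fatal(err)
    }
    defer out.Close()

    scanner := bufio.NewScanner(in)
    // Transaction input data can be huge; raise the default 64 KB line limit.
    scanner.Buffer(make([]byte, 0, 1024*1024), 64*1024*1024)

    w := bufio.NewWriter(out)
    defer w.Flush()

    first := true
    for scanner.Scan() {
        line := scanner.Text()
        if first {
            // Header row only: rename the conflicting column.
            cols := strings.Split(line, ",")
            for i, c := range cols {
                if c == "input" {
                    cols[i] = "inputcontext"
                }
            }
            line = strings.Join(cols, ",")
            first = false
        }
        fmt.Fprintln(w, line)
    }
    if err := scanner.Err(); err != nil {
        log.Fatal(err)
    }
}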
===============================
Wrong, this does not work:
if [etltype] in ["blocks"]
Correct:
if [etltype] == "blocks"
The in operator is only reliable with two or more values:
if [etltype] in ["blocks", "transactions" ...]
This is OK.