EExcel 丞燕快速查詢2
https://sandk.ffbizs.com/

A suggested daily routine

https://www.mobile01.com/topicdetail.php?f=291&t=5107288&p=1156#11551

Some forum members trade a few stocks on their own. Before buying, remember to do the basic homework first, and after buying, don't slack off on the daily, weekly, and monthly checks either. Don't just start trading by gut feel while a大 is taking a break.

Let me put it this way: investing in stocks is no different from being a professional athlete, say a track-and-field runner. In the off-season you might not train as hard, maybe three times a week; as a meet approaches, the volume goes up and there is plenty of daily work such as conditioning, sprints, and weight training, so you arrive in peak form. In competition a small gap makes a big difference, so you want to be at your best; but even with no meet scheduled, the basic training still has to be done. Everyday practice accumulates: even out of competition shape you keep seven or eight tenths of your ability, which is still far stronger than the average person. If someone asks that runner how he trains and then copies the program for only a day or a week, the results obviously won't match years of practice.

Stock-market knowledge works the same way. The tasks below are simple, but once you have done them long enough and internalized them, you can read other players' habits, avoid some traps, and understand the usual yardsticks for judging a company's value. When people ask me for judgment criteria, I tell them that different stocks simply call for different treatment; there is nothing surprising about that. If every company could be judged with one and the same rulebook, there would be no losers in the market. Precisely because every company has to be judged differently, investors who inflexibly force one method onto everything win sometimes and lose sometimes, end up applying it at random, and naturally become the losers.

A simple analogy: every person is different in interests, ambitions, intelligence, and so on, yet our twelve-year national education system applies one method to every child. The end result is that the children who adapt survive; it's not that those children are especially capable, it's just that the system happens to fit them. Likewise, a girl whose personality suits one boy perfectly may be a thorn in another boy's eye.

Homework before trading a stock

Before buying, first review at least seven quarters of the summary financial statements, then about seven years of the summary annual reports.

https://goodinfo.tw/StockInfo/StockFinDetail.asp?RPT_CAT=IS_M_QUAR_ACC&STOCK_ID=2353

1. Core business: watch quarter-by-quarter changes in revenue, gross margin, and operating margin.
2. Non-operating items: watch other income, finance costs, and FX gains/losses.
3. For a turnaround stock, also study the balance sheet and cash-flow statement, and don't skip the details. For a steady stock, item 1 is mostly enough; also glance at the 1-year, 3-year, 5-year, and 10-year moving averages and the quarter-over-quarter earnings changes, and watch every quarterly earnings call.

Daily checks
1. Price changes of the Taiwan 50 constituent stocks.
2. Net futures and spot buying/selling by the three major institutional investors.
3. For each stock: institutional net buy/sell, margin financing and short balances, and securities lending/returns.
4. Company news and announcements (some companies are low-key and rarely make the news).

Weekly checks
1. TDCC shareholding distribution.

Monthly checks
1. Monthly revenue for each stock.
2. One month of institutional net buy/sell, margin changes, and securities lending/returns.
3. Cumulative net buying/selling by foreign investors.

Quarterly checks
1. Read the financial report.
2. Watch the earnings call.

Frequently used ownership ("chip") and fundamentals lookup sites

1. WantGoo (玩股網)
https://www.wantgoo.com/

Shows the day's major-player average entry/exit cost. I rarely use it, except when I want to see what the big players are thinking that day.

2. HiStock
https://histock.tw/stock/main.aspx?no=2353

Used to look up, over a chosen period, the volumes bought/sold and the average cost of major players and foreign investors.

3. TDCC shareholding distribution (集保股權分佈)
https://www.tdcc.com.tw/smWeb/QryStock.jsp

Used weekly to track changes in super-large holders' and retail investors' positions.

4. Goodinfo
https://goodinfo.tw/StockInfo/StockFinDetail.asp?RPT_CAT=IS_M_QUAR_ACC&STOCK_ID=2353

Used for fundamentals; the site links to the financial reports and can also chart long-term technicals.

Also used to check the day's institutional trades, margin financing/shorts, and major-player activity, with up to a year of history for each.

5. Three major institutional investors, daily report (TWSE T86)
http://www.twse.com.tw/zh/page/trading/fund/T86.html

Used at 4:10 p.m. to see the day's institutional buying and selling.

6. Three major institutional investors, monthly report
http://www.twse.com.tw/zh/page/trading/fund/TWT47U.html

Used to check institutional activity over longer periods.

7. StockDog (股狗網), chip analysis
https://www.stockdog.com.tw/stockdog/index.php?m=home

Used to check 10-day securities-lending short sales and short covering.

8. Stock Pyramid (股市金字塔)
http://norway.twsthr.info/StockHolders.aspx?stock=2353

Used to watch long-period changes in large shareholders' holdings.

k8s: NFS v4 server on Kubernetes

https://hub.docker.com/r/itsthenetwork/nfs-server-alpine

https://sueboy.blogspot.com/2019/11/kubernetes-nodeport.html

ConfigMap


apiVersion: v1
kind: ConfigMap
metadata:
  name: nfs
  namespace: nfs
data:
  exports: |
    {{SHARED_DIRECTORY}} {{PERMITTED}}({{READ_ONLY}},fsid=0,{{SYNC}},no_subtree_check,no_auth_nlm,insecure,no_root_squash)
  nfsd: |
    #!/bin/bash

    # Make sure we react to these signals by running stop() when we see them - for clean shutdown
    # And then exiting
    trap "stop; exit 0;" SIGTERM SIGINT

    stop()
    {
      # We're here because we've seen SIGTERM, likely via a Docker stop command or similar
      # Let's shutdown cleanly
      echo "SIGTERM caught, terminating NFS process(es)..."
      /usr/sbin/exportfs -uav
      /usr/sbin/rpc.nfsd 0
      pid1=`pidof rpc.nfsd`
      pid2=`pidof rpc.mountd`
      # For IPv6 bug:
      pid3=`pidof rpcbind`
      kill -TERM $pid1 $pid2 $pid3 > /dev/null 2>&1
      echo "Terminated."
      exit
    }
    
    # Check if the SHARED_DIRECTORY variable is empty
    if [ -z "${SHARED_DIRECTORY}" ]; then
      echo "The SHARED_DIRECTORY environment variable is unset or null, exiting..."
      exit 1
    else
      echo "Writing SHARED_DIRECTORY to /etc/exports file"
      /bin/sed -i "s@{{SHARED_DIRECTORY}}@${SHARED_DIRECTORY}@g" /etc/exports
    fi

    # This is here to demonstrate how multiple directories can be shared. You
    # would need a block like this for each extra share.
    # Any additional shares MUST be subdirectories of the root directory specified
    # by SHARED_DIRECTORY.

    # Check if the SHARED_DIRECTORY_2 variable is empty
    if [ ! -z "${SHARED_DIRECTORY_2}" ]; then
      echo "Writing SHARED_DIRECTORY_2 to /etc/exports file"
      echo "{{SHARED_DIRECTORY_2}} {{PERMITTED}}({{READ_ONLY}},{{SYNC}},no_subtree_check,no_auth_nlm,insecure,no_root_squash)" >> /etc/exports
      /bin/sed -i "s@{{SHARED_DIRECTORY_2}}@${SHARED_DIRECTORY_2}@g" /etc/exports
    fi

    # Check if the PERMITTED variable is empty
    if [ -z "${PERMITTED}" ]; then
      echo "The PERMITTED environment variable is unset or null, defaulting to '*'."
      echo "This means any client can mount."
      /bin/sed -i "s/{{PERMITTED}}/*/g" /etc/exports
    else
      echo "The PERMITTED environment variable is set."
      echo "The permitted clients are: ${PERMITTED}."
      /bin/sed -i "s/{{PERMITTED}}/"${PERMITTED}"/g" /etc/exports
    fi

    # Check if the READ_ONLY variable is set (rather than a null string) using parameter expansion
    if [ -z ${READ_ONLY+y} ]; then
      echo "The READ_ONLY environment variable is unset or null, defaulting to 'rw'."
      echo "Clients have read/write access."
      /bin/sed -i "s/{{READ_ONLY}}/rw/g" /etc/exports
    else
      echo "The READ_ONLY environment variable is set."
      echo "Clients will have read-only access."
      /bin/sed -i "s/{{READ_ONLY}}/ro/g" /etc/exports
    fi

    # Check if the SYNC variable is set (rather than a null string) using parameter expansion
    if [ -z "${SYNC+y}" ]; then
      echo "The SYNC environment variable is unset or null, defaulting to 'async' mode."
      echo "Writes will not be immediately written to disk."
      /bin/sed -i "s/{{SYNC}}/async/g" /etc/exports
    else
      echo "The SYNC environment variable is set, using 'sync' mode."
      echo "Writes will be immediately written to disk."
      /bin/sed -i "s/{{SYNC}}/sync/g" /etc/exports
    fi

    # Partially set 'unofficial Bash Strict Mode' as described here: http://redsymbol.net/articles/unofficial-bash-strict-mode/
    # We don't set -e because the pidof command returns an exit code of 1 when the specified process is not found
    # We expect this at times and don't want the script to be terminated when it occurs
    set -uo pipefail
    IFS=$'\n\t'

    # This loop runs until we've started up successfully
    while true; do

      # Check if NFS is running by recording its PID (if it's not running $pid will be null):
      pid=`pidof rpc.mountd`

      # If $pid is null, do this to start or restart NFS:
      while [ -z "$pid" ]; do
        echo "Displaying /etc/exports contents:"
        cat /etc/exports
        echo ""

        # Normally only required if v3 will be used
        # But currently enabled to overcome an NFS bug around opening an IPv6 socket
        echo "Starting rpcbind..."
        /sbin/rpcbind -w
        echo "Displaying rpcbind status..."
        /sbin/rpcinfo

        # Only required if v3 will be used
        # /usr/sbin/rpc.idmapd
        # /usr/sbin/rpc.gssd -v
        # /usr/sbin/rpc.statd

        echo "Starting NFS in the background..."
        /usr/sbin/rpc.nfsd --debug 8 --no-udp --no-nfs-version 2 --no-nfs-version 3
        echo "Exporting File System..."
        if /usr/sbin/exportfs -rv; then
          /usr/sbin/exportfs
        else
          echo "Export validation failed, exiting..."
          exit 1
        fi
        echo "Starting Mountd in the background..."
        /usr/sbin/rpc.mountd --debug all --no-udp --no-nfs-version 2 --no-nfs-version 3
    # --exports-file /etc/exports

        # Check if NFS is now running by recording its PID (if it's not running $pid will be null):
        pid=`pidof rpc.mountd`

        # If $pid is null, startup failed; log the fact and sleep for 2s
        # We'll then automatically loop through and try again
        if [ -z "$pid" ]; then
          echo "Startup of NFS failed, sleeping for 2s, then retrying..."
          sleep 2
        fi

      done

      # Break this outer loop once we've started up successfully
      # Otherwise, we'll silently restart and Docker won't know
      echo "Startup successful."
      break

    done

    while true; do

      # Check if NFS is STILL running by recording its PID (if it's not running $pid will be null):
      pid=`pidof rpc.mountd`
      # If it is not, let's kill our PID1 process (this script) by breaking out of this while loop:
      # This ensures Docker observes the failure and handles it as necessary
      if [ -z "$pid" ]; then
        echo "NFS has failed, exiting, so Docker can restart the container..."
        break
      fi

      # If it is, give the CPU a rest
      sleep 1

    done

    sleep 1
    exit 1
  bashrc: |
    # General Aliases
    alias apk='apk --progress'
    alias ll="ls -ltan"

    alias hosts='cat /etc/hosts'
    alias ..="cd .."
    alias ...="cd ../.."
    alias ....="cd ../../.."
    alias untar="tar xzvkf"
    alias mv="mv -nv"
    alias cp="cp -i"
    alias ip4="ip -4 addr"
    alias ip6="ip -6 addr"

    COL_YEL="\[\e[1;33m\]"
    COL_GRA="\[\e[0;37m\]"
    COL_WHI="\[\e[1;37m\]"
    COL_GRE="\[\e[1;32m\]"
    COL_RED="\[\e[1;31m\]"

    # Bash Prompt
    if test "$UID" -eq 0 ; then
        _COL_USER=$COL_RED
        _p=" #"
    else
        _COL_USER=$COL_GRE
        _p=">"
    fi
    COLORIZED_PROMPT="${_COL_USER}\u${COL_WHI}@${COL_YEL}\h${COL_WHI}:\w${_p} \[\e[m\]"
    case $TERM in
        *term | rxvt | screen )
            PS1="${COLORIZED_PROMPT}\[\e]0;\u@\h:\w\007\]" ;;
        linux )
            PS1="${COLORIZED_PROMPT}" ;;
        * ) 
            PS1="\u@\h:\w${_p} " ;;
    esac
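The nfsd.sh script above fills in the {{...}} placeholders of the exports template with sed at container start. A minimal standalone sketch of that substitution pattern (the /tmp/exports.demo path and the variable values are illustrative assumptions, not part of the original image):

```shell
#!/bin/sh
# Demo of the placeholder-substitution pattern used by nfsd.sh above.
# File path and values are illustrative only.
TEMPLATE='{{SHARED_DIRECTORY}} {{PERMITTED}}({{READ_ONLY}},fsid=0,{{SYNC}},no_subtree_check)'
SHARED_DIRECTORY=/nfsshare
PERMITTED='*'

echo "$TEMPLATE" > /tmp/exports.demo
# '@' as the sed delimiter avoids clashing with '/' in directory paths
sed -i "s@{{SHARED_DIRECTORY}}@${SHARED_DIRECTORY}@g" /tmp/exports.demo
sed -i "s/{{PERMITTED}}/${PERMITTED}/g" /tmp/exports.demo   # '*' is literal in a sed replacement
sed -i "s/{{READ_ONLY}}/rw/g" /tmp/exports.demo             # script default: read-write
sed -i "s/{{SYNC}}/async/g" /tmp/exports.demo               # script default: async

cat /tmp/exports.demo
# /nfsshare *(rw,fsid=0,async,no_subtree_check)
```

The same one-template-many-variables trick is why the exports key in the ConfigMap can stay a single line while PERMITTED, READ_ONLY, and SYNC are decided per deployment through environment variables.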


Deployment

The difference from the upstream image is that /etc/exports is produced by copying /exports/exports inside the container, because mounting the ConfigMap directly under /etc runs into the read-only directory-replacement problem described below.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-service
  namespace: nfs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-service
  template:
    metadata:
      labels:
        app: nfs-service
    spec:
      #restartPolicy: Always
      volumes:
        - name: exports
          configMap:
            name: nfs
            items:
              - key: exports
                path: exports
                mode: 0744
        - name: nfsd
          configMap:
            name: nfs
            items:
              - key: nfsd
                path: nfsd.sh
                mode: 0744
        - name: bashrc
          configMap:
            name: nfs
            items:
              - key: bashrc
                path: .bashrc
                mode: 0744
        - name: nfsshare
          emptyDir: {}  
      containers:
        - name: nfs-server-container
          image: alpine:latest
          securityContext:
            privileged: true
          command:
            - /bin/sh
            - -c
            - |
              echo nameserver 8.8.8.8 >> /etc/resolv.conf
              apk add --no-cache --update --verbose nfs-utils bash iproute2 
              rm -rf /var/cache/apk /tmp /sbin/halt /sbin/poweroff /sbin/reboot 
              mkdir -p /var/lib/nfs/rpc_pipefs /var/lib/nfs/v4recovery 
              echo "rpc_pipefs    /var/lib/nfs/rpc_pipefs rpc_pipefs      defaults        0       0" >> /etc/fstab 
              echo "nfsd  /proc/fs/nfsd   nfsd    defaults        0       0" >> /etc/fstab 
              rm /etc/exports 
              cp /exports/exports /etc/exports 
              cp /nfsd/nfsd.sh /usr/bin/nfsd.sh
              cp /bashrc/.bashrc /root/.bashrc
              chmod +x /usr/bin/nfsd.sh
              /usr/bin/nfsd.sh
          env:
            - name: SHARED_DIRECTORY
              value: "/nfsshare"
          volumeMounts:
            - name: exports
              mountPath: /exports
              #readOnly: true
            - name: nfsd
              mountPath: /nfsd
            - name: bashrc
              mountPath: /bashrc 
            - name: nfsshare
              mountPath: /nfsshare 

Be careful with ConfigMap mountPath in Kubernetes. In this example, the exports file and nfsd.sh logically belong in /etc and /usr/bin, two directories that already contain files. If you set mountPath: /etc or mountPath: /usr/bin directly, Kubernetes replaces the entire directory with the mounted volume: ls -al /etc or /usr/bin would then show only the exports file and nfsd.sh. The safer pattern is to mount each ConfigMap at a separate, otherwise-unused directory and, in the container's startup shell, copy exports and nfsd.sh into the correct directories of the image.
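An alternative worth knowing (my suggestion, not what the Deployment above does) is a subPath mount, which projects a single ConfigMap key as one file and leaves the rest of the target directory intact. Note that subPath-mounted files do not receive later ConfigMap updates:

```yaml
# Hypothetical variant: mount only /etc/exports, leaving the rest of /etc untouched.
volumeMounts:
  - name: exports
    mountPath: /etc/exports   # target is a single file, not the whole directory
    subPath: exports          # key/path inside the ConfigMap volume
```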


Service


kind: Service
apiVersion: v1
metadata:
  name: nfs-service
  namespace: nfs
spec:
  selector:
    app: nfs-service
  type: NodePort
  ports:
    # Open the ports required by the NFS server
    # Port 2049 for TCP
    - name: tcp-2049
      port: 2049
      targetPort: 2049
      protocol: TCP
      nodePort: 32049

    # Port 111 for UDP
    - name: udp-111
      port: 111
      protocol: UDP



====== NFS v4 client mount

Debian
https://blog.gtwang.org/linux/nfsv4/

CentOS
https://computingforgeeks.com/configure-nfsv3-and-nfsv4-on-centos-7/


$ sudo yum -y install nfs-utils
$ sudo systemctl start rpcbind 
$ sudo systemctl enable rpcbind

mount -v -o port=32049 -o vers=4 -t nfs 192.168.99.118:/ /tmp/tnfs/
nano /tmp/tnfs/tt  # save some text, then check that the NFS pod's /nfsshare contains tt with the same content
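To make the mount survive reboots, an /etc/fstab entry along these lines could be used (same node IP and NodePort as the example above; adjust both to your cluster):

```
192.168.99.118:/  /tmp/tnfs  nfs  vers=4,port=32049  0  0
```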



Skype 8 automatic sign-in... a baffling design

After you sign in, Skype 8 remembers your credentials and won't ask again the next time. Normally there would be a "don't remember me" option, but there isn't; the only chance to choose is when you sign out.


Pick it, and the next sign-in will ask for credentials; but after that, sign out and back in again... and no password is needed again...


No wonder most of the products Microsoft pushes end up failing!

Kubernetes Deployment: "Failed to fetch / Temporary failure resolving"

If you run apt-get update from the containers' command in a Kubernetes Deployment YAML, you may get:

Deployment Failed to fetch Temporary failure resolving


So just put this line in front of the script:


- /bin/bash
- -c
- |
  echo nameserver 8.8.8.8 >> /etc/resolv.conf
  apt-get update


You usually hit this when porting a Dockerfile / docker-compose setup into ConfigMap and Deployment YAML: the Dockerfile's apt-get update step then fails inside the pod because DNS isn't resolving.
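A cleaner alternative to appending to resolv.conf by hand (assuming your cluster allows pod-level DNS settings) is to pin the resolver in the pod spec:

```yaml
# Hypothetical pod-spec fragment: set the pod's resolver declaratively
# instead of editing /etc/resolv.conf in the container command.
spec:
  template:
    spec:
      dnsPolicy: "None"       # ignore the cluster's default DNS settings
      dnsConfig:
        nameservers:
          - 8.8.8.8
```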

[Repost] Do you really know what Kubernetes is?

https://www.hwchiu.com/kubernetes-concept.html


However, A/B/C/D are usually not the engineers on the ground, so in the end the architecture gets designed by mouth and problems get solved by mouth, while the actual engineers underneath descend into assorted meltdowns.

goodbye microservices

https://segment.com/blog/goodbye-microservices/

Core services don't necessarily have to be split into microservices; keeping them together avoids some problems in joint development.

In early 2017 we reached a tipping point with a core piece of Segment’s product. It seemed as if we were falling from the microservices tree, hitting every branch on the way down. Instead of enabling us to move faster, the small team found themselves mired in exploding complexity. Essential benefits of this architecture became burdens. As our velocity plummeted, our defect rate exploded.

Eventually, the team found themselves unable to make headway, with 3 full-time engineers spending most of their time just keeping the system alive. Something had to change. This post is the story of how we took a step back and embraced an approach that aligned well with our product requirements and needs of the team.


===
Ditching Microservices and Queues

The first item on the list was to consolidate the now over 140 services into a single service. The overhead from managing all of these services was a huge tax on our team. We were literally losing sleep over it since it was common for the on-call engineer to get paged to deal with load spikes.


===

While we did have auto-scaling implemented, each service had a distinct blend of required CPU and memory resources, which made tuning the auto-scaling configuration more art than science.

===

===
Exactly as I originally thought: microservices make complexity grow geometrically. With enough manpower it's fine; without it, going down this road is asking for trouble.