EExcel 丞燕快速查詢2
https://sandk.ffbizs.com/

Codeigniter 4 ORM relation By hand Part 2

Modify the ModelTrait file at \App\Traits:


namespace App\Traits;

trait ModelTrait
{
    // future feature: maybe add belongsTo

    // get a result without affecting $this; the ORM state is preserved
    public function newSelfObject()
    {
        return $this->db->query($this->builder->getCompiledSelect(false));
    }
    
    // copy self object return ORM Object
    public function cso() 
    {
        $class_name = get_class($this);
        $class_new_object = (new $class_name);
        $class_new_object->builder = clone $this->builder;
        return $class_new_object;
    }

    public function hasOne($class, $relation_primary_key=null, $primary_key=null)
    {
        return $class->where($relation_primary_key ?? $this->primaryKey, $this->newSelfObject()->getRowArray()[$primary_key ?? $this->primaryKey] ?? '');
    }

    // whereIn: if the parent query returns no records, fall back to [null]; this usually happens when filtering on primaryKey
    public function hasMany($class, $relation_primary_key=null, $primary_key=null)
    {
        $temp = array_column($this->newSelfObject()->getResult(), $primary_key ?? $this->primaryKey);
        return $class->whereIn($relation_primary_key ?? $this->primaryKey, empty($temp) ? [null] : $temp);
    }
}
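The idea behind newSelfObject()/cso() is to snapshot or clone the query-builder state so that running one query does not pollute the chain for the next. A minimal Python sketch of that clone-the-builder pattern (all class and method names here are illustrative stand-ins, not CodeIgniter APIs):

```python
import copy


class QueryBuilder:
    """Toy stand-in for a framework query builder: accumulates WHERE clauses."""

    def __init__(self):
        self.wheres = []

    def where(self, clause):
        self.wheres.append(clause)
        return self


class Model:
    def __init__(self):
        self.builder = QueryBuilder()

    def where(self, clause):
        self.builder.where(clause)
        return self

    def cso(self):
        # "copy self object": a new model holding a cloned builder,
        # so queries on the copy never mutate the original (like PHP's `clone`).
        clone = type(self)()
        clone.builder = copy.copy(self.builder)
        clone.builder.wheres = list(self.builder.wheres)  # detach the clause list
        return clone


order = Model().where("id = 1")
snapshot = order.cso().where("status = 'paid'")  # extra filter on the copy only

assert order.builder.wheres == ["id = 1"]                      # original untouched
assert snapshot.builder.wheres == ["id = 1", "status = 'paid'"]
```

The shallow copy plus list copy mirrors what `clone $this->builder` achieves in the PHP trait: the original `$order` can keep chaining relations after `cso()` has been consumed.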
App\Controllers\OrderListController.php

namespace App\Controllers\Order;

class OrderListController extends \App\Controllers\BaseController
{
  public function OrderList()
  {
      // 3. 
      $order = (new Order)->where("id", $order_id);
      $orderData = $order->cso()->findAll(); // Maybe need orderData to do something.

      $member = $order->members()->where('createtime >', '2022/11/22')->first(); // ->findAll();
  }
}

Codeigniter 4 ORM relation By hand

Add a ModelTrait file at \App\Traits:


namespace App\Traits;

trait ModelTrait
{
    // future feature: maybe add belongsTo

    // get a result without affecting $this; the ORM state is preserved
    public function newSelfObject()
    {
        return $this->db->query($this->builder->getCompiledSelect(false));
    }

    public function hasOne($class, $relation_primary_key=null, $primary_key=null)
    {
        return $class->where($relation_primary_key ?? $this->primaryKey, $this->newSelfObject()->getRowArray()[$primary_key ?? $this->primaryKey] ?? '');
    }

    // whereIn: if the parent query returns no records, fall back to [null]; this usually happens when filtering on primaryKey
    public function hasMany($class, $relation_primary_key=null, $primary_key=null)
    {
        $temp = array_column($this->newSelfObject()->getResult(), $primary_key ?? $this->primaryKey);
        return $class->whereIn($relation_primary_key ?? $this->primaryKey, empty($temp) ? [null] : $temp);
    }
}


How to use


App\Models\Order.php

class Order extends Model
{
    use ModelTrait;

    protected $table      = 'order';
    protected $primaryKey = 'order_id';
    
    public function members()
    {
        return $this->hasMany(new Member);
        
        // OR
        return $this->hasMany(new Member, 'id', 'member_id'); // 'id' belongs to the Member table, 'member_id' to the Order table
    }
}

App\Controllers\OrderListController.php

namespace App\Controllers\Order;

class OrderListController extends \App\Controllers\BaseController
{
  public function OrderList()
  {
      // 1. one order
      $members = (new Order)->where('id', $order_id)->members()->first(); // ->findAll();

      // 2. All orders
      $members = (new Order)->members()->findAll();

      // 3. 
      $order = (new Order)->where("id", $order_id);
      $orderData = $order->newSelfObject()->getRow(); // Maybe need orderData to do something.

      $member = $order->members()->where('createtime >', '2022/11/22')->first(); // ->findAll();
  }
}


You can implement belongsTo yourself in the same way. More advanced features, such as composite keys, are more complex.

SeckillSolution: flash sales, high concurrency, transactions

https://github.com/qianshou/SeckillSolution
https://blog.csdn.net/koastal/article/details/78995885
=====
https://blog.csdn.net/qq_38512763/article/details/118830903

Aggregate array values into ranges

https://codereview.stackexchange.com/questions/80080/aggregate-array-values-into-ranges

    // Output:
    // Array (
    //     [0] => Array ( [0] => 1, [1] => 2, [2] => 3, [3] => 4, [4] => 5, [5] => 6 )
    //     [1] => Array ( [0] => 10, [1] => 11, [2] => 12, [3] => 13 )
    //     [2] => Array ( [0] => 20 )
    //     [3] => Array ( [0] => 24 )
    // )
    static function GetRanges1( $aNumbers ) {
        $aNumbers = array_unique( $aNumbers );
        sort( $aNumbers );
        $aGroups = array();
        for( $i = 0; $i < count( $aNumbers ); $i++ ) {
          if( $i > 0 && ( $aNumbers[$i-1] == $aNumbers[$i] - 1 ))
            array_push( $aGroups[count($aGroups)-1], $aNumbers[$i] );
          else
            array_push( $aGroups, array( $aNumbers[$i] )); 
        }
        return $aGroups;
    }

    // Output: Array ( [0] => 1-6, [1] => 10-13, [2] => 20, [3] => 24 )
    static function GetRanges2( $aNumbers ) {
        $aNumbers = array_unique( $aNumbers );
        sort( $aNumbers );
        $aGroups = array();
        for( $i = 0; $i < count( $aNumbers ); $i++ ) {
          if( $i > 0 && ( $aNumbers[$i-1] == $aNumbers[$i] - 1 ))
            array_push( $aGroups[count($aGroups)-1], $aNumbers[$i] );
          else
            array_push( $aGroups, array( $aNumbers[$i] )); 
        }
        $aRanges = array();
        foreach( $aGroups as $aGroup ) {
          if( count( $aGroup ) == 1 )
            $aRanges[] = $aGroup[0];
          else
            $aRanges[] = $aGroup[0] . '-' . $aGroup[count($aGroup)-1];
        }
        return $aRanges;
    }

    // Output: 1~6,10~13,20,24
    static function GetRanges3( $aNumbers, $headChar='' ) {
        $aNumbers = array_unique( $aNumbers );
        sort( $aNumbers );
        $aGroups = array();
        for( $i = 0; $i < count( $aNumbers ); $i++ ) {
          if( $i > 0 && ( $aNumbers[$i-1] == $aNumbers[$i] - 1 ))
            array_push( $aGroups[count($aGroups)-1], $aNumbers[$i] );
          else
            array_push( $aGroups, array( $aNumbers[$i] )); 
        }
        $aRanges = array();
        foreach( $aGroups as $aGroup ) {
          if( count( $aGroup ) == 1 )
            $aRanges[] = $headChar.$aGroup[0];
          else
            $aRanges[] = $headChar.$aGroup[0] . '~' . $headChar.$aGroup[count($aGroup)-1];
        }
        return implode( ',', $aRanges );
    }
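For comparison, the same grouping algorithm behind GetRanges3 can be sketched in Python (function and variable names here are my own):

```python
def get_ranges(numbers, head_char=""):
    """Collapse a list of ints into 'a~b' range strings, mirroring GetRanges3."""
    numbers = sorted(set(numbers))  # like array_unique + sort
    groups = []
    for n in numbers:
        # Extend the current group when n continues a consecutive run.
        if groups and groups[-1][-1] == n - 1:
            groups[-1].append(n)
        else:
            groups.append([n])
    ranges = [
        f"{head_char}{g[0]}" if len(g) == 1
        else f"{head_char}{g[0]}~{head_char}{g[-1]}"
        for g in groups
    ]
    return ",".join(ranges)


print(get_ranges([1, 2, 3, 4, 5, 6, 10, 11, 12, 13, 20, 24]))  # → 1~6,10~13,20,24
```

Deduplicate, sort, then a single pass comparing each value against the tail of the last group is all three PHP variants share; only the output formatting differs.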

gorm many-to-many (many2many) handled in Go code


type RoleHasPermissionList struct {
	model.Roles
	Rolehaspermissions []CUS_Rolehaspermissions `json:"rolehaspermissions" gorm:"foreignkey:role_id; references:id"`
}

type CUS_Rolehaspermissions struct {
	PermissionId  int                 `json:"-"`
	RoleId        int                 `json:"-"`
	PermissinInfo model.Permission `gorm:"foreignkey:id; references:permission_id"`
}

func (CUS_Rolehaspermissions) TableName() string {
	return "RoleHasPermissions"
}

type FormaterResult struct {
	model.Roles
	Permission []model.Permission `json:"permission"`
}

func GetRoleList(c *gin.Context) serializer.Response {
	var req GetRoleListRequest
	c.ShouldBind(&req)

	var total int64
	var list []RoleHasPermissionList

	page := req.Page
	pageSize := req.PageSize

	model.DB.
		Scopes(util.Paginate(c, page, pageSize)).
		Preload("Rolehaspermissions.PermissinInfo").
		Find(&list).Offset(-1).Limit(-1).Count(&total)

	// Reshape the output format
	var formater_result []FormaterResult
	for i, v := range list {
		formater_result = append(formater_result, FormaterResult{Roles: v.Roles})

		for _, v2 := range v.Rolehaspermissions {
			formater_result[i].Permission = append(formater_result[i].Permission, v2.PermissinInfo)
		}
	}

	return serializer.Response{
		Code:    0,
		Message: "成功",
		Data:    util.Pagination(c, page, pageSize, total, formater_result),
	}
}


    {
        "id": 1,
        "name": "company",
        "display_name": "公司",
        "guard_name": "admin_api",
        "created_at": "2020-03-26 14:34:25",
        "updated_at": "2020-03-26 14:34:25",
        "permission": [
          {
            "id": 1,
            "name": "add company",
            "display_name": "add company",
            "guard_name": "admin_api",
            "created_at": "2020-03-26 14:34:25",
            "updated_at": "2020-03-26 14:34:25"
          },
          {
            "id": 2,
            "name": "edit company",
            "display_name": "edit company",
            "guard_name": "admin_api",
            "created_at": "2020-03-26 14:34:25",
            "updated_at": "2020-03-26 14:34:25"
          }
        ]
    },
    ...
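The reshaping loop in GetRoleList (pivot the preloaded role/permission rows into role → permissions) is language-agnostic. A minimal Python sketch of just that regrouping step, using made-up sample rows rather than real query output:

```python
# Each input row pairs a role with one of its permissions, the way a
# preloaded join table delivers them (sample data, not real output).
rows = [
    {"role": "company", "permission": "add company"},
    {"role": "company", "permission": "edit company"},
    {"role": "admin", "permission": "add user"},
]


def group_permissions(rows):
    """Collapse role/permission pairs into {role: [permissions]}, preserving order."""
    result = {}
    for row in rows:
        result.setdefault(row["role"], []).append(row["permission"])
    return result


print(group_permissions(rows))
# → {'company': ['add company', 'edit company'], 'admin': ['add user']}
```

The Go version builds a slice of FormaterResult instead of a dict, but the shape of the loop (append the parent once, then fan its children into a list) is the same.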

bytes32[] remix solidity


// SPDX-License-Identifier: GPL-3.0

pragma solidity >=0.7.0 <0.9.0;

contract Fix {

    bytes32[] questionList= [
            bytes32(0x52a6eb687cd22e80d3342eac6fcc7f2e19209e8f83eb9b82e81c6f3e6f30743b),
            bytes32(0x257f3de9149acf0a49d3f7668956aeb52490202fae9ec2e92c3caa7d5223c6ef)
        ];

    // constructor() {
    //     // questionList = [
    //     //     bytes32(0x52a6eb687cd22e80d3342eac6fcc7f2e19209e8f83eb9b82e81c6f3e6f30743b),
    //     //     bytes32(0x257f3de9149acf0a49d3f7668956aeb52490202fae9ec2e92c3caa7d5223c6ef)
    //     // ];
    // }

    // function addQuestion(bytes32 questionKey)
    //     public
    //     returns(bool success)
    // {
    //     questionList.push(questionKey);
    //     return true;
    // }

    function getQuestionCount()
        public view
        returns(uint questionCount)
    {
        return questionList.length;
    }

    function getQuestionAtIndex(uint row)
        public view
        returns(bytes32 questionkey)
    {
        return questionList[row];
    }

}

[Repost] After 5 years, I'm out of the serverless compute cult

https://dev.to/brentmitchell/after-5-years-im-out-of-the-serverless-compute-cult-3f6d

I have been using serverless computing and storage for nearly five years and I'm finally tired of it. I do feel like it has become a cult. In a cult, brainwashing is done so gradually, people have no idea it is going on. I feel like this has happened across the board with so many developers; many don’t even realize they are clouded. In my case, I took the serverless marketing and hype hook, line, and sinker for the first half of my serverless journey. After working with several companies small and large, I have been continually disappointed as our projects grew. The fact is, serverless technology is amazingly simple to start, but becomes a bear as projects and teams accelerate. A serverless project typically includes a fully serverless stack which can include (using a non-exhaustive list of AWS services):


  1. API Gateway
  2. Cognito
  3. Lambda
  4. DynamoDB
  5. DAX
  6. SQS/SNS/EventBridge

Combining all of these into a serverless project becomes a huge nightmare, for the following reasons.



Testing


All these solutions are proprietary to AWS. Sure, a lambda function is a pretty simple idea; it is simply a function that executes your code. The other services listed above have almost no other easy and testable solutions when integrated together. Serverless Application Model and Localstack have done some amazing work attempting to emulate these services. However, they usually only cover basic use cases, and an engineer ends up spending a chunk of time trying to mock or figure out a way to get their function to test locally. Or, they simply forget it and deploy it. Also, since these functions typically depend on other developers' functions or API Gateway, there tend to be 10 different ways to authorize a function. For example, someone might have an unauthorized API, one may use AWS credentials, another might use Cognito, and yet another uses an API key. All of these factors lead to an engineer having little to no confidence in their ability to test anything locally.


Account Chaos


Since engineers typically don't have a high confidence in their code locally they depend on testing their functions by deploying. This means possibly breaking their own code. As you can imagine, this breaks everyone else deploying and testing any code which relies on the now broken function. While there are a few solutions to this scenario, all are usually quite complex (i.e. using an AWS account per developer) and still cannot be tested locally with much confidence. Chaos engineering has a time and a place. This is not it.


Security


With all the possible permutations of deployments and account structures, security becomes a big problem. Good IAM practices are hard. Many engineers simply put a dynamodb:* for all resources in the account for a lambda function. (BTW this is not good). It becomes hard to manage all of these because developers can usually quite easily deploy and manage their own IAM roles and policies. And since it is hard to test locally, trying to fix serverless IAM issues requires deploying to AWS and testing (or breaking) in the environment.


Bad (Cult-like) Practices


No Fundamental Enforcement


Without help from frameworks, DRY (Don't Repeat Yourself), KISS (Keep It Simple, Stupid) and other essential programming paradigms are simply ignored. In a perfect world, a team would reject PRs that do not abide by these basic principles. However, with the huge push for the cloud over the past several years, many junior developers have had the freedom to do what they want in the serverless space because of its ease of use, resulting in developers en masse adopting something that doesn't increase the health of the developer ecosystem as a whole. AWS gives you a knife by providing such an easy way to deploy code on the internet. Please don't hurt yourself with it.


Copy and Paste Culture


Most teams end up copying code to the new microservices and proliferating it across many services. I have seen teams with hundreds and even thousands of functions with nearly every function being different. This culture has gotten out of hand and now teams are stuck with these functions. Another symptom of this is not taking the time to provide a proper DNS.

DNS Migration Failures

Developers take the generic API Gateway generated DNS name (abcd1234.amazonaws.com) and litter their code with it. There will come a time when the teams want to put a real DNS in front of it and now you're faced with locating the 200 different spots it was used to change it. And, it's not as easy as a Find/Replace. Searching like this can become a problem when you have a mix of hard-coded strings, parameterized/concatenated strings, and environment variables everywhere that DNS name lies. Oh and telemetry? Yeah that's nowhere to be found.


Microservice Hell


This isn't a post about microservices. However, as teams and developers can decide and add whatever they want into their YAML files for deployment, you end up with hundreds of dependent services and hundreds of repositories. Many have different approaches and/or have different CI/CD workflows. Also, I've found that repository structures begin widely diverging. Any perceived cost savings has now been moved to managing all of these deployments and repositories. Here are a few examples of how developers choose to break up their serverless functions by Git repositories:

  1. Use a monolith for all their APIs.
  2. Separate by API Gateway or queue processor.
  3. Separate by "domain" (i.e. /customers or /invoices).
  4. Separate by endpoint (I have seen developers break out a repository for POST:/customers while maintaining a separate one for GET:/customers/:id and so on…).

Many times, developers switch between different styles and structures daily. This becomes a nightmare not only for day-to-day development, but also for any developer trying to get a quick understanding of how the code deploys and what dependencies it has or impacts.


API Responses


The serverless cult has been active long enough now that many newer engineers entering the field don't seem to know even the basics of HTTP responses, and there are now many veteran developers lacking this knowledge too. While this is not strictly a serverless problem, I have never experienced this much abuse outside of serverless. I've seen endpoints returning 200, 400, 500 like normal. Yet another set of endpoints returns all 2xx responses, with a payload like:

{
  "status": "ERROR",
  "reason": "malformed input"
}

Then, another set of endpoints implement inconsistent response patterns dependent on some permutation of query parameters. For example:
Query 1:
/customers?firstName=John

[{
  "accountId": "1234",
  "firstName": "John",
  "lastName": "Newman"
}]

Query 2:
/customers?lastName=Newman

{
  "accountId": "1234",
  "firstName": "John",
  "lastName": "Newman"
}

Inventing New Problems

As mentioned previously, initially deploying these types of services is easy. The reality is there are new problems with these kinds of serverless structures that don't typically occur in server-backed services:

  1. Cold starts - many engineers don't care too much about this. But they suddenly start caring when Function A calls Function B which calls Function C and so on. Without some voodoo warm-up scripting solution, paying for provisioned concurrency, or ignoring it, you may be out of luck.
  2. In the past five years, the teams I have been a part of have always chased the latest features because we had been doing workarounds like FIFO queues, state machines, provisioned concurrency, etc. As teams chase the latest features released by AWS (or your cloud provider of choice), things then become even harder to test and maintain, since SAM or Localstack don't match these features for some time.
  3. Some awful custom eventing solution because… serverless. Engineers think simply putting an API Gateway in front of EventBridge will solve all their eventing problems. What about retries? What about duplicate events? What about replaying events? Schema enforcement? Where does the data land? How do I get the data? These are all questions that have to be answered or documented in a custom fashion. Ok, EventBridge supports a few of these things in some form, but it does leave engineers chasing the latest features, waiting for these to become available. However, outside of the serverless cult, these issues can be solved with Kafka, NATS, or other technologies. Use the right tool.
  4. When it’s not okay to talk about the advantages and disadvantages of serverless with other engineers without fear of reprisal, it might be a cult. Many of these engineers say Lambda is the only way to deploy anymore. There isn't much thought to offline solutions when things need to be run onsite or separated from the cloud. For some companies this can be a fine approach. However, many medium to large organizations have (potentially) offline computing needs outside of the cloud. Lambda cannot provide a sensitive, remote pressure device real-time updates in the event of an internet outage in the middle of Canada during winter.

So, how do I get out of the cult?


In this article, I didn’t plan to address the many options you have to extricate yourself from the grips of mindless serverless abuse. If you're interested, please leave a comment and I will write a follow-up on the different solutions and alternatives to serverless I’ve found as well as some tips to incrementally shift back to a normal life. What I did want to do here was to express the pains I have experienced with serverless technologies over the past couple years now that I have helped architect more traditional VM and container-based tech stacks. I felt compelled to ensure individuals, teams, and organizations know what serverless can really mean for long-term sustainability in an environment.


What does serverless do well?


Deployment and scaling. That's really it for most organizations. For a lot of these organizations, it's hard to find the time, people, and money to figure out how to automatically provision new VMs, get access to a K8S cluster, etc. My challenge to you is to first fix your deployment and scaling problems internally before thinking about serverless compute.



Conclusion

Serverless is one of the hottest new cloud trends. However, I have found it leads to more harm than good in the long run. While I understand some of the problems listed above are not unique to serverless, they are much more prolific; leading engineers to spend most of their time with YAML configuration or troubleshooting function execution rather than crafting business logic. What I find odd is the lack of complaints from the community. If I’m alone in my assessment, I’d love to hear from you in the comments below. I’ve spent a significant amount of time over the last few years working to undo my own serverless mistakes as well as those made by other developers. Maybe I’m the one who has been brainwashed? Time will tell.

jenkins withCredentials sshUserPrivateKey

https://codeleading.com/article/11015806881/

Remember: sshCommand takes a remote map, and that remote info includes identityFile.
identityFile must be set to the `identity` variable exposed by withCredentials.

Note that `identity` is not kept around afterwards; it can only be used inside the withCredentials block.

def getRemoteHost(ip, user) {
    def remote = [:]
    remote.name = ip
    remote.host = ip
    remote.user = user
    remote.identityFile = identity
    remote.port = 22
    remote.allowAnyHosts = true

    return remote
}

pipeline {
    agent any

    environment {
        ssh_ip = 'ooo.xxx.ooo.xxx'
        ssh_user = 'ubuntu'
        ssh_jenkins_key_uid = 'oooooooo-xxxx-xxxx-xxxx-oooooooooooo'
        
        // SSH_CREDS = credentials('oooooooo-xxxx-xxxx-xxxx-oooooooooooo')
    }
    
    stages {
        stage('ssh Command'){
            steps {
                withCredentials([sshUserPrivateKey(credentialsId: "${ssh_jenkins_key_uid}", keyFileVariable: 'identity')]) {
                    sshCommand remote: getRemoteHost(ssh_ip, ssh_user), command: "whoami"  // run the command on the remote server
                }
                
            }
        }
        
        
    }
}

def runCommand(cmd) {
    def remote = [:]
    remote.name = "${ssh_ip}"
    remote.host = "${ssh_ip}"
    remote.user = "${ssh_user}"
    // remote.identityFile = identity
    remote.port = 22
    remote.allowAnyHosts = true
    
    withCredentials([sshUserPrivateKey(
        credentialsId: "${ssh_jenkins_key_uid}", 
        keyFileVariable: 'identity')]) 
    {
        remote.identityFile = identity
        sshCommand remote: remote, command: cmd
    }
}

pipeline {
    agent any

    environment {
        ssh_ip = 'ooo.xxx.ooo.xxx'
        ssh_user = 'ubuntu'
        ssh_jenkins_key_uid = 'oooooooo-xxxx-xxxx-xxxx-oooooooooooo'
        
        // SSH_CREDS = credentials('oooooooo-xxxx-xxxx-xxxx-oooooooooooo')
    }
    
    stages {
        stage('ssh Command') {
            steps {
                echo 'whoami start...'
                runCommand('whoami') 
                echo 'whoami success'
            }
        }
    }
}

Model Y: how many times has the car's infotainment system crashed?

7x difference in maintenance costs? Tesla Model Y vs NIO EC6 "Real 100,000 km Long-Term Test" report

https://youtu.be/UrCEOMQPRzk?t=1052

17:32

laravel schedule cron docker dockerfile docker-compose

cron

php laravel UI Bootstrap jetstream docker-compose

laravel_docker

docker-compose.yml

  cron:
    build: ./infra/docker/cron
    env_file: ./env.mariadb.local.env
    stop_signal: SIGTERM
    depends_on:
      - app
    volumes:
      - ./backend:/work/backend
Dockerfile

FROM php:8.0.11-fpm-buster
LABEL maintainer="ucan-lab "
#SHELL ["/bin/bash", "-oeux", "pipefail", "-c"]

# timezone environment
ENV TZ=Asia/Taipei \
  # locale
  LANG=en_US.UTF-8 \
  LANGUAGE=en_US:UTF-8 \
  LC_ALL=en_US.UTF-8 \
  # Laravel environment
  APP_SERVICES_CACHE=/tmp/cache/services.php \
  APP_PACKAGES_CACHE=/tmp/cache/packages.php \
  APP_CONFIG_CACHE=/tmp/cache/config.php \
  APP_ROUTES_CACHE=/tmp/cache/routes.php \
  APP_EVENTS_CACHE=/tmp/cache/events.php \
  VIEW_COMPILED_PATH=/tmp/cache/views \
  # SESSION_DRIVER=cookie \
  LOG_CHANNEL=stderr \
  DB_CONNECTION=mysql \
  DB_PORT=3306 

RUN apt-get update
RUN apt-get -y install locales libicu-dev libzip-dev htop cron nano
RUN apt-get -y install default-mysql-client

RUN locale-gen en_US.UTF-8 && localedef -f UTF-8 -i en_US en_US.UTF-8
RUN docker-php-ext-install intl pdo_mysql zip bcmath exif

RUN apt-get clean && rm -rf /var/lib/apt/lists/* 

# Custom additions
RUN mkdir -p /tmp/cache
WORKDIR /work/backend


# This line is critical: it persists the initial environment variables so cron jobs can see them
RUN printenv > /etc/environment

# Route the log output to docker (container stdout)
RUN ln -sf /proc/1/fd/1 /var/log/laravel-scheduler.log


#ADD crontab /var/spool/cron/crontabs/root
#RUN chown root:crontab /var/spool/cron/crontabs/root
#RUN chmod 0600 /var/spool/cron/crontabs/root

#RUN crontab -l | { cat; echo "* * * * * . /usr/local/bin/php /work/backend/artisan config:cache && php artisan schedule:run >> /var/log/cron.log 2>&1"; } | crontab -
#RUN crontab -l | { cat; echo "* * * * * date >> /var/log/cron.log"; } | crontab -
#RUN crontab -l | { cat; echo "* * * * * echo hello > /proc/1/fd/1 2>/proc/1/fd/2"; } | crontab -


COPY crontab /etc/cron.d/crontab
RUN chmod 0644 /etc/cron.d/crontab
RUN crontab /etc/cron.d/crontab

CMD bash -c "/usr/local/bin/php /work/backend/artisan config:cache && cron -f"
cron

# This line is optional; the Dockerfile's printenv line is what really matters: #!/usr/bin/env bash
# This line is optional: SHELL=/bin/bash
PATH=/usr/bin:/usr/local/bin:$PATH
* * * * * cd /work/backend && php artisan schedule:run >> /var/log/cron.log 2>&1
#* * * * * cd /work/backend && php artisan schedule:run >> /var/log/cron.log 2>&1 && echo schedule > /proc/1/fd/1 2>/proc/1/fd/2
#* * * * * date >> /var/log/cron.log
#* * * * * echo hello > /proc/1/fd/1 2>/proc/1/fd/2
# The crontab file must end with an extra blank line
app/Console/Kernel.php

protected function schedule(Schedule $schedule)
    {
        // $schedule->command('inspire')->hourly();

        $fileCronLog = '/var/log/laravel-scheduler.log';  // dockerfile RUN ln -sf /proc/1/fd/1 /var/log/laravel-scheduler.log

        // cron('* * 26 * *')
        $schedule->command('your command')->timezone(config('app.timezone'))->everyMinute()->onOneServer()  
            ->before(function () {
                Log::info("Schedule your command before!");
            })
            ->after(function () {
                Log::info("Schedule your command after!");
            })
            ->onSuccess(function (Stringable $output) {
                Log::info("Schedule your command onSuccess!");
            })
            ->onFailure(function (Stringable $output) {
                Log::error("Schedule your command onFailure!");
            })
            ->appendOutputTo($fileCronLog);
    }

[Fact check] Media reports: "Major EU member states recently agreed to include nuclear power in the 'green transition' as part of addressing carbon emissions, hoping countries will use nuclear power to reach carbon neutrality before 2050"?

https://tfc-taiwan.org.tw/articles/2192 

黃其君


✔ Normal fact-checking procedure:
Consult primary sources first; if something is unclear, ask experts.
✖ TFC Taiwan FactCheck Center:
Asked (biased) experts, without verifying the experts' statements.

Also, the first to say nuclear power was included in the EU's green transition was not Taiwanese media or some pro-nuclear platform; it was the world's largest news agency. The Associated Press reported the European Council summit conclusions under the headline "EU leaders include nuclear energy in green transition", and the New York Times, Japan Times, and others republished it. Once translated, that became "the EU includes nuclear power in the green transition" in Taiwan. So how does Taiwan's anti-nuclear camp keep calling the world's largest news agency fake news?

▉ Associated Press
https://reurl.cc/5gmyV6

Professor 范建德 used the 2020/01/10 bill text to conclude the EU was going nuclear-free, but many versions were under discussion on 1/10 and none of them was final; the anti-nuclear one was proposed by the Greens... it even demanded the EU be nuclear-free and coal-free by 2030... so it is no surprise all of that was eventually deleted.

▉ Bill texts
2020/01/15 (final version, without anti-nuclear wording)
https://reurl.cc/RdNl3Z
2020/01/10 (draft without anti-nuclear wording)
https://reurl.cc/4gx79v
2020/01/10 (anti-nuclear draft)
https://reurl.cc/Gk1R2v

And researcher 趙 of the NTU Risk Society and Policy Research Center really shows you what over-interpretation looks like...

The EU's technical document clearly acknowledges nuclear power's important role among low-carbon energy sources, but concedes that on risk management, such as nuclear waste, it is hard to rule it out under "do no harm". Crucially, it states this is because no deep geological disposal facility for spent nuclear fuel is in operation yet, so there is no real-world case to evaluate. The report will also discuss nuclear power in further detail.

It is not, as researcher 趙 claims, that there is no solution or that the waste must be isolated for millions of years... could you stop filling in your own blanks? The report itself tells you a solution exists...

▉ Read the report here
https://reurl.cc/9zlYz8

Regarding the long-term management of High-Level Waste (HLW), there is an international consensus that a safe, long-term technical solution is needed to solve the present unsustainable situation. A combination of temporary storage plus permanent disposal in geological formation is the most promising, with some countries are leading the way in implementing those solutions. Yet nowhere in the world has a viable, safe and long-term underground repository been established. It was therefore infeasible for the TEG to undertake a robust DNSH assessment as no permanent, operational disposal site for HLW exists yet from which long-term empirical, in-situ data and evidence to inform such an evaluation for nuclear energy.

Given these limitations, it was not possible for TEG, nor its members, to conclude that the nuclear energy value chain does not cause significant harm to other environmental objectives on the time scales in question. The TEG has not therefore recommended the inclusion of nuclear energy in the Taxonomy at this stage. Further, the TEG recommends that more extensive technical work is undertaken on the DNSH aspects of nuclear energy in future and by a group with in-depth technical expertise on nuclear life cycle technologies and the existing and potential environmental impacts across all objectives.

https://hugo-m9c19ra1o-sueboy.vercel.app/

[Repost] PHP 8.2 Remove libmysql mysqli

https://m.facebook.com/story.php?story_fbid=10216824504164331&id=1815507975

《 PHP RFC: Remove support for libmysql from mysqli 》

» https://wiki.php.net/rfc/mysqli_support_for_libmysql

The PHP core team has voted to remove libmysql support from mysqli, taking effect in PHP 8.2. The discussion drew attention from before Lunar New Year until after it, and on 2022/2/5 the vote passed unanimously.

For everyday PHP developers this is good news: no more deciding between libmysql and mysqlnd for MySQL, and one less interview question comparing the two (though many interviewers no longer know the difference either).

If you want to understand the pros and cons of libmysql vs mysqlnd, the official RFC helpfully lists them. The RFC does not, however, mention "License" considerations, which matter especially in commercial settings. This is where the PHP ecosystem differs from communities such as Python and Ruby.

To connect Python to MySQL, you usually pick MySQL Connector or MySQLdb, but both depend on libmysqlclient (the MySQL C library), whose license [1] is primarily GPL-2.0, which in turn affects the licensing of the whole product/project.

To connect Ruby to MySQL, you usually pick mysql2, which likewise depends on libmysqlclient and carries the same potential commercial issue.

More importantly, the company behind libmysqlclient is Oracle.

So why doesn't PHP's mysqlnd have this problem? Because mysqlnd does not depend on libmysqlclient; it is implemented as a pure PHP extension under the PHP License, which avoids a lot of unnecessary commercial trouble.

Python does have PyMySQL [2], which avoids libmysqlclient, and Ruby has mysql-rb [3], but their performance mostly falls short of the libmysqlclient implementations, unlike PHP's mysqlnd, whose performance mostly beats or comes close to libmysql.

Of course, if Python or Ruby projects want to keep using MySQLdb or mysql2 without sacrificing performance, alternative solutions do exist.

[1] https://dev.mysql.com/downloads/c-api/

[2] https://github.com/PyMySQL/PyMySQL

[3] https://github.com/kirs/mysql-rb

==========

https://hugo-kuh8tm78y-sueboy.vercel.app/