Additional issue: sending a PR from the Jenkins server to OpenShift returned a 503 error. Logging into OpenShift over ssh and checking the logs showed the error "Node Sass does not yet support your current environment". To fix the module, log in and run: `npm rebuild node-sass`
The build job fails with the following error message:
```
...
19:44:46 Host key verification failed.
19:44:46 fatal: Could not read from remote repository.
...
```
Cause: host key verification failed. This error means my Jenkins server does not recognize the remote OpenShift server's host key. The root cause is that the Jenkins service runs as NT AUTHORITY\SYSTEM; since I already have an account with the appropriate permissions, all I need to do is switch the identity the Jenkins service runs under.
A processor can really only do one thing at any given moment: it is a single entity in a single place and time. A multitasking operating system divides a long stretch of time into many short time slices, and in each slice the processor runs only one process, even though several processes may need attention at once. In the first slice the OS lets the processor work on process A; when the slice expires, A is "suspended" no matter how far it has gotten (it may no longer use the processor, though it can still hold other resources such as memory, disk, or screen output). In the second slice the processor works on process B, and when that slice expires, B is suspended midway just like A.
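The time-slice scheduling described above can be sketched as a simple round-robin loop (the `Process` type and the returned run log are illustrative, not an OS API):

```typescript
// Minimal round-robin scheduler sketch: each time slice runs one process
// for at most `slice` units of work, then suspends it and moves on,
// until every process has finished.
interface Process {
  name: string;
  remaining: number; // units of work left
}

function roundRobin(processes: Process[], slice: number): string[] {
  const order: string[] = []; // log of "name:unitsRun" per slice
  const queue = processes.map((p) => ({ ...p })); // don't mutate the input
  while (queue.length > 0) {
    const p = queue.shift()!;
    const run = Math.min(slice, p.remaining);
    p.remaining -= run;
    order.push(`${p.name}:${run}`);
    if (p.remaining > 0) queue.push(p); // suspended, rescheduled later
  }
  return order;
}
```

With two processes A (3 units) and B (2 units) and a slice of 2, the log is A:2, B:2, A:1 — A is suspended mid-work exactly as described above.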
A leader takes people where they want to go. A great leader takes people where they don't necessarily want to go, but ought to be.
A program database (.pdb) file, also called a symbol file, maps the identifiers that you create in source files for classes, methods, and other code to the identifiers that are used in the compiled executables of your project.
What a PDB records:
- Source code file names
- Line numbers and local variable names

In what order does the debugger search for .pdb files?
1. The location specified inside the DLL or executable file.
2. A .pdb file that may exist in the same folder as the DLL or executable file.
3. Any local symbol cache folders.
4. Any network, internet, or local symbol servers and locations that are specified, such as the Microsoft symbol server (if enabled).
```
D:\Repo\Mark_Lin\PROJ\WebAPI\Controllers\BookController.cs
  line 46 at [00001F16][0001:00001F16], len = 0x1
  line 47 at [00001F17][0001:00001F17], len = 0x6
  line 48 at [00001F1D][0001:00001F1D], len = 0x1C
```
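The search order listed above is a priority probe: the debugger tries each candidate location in turn and uses the first .pdb it finds. A minimal sketch (the `exists` callback and the path strings are hypothetical stand-ins for a file-system check):

```typescript
// Probe an ordered list of candidate .pdb locations; the first hit wins,
// mirroring the debugger's documented priority order.
function findPdb(
  candidates: string[],
  exists: (path: string) => boolean
): string | null {
  for (const path of candidates) {
    if (exists(path)) return path; // earlier entries take priority
  }
  return null; // no symbols found anywhere
}
```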
| Field | Meaning | Explanation |
|-------|---------|-------------|
| inst | commands issued in the last time slice | in the last time slice: 0 commands have been issued |
| mgr | state of the socket manager | the socket manager is performing `socket.select`, i.e. asking the OS to indicate a socket that has something to do; the reader is not actively reading from the network because it doesn't think there is anything to do |
| queue | operations in progress | there are 73 total in-progress operations |
| qu | unsent queue | 6 of those are in the unsent queue: they have not yet been written to the outbound network |
| qs | sent, awaiting response | 67 of those have been sent to the server but a response is not yet available; the response may not yet have been sent by the server, or may have been sent but not yet processed by the client |
| qc | awaiting the completion loop | 0 of those have seen replies but have not yet been marked complete, because they are waiting on the completion loop |
| wr | active writer (reported as bytes/activewriters) | there is an active writer, meaning the 6 unsent operations are not being ignored |
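When triaging timeouts it helps to pull these numeric fields out of the message text programmatically. A small sketch (the sample message below is illustrative; the field names are the ones documented above):

```typescript
// Extract "name: number" pairs (inst, queue, qu, qs, qc, ...) from a
// StackExchange.Redis-style timeout message into a lookup table.
function parseTimeoutFields(message: string): Record<string, number> {
  const out: Record<string, number> = {};
  const re = /([A-Za-z]+):\s*(\d+)/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(message)) !== null) {
    out[m[1]] = Number(m[2]);
  }
  return out;
}
```

For example, `parseTimeoutFields("inst: 0, queue: 73, qu: 6, qs: 67, qc: 0")` yields `qs: 67` and `qu: 6`, the two queues discussed above.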
Problem: when physical memory runs out, the system starts reading from virtual memory (disk), and performance drops. Memory pressure on the client machine leads to all kinds of performance problems that can delay processing of data the Redis instance sent without any delay of its own. When memory pressure hits, the system typically has to page data from physical memory to virtual memory, which is on disk. This page faulting causes the system to slow down significantly.
Measurement: Monitor memory usage on the machine to make sure it does not exceed available memory, and monitor the Page Faults/Sec perf counter. Most systems have some page faults even during normal operation, so watch for spikes in this counter that correspond with timeouts.
Resolution:
Add memory or reduce memory usage: upgrade to a larger client VM size with more memory, or dig into your memory usage patterns to reduce memory consumption.
Burst of traffic
Problem:
A sudden burst of work floods the ThreadPool queue faster than new threads can be created. Bursts of traffic combined with poor ThreadPool settings can result in delays in processing data already sent by the Redis server but not yet consumed on the client side.
Measurement:
Monitor how your ThreadPool statistics change over time (for example, by periodically sampling ThreadPool.GetAvailableThreads and ThreadPool.GetMinThreads). You can also look at the TimeoutException message from StackExchange.Redis. In such a message, several things are interesting: notice that in the "IOCP" section and the "WORKER" section there is a "Busy" value greater than the "Min" value, which means your ThreadPool settings need adjusting. You may also see something like "in: 64221", which indicates that 64221 bytes have been received at the kernel socket layer but haven't yet been read by the application (e.g. StackExchange.Redis). This typically means that your application isn't reading data from the network as quickly as the server is sending it.
Resolution:
Adjust your ThreadPool settings: configure them so that your threadpool scales up quickly under burst scenarios.
High CPU usage (CPU overload)
Problem:
High CPU usage on the client indicates that the system cannot keep up with the work it has been asked to perform. Even though the response from Redis can arrive very quickly, because the CPU isn't keeping up with the workload the response sits in the socket's kernel buffer waiting to be processed. If the delay is long enough, a timeout occurs in spite of the requested data having already arrived from the server.
Measurement:
Monitor system-wide CPU usage through the Azure portal or the associated perf counter. Be careful not to monitor process CPU, because a single process can have low CPU usage while overall system CPU is high. Watch for spikes in CPU usage that correspond with timeouts. As a result of high CPU, you may also see high "in: XXX" values in TimeoutException messages, as described above in the "Burst of traffic" section. Note that newer builds of StackExchange.Redis print client-side CPU in the timeout error message, as long as the environment doesn't block access to the CPU perf counter.
Note:
StackExchange.Redis version 1.1.603 or later now prints out “local-cpu” usage when a timeout occurs to help understand when client-side CPU usage may be affecting performance.
Resolution:
Add CPU capacity or find the cause of the overload: upgrade to a larger VM size with more CPU, or investigate what is causing the CPU spikes.
Client Side Bandwidth Exceeded (insufficient bandwidth)
Problem:
Different sized client machines have limitations on how much network bandwidth they have available. If the client exceeds the available bandwidth, then data will not be processed on the client side as quickly as the server is sending it. This can lead to timeouts.
Measurement:
Monitor how your bandwidth usage changes over time with a small amount of client-side monitoring code. Note that such code may not run successfully in some environments with restricted permissions (like Azure WebSites).
Resolution:
Increase bandwidth or reduce usage: increase the client VM size, or reduce network bandwidth consumption.
Large Request/Response Size (oversized requests/responses)
Problem:
As the diagram below shows, requests A and B are both too large; when they are issued at the same time, A's response takes so long that B times out. A large request/response can cause timeouts. As an example, suppose your configured timeout value is 1 second. Your application requests two keys (e.g. 'A' and 'B') at the same time using the same physical network connection. Most clients support "pipelining" of requests, such that both requests 'A' and 'B' are sent on the wire to the server one after the other without waiting for the responses. The server will send the responses back in the same order. If response 'A' is large enough, it can eat up most of the timeout budget for subsequent requests.
Below, I will try to demonstrate this. In this scenario, requests 'A' and 'B' are sent quickly and the server starts sending responses 'A' and 'B' quickly, but because of data transfer times, 'B' gets stuck behind the other request and times out even though the server responded quickly.
```
|-------- 1 Second Timeout (A)----------|
|-Request A-|
     |-------- 1 Second Timeout (B) ----------|
     |-Request B-|
            |- Read Response A --------|
                                       |- Read Response B-| (**TIMEOUT**)
```
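The head-of-line blocking in the diagram can be sketched numerically: on a single pipelined connection, responses come back in request order, so B's wait includes A's entire transfer time. The millisecond values below are illustrative:

```typescript
// Responses on one pipelined connection finish sequentially: each one
// must wait for all earlier responses to finish transferring first.
function responseFinishTimes(transferMs: number[], startMs: number): number[] {
  let t = startMs;
  return transferMs.map((d) => (t += d)); // cumulative finish time per response
}

// A request times out when its response finishes later than its deadline.
function timesOut(finishMs: number, sentMs: number, timeoutMs: number): boolean {
  return finishMs - sentMs > timeoutMs;
}
```

If response A takes 900 ms to transfer and B only 200 ms, B still finishes at 1100 ms and blows a 1-second timeout, even though the server answered both promptly.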
Measurement:
This is a difficult one to measure. You basically have to instrument your client code to track large requests and responses.
Resolution:
Split the data you need into several smaller pieces and fetch them separately. Redis is optimized for a large number of small values, rather than a few large values, so the preferred solution is to break up your data into related smaller values (smaller values keep individual transfers short, which is why they are recommended). Other options:
- Increase the size of your VM (for both the client and the Redis Cache server) to get higher bandwidth capabilities, reducing data transfer times for larger responses. Note that getting more bandwidth on just the server or just the client may not be enough; measure your bandwidth usage and compare it to the capabilities of your current VM size.
- Increase the number of ConnectionMultiplexer objects you use and round-robin requests over the different connections (i.e. use a connection pool). If you go this route, make sure that you don't create a brand-new ConnectionMultiplexer for each request, as the overhead of creating the new connection will kill your performance.
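The "break data into smaller values" option can be sketched generically. The `<key>:chunk:<n>` naming scheme below is my own illustration, not a Redis convention; in practice each pair would be stored and fetched with separate small SET/GET calls:

```typescript
// Split one large value into fixed-size chunks under derived keys,
// so each individual request/response stays small.
function toChunks(
  key: string,
  value: string,
  chunkSize: number
): Array<[string, string]> {
  const pairs: Array<[string, string]> = [];
  for (let i = 0, n = 0; i < value.length; i += chunkSize, n++) {
    pairs.push([`${key}:chunk:${n}`, value.slice(i, i + chunkSize)]);
  }
  return pairs;
}

// Reassemble the original value from its chunks (assumed in order).
function fromChunks(pairs: Array<[string, string]>): string {
  return pairs.map(([, v]) => v).join("");
}
```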
```typescript
Init() {
  // Restore the todo list from localStorage (or start with an empty list).
  var list = window.localStorage.getItem("todoList");
  if (!list) {
    this.List = new Array<todoItem>();
  } else {
    this.List = JSON.parse(list);
  }

  // Persist the list before the page unloads.
  window.onbeforeunload = (evt) => {
    window.localStorage.setItem("todoList", JSON.stringify(this.List));
  };

  // Mark a task as done.
  $(".todolist").on("change", '#sortable li input[type="checkbox"]', (evt) => {
    var self = evt.target;
    var text = $(self).parent().text();
    if ($(self).prop("checked")) {
      var doneItem = this.List.filter((i) => {
        return text == i.Content;
      })[0];
      this.Delete(doneItem);
      this.Render();
    }
  });

  // Add a new task on Enter.
  $(".add-todo").on("keypress", (evt) => {
    if (evt.which == 13) {
      evt.preventDefault(); // keep Enter from submitting the form
      if ($(evt.target).val() != "") {
        this.Create({
          Content: $(evt.target).val(),
          Status: todoStatus.undo,
        });
        this.Render();
      } else {
        // some validation
      }
    }
  });

  // Recover a completed task.
  $(".todolist").on("click", "#done-items li button.recover-item", (evt) => {
    var text = $(evt.target).parent().parent().text();
    var recoverItem = this.List.filter((i) => {
      return text == i.Content;
    })[0];
    recoverItem.Status = todoStatus.undo;
    this.Render();
  });

  // Permanently remove a completed task.
  $(".todolist").on("click", "#done-items li button.remove-item", (evt) => {
    var text = $(evt.target).parent().parent().text();
    var removeItem = this.List.filter((i) => {
      return text == i.Content;
    })[0];
    var index = this.List.indexOf(removeItem);
    this.List.splice(index, 1);
    this.Render();
  });
}
```