Add some comments for recent code (#22725)
When using the main branch, I found that some changed code didn't have comments. This PR adds some comments.
parent 368d43643f
commit ccb3851281

@@ -10,6 +10,35 @@ import (
"code.gitea.io/gitea/models/db"
)

/*
The reasons behind the DBFS (database-filesystem) package:

When a Gitea action is running, the Gitea action server should collect and store all the logs.

The requirements are:

* The running logs must be stored across the cluster if the Gitea servers are deployed as a cluster.
* The logs will be archived to Object Storage (S3/MinIO, etc.) after a period of time.
* The Gitea action UI should be able to render the running logs and the archived logs.

Some possible solutions for the running logs:

* [Not ideal] Using a local temp file: it cannot be shared across the cluster.
* [Not ideal] Using a shared file in the filesystem of the git repository: although at the moment the Gitea
  cluster's git repositories must be stored in a shared filesystem, in the future Gitea may need a dedicated
  Git Service Server to decouple from the shared filesystem, and then the action logs would become a blocker.
* [Not ideal] Recording the logs in a database table line by line: it has a couple of problems:
  - It's difficult to generate multiple increasing sequences (log line numbers) across different databases.
  - The database table will have a huge number of rows and be affected by the big-table performance problem.
  - It's difficult to load logs by using the same interface as other storages.
  - It's difficult to calculate the size of the logs.

The DBFS solution:

* It can be used in a cluster.
* It can share the same interface (Read/Write/Seek) as other storages.
* It's very database-friendly because it needs to store far fewer rows than the log-line solution.
* In the future, when Gitea Actions needs to limit the log size (other CI/CD services also do so), it will be
  easy to calculate the log file size.
* Even when the UI needs to render only the tailing lines, they can be found by seeking from the end of the
  file and counting the "\n" characters. Seeking and scanning is not the fastest way, but it's still
  acceptable and won't affect performance too much.
*/

type dbfsMeta struct {
	ID       int64  `xorm:"pk autoincr"`
	FullPath string `xorm:"VARCHAR(500) UNIQUE NOT NULL"`
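
To make the design above concrete, here is a minimal, self-contained sketch of the chunked-file idea. Everything in it (the dbFile type, chunkStore, the in-memory map standing in for the chunk table, and the chunk size) is an illustrative assumption, not the actual Gitea implementation:

package main

import (
	"fmt"
	"io"
)

const chunkSize = 4096 // illustrative; a real implementation would tune this

// chunkStore stands in for the database chunk table: one row per
// fixed-size chunk, so a long log costs about size/chunkSize rows
// instead of one row per log line.
type chunkStore map[int64][]byte

// dbFile exposes the same Read/Write/Seek interface as the other storages.
type dbFile struct {
	chunks chunkStore
	offset int64
	size   int64
}

var _ io.ReadWriteSeeker = (*dbFile)(nil)

// newDBFile stands in for looking up / creating a dbfsMeta row.
func newDBFile() *dbFile { return &dbFile{chunks: chunkStore{}} }

// Seek only moves the in-memory offset; no storage access is needed.
func (f *dbFile) Seek(offset int64, whence int) (int64, error) {
	var abs int64
	switch whence {
	case io.SeekStart:
		abs = offset
	case io.SeekCurrent:
		abs = f.offset + offset
	case io.SeekEnd:
		abs = f.size + offset
	default:
		return 0, fmt.Errorf("invalid whence %d", whence)
	}
	if abs < 0 {
		return 0, fmt.Errorf("negative position %d", abs)
	}
	f.offset = abs
	return abs, nil
}

// Write fills whole chunks; a real DBFS would INSERT or UPDATE one row
// per touched chunk here.
func (f *dbFile) Write(p []byte) (int, error) {
	for n := 0; n < len(p); {
		idx, in := f.offset/chunkSize, f.offset%chunkSize
		chunk := f.chunks[idx]
		if int64(len(chunk)) < chunkSize {
			chunk = append(chunk, make([]byte, chunkSize-int64(len(chunk)))...)
		}
		c := copy(chunk[in:], p[n:])
		f.chunks[idx] = chunk
		n += c
		f.offset += int64(c)
		if f.offset > f.size {
			f.size = f.offset
		}
	}
	return len(p), nil
}

// Read copies out of the chunks covering [offset, offset+len(p)); a real
// DBFS would SELECT those rows. Assumes the file was written contiguously.
func (f *dbFile) Read(p []byte) (int, error) {
	if f.offset >= f.size {
		return 0, io.EOF
	}
	n := 0
	for n < len(p) && f.offset < f.size {
		idx, in := f.offset/chunkSize, f.offset%chunkSize
		end := int64(chunkSize)
		if f.size-idx*chunkSize < end {
			end = f.size - idx*chunkSize
		}
		c := copy(p[n:], f.chunks[idx][in:end])
		n += c
		f.offset += int64(c)
	}
	return n, nil
}

func main() {
	f := newDBFile()
	_, _ = f.Write([]byte("line 1\nline 2\nline 3\n"))
	_, _ = f.Seek(0, io.SeekStart)
	buf := make([]byte, 7)
	_, _ = f.Read(buf)
	fmt.Printf("%q\n", buf) // "line 1\n"
}

And a sketch of the tail-rendering trick from the last bullet, counting "\n" from the end of the file using only Seek and Read (tailOffset is a hypothetical helper, not part of the package):

package dbfs

import "io"

// tailOffset returns the offset at which the last n lines begin, by
// seeking to the end of the file and scanning backwards for '\n' in
// fixed-size steps.
func tailOffset(f io.ReadSeeker, n int) (int64, error) {
	end, err := f.Seek(0, io.SeekEnd)
	if err != nil {
		return 0, err
	}
	buf := make([]byte, 4096)
	newlines := 0
	for pos := end; pos > 0; {
		step := int64(len(buf))
		if pos < step {
			step = pos
		}
		pos -= step
		if _, err := f.Seek(pos, io.SeekStart); err != nil {
			return 0, err
		}
		if _, err := io.ReadFull(f, buf[:step]); err != nil {
			return 0, err
		}
		// logs end with '\n', so the (n+1)-th newline from the end
		// marks the byte just before the last n lines
		for i := step - 1; i >= 0; i-- {
			if buf[i] == '\n' {
				newlines++
				if newlines > n {
					return pos + i + 1, nil
				}
			}
		}
	}
	return 0, nil // fewer than n lines: the tail is the whole file
}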
@@ -19,6 +19,8 @@ import (
func AddCacheControlToHeader(h http.Header, maxAge time.Duration, additionalDirectives ...string) {
	directives := make([]string, 0, 2+len(additionalDirectives))

	// "max-age=0 + must-revalidate" (aka "no-cache") is preferred over "no-store",
	// because browsers may restore some input fields after navigating back to / reloading a page.
	if setting.IsProd {
		if maxAge == 0 {
			directives = append(directives, "max-age=0", "private", "must-revalidate")
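
For illustration, two made-up callers (the handler names are hypothetical, and the helper is assumed to be in scope in the same package):

package web

import (
	"net/http"
	"time"
)

// serveAsset: long-lived static content; extra directives such as
// "immutable" are passed through additionalDirectives.
func serveAsset(w http.ResponseWriter, _ *http.Request) {
	AddCacheControlToHeader(w.Header(), 365*24*time.Hour, "immutable")
}

// servePage: a dynamic page. With maxAge == 0 in production, the helper
// appends "max-age=0", "private" and "must-revalidate" (the "no-cache"
// behavior) rather than "no-store", so browsers may still restore form
// inputs after navigate-back / reload.
func servePage(w http.ResponseWriter, _ *http.Request) {
	AddCacheControlToHeader(w.Header(), 0)
}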
@@ -126,6 +126,7 @@ INTERNAL_TOKEN = eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJuYmYiOjE0OTU1NTE2MTh9.h
ENABLED = true

[email.incoming]
; temporarily disabled because the incoming mail tests are flaky: during integration tests, the IMAP server sometimes isn't ready in time.
ENABLED = false
HOST = smtpimap
PORT = 993
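
A readiness poll is one common way to remove this kind of flakiness before re-enabling the tests. A minimal sketch, where waitForIMAP is a hypothetical helper and the host/port come from the config above:

package tests

import (
	"fmt"
	"net"
	"time"
)

// waitForIMAP dials the IMAP port until the server accepts connections,
// so the incoming-mail tests don't start before it is ready.
func waitForIMAP(host string, port int, timeout time.Duration) error {
	addr := fmt.Sprintf("%s:%d", host, port)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn.Close()
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("IMAP server %s not ready within %v", addr, timeout)
}

// usage in a test setup: _ = waitForIMAP("smtpimap", 993, 30*time.Second)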