V0.2.10 (#220)
This commit is contained in:
parent b4b92bf852
commit 1608877813

CHANGELOG.md (17 lines changed)

@@ -2,6 +2,23 @@
All notable changes to this project will be documented in this file. For commit guidelines, please refer to [Standard Version](https://github.com/conventional-changelog/standard-version).

## v0.2.10

**New Features**:

- Allows user creation via command line arguments https://github.com/gtsteffaniak/filebrowser/issues/196
- Folder sizes are always shown, leveraging the index. https://github.com/gtsteffaniak/filebrowser/issues/138
- Searching files based on file size is no longer slower than other searches.

**Bugfixes**:

- Fixed file selection when in single-click mode https://github.com/gtsteffaniak/filebrowser/issues/214
- Fixed the displayed search context on the root directory
- Fixed an issue where searching "smaller than" actually returned files "larger than"

**Notes**:

- Memory usage from the index is reduced by ~40%.
- Indexing time has increased 2x due to the extra processing required to calculate directory sizes.
- File size calculations use base 1024 vs the previous base 1000 (matching Windows Explorer).
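The base-1024 vs base-1000 distinction noted above can be illustrated with a short, self-contained Go sketch (illustrative only; the `humanSize` helper is hypothetical and not part of the filebrowser codebase):

```go
package main

import "fmt"

// humanSize formats n bytes using the given base: 1024 produces
// binary units (as Windows Explorer does), 1000 decimal units.
func humanSize(n float64, base float64, units []string) string {
	i := 0
	for n >= base && i < len(units)-1 {
		n /= base
		i++
	}
	return fmt.Sprintf("%.2f %s", n, units[i])
}

func main() {
	bin := []string{"B", "KiB", "MiB", "GiB"}
	dec := []string{"B", "KB", "MB", "GB"}
	// The same byte count reads smaller in base 1024.
	fmt.Println(humanSize(1500000, 1024, bin)) // 1.43 MiB
	fmt.Println(humanSize(1500000, 1000, dec)) // 1.50 MB
}
```

This is why reported file sizes change slightly after this release even though the files themselves did not.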

## v0.2.9

This release focused on the UI navigation experience, improving keyboard navigation and adding a right-click context menu.

README.md (45 lines changed)

@@ -6,7 +6,7 @@
</p>
<h3 align="center">FileBrowser Quantum - A modern web-based file manager</h3>
<p align="center">
<img width="800" src="https://github.com/user-attachments/assets/8ba93582-aba2-4996-8ac3-25f763a2e596" title="Main Screenshot">
<img width="800" src="https://private-user-images.githubusercontent.com/42989099/367975355-3d6f4619-4985-4ce3-952f-286510dff4f1.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjgxNTA2MjEsIm5iZiI6MTcyODE1MDMyMSwicGF0aCI6Ii80Mjk4OTA5OS8zNjc5NzUzNTUtM2Q2ZjQ2MTktNDk4NS00Y2UzLTk1MmYtMjg2NTEwZGZmNGYxLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDEwMDUlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQxMDA1VDE3NDUyMVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTg1OGNlMWM3M2I1ZmY3MDcxMGU1ODc3N2ZkMjI5YWQ3YzEyODRmNDU0ZDkxMjJhNTU0ZGY1MDQ2YmIwOWRmMTgmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.mOl0Hm70XmQEk-DPzx1FbwrpxNMDAqb-WDprs1HK-mc" title="Main Screenshot">
</p>

> [!WARNING]
@@ -18,9 +18,9 @@
FileBrowser Quantum is a fork of the filebrowser opensource project with the
following changes:

1. [x] Enhanced lightning fast indexed search
   - Real-time results as you type
   - Works with more type filters
1. [x] Efficiently indexed files
   - Real-time search results as you type
   - Search works with more type filters
   - Enhanced interactive results page.
2. [x] Revamped and simplified GUI navbar and sidebar menu.
   - Additional compact view mode as well as refreshed view mode

@@ -131,39 +131,30 @@ Not using docker (not recommended), download your binary from releases and run w
./filebrowser -c <filebrowser.yml or other /path/to/config.yaml>
```

## Command Line Usage

There are very few commands available. There are 3 actions done via the command line:

1. Running the program, as shown in the install step. The only argument used is the config file, if you choose to override the default "filebrowser.yaml"
2. Checking the version info via `./filebrowser version`
3. Updating the DB, which currently only supports adding users via `./filebrowser set -u username,password [-a] [-s "example/scope"]`
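The `set` action above maps onto the standard-library `flag.FlagSet` subcommand pattern this commit adopts for the backend. A minimal, self-contained sketch of how the `-u username,password` pair can be parsed (the `parseSetArgs` helper is illustrative, not the actual filebrowser implementation; store wiring is omitted):

```go
package main

import (
	"flag"
	"fmt"
	"os"
	"strings"
)

// parseSetArgs mirrors the "set" subcommand: -u takes a
// comma-separated "username,password" pair, -a marks the user admin.
func parseSetArgs(args []string) (username, password string, admin bool, err error) {
	setCmd := flag.NewFlagSet("set", flag.ContinueOnError)
	user := setCmd.String("u", "", "Comma-separated username and password")
	asAdmin := setCmd.Bool("a", false, "Create user as admin")
	if err = setCmd.Parse(args); err != nil {
		return
	}
	parts := strings.Split(*user, ",")
	if len(parts) < 2 {
		err = fmt.Errorf("not enough info to create user: \"set -u username,password\"")
		return
	}
	return parts[0], parts[1], *asAdmin, nil
}

func main() {
	// Simulates: ./filebrowser set -u alice,secret -a
	u, p, admin, err := parseSetArgs([]string{"-u", "alice,secret", "-a"})
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Printf("user=%s pass=%s admin=%v\n", u, p, admin)
}
```

Note that the subcommand's flags are parsed against the arguments after the subcommand name, which keeps global flags such as `-c` independent of `set`-specific ones.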

## Configuration

All configuration is now done via a single configuration file:
`filebrowser.yaml`, here is an example of a minimal [configuration
file](./backend/filebrowser.yaml).

View the [Configuration Help Page](./configuration.md) for available
View the [Configuration Help Page](./docs/configuration.md) for available
configuration options and other help.

## Migration from filebrowser/filebrowser

If you currently use the original opensource filebrowser
but want to try using this, I recommend you start fresh without
reusing the database, but there are a few things you'll need to do if you
must migrate:

1. Create a configuration file as mentioned above.
2. Copy your database file from the original filebrowser to the path of
   the new one.
3. Update the configuration file to use the database (under server in
   filebrowser.yml)
4. If you are using docker, update the docker-compose file or docker run
   command to use the config file as described in the install section
   above.
5. If you are not using docker, just make sure you run filebrowser -c
   filebrowser.yml and have a valid filebrowser config.

The filebrowser Quantum application should run with the same users and rules that
you have from the original. But keep in mind the differences that are
mentioned at the top of this readme.

If you currently use the original filebrowser but want to try using this,
I recommend you start fresh without reusing the database. If you want to
migrate your existing database to FileBrowser Quantum, visit the [migration
readme](./docs/migration.md)

## Comparison Chart

@@ -217,4 +208,4 @@ Chromecast support | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ |

## Roadmap

see [Roadmap Page](./roadmap.md)
see [Roadmap Page](./docs/roadmap.md)

@@ -7,26 +7,27 @@
PASS
ok      github.com/gtsteffaniak/filebrowser/diskcache   0.004s
?       github.com/gtsteffaniak/filebrowser/errors      [no test files]
2024/10/07 12:46:34 could not update unknown type: unknown
goos: linux
goarch: amd64
pkg: github.com/gtsteffaniak/filebrowser/files
cpu: 11th Gen Intel(R) Core(TM) i5-11320H @ 3.20GHz
BenchmarkFillIndex-8                10    3559830 ns/op    274639 B/op     2026 allocs/op
BenchmarkSearchAllIndexes-8         10   31912612 ns/op  20545741 B/op   312477 allocs/op
BenchmarkFillIndex-8                10    3847878 ns/op    758424 B/op     5567 allocs/op
BenchmarkSearchAllIndexes-8         10     780431 ns/op    173444 B/op     2014 allocs/op
PASS
ok      github.com/gtsteffaniak/filebrowser/files       0.417s
ok      github.com/gtsteffaniak/filebrowser/files       0.073s
PASS
ok      github.com/gtsteffaniak/filebrowser/fileutils   0.002s
2024/08/27 16:16:13 h: 401 <nil>
2024/08/27 16:16:13 h: 401 <nil>
2024/08/27 16:16:13 h: 401 <nil>
2024/08/27 16:16:13 h: 401 <nil>
2024/08/27 16:16:13 h: 401 <nil>
2024/08/27 16:16:13 h: 401 <nil>
ok      github.com/gtsteffaniak/filebrowser/fileutils   0.003s
2024/10/07 12:46:34 h: 401 <nil>
2024/10/07 12:46:34 h: 401 <nil>
2024/10/07 12:46:34 h: 401 <nil>
2024/10/07 12:46:34 h: 401 <nil>
2024/10/07 12:46:34 h: 401 <nil>
2024/10/07 12:46:34 h: 401 <nil>
PASS
ok      github.com/gtsteffaniak/filebrowser/http        0.100s
ok      github.com/gtsteffaniak/filebrowser/http        0.080s
PASS
ok      github.com/gtsteffaniak/filebrowser/img         0.124s
ok      github.com/gtsteffaniak/filebrowser/img         0.137s
PASS
ok      github.com/gtsteffaniak/filebrowser/rules       0.002s
PASS

@@ -38,4 +39,5 @@ ok      github.com/gtsteffaniak/filebrowser/settings    0.004s
?       github.com/gtsteffaniak/filebrowser/storage/bolt        [no test files]
PASS
ok      github.com/gtsteffaniak/filebrowser/users       0.002s
?       github.com/gtsteffaniak/filebrowser/utils       [no test files]
?       github.com/gtsteffaniak/filebrowser/version     [no test files]

@@ -11,28 +11,28 @@ import (
	"os"
	"os/signal"
	"strconv"
	"strings"
	"syscall"

	"embed"

	"github.com/spf13/pflag"

	"github.com/spf13/cobra"

	"github.com/gtsteffaniak/filebrowser/auth"
	"github.com/gtsteffaniak/filebrowser/diskcache"
	"github.com/gtsteffaniak/filebrowser/files"
	fbhttp "github.com/gtsteffaniak/filebrowser/http"
	"github.com/gtsteffaniak/filebrowser/img"
	"github.com/gtsteffaniak/filebrowser/settings"
	"github.com/gtsteffaniak/filebrowser/storage"
	"github.com/gtsteffaniak/filebrowser/users"
	"github.com/gtsteffaniak/filebrowser/utils"
	"github.com/gtsteffaniak/filebrowser/version"
)

//go:embed dist/*
var assets embed.FS

var nonEmbededFS = os.Getenv("FILEBROWSER_NO_EMBEDED") == "true"
var (
	nonEmbededFS = os.Getenv("FILEBROWSER_NO_EMBEDED") == "true"
)

type dirFS struct {
	http.Dir

@@ -42,102 +42,119 @@ func (d dirFS) Open(name string) (fs.File, error) {
	return d.Dir.Open(name)
}

func init() {
	// Define a flag for the config option (-c or --config)
	configFlag := pflag.StringP("config", "c", "filebrowser.yaml", "Path to the config file")
	// Bind the flags to the pflag command line parser
	pflag.CommandLine.AddGoFlagSet(flag.CommandLine)
	pflag.Parse()
	log.Printf("Initializing FileBrowser Quantum (%v) with config file: %v \n", version.Version, *configFlag)
	log.Println("Embeded Frontend:", !nonEmbededFS)
	settings.Initialize(*configFlag)
func getStore(config string) (*storage.Storage, bool) {
	// Use the config file (global flag)
	log.Printf("Using Config file : %v", config)
	settings.Initialize(config)
	store, hasDB, err := storage.InitializeDb(settings.Config.Server.Database)
	if err != nil {
		log.Fatal("could not load db info: ", err)
	}
	return store, hasDB
}
var rootCmd = &cobra.Command{
	Use: "filebrowser",
	Run: python(func(cmd *cobra.Command, args []string, d pythonData) {
		serverConfig := settings.Config.Server
		if !d.hadDB {
			quickSetup(d)
		}
		if serverConfig.NumImageProcessors < 1 {
			log.Fatal("Image resize workers count could not be < 1")
		}
		imgSvc := img.New(serverConfig.NumImageProcessors)

		cacheDir := "/tmp"
		var fileCache diskcache.Interface

		// Use file cache if cacheDir is specified
		if cacheDir != "" {
			var err error
			fileCache, err = diskcache.NewFileCache(cacheDir)
			if err != nil {
				log.Fatalf("failed to create file cache: %v", err)
			}
		} else {
			// No-op cache if no cacheDir is specified
			fileCache = diskcache.NewNoOp()
		}
		// initialize indexing and schedule indexing every n minutes (default 5)
		go files.InitializeIndex(serverConfig.IndexingInterval, serverConfig.Indexing)
		_, err := os.Stat(serverConfig.Root)
		checkErr(fmt.Sprint("cmd os.Stat ", serverConfig.Root), err)
		var listener net.Listener
		address := serverConfig.Address + ":" + strconv.Itoa(serverConfig.Port)
		switch {
		case serverConfig.Socket != "":
			listener, err = net.Listen("unix", serverConfig.Socket)
			checkErr("net.Listen", err)
			socketPerm, err := cmd.Flags().GetUint32("socket-perm") //nolint:govet
			checkErr("cmd.Flags().GetUint32", err)
			err = os.Chmod(serverConfig.Socket, os.FileMode(socketPerm))
			checkErr("os.Chmod", err)
		case serverConfig.TLSKey != "" && serverConfig.TLSCert != "":
			cer, err := tls.LoadX509KeyPair(serverConfig.TLSCert, serverConfig.TLSKey) //nolint:govet
			checkErr("tls.LoadX509KeyPair", err)
			listener, err = tls.Listen("tcp", address, &tls.Config{
				MinVersion:   tls.VersionTLS12,
				Certificates: []tls.Certificate{cer}},
			)
			checkErr("tls.Listen", err)
		default:
			listener, err = net.Listen("tcp", address)
			checkErr("net.Listen", err)
		}
		sigc := make(chan os.Signal, 1)
		signal.Notify(sigc, os.Interrupt, syscall.SIGTERM)
		go cleanupHandler(listener, sigc)
		if !nonEmbededFS {
			assetsFs, err := fs.Sub(assets, "dist")
			if err != nil {
				log.Fatal("Could not embed frontend. Does backend/cmd/dist exist? Must be built and exist first")
			}
			handler, err := fbhttp.NewHandler(imgSvc, fileCache, d.store, &serverConfig, assetsFs)
			checkErr("fbhttp.NewHandler", err)
			defer listener.Close()
			log.Println("Listening on", listener.Addr().String())
			//nolint: gosec
			if err := http.Serve(listener, handler); err != nil {
				log.Fatalf("Could not start server on port %d: %v", serverConfig.Port, err)
			}
		} else {
			assetsFs := dirFS{Dir: http.Dir("frontend/dist")}
			handler, err := fbhttp.NewHandler(imgSvc, fileCache, d.store, &serverConfig, assetsFs)
			checkErr("fbhttp.NewHandler", err)
			defer listener.Close()
			log.Println("Listening on", listener.Addr().String())
			//nolint: gosec
			if err := http.Serve(listener, handler); err != nil {
				log.Fatalf("Could not start server on port %d: %v", serverConfig.Port, err)
			}
		}

	}, pythonConfig{allowNoDB: true}),
func generalUsage() {
	fmt.Printf(`usage: ./filebrowser <command> [options]
commands:
	set      Update the database, currently only supports adding users
	version  Print version information
` + "\n")
}
func StartFilebrowser() {
	if err := rootCmd.Execute(); err != nil {
	// Global flags
	var configPath string
	var help bool
	// Override the default usage output to use generalUsage()
	flag.Usage = generalUsage
	flag.StringVar(&configPath, "c", "filebrowser.yaml", "Path to the config file.")
	flag.BoolVar(&help, "h", false, "Get help about commands")

	// Parse global flags (before subcommands)
	flag.Parse() // print generalUsage on error

	// Show help if requested
	if help {
		generalUsage()
		return
	}

	// Create a new FlagSet for the 'set' subcommand
	setCmd := flag.NewFlagSet("set", flag.ExitOnError)
	var user, scope, dbConfig string
	var asAdmin bool

	setCmd.StringVar(&user, "u", "", "Comma-separated username and password: \"set -u <username>,<password>\"")
	setCmd.BoolVar(&asAdmin, "a", false, "Create user as admin user, used in combination with -u")
	setCmd.StringVar(&scope, "s", "", "Specify a user scope, otherwise default user config scope is used")
	setCmd.StringVar(&dbConfig, "c", "filebrowser.yaml", "Path to the config file.")

	// Parse subcommand flags only if a subcommand is specified
	if len(os.Args) > 1 {
		switch os.Args[1] {
		case "set":
			err := setCmd.Parse(os.Args)
			if err != nil {
				setCmd.PrintDefaults()
				os.Exit(1)
			}
			userInfo := strings.Split(user, ",")
			if len(userInfo) < 2 {
				fmt.Println("not enough info to create user: \"set -u username,password\"")
				setCmd.PrintDefaults()
				os.Exit(1)
			}
			username := userInfo[0]
			password := userInfo[1]
			getStore(dbConfig)
			// Create the user logic
			if asAdmin {
				log.Printf("Creating user as admin: %s\n", username)
			} else {
				log.Printf("Creating user: %s\n", username)
			}
			newUser := users.User{
				Username: username,
				Password: password,
			}
			if scope != "" {
				newUser.Scope = scope
			}
			err = storage.CreateUser(newUser, asAdmin)
			if err != nil {
				log.Fatal("Could not create user: ", err)
			}
			return
		case "version":
			fmt.Println("FileBrowser Quantum - A modern web-based file manager")
			fmt.Printf("Version      : %v\n", version.Version)
			fmt.Printf("Commit       : %v\n", version.CommitSHA)
			fmt.Printf("Release Info : https://github.com/gtsteffaniak/filebrowser/releases/tag/%v\n", version.Version)
			return
		}
	}
	store, dbExists := getStore(configPath)
	indexingInterval := fmt.Sprint(settings.Config.Server.IndexingInterval, " minutes")
	if !settings.Config.Server.Indexing {
		indexingInterval = "disabled"
	}
	database := fmt.Sprintf("Using existing database : %v", settings.Config.Server.Database)
	if !dbExists {
		database = fmt.Sprintf("Creating new database : %v", settings.Config.Server.Database)
	}
	log.Printf("Initializing FileBrowser Quantum (%v)\n", version.Version)
	log.Println("Embeded frontend :", !nonEmbededFS)
	log.Println(database)
	log.Println("Sources :", settings.Config.Server.Root)
	log.Print("Indexing interval : ", indexingInterval)

	serverConfig := settings.Config.Server
	// initialize indexing and schedule indexing every n minutes (default 5)
	go files.InitializeIndex(serverConfig.IndexingInterval, serverConfig.Indexing)
	if err := rootCMD(store, &serverConfig); err != nil {
		log.Fatal("Error starting filebrowser:", err)
	}
}

@@ -149,37 +166,77 @@ func cleanupHandler(listener net.Listener, c chan os.Signal) { //nolint:interfac
	os.Exit(0)
}

func quickSetup(d pythonData) {
	settings.Config.Auth.Key = generateKey()
	if settings.Config.Auth.Method == "noauth" {
		err := d.store.Auth.Save(&auth.NoAuth{})
		checkErr("d.store.Auth.Save", err)
func rootCMD(store *storage.Storage, serverConfig *settings.Server) error {
	if serverConfig.NumImageProcessors < 1 {
		log.Fatal("Image resize workers count could not be < 1")
	}
	imgSvc := img.New(serverConfig.NumImageProcessors)

	cacheDir := "/tmp"
	var fileCache diskcache.Interface

	// Use file cache if cacheDir is specified
	if cacheDir != "" {
		var err error
		fileCache, err = diskcache.NewFileCache(cacheDir)
		if err != nil {
			log.Fatalf("failed to create file cache: %v", err)
		}
	} else {
		settings.Config.Auth.Method = "password"
		err := d.store.Auth.Save(&auth.JSONAuth{})
		checkErr("d.store.Auth.Save", err)
		// No-op cache if no cacheDir is specified
		fileCache = diskcache.NewNoOp()
	}
	err := d.store.Settings.Save(&settings.Config)
	checkErr("d.store.Settings.Save", err)
	err = d.store.Settings.SaveServer(&settings.Config.Server)
	checkErr("d.store.Settings.SaveServer", err)
	user := users.ApplyDefaults(users.User{})
	user.Username = settings.Config.Auth.AdminUsername
	user.Password = settings.Config.Auth.AdminPassword
	user.Perm.Admin = true
	user.Scope = "./"
	user.DarkMode = true
	user.ViewMode = "normal"
	user.LockPassword = false
	user.Perm = settings.Permissions{
		Create:   true,
		Rename:   true,
		Modify:   true,
		Delete:   true,
		Share:    true,
		Download: true,
		Admin:    true,

	fbhttp.SetupEnv(store, serverConfig, fileCache)

	_, err := os.Stat(serverConfig.Root)
	utils.CheckErr(fmt.Sprint("cmd os.Stat ", serverConfig.Root), err)
	var listener net.Listener
	address := serverConfig.Address + ":" + strconv.Itoa(serverConfig.Port)
	switch {
	case serverConfig.Socket != "":
		listener, err = net.Listen("unix", serverConfig.Socket)
		utils.CheckErr("net.Listen", err)
		err = os.Chmod(serverConfig.Socket, os.FileMode(0666)) // socket-perm
		utils.CheckErr("os.Chmod", err)
	case serverConfig.TLSKey != "" && serverConfig.TLSCert != "":
		cer, err := tls.LoadX509KeyPair(serverConfig.TLSCert, serverConfig.TLSKey) //nolint:govet
		utils.CheckErr("tls.LoadX509KeyPair", err)
		listener, err = tls.Listen("tcp", address, &tls.Config{
			MinVersion:   tls.VersionTLS12,
			Certificates: []tls.Certificate{cer}},
		)
		utils.CheckErr("tls.Listen", err)
	default:
		listener, err = net.Listen("tcp", address)
		utils.CheckErr("net.Listen", err)
	}
	err = d.store.Users.Save(&user)
	checkErr("d.store.Users.Save", err)
	sigc := make(chan os.Signal, 1)
	signal.Notify(sigc, os.Interrupt, syscall.SIGTERM)
	go cleanupHandler(listener, sigc)
	if !nonEmbededFS {
		assetsFs, err := fs.Sub(assets, "dist")
		if err != nil {
			log.Fatal("Could not embed frontend. Does backend/cmd/dist exist? Must be built and exist first")
		}
		handler, err := fbhttp.NewHandler(imgSvc, assetsFs)
		utils.CheckErr("fbhttp.NewHandler", err)
		defer listener.Close()
		log.Println("Listening on", listener.Addr().String())
		//nolint: gosec
		if err := http.Serve(listener, handler); err != nil {
			log.Fatalf("Could not start server on port %d: %v", serverConfig.Port, err)
		}
	} else {
		assetsFs := dirFS{Dir: http.Dir("frontend/dist")}
		handler, err := fbhttp.NewHandler(imgSvc, assetsFs)
		utils.CheckErr("fbhttp.NewHandler", err)
		defer listener.Close()
		log.Println("Listening on", listener.Addr().String())
		//nolint: gosec
		if err := http.Serve(listener, handler); err != nil {
			log.Fatalf("Could not start server on port %d: %v", serverConfig.Port, err)
		}
	}
	return nil
}
@@ -6,7 +6,9 @@ import (
	"github.com/spf13/cobra"

	"github.com/gtsteffaniak/filebrowser/settings"
	"github.com/gtsteffaniak/filebrowser/storage"
	"github.com/gtsteffaniak/filebrowser/users"
	"github.com/gtsteffaniak/filebrowser/utils"
)

func init() {

@@ -40,27 +42,27 @@ including 'index_end'.`,

		return nil
	},
	Run: python(func(cmd *cobra.Command, args []string, d pythonData) {
	Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) {
		i, err := strconv.Atoi(args[0])
		checkErr("strconv.Atoi", err)
		utils.CheckErr("strconv.Atoi", err)
		f := i
		if len(args) == 2 { //nolint:gomnd
			f, err = strconv.Atoi(args[1])
			checkErr("strconv.Atoi", err)
			utils.CheckErr("strconv.Atoi", err)
		}

		user := func(u *users.User) {
			u.Rules = append(u.Rules[:i], u.Rules[f+1:]...)
			err := d.store.Users.Save(u)
			checkErr("d.store.Users.Save", err)
			err := store.Users.Save(u)
			utils.CheckErr("store.Users.Save", err)
		}

		global := func(s *settings.Settings) {
			s.Rules = append(s.Rules[:i], s.Rules[f+1:]...)
			err := d.store.Settings.Save(s)
			checkErr("d.store.Settings.Save", err)
			err := store.Settings.Save(s)
			utils.CheckErr("store.Settings.Save", err)
		}

		runRules(d.store, cmd, user, global)
	}, pythonConfig{}),
		runRules(store, cmd, user, global)
	}),
}
@@ -10,10 +10,10 @@ import (
	"github.com/gtsteffaniak/filebrowser/settings"
	"github.com/gtsteffaniak/filebrowser/storage"
	"github.com/gtsteffaniak/filebrowser/users"
	"github.com/gtsteffaniak/filebrowser/utils"
)

func init() {
	rootCmd.AddCommand(rulesCmd)
	rulesCmd.PersistentFlags().StringP("username", "u", "", "username of user to which the rules apply")
	rulesCmd.PersistentFlags().UintP("id", "i", 0, "id of user to which the rules apply")
}

@@ -33,7 +33,7 @@ func runRules(st *storage.Storage, cmd *cobra.Command, usersFn func(*users.User)
	id := getUserIdentifier(cmd.Flags())
	if id != nil {
		user, err := st.Users.Get("", id)
		checkErr("st.Users.Get", err)
		utils.CheckErr("st.Users.Get", err)

		if usersFn != nil {
			usersFn(user)

@@ -44,7 +44,7 @@ func runRules(st *storage.Storage, cmd *cobra.Command, usersFn func(*users.User)
	}

	s, err := st.Settings.Get()
	checkErr("st.Settings.Get", err)
	utils.CheckErr("st.Settings.Get", err)

	if globalFn != nil {
		globalFn(s)
@@ -7,7 +7,9 @@ import (

	"github.com/gtsteffaniak/filebrowser/rules"
	"github.com/gtsteffaniak/filebrowser/settings"
	"github.com/gtsteffaniak/filebrowser/storage"
	"github.com/gtsteffaniak/filebrowser/users"
	"github.com/gtsteffaniak/filebrowser/utils"
)

func init() {

@@ -21,7 +23,7 @@ var rulesAddCmd = &cobra.Command{
	Short: "Add a global rule or user rule",
	Long:  `Add a global rule or user rule.`,
	Args:  cobra.ExactArgs(1),
	Run: python(func(cmd *cobra.Command, args []string, d pythonData) {
	Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) {
		allow := mustGetBool(cmd.Flags(), "allow")
		regex := mustGetBool(cmd.Flags(), "regex")
		exp := args[0]

@@ -43,16 +45,16 @@ var rulesAddCmd = &cobra.Command{

		user := func(u *users.User) {
			u.Rules = append(u.Rules, rule)
			err := d.store.Users.Save(u)
			checkErr("d.store.Users.Save", err)
			err := store.Users.Save(u)
			utils.CheckErr("store.Users.Save", err)
		}

		global := func(s *settings.Settings) {
			s.Rules = append(s.Rules, rule)
			err := d.store.Settings.Save(s)
			checkErr("d.store.Settings.Save", err)
			err := store.Settings.Save(s)
			utils.CheckErr("store.Settings.Save", err)
		}

		runRules(d.store, cmd, user, global)
	}, pythonConfig{}),
		runRules(store, cmd, user, global)
	}),
}
@@ -1,6 +1,7 @@
package cmd

import (
	"github.com/gtsteffaniak/filebrowser/storage"
	"github.com/spf13/cobra"
)

@@ -13,7 +14,7 @@ var rulesLsCommand = &cobra.Command{
	Short: "List global rules or user specific rules",
	Long:  `List global rules or user specific rules.`,
	Args:  cobra.NoArgs,
	Run: python(func(cmd *cobra.Command, args []string, d pythonData) {
		runRules(d.store, cmd, nil, nil)
	}, pythonConfig{}),
	Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) {
		runRules(store, cmd, nil, nil)
	}),
}
@@ -11,10 +11,6 @@ import (
	"github.com/gtsteffaniak/filebrowser/users"
)

func init() {
	rootCmd.AddCommand(usersCmd)
}

var usersCmd = &cobra.Command{
	Use:   "users",
	Short: "Users management utility",
@@ -3,7 +3,9 @@ package cmd

import (
	"github.com/spf13/cobra"

	"github.com/gtsteffaniak/filebrowser/storage"
	"github.com/gtsteffaniak/filebrowser/users"
	"github.com/gtsteffaniak/filebrowser/utils"
)

func init() {

@@ -15,26 +17,26 @@ var usersAddCmd = &cobra.Command{
	Short: "Create a new user",
	Long:  `Create a new user and add it to the database.`,
	Args:  cobra.ExactArgs(2), //nolint:gomnd
	Run: python(func(cmd *cobra.Command, args []string, d pythonData) {
	Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) {
		user := &users.User{
			Username:     args[0],
			Password:     args[1],
			LockPassword: mustGetBool(cmd.Flags(), "lockPassword"),
		}
		servSettings, err := d.store.Settings.GetServer()
		checkErr("d.store.Settings.GetServer()", err)
		servSettings, err := store.Settings.GetServer()
		utils.CheckErr("store.Settings.GetServer()", err)
		// since getUserDefaults() polluted s.Defaults.Scope
		// which makes the Scope not the one saved in the db
		// we need the right s.Defaults.Scope here
		s2, err := d.store.Settings.Get()
		checkErr("d.store.Settings.Get()", err)
		s2, err := store.Settings.Get()
		utils.CheckErr("store.Settings.Get()", err)

		userHome, err := s2.MakeUserDir(user.Username, user.Scope, servSettings.Root)
		checkErr("s2.MakeUserDir", err)
		utils.CheckErr("s2.MakeUserDir", err)
		user.Scope = userHome

		err = d.store.Users.Save(user)
		checkErr("d.store.Users.Save", err)
		err = store.Users.Save(user)
		utils.CheckErr("store.Users.Save", err)
		printUsers([]*users.User{user})
	}, pythonConfig{}),
	}),
}
@@ -1,6 +1,8 @@
package cmd

import (
	"github.com/gtsteffaniak/filebrowser/storage"
	"github.com/gtsteffaniak/filebrowser/utils"
	"github.com/spf13/cobra"
)

@@ -14,11 +16,11 @@ var usersExportCmd = &cobra.Command{
	Long: `Export all users to a json or yaml file. Please indicate the
path to the file where you want to write the users.`,
	Args: jsonYamlArg,
	Run: python(func(cmd *cobra.Command, args []string, d pythonData) {
		list, err := d.store.Users.Gets("")
		checkErr("d.store.Users.Gets", err)
	Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) {
		list, err := store.Users.Gets("")
		utils.CheckErr("store.Users.Gets", err)

		err = marshal(args[0], list)
		checkErr("marshal", err)
	}, pythonConfig{}),
		utils.CheckErr("marshal", err)
	}),
}
@@ -3,7 +3,9 @@ package cmd

import (
	"github.com/spf13/cobra"

	"github.com/gtsteffaniak/filebrowser/storage"
	"github.com/gtsteffaniak/filebrowser/users"
	"github.com/gtsteffaniak/filebrowser/utils"
)

func init() {

@@ -26,7 +28,7 @@ var usersLsCmd = &cobra.Command{
	Run: findUsers,
}

var findUsers = python(func(cmd *cobra.Command, args []string, d pythonData) {
var findUsers = cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) {
	var (
		list []*users.User
		user *users.User

@@ -36,16 +38,16 @@ var findUsers = python(func(cmd *cobra.Command, args []string, d pythonData) {
	if len(args) == 1 {
		username, id := parseUsernameOrID(args[0])
		if username != "" {
			user, err = d.store.Users.Get("", username)
			user, err = store.Users.Get("", username)
		} else {
			user, err = d.store.Users.Get("", id)
			user, err = store.Users.Get("", id)
		}

		list = []*users.User{user}
	} else {
		list, err = d.store.Users.Gets("")
		list, err = store.Users.Gets("")
	}

	checkErr("findUsers", err)
	utils.CheckErr("findUsers", err)
	printUsers(list)
}, pythonConfig{})
})
@ -8,7 +8,9 @@ import (
|
|||
|
||||
"github.com/spf13/cobra"
|
||||
|
||||
"github.com/gtsteffaniak/filebrowser/storage"
|
||||
"github.com/gtsteffaniak/filebrowser/users"
|
||||
"github.com/gtsteffaniak/filebrowser/utils"
|
||||
)
|
||||
|
||||
func init() {
|
||||
|
@@ -25,47 +27,47 @@ file. You can use this command to import new users to your
installation. For that, just don't place their ID on the files
list or set it to 0.`,
	Args: jsonYamlArg,
-	Run: python(func(cmd *cobra.Command, args []string, d pythonData) {
+	Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) {
		fd, err := os.Open(args[0])
-		checkErr("os.Open", err)
+		utils.CheckErr("os.Open", err)
		defer fd.Close()

		list := []*users.User{}
		err = unmarshal(args[0], &list)
-		checkErr("unmarshal", err)
+		utils.CheckErr("unmarshal", err)

		if mustGetBool(cmd.Flags(), "replace") {
-			oldUsers, err := d.store.Users.Gets("")
-			checkErr("d.store.Users.Gets", err)
+			oldUsers, err := store.Users.Gets("")
+			utils.CheckErr("store.Users.Gets", err)

			err = marshal("users.backup.json", list)
-			checkErr("marshal users.backup.json", err)
+			utils.CheckErr("marshal users.backup.json", err)

			for _, user := range oldUsers {
-				err = d.store.Users.Delete(user.ID)
-				checkErr("d.store.Users.Delete", err)
+				err = store.Users.Delete(user.ID)
+				utils.CheckErr("store.Users.Delete", err)
			}
		}

		overwrite := mustGetBool(cmd.Flags(), "overwrite")

		for _, user := range list {
-			onDB, err := d.store.Users.Get("", user.ID)
+			onDB, err := store.Users.Get("", user.ID)

			// User exists in DB.
			if err == nil {
				if !overwrite {
					newErr := errors.New("user " + strconv.Itoa(int(user.ID)) + " is already registered")
-					checkErr("", newErr)
+					utils.CheckErr("", newErr)
				}

				// If the usernames mismatch, check if there is another one in the DB
				// with the new username. If there is, print an error and cancel the
				// operation
				if user.Username != onDB.Username {
-					if conflictuous, err := d.store.Users.Get("", user.Username); err == nil { //nolint:govet
+					if conflictuous, err := store.Users.Get("", user.Username); err == nil { //nolint:govet
						newErr := usernameConflictError(user.Username, conflictuous.ID, user.ID)
-						checkErr("usernameConflictError", newErr)
+						utils.CheckErr("usernameConflictError", newErr)
					}
				}
			} else {
@@ -74,10 +76,10 @@ list or set it to 0.`,
				user.ID = 0
			}

-			err = d.store.Users.Save(user)
-			checkErr("d.store.Users.Save", err)
+			err = store.Users.Save(user)
+			utils.CheckErr("store.Users.Save", err)
		}
-	}, pythonConfig{}),
+	}),
}

func usernameConflictError(username string, originalID, newID uint) error {

@@ -3,6 +3,8 @@ package cmd
import (
	"log"

+	"github.com/gtsteffaniak/filebrowser/storage"
+	"github.com/gtsteffaniak/filebrowser/utils"
	"github.com/spf13/cobra"
)

@@ -15,17 +17,17 @@ var usersRmCmd = &cobra.Command{
	Short: "Delete a user by username or id",
	Long:  `Delete a user by username or id`,
	Args:  cobra.ExactArgs(1),
-	Run: python(func(cmd *cobra.Command, args []string, d pythonData) {
+	Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) {
		username, id := parseUsernameOrID(args[0])
		var err error

		if username != "" {
-			err = d.store.Users.Delete(username)
+			err = store.Users.Delete(username)
		} else {
-			err = d.store.Users.Delete(id)
+			err = store.Users.Delete(id)
		}

-		checkErr("usersRmCmd", err)
+		utils.CheckErr("usersRmCmd", err)
		log.Println("user deleted successfully")
-	}, pythonConfig{}),
+	}),
}

@@ -3,7 +3,9 @@ package cmd
import (
	"github.com/spf13/cobra"

+	"github.com/gtsteffaniak/filebrowser/storage"
	"github.com/gtsteffaniak/filebrowser/users"
+	"github.com/gtsteffaniak/filebrowser/utils"
)

func init() {
@@ -16,7 +18,7 @@ var usersUpdateCmd = &cobra.Command{
	Long: `Updates an existing user. Set the flags for the
options you want to change.`,
	Args: cobra.ExactArgs(1),
-	Run: python(func(cmd *cobra.Command, args []string, d pythonData) {
+	Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) {
		username, id := parseUsernameOrID(args[0])

		var (
@@ -25,14 +27,14 @@ options you want to change.`,
		)

		if id != 0 {
-			user, err = d.store.Users.Get("", id)
+			user, err = store.Users.Get("", id)
		} else {
-			user, err = d.store.Users.Get("", username)
+			user, err = store.Users.Get("", username)
		}
-		checkErr("d.store.Users.Get", err)
+		utils.CheckErr("store.Users.Get", err)

-		err = d.store.Users.Update(user)
-		checkErr("d.store.Users.Update", err)
+		err = store.Users.Update(user)
+		utils.CheckErr("store.Users.Update", err)
		printUsers([]*users.User{user})
-	}, pythonConfig{}),
+	}),
}

@@ -3,113 +3,42 @@ package cmd

import (
	"encoding/json"
	"errors"
	"fmt"
	"log"
	"os"
	"path/filepath"

	"github.com/asdine/storm/v3"
	"github.com/goccy/go-yaml"
	"github.com/spf13/cobra"
	"github.com/spf13/pflag"

	"github.com/gtsteffaniak/filebrowser/settings"
	"github.com/gtsteffaniak/filebrowser/storage"
	"github.com/gtsteffaniak/filebrowser/storage/bolt"
+	"github.com/gtsteffaniak/filebrowser/utils"
)

-func checkErr(source string, err error) {
-	if err != nil {
-		log.Fatalf("%s: %v", source, err)
-	}
-}
-
func mustGetString(flags *pflag.FlagSet, flag string) string {
	s, err := flags.GetString(flag)
-	checkErr("mustGetString", err)
+	utils.CheckErr("mustGetString", err)
	return s
}

func mustGetBool(flags *pflag.FlagSet, flag string) bool {
	b, err := flags.GetBool(flag)
-	checkErr("mustGetBool", err)
+	utils.CheckErr("mustGetBool", err)
	return b
}

func mustGetUint(flags *pflag.FlagSet, flag string) uint {
	b, err := flags.GetUint(flag)
-	checkErr("mustGetUint", err)
+	utils.CheckErr("mustGetUint", err)
	return b
}

-func generateKey() []byte {
-	k, err := settings.GenerateKey()
-	checkErr("generateKey", err)
-	return k
-}
-
type cobraFunc func(cmd *cobra.Command, args []string)
-type pythonFunc func(cmd *cobra.Command, args []string, data pythonData)
-
-type pythonConfig struct {
-	noDB      bool
-	allowNoDB bool
-}
-
-type pythonData struct {
-	hadDB bool
-	store *storage.Storage
-}
-
-func dbExists(path string) (bool, error) {
-	stat, err := os.Stat(path)
-	if err == nil {
-		return stat.Size() != 0, nil
-	}
-
-	if os.IsNotExist(err) {
-		d := filepath.Dir(path)
-		_, err = os.Stat(d)
-		if os.IsNotExist(err) {
-			if err := os.MkdirAll(d, 0700); err != nil { //nolint:govet,gomnd
-				return false, err
-			}
-			return false, nil
-		}
-	}
-
-	return false, err
-}
-
-func python(fn pythonFunc, cfg pythonConfig) cobraFunc {
-	return func(cmd *cobra.Command, args []string) {
-		data := pythonData{hadDB: true}
-		path := settings.Config.Server.Database
-		exists, err := dbExists(path)
-
-		if err != nil {
-			panic(err)
-		} else if exists && cfg.noDB {
-			log.Fatal(path + " already exists")
-		} else if !exists && !cfg.noDB && !cfg.allowNoDB {
-			log.Fatal(path + " does not exist. Please run 'filebrowser config init' first.")
-		}
-
-		data.hadDB = exists
-		db, err := storm.Open(path)
-		checkErr(fmt.Sprintf("storm.Open path %v", path), err)
-
-		defer db.Close()
-		data.store, err = bolt.NewStorage(db)
-		checkErr("bolt.NewStorage", err)
-		fn(cmd, args, data)
-	}
-}
+type pythonFunc func(cmd *cobra.Command, args []string, store *storage.Storage)

func marshal(filename string, data interface{}) error {
	fd, err := os.Create(filename)
-	checkErr("os.Create", err)
+	utils.CheckErr("os.Create", err)
	defer fd.Close()

	switch ext := filepath.Ext(filename); ext {
@@ -127,7 +56,7 @@ func marshal(filename string, data interface{}) error {

func unmarshal(filename string, data interface{}) error {
	fd, err := os.Open(filename)
-	checkErr("os.Open", err)
+	utils.CheckErr("os.Open", err)
	defer fd.Close()

	switch ext := filepath.Ext(filename); ext {
@@ -152,3 +81,8 @@ func jsonYamlArg(cmd *cobra.Command, args []string) error {
		return errors.New("invalid format: " + ext)
	}
}
+
+func cobraCmd(fn pythonFunc) cobraFunc {
+	return func(cmd *cobra.Command, args []string) {
+	}
+}

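The hunk above swaps the legacy `python`/`pythonData` wrapper for `cobraCmd`, which hands each command handler a `*storage.Storage` directly instead of a `pythonData` bundle. The wrapper body is truncated in the diff, so the following is only a hedged, self-contained sketch of the pattern; the `command` and `storageT` types and the `wrap` helper are stand-ins, not the project's code, which opens a storm/bolt database here.

```go
package main

import "fmt"

// Stand-ins for the real cobra/storage types, so the wrapper
// pattern from the diff can be shown self-contained.
type command struct{ name string }
type storageT struct{ db string }

type cobraFunc func(cmd *command, args []string)
type handlerFunc func(cmd *command, args []string, store *storageT)

// wrap mimics the cobraCmd shape: acquire the store once, then
// pass it straight to the handler. (Assumption: the real code
// opens the configured database at this point.)
func wrap(fn handlerFunc) cobraFunc {
	return func(cmd *command, args []string) {
		store := &storageT{db: "filebrowser.db"} // hypothetical open
		fn(cmd, args, store)
	}
}

func main() {
	run := wrap(func(cmd *command, args []string, store *storageT) {
		fmt.Println(cmd.name, store.db, len(args))
	})
	run(&command{name: "users ls"}, []string{"admin"})
}
```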
@@ -1,21 +0,0 @@
-package cmd
-
-import (
-	"fmt"
-
-	"github.com/spf13/cobra"
-
-	"github.com/gtsteffaniak/filebrowser/version"
-)
-
-func init() {
-	rootCmd.AddCommand(versionCmd)
-}
-
-var versionCmd = &cobra.Command{
-	Use:   "version",
-	Short: "Print the version number",
-	Run: func(cmd *cobra.Command, args []string) {
-		fmt.Println("File Browser " + version.Version + "/" + version.CommitSHA)
-	},
-}

@@ -91,7 +91,7 @@ func ParseSearch(value string) *SearchOptions {
		opts.LargerThan = updateSize(size)
	}
	if strings.HasPrefix(filter, "smallerThan=") {
-		opts.Conditions["larger"] = true
+		opts.Conditions["smaller"] = true
		size := strings.TrimPrefix(filter, "smallerThan=")
		opts.SmallerThan = updateSize(size)
	}

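This is the one-line fix behind the changelog entry about "smaller than" searches returning "larger than" results: the `smallerThan=` branch set the wrong condition key. The corrected behavior is easy to check in isolation; the `parseSizeFilter` helper below is illustrative, not the project's `ParseSearch`:

```go
package main

import (
	"fmt"
	"strings"
)

// parseSizeFilter mirrors the corrected branch: a "smallerThan="
// filter must set the "smaller" condition, not "larger".
func parseSizeFilter(filter string) (map[string]bool, string) {
	conditions := map[string]bool{}
	size := ""
	if strings.HasPrefix(filter, "largerThan=") {
		conditions["larger"] = true
		size = strings.TrimPrefix(filter, "largerThan=")
	}
	if strings.HasPrefix(filter, "smallerThan=") {
		conditions["smaller"] = true // was wrongly "larger" before the fix
		size = strings.TrimPrefix(filter, "smallerThan=")
	}
	return conditions, size
}

func main() {
	conds, size := parseSizeFilter("smallerThan=100")
	fmt.Println(conds["smaller"], conds["larger"], size) // true false 100
}
```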
@@ -0,0 +1,154 @@
+package files
+
+import (
+	"fmt"
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+)
+
+// Helper function to create error messages dynamically
+func errorMsg(extension, expectedType string, expectedMatch bool) string {
+	matchStatus := "to match"
+	if !expectedMatch {
+		matchStatus = "to not match"
+	}
+	return fmt.Sprintf("Expected %s %s type '%s'", extension, matchStatus, expectedType)
+}
+
+func TestIsMatchingType(t *testing.T) {
+	// Test cases where IsMatchingType should return true
+	trueTestCases := []struct {
+		extension    string
+		expectedType string
+	}{
+		{".pdf", "pdf"},
+		{".doc", "doc"},
+		{".docx", "doc"},
+		{".json", "text"},
+		{".sh", "text"},
+		{".zip", "archive"},
+		{".rar", "archive"},
+	}
+
+	for _, tc := range trueTestCases {
+		assert.True(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, true))
+	}
+
+	// Test cases where IsMatchingType should return false
+	falseTestCases := []struct {
+		extension    string
+		expectedType string
+	}{
+		{".mp4", "doc"},
+		{".mp4", "text"},
+		{".mp4", "archive"},
+	}
+
+	for _, tc := range falseTestCases {
+		assert.False(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, false))
+	}
+}
+
+func TestUpdateSize(t *testing.T) {
+	// Helper function for size error messages
+	sizeErrorMsg := func(input string, expected, actual int) string {
+		return fmt.Sprintf("Expected size for input '%s' to be %d, got %d", input, expected, actual)
+	}
+
+	// Test cases for updateSize
+	testCases := []struct {
+		input    string
+		expected int
+	}{
+		{"150", 150},
+		{"invalid", 100},
+		{"", 100},
+	}
+
+	for _, tc := range testCases {
+		actual := updateSize(tc.input)
+		assert.Equal(t, tc.expected, actual, sizeErrorMsg(tc.input, tc.expected, actual))
+	}
+}
+
+func TestIsDoc(t *testing.T) {
+	// Test cases where IsMatchingType should return true for document types
+	docTrueTestCases := []struct {
+		extension    string
+		expectedType string
+	}{
+		{".doc", "doc"},
+		{".pdf", "doc"},
+	}
+
+	for _, tc := range docTrueTestCases {
+		assert.True(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, true))
+	}
+
+	// Test case where IsMatchingType should return false for document types
+	docFalseTestCases := []struct {
+		extension    string
+		expectedType string
+	}{
+		{".mp4", "doc"},
+	}
+
+	for _, tc := range docFalseTestCases {
+		assert.False(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, false))
+	}
+}
+
+func TestIsText(t *testing.T) {
+	// Test cases where IsMatchingType should return true for text types
+	textTrueTestCases := []struct {
+		extension    string
+		expectedType string
+	}{
+		{".json", "text"},
+		{".sh", "text"},
+	}
+
+	for _, tc := range textTrueTestCases {
+		assert.True(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, true))
+	}
+
+	// Test case where IsMatchingType should return false for text types
+	textFalseTestCases := []struct {
+		extension    string
+		expectedType string
+	}{
+		{".mp4", "text"},
+	}
+
+	for _, tc := range textFalseTestCases {
+		assert.False(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, false))
+	}
+}
+
+func TestIsArchive(t *testing.T) {
+	// Test cases where IsMatchingType should return true for archive types
+	archiveTrueTestCases := []struct {
+		extension    string
+		expectedType string
+	}{
+		{".zip", "archive"},
+		{".rar", "archive"},
+	}
+
+	for _, tc := range archiveTrueTestCases {
+		assert.True(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, true))
+	}
+
+	// Test case where IsMatchingType should return false for archive types
+	archiveFalseTestCases := []struct {
+		extension    string
+		expectedType string
+	}{
+		{".mp4", "archive"},
+	}
+
+	for _, tc := range archiveFalseTestCases {
+		assert.False(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, false))
+	}
+}

@@ -21,32 +21,42 @@ import (
	"github.com/gtsteffaniak/filebrowser/errors"
	"github.com/gtsteffaniak/filebrowser/rules"
	"github.com/gtsteffaniak/filebrowser/settings"
	"github.com/gtsteffaniak/filebrowser/users"
)

var (
	bytesInMegabyte int64 = 1000000
-	pathMutexes           = make(map[string]*sync.Mutex)
-	pathMutexesMu   sync.Mutex // Mutex to protect the pathMutexes map
+	pathMutexes   = make(map[string]*sync.Mutex)
+	pathMutexesMu sync.Mutex // Mutex to protect the pathMutexes map
)

+type ReducedItem struct {
+	Name    string    `json:"name"`
+	Size    int64     `json:"size"`
+	ModTime time.Time `json:"modified"`
+	IsDir   bool      `json:"isDir,omitempty"`
+	Type    string    `json:"type"`
+}
+
// FileInfo describes a file.
+// reduced item is non-recursive reduced "Items", used to pass flat items array
type FileInfo struct {
-	*Listing
-	Path      string            `json:"path,omitempty"`
-	Name      string            `json:"name"`
-	Size      int64             `json:"size"`
-	Extension string            `json:"-"`
-	ModTime   time.Time         `json:"modified"`
-	CacheTime time.Time         `json:"-"`
-	Mode      os.FileMode       `json:"-"`
-	IsDir     bool              `json:"isDir,omitempty"`
-	IsSymlink bool              `json:"isSymlink,omitempty"`
-	Type      string            `json:"type"`
-	Subtitles []string          `json:"subtitles,omitempty"`
-	Content   string            `json:"content,omitempty"`
-	Checksums map[string]string `json:"checksums,omitempty"`
-	Token     string            `json:"token,omitempty"`
+	Items        []*FileInfo       `json:"-"`
+	ReducedItems []ReducedItem     `json:"items,omitempty"`
+	Path         string            `json:"path,omitempty"`
+	Name         string            `json:"name"`
+	Size         int64             `json:"size"`
+	Extension    string            `json:"-"`
+	ModTime      time.Time         `json:"modified"`
+	CacheTime    time.Time         `json:"-"`
+	Mode         os.FileMode       `json:"-"`
+	IsDir        bool              `json:"isDir,omitempty"`
+	IsSymlink    bool              `json:"isSymlink,omitempty"`
+	Type         string            `json:"type"`
+	Subtitles    []string          `json:"subtitles,omitempty"`
+	Content      string            `json:"content,omitempty"`
+	Checksums    map[string]string `json:"checksums,omitempty"`
+	Token        string            `json:"token,omitempty"`
+	NumDirs      int               `json:"numDirs"`
+	NumFiles     int               `json:"numFiles"`
}

// FileOptions are the options when getting a file info.
@@ -61,26 +71,11 @@ type FileOptions struct {
	Content bool
}

-// Sorting constants
-const (
-	SortingByName     = "name"
-	SortingBySize     = "size"
-	SortingByModified = "modified"
-)
-
-// Listing is a collection of files.
-type Listing struct {
-	Items    []*FileInfo   `json:"items"`
-	Path     string        `json:"path"`
-	NumDirs  int           `json:"numDirs"`
-	NumFiles int           `json:"numFiles"`
-	Sorting  users.Sorting `json:"sorting"`
-}
-
-// NewFileInfo creates a File object from a path and a given user. This File
-// object will be automatically filled depending on if it is a directory
-// or a file. If it's a video file, it will also detect any subtitles.
+// Legacy file info method, only called on non-indexed directories.
+// Once indexing completes for the first time, NewFileInfo is never called.
func NewFileInfo(opts FileOptions) (*FileInfo, error) {
+	index := GetIndex(rootPath)
	if !opts.Checker.Check(opts.Path) {
		return nil, os.ErrPermission
	}
@@ -93,6 +88,26 @@ func NewFileInfo(opts FileOptions) (*FileInfo, error) {
		if err = file.readListing(opts.Path, opts.Checker, opts.ReadHeader); err != nil {
			return nil, err
		}
+		cleanedItems := []ReducedItem{}
+		for _, item := range file.Items {
+			// This is particularly useful for root of index, while indexing hasn't finished.
+			// adds the directory sizes for directories that have been indexed already.
+			if item.IsDir {
+				adjustedPath := index.makeIndexPath(opts.Path+"/"+item.Name, true)
+				info, _ := index.GetMetadataInfo(adjustedPath)
+				item.Size = info.Size
+			}
+			cleanedItems = append(cleanedItems, ReducedItem{
+				Name:    item.Name,
+				Size:    item.Size,
+				IsDir:   item.IsDir,
+				ModTime: item.ModTime,
+				Type:    item.Type,
+			})
+		}
+
+		file.Items = nil
+		file.ReducedItems = cleanedItems
		return file, nil
	}
	err = file.detectType(opts.Path, opts.Modify, opts.Content, true)
@@ -102,6 +117,7 @@ func NewFileInfo(opts FileOptions) (*FileInfo, error) {
	}
	return file, err
}
+
func FileInfoFaster(opts FileOptions) (*FileInfo, error) {
	// Lock access for the specific path
	pathMutex := getMutex(opts.Path)
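`FileInfoFaster` starts by taking a per-path lock via `getMutex(opts.Path)`, which is backed by the `pathMutexes` map and its guard `pathMutexesMu` declared earlier in the file. The exact body of the project's `getMutex` is not shown in this diff, so the sketch below is an assumption about its shape: the standard lazy-create pattern for one mutex per key.

```go
package main

import (
	"fmt"
	"sync"
)

var (
	pathMutexes   = make(map[string]*sync.Mutex)
	pathMutexesMu sync.Mutex // protects the pathMutexes map itself
)

// getMutex lazily creates one mutex per path, so concurrent requests
// for different paths never block each other, while requests for the
// same path are serialized. (Hypothetical body, inferred from usage.)
func getMutex(path string) *sync.Mutex {
	pathMutexesMu.Lock()
	defer pathMutexesMu.Unlock()
	m, ok := pathMutexes[path]
	if !ok {
		m = &sync.Mutex{}
		pathMutexes[path] = m
	}
	return m
}

func main() {
	a := getMutex("/docs")
	b := getMutex("/docs")
	c := getMutex("/media")
	fmt.Println(a == b, a == c) // same path shares a mutex; different paths do not
}
```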
@@ -133,12 +149,11 @@ func FileInfoFaster(opts FileOptions) (*FileInfo, error) {
		file, err := NewFileInfo(opts)
		return file, err
	}
-	info, exists := index.GetMetadataInfo(adjustedPath)
+	info, exists := index.GetMetadataInfo(adjustedPath + "/" + filepath.Base(opts.Path))
	if !exists || info.Name == "" {
-		return &FileInfo{}, errors.ErrEmptyKey
+		return NewFileInfo(opts)
	}
	return &info, nil
}

func RefreshFileInfo(opts FileOptions) error {
@@ -491,9 +506,8 @@ func (i *FileInfo) readListing(path string, checker rules.Checker, readHeader bool) error {
		return err
	}

-	listing := &Listing{
+	listing := &FileInfo{
		Items:    []*FileInfo{},
		Path:     i.Path,
		NumDirs:  0,
		NumFiles: 0,
	}
@@ -548,7 +562,7 @@ func (i *FileInfo) readListing(path string, checker rules.Checker, readHeader bool) error {
		listing.Items = append(listing.Items, file)
	}

-	i.Listing = listing
+	i.Items = listing.Items
	return nil
}

@@ -1,7 +1,6 @@
package files

import (
-	"bytes"
	"log"
	"os"
	"path/filepath"
@@ -12,23 +11,12 @@ import (
	"github.com/gtsteffaniak/filebrowser/settings"
)

-type Directory struct {
-	Metadata map[string]FileInfo
-	Files    string
-}
-
-type File struct {
-	Name  string
-	IsDir bool
-}
-
type Index struct {
	Root        string
-	Directories map[string]Directory
+	Directories map[string]FileInfo
	NumDirs     int
	NumFiles    int
	inProgress  bool
-	quickList   []File
	LastIndexed time.Time
	mu          sync.RWMutex
}
@@ -50,16 +38,12 @@ func indexingScheduler(intervalMinutes uint32) {
		rootPath = settings.Config.Server.Root
	}
	si := GetIndex(rootPath)
-	log.Printf("Indexing Files...")
-	log.Printf("Configured to run every %v minutes", intervalMinutes)
-	log.Printf("Indexing from root: %s", si.Root)
	for {
		startTime := time.Now()
		// Set the indexing flag to indicate that indexing is in progress
		si.resetCount()
		// Perform the indexing operation
		err := si.indexFiles(si.Root)
-		si.quickList = []File{}
		// Reset the indexing flag to indicate that indexing has finished
		si.inProgress = false
		// Update the LastIndexed time
@@ -81,78 +65,114 @@ func indexingScheduler(intervalMinutes uint32) {

// Define a function to recursively index files and directories
func (si *Index) indexFiles(path string) error {
-	// Check if the current directory has been modified since the last indexing
+	// Ensure path is cleaned and normalized
	adjustedPath := si.makeIndexPath(path, true)

	// Open the directory
	dir, err := os.Open(path)
	if err != nil {
-		// Directory must have been deleted, remove it from the index
+		// If the directory can't be opened (e.g., deleted), remove it from the index
		si.RemoveDirectory(adjustedPath)
		return err
	}
+	defer dir.Close()

	dirInfo, err := dir.Stat()
	if err != nil {
-		dir.Close()
		return err
	}

-	// Compare the last modified time of the directory with the last indexed time
-	lastIndexed := si.LastIndexed
-	if dirInfo.ModTime().Before(lastIndexed) {
-		dir.Close()
+	// Check if the directory is already up-to-date
+	if dirInfo.ModTime().Before(si.LastIndexed) {
		return nil
	}

-	// Read the directory contents
+	// Read directory contents
	files, err := dir.Readdir(-1)
	if err != nil {
		return err
	}
-	dir.Close()
-	si.UpdateQuickList(files)
-	si.InsertFiles(path)
-	// done separately for memory efficiency on recursion
-	si.InsertDirs(path)

+	// Recursively process files and directories
+	fileInfos := []*FileInfo{}
+	var totalSize int64
+	var numDirs, numFiles int
+
+	for _, file := range files {
+		parentInfo := &FileInfo{
+			Name:    file.Name(),
+			Size:    file.Size(),
+			ModTime: file.ModTime(),
+			IsDir:   file.IsDir(),
+		}
+		childInfo, err := si.InsertInfo(path, parentInfo)
+		if err != nil {
+			// Log error, but continue processing other files
+			continue
+		}
+
+		// Accumulate directory size and items
+		totalSize += childInfo.Size
+		if childInfo.IsDir {
+			numDirs++
+		} else {
+			numFiles++
+		}
+		_ = childInfo.detectType(path, true, false, false)
+		fileInfos = append(fileInfos, childInfo)
+	}
+
+	// Create FileInfo for the current directory
+	dirFileInfo := &FileInfo{
+		Items:     fileInfos,
+		Name:      filepath.Base(path),
+		Size:      totalSize,
+		ModTime:   dirInfo.ModTime(),
+		CacheTime: time.Now(),
+		IsDir:     true,
+		NumDirs:   numDirs,
+		NumFiles:  numFiles,
+	}
+
+	// Add directory to index
+	si.mu.Lock()
+	si.Directories[adjustedPath] = *dirFileInfo
+	si.NumDirs += numDirs
+	si.NumFiles += numFiles
+	si.mu.Unlock()
	return nil
}

-func (si *Index) InsertFiles(path string) {
-	adjustedPath := si.makeIndexPath(path, true)
-	subDirectory := Directory{}
-	buffer := bytes.Buffer{}
-	for _, f := range si.GetQuickList() {
-		if !f.IsDir {
-			buffer.WriteString(f.Name + ";")
-			si.UpdateCount("files")
-		}
-	}
-	// Use GetMetadataInfo and SetFileMetadata for safer read and write operations
-	subDirectory.Files = buffer.String()
-	si.SetDirectoryInfo(adjustedPath, subDirectory)
-}
-
-func (si *Index) InsertDirs(path string) {
-	for _, f := range si.GetQuickList() {
-		if f.IsDir {
-			adjustedPath := si.makeIndexPath(path, true)
-			if _, exists := si.Directories[adjustedPath]; exists {
-				si.UpdateCount("dirs")
-				// Add or update the directory in the map
-				if adjustedPath == "/" {
-					si.SetDirectoryInfo("/"+f.Name, Directory{})
-				} else {
-					si.SetDirectoryInfo(adjustedPath+"/"+f.Name, Directory{})
-				}
-			}
-			err := si.indexFiles(path + "/" + f.Name)
-			if err != nil {
-				if err.Error() == "invalid argument" {
-					log.Printf("Could not index \"%v\": %v \n", path, "Permission Denied")
-				} else {
-					log.Printf("Could not index \"%v\": %v \n", path, err)
-				}
-			}
-		}
-	}
-}
+// InsertInfo function to handle adding a file or directory into the index
+func (si *Index) InsertInfo(parentPath string, file *FileInfo) (*FileInfo, error) {
+	filePath := filepath.Join(parentPath, file.Name)
+
+	// Check if it's a directory and recursively index it
+	if file.IsDir {
+		// Recursively index directory
+		err := si.indexFiles(filePath)
+		if err != nil {
+			return nil, err
+		}
+
+		// Return directory info from the index
+		adjustedPath := si.makeIndexPath(filePath, true)
+		si.mu.RLock()
+		dirInfo := si.Directories[adjustedPath]
+		si.mu.RUnlock()
+		return &dirInfo, nil
+	}
+
+	// Create FileInfo for regular files
+	fileInfo := &FileInfo{
+		Path:    filePath,
+		Name:    file.Name,
+		Size:    file.Size,
+		ModTime: file.ModTime,
+		IsDir:   false,
+	}
+
+	return fileInfo, nil
+}

func (si *Index) makeIndexPath(subPath string, isDir bool) string {
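The rewritten `indexFiles` above sums `childInfo.Size` across a directory's entries, recursing through `InsertInfo` for subdirectories, which is what lets folder sizes be served straight from the index (and explains the changelog's 2x indexing-time note). That accumulation reduces to the following self-contained sketch over an in-memory tree; the `node` type is illustrative, the real code walks the filesystem:

```go
package main

import "fmt"

// node stands in for a directory entry: files carry a size,
// directories carry children (nil children means "file").
type node struct {
	name     string
	size     int64
	children []*node
}

// indexSize mirrors the accumulation in indexFiles: a directory's
// size is the sum of its children's sizes, recorded while recursing.
func indexSize(n *node, sizes map[string]int64) int64 {
	if n.children == nil {
		return n.size // plain file
	}
	var total int64
	for _, c := range n.children {
		total += indexSize(c, sizes)
	}
	sizes[n.name] = total // directory size lands in the "index"
	return total
}

func main() {
	root := &node{name: "/", children: []*node{
		{name: "a.txt", size: 100},
		{name: "sub", children: []*node{{name: "b.txt", size: 50}}},
	}}
	sizes := map[string]int64{}
	fmt.Println(indexSize(root, sizes), sizes["sub"]) // 150 50
}
```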
@@ -171,5 +191,8 @@ func (si *Index) makeIndexPath(subPath string, isDir bool) string {
	} else if !isDir {
		adjustedPath = filepath.Dir(adjustedPath)
	}
+	if !strings.HasPrefix(adjustedPath, "/") {
+		adjustedPath = "/" + adjustedPath
+	}
	return adjustedPath
}

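The new guard at the end of `makeIndexPath` forces a leading slash, so every key used to look up `si.Directories` is rooted consistently. A standalone sketch of that normalization (simplified: it ignores the `isDir` handling and any scope trimming done earlier in the function):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// normalizeIndexPath mimics the tail of makeIndexPath: clean the
// path, then force a leading slash so map lookups are consistent.
func normalizeIndexPath(subPath string) string {
	adjusted := filepath.Clean(subPath)
	if !strings.HasPrefix(adjusted, "/") {
		adjusted = "/" + adjusted
	}
	return adjusted
}

func main() {
	fmt.Println(normalizeIndexPath("docs/readme"))   // /docs/readme
	fmt.Println(normalizeIndexPath("/docs//readme")) // /docs/readme
}
```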
@@ -2,6 +2,7 @@ package files

import (
	"encoding/json"
+	"fmt"
	"math/rand"
	"reflect"
	"testing"
@@ -23,18 +24,26 @@ func BenchmarkFillIndex(b *testing.B) {
func (si *Index) createMockData(numDirs, numFilesPerDir int) {
	for i := 0; i < numDirs; i++ {
		dirName := generateRandomPath(rand.Intn(3) + 1)
-		files := []File{}
-		// Append a new Directory to the slice
+		files := []*FileInfo{} // Slice of FileInfo

+		// Simulating files and directories with FileInfo
		for j := 0; j < numFilesPerDir; j++ {
-			newFile := File{
-				Name:  "file-" + getRandomTerm() + getRandomExtension(),
-				IsDir: false,
+			newFile := &FileInfo{
+				Name:    "file-" + getRandomTerm() + getRandomExtension(),
+				IsDir:   false,
+				Size:    rand.Int63n(1000),                                          // Random size
+				ModTime: time.Now().Add(-time.Duration(rand.Intn(100)) * time.Hour), // Random mod time
			}
			files = append(files, newFile)
		}
-		si.UpdateQuickListForTests(files)
-		si.InsertFiles(dirName)
-		si.InsertDirs(dirName)

+		// Simulate inserting files into index
+		for _, file := range files {
+			_, err := si.InsertInfo(dirName, file)
+			if err != nil {
+				fmt.Println("Error inserting file:", err)
+			}
+		}
	}
}

@@ -2,7 +2,6 @@ package files

import (
	"math/rand"
	"os"
	"path/filepath"
	"sort"
	"strings"
@@ -30,12 +29,17 @@ func (si *Index) Search(search string, scope string, sourceSession string) ([]string, map[string]map[string]bool) {
			continue
		}
		si.mu.Lock()
		defer si.mu.Unlock()
		for dirName, dir := range si.Directories {
			isDir := true
-			files := strings.Split(dir.Files, ";")
+			files := []string{}
+			for _, item := range dir.Items {
+				if !item.IsDir {
+					files = append(files, item.Name)
+				}
+			}
			value, found := sessionInProgress.Load(sourceSession)
			if !found || value != runningHash {
				si.mu.Unlock()
				return []string{}, map[string]map[string]bool{}
			}
			if count > maxSearchResults {
@@ -46,7 +50,9 @@ func (si *Index) Search(search string, scope string, sourceSession string) ([]string, map[string]map[string]bool) {
				continue // path not matched
			}
			fileTypes := map[string]bool{}
-			matches, fileType := containsSearchTerm(dirName, searchTerm, *searchOptions, isDir, fileTypes)
+			si.mu.Unlock()
+			matches, fileType := si.containsSearchTerm(dirName, searchTerm, *searchOptions, isDir, fileTypes)
+			si.mu.Lock()
			if matches {
				fileListTypes[pathName] = fileType
				matching = append(matching, pathName)
@@ -67,8 +73,9 @@ func (si *Index) Search(search string, scope string, sourceSession string) ([]string, map[string]map[string]bool) {
				}
				fullName := strings.TrimLeft(pathName+file, "/")
				fileTypes := map[string]bool{}
-				matches, fileType := containsSearchTerm(fullName, searchTerm, *searchOptions, isDir, fileTypes)
+				si.mu.Unlock()
+				matches, fileType := si.containsSearchTerm(fullName, searchTerm, *searchOptions, isDir, fileTypes)
+				si.mu.Lock()
				if !matches {
					continue
				}
@@ -77,6 +84,7 @@ func (si *Index) Search(search string, scope string, sourceSession string) ([]string, map[string]map[string]bool) {
				count++
			}
		}
+		si.mu.Unlock()
	}
	// Sort the strings based on the number of elements after splitting by "/"
	sort.Slice(matching, func(i, j int) bool {
@@ -102,65 +110,88 @@ func scopedPathNameFilter(pathName string, scope string, isDir bool) string {
	return pathName
}

-func containsSearchTerm(pathName string, searchTerm string, options SearchOptions, isDir bool, fileTypes map[string]bool) (bool, map[string]bool) {
+func (si *Index) containsSearchTerm(pathName string, searchTerm string, options SearchOptions, isDir bool, fileTypes map[string]bool) (bool, map[string]bool) {
	largerThan := int64(options.LargerThan) * 1024 * 1024
	smallerThan := int64(options.SmallerThan) * 1024 * 1024
	conditions := options.Conditions
-	path := getLastPathComponent(pathName)
-	// Convert to lowercase once
+	fileName := filepath.Base(pathName)
+	adjustedPath := si.makeIndexPath(pathName, isDir)
+
+	// Convert to lowercase if not exact match
	if !conditions["exact"] {
-		path = strings.ToLower(path)
+		fileName = strings.ToLower(fileName)
		searchTerm = strings.ToLower(searchTerm)
	}
-	if strings.Contains(path, searchTerm) {
-		// Calculate fileSize only if needed
-		var fileSize int64
-		matchesAllConditions := true
-		extension := filepath.Ext(path)
-		for _, k := range AllFiletypeOptions {
-			if IsMatchingType(extension, k) {
-				fileTypes[k] = true
+
+	// Check if the file name contains the search term
+	if !strings.Contains(fileName, searchTerm) {
+		return false, map[string]bool{}
+	}
+
+	// Initialize file size and fileTypes map
+	var fileSize int64
+	extension := filepath.Ext(fileName)
+
+	// Collect file types
+	for _, k := range AllFiletypeOptions {
+		if IsMatchingType(extension, k) {
+			fileTypes[k] = true
		}
	}
+	fileTypes["dir"] = isDir
+	// Get file info if needed for size-related conditions
+	if largerThan > 0 || smallerThan > 0 {
+		fileInfo, exists := si.GetMetadataInfo(adjustedPath)
+		if !exists {
+			return false, fileTypes
+		} else if !isDir {
+			// Look for specific file in ReducedItems
+			for _, item := range fileInfo.ReducedItems {
+				lower := strings.ToLower(item.Name)
+				if strings.Contains(lower, searchTerm) {
+					if item.Size == 0 {
+						return false, fileTypes
+					}
+					fileSize = item.Size
+					break
+				}
+			}
+		} else {
+			fileSize = fileInfo.Size
+		}
+		if fileSize == 0 {
+			return false, fileTypes
+		}
+	}
+
+	// Evaluate all conditions
+	for t, v := range conditions {
+		if t == "exact" {
+			continue
+		}
+		switch t {
+		case "larger":
+			if largerThan > 0 {
+				if fileSize <= largerThan {
+					return false, fileTypes
+				}
+			}
+		case "smaller":
+			if smallerThan > 0 {
+				if fileSize >= smallerThan {
+					return false, fileTypes
+				}
+			}
|
||||
default:
|
||||
// Handle other file type conditions
|
||||
notMatchType := v != fileTypes[t]
|
||||
if notMatchType {
|
||||
return false, fileTypes
|
||||
}
|
||||
}
|
||||
fileTypes["dir"] = isDir
|
||||
for t, v := range conditions {
|
||||
if t == "exact" {
|
||||
continue
|
||||
}
|
||||
var matchesCondition bool
|
||||
switch t {
|
||||
case "larger":
|
||||
if fileSize == 0 {
|
||||
fileSize = getFileSize(pathName)
|
||||
}
|
||||
matchesCondition = fileSize > int64(options.LargerThan)*bytesInMegabyte
|
||||
case "smaller":
|
||||
if fileSize == 0 {
|
||||
fileSize = getFileSize(pathName)
|
||||
}
|
||||
matchesCondition = fileSize < int64(options.SmallerThan)*bytesInMegabyte
|
||||
default:
|
||||
matchesCondition = v == fileTypes[t]
|
||||
}
|
||||
if !matchesCondition {
|
||||
matchesAllConditions = false
|
||||
}
|
||||
}
|
||||
return matchesAllConditions, fileTypes
|
||||
}
|
||||
// Clear variables and return
|
||||
return false, map[string]bool{}
|
||||
}
|
||||
|
||||
func getFileSize(filepath string) int64 {
|
||||
fileInfo, err := os.Stat(rootPath + "/" + filepath)
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
return fileInfo.Size()
|
||||
}
|
||||
|
||||
func getLastPathComponent(path string) string {
|
||||
// Use filepath.Base to extract the last component of the path
|
||||
return filepath.Base(path)
|
||||
return true, fileTypes
|
||||
}
|
||||
|
||||
func generateRandomHash(length int) string {
|
||||
|
|
|
@@ -11,7 +11,7 @@ func BenchmarkSearchAllIndexes(b *testing.B) {
InitializeIndex(5, false)
si := GetIndex(rootPath)

si.createMockData(50, 3) // 1000 dirs, 3 files per dir
si.createMockData(50, 3) // 50 dirs, 3 files per dir

// Generate 100 random search terms
searchTerms := generateRandomSearchTerms(100)

@@ -26,87 +26,90 @@ func BenchmarkSearchAllIndexes(b *testing.B) {
}
}

// loop over test files and compare output
func TestParseSearch(t *testing.T) {
value := ParseSearch("my test search")
want := &SearchOptions{
Conditions: map[string]bool{
"exact": false,
tests := []struct {
input string
want *SearchOptions
}{
{
input: "my test search",
want: &SearchOptions{
Conditions: map[string]bool{"exact": false},
Terms: []string{"my test search"},
},
},
Terms: []string{"my test search"},
}
if !reflect.DeepEqual(value, want) {
t.Fatalf("\n got: %+v\n want: %+v", value, want)
}
value = ParseSearch("case:exact my|test|search")
want = &SearchOptions{
Conditions: map[string]bool{
"exact": true,
{
input: "case:exact my|test|search",
want: &SearchOptions{
Conditions: map[string]bool{"exact": true},
Terms: []string{"my", "test", "search"},
},
},
Terms: []string{"my", "test", "search"},
}
if !reflect.DeepEqual(value, want) {
t.Fatalf("\n got: %+v\n want: %+v", value, want)
}
value = ParseSearch("type:largerThan=100 type:smallerThan=1000 test")
want = &SearchOptions{
Conditions: map[string]bool{
"exact": false,
"larger": true,
{
input: "type:largerThan=100 type:smallerThan=1000 test",
want: &SearchOptions{
Conditions: map[string]bool{"exact": false, "larger": true, "smaller": true},
Terms: []string{"test"},
LargerThan: 100,
SmallerThan: 1000,
},
},
Terms: []string{"test"},
LargerThan: 100,
SmallerThan: 1000,
}
if !reflect.DeepEqual(value, want) {
t.Fatalf("\n got: %+v\n want: %+v", value, want)
}
value = ParseSearch("type:audio thisfile")
want = &SearchOptions{
Conditions: map[string]bool{
"exact": false,
"audio": true,
{
input: "type:audio thisfile",
want: &SearchOptions{
Conditions: map[string]bool{"exact": false, "audio": true},
Terms: []string{"thisfile"},
},
},
Terms: []string{"thisfile"},
}
if !reflect.DeepEqual(value, want) {
t.Fatalf("\n got: %+v\n want: %+v", value, want)

for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
value := ParseSearch(tt.input)
if !reflect.DeepEqual(value, tt.want) {
t.Fatalf("\n got: %+v\n want: %+v", value, tt.want)
}
})
}
}

func TestSearchWhileIndexing(t *testing.T) {
InitializeIndex(5, false)
si := GetIndex(rootPath)
// Generate 100 random search terms
// Generate 100 random search terms

searchTerms := generateRandomSearchTerms(10)
for i := 0; i < 5; i++ {
// Execute the SearchAllIndexes function
go si.createMockData(100, 100) // 1000 dirs, 3 files per dir
go si.createMockData(100, 100) // Creating mock data concurrently
for _, term := range searchTerms {
go si.Search(term, "/", "test")
go si.Search(term, "/", "test") // Search concurrently
}
}
}

func TestSearchIndexes(t *testing.T) {
index := Index{
Directories: map[string]Directory{
"test": {
Files: "audio1.wav;",
},
"test/path": {
Files: "file.txt;",
},
"new": {},
"new/test": {
Files: "audio.wav;video.mp4;video.MP4;",
},
"new/test/path": {
Files: "archive.zip;",
Directories: map[string]FileInfo{
"test": {Items: []*FileInfo{{Name: "audio1.wav"}}},
"test/path": {Items: []*FileInfo{{Name: "file.txt"}}},
"new/test": {Items: []*FileInfo{
{Name: "audio.wav"},
{Name: "video.mp4"},
{Name: "video.MP4"},
}},
"new/test/path": {Items: []*FileInfo{{Name: "archive.zip"}}},
"/firstDir": {Items: []*FileInfo{
{Name: "archive.zip", Size: 100},
{Name: "thisIsDir", IsDir: true, Size: 2 * 1024 * 1024},
}},
"/firstDir/thisIsDir": {
Items: []*FileInfo{
{Name: "hi.txt"},
},
Size: 2 * 1024 * 1024,
},
},
}

tests := []struct {
search string
scope string

@@ -118,7 +121,7 @@ func TestSearchIndexes(t *testing.T) {
scope: "/new/",
expectedResult: []string{"test/audio.wav"},
expectedTypes: map[string]map[string]bool{
"test/audio.wav": map[string]bool{"audio": true, "dir": false},
"test/audio.wav": {"audio": true, "dir": false},
},
},
{

@@ -126,16 +129,41 @@ func TestSearchIndexes(t *testing.T) {
scope: "/",
expectedResult: []string{"test/", "new/test/"},
expectedTypes: map[string]map[string]bool{
"test/": map[string]bool{"dir": true},
"new/test/": map[string]bool{"dir": true},
"test/": {"dir": true},
"new/test/": {"dir": true},
},
},
{
search: "archive",
scope: "/",
expectedResult: []string{"new/test/path/archive.zip"},
expectedResult: []string{"firstDir/archive.zip", "new/test/path/archive.zip"},
expectedTypes: map[string]map[string]bool{
"new/test/path/archive.zip": map[string]bool{"archive": true, "dir": false},
"new/test/path/archive.zip": {"archive": true, "dir": false},
"firstDir/archive.zip": {"archive": true, "dir": false},
},
},
{
search: "arch",
scope: "/firstDir",
expectedResult: []string{"archive.zip"},
expectedTypes: map[string]map[string]bool{
"archive.zip": {"archive": true, "dir": false},
},
},
{
search: "isdir",
scope: "/",
expectedResult: []string{"firstDir/thisIsDir/"},
expectedTypes: map[string]map[string]bool{
"firstDir/thisIsDir/": {"dir": true},
},
},
{
search: "dir type:largerThan=1",
scope: "/",
expectedResult: []string{"firstDir/thisIsDir/"},
expectedTypes: map[string]map[string]bool{
"firstDir/thisIsDir/": {"dir": true},
},
},
{

@@ -146,18 +174,17 @@ func TestSearchIndexes(t *testing.T) {
"new/test/video.MP4",
},
expectedTypes: map[string]map[string]bool{
"new/test/video.MP4": map[string]bool{"video": true, "dir": false},
"new/test/video.mp4": map[string]bool{"video": true, "dir": false},
"new/test/video.MP4": {"video": true, "dir": false},
"new/test/video.mp4": {"video": true, "dir": false},
},
},
}

for _, tt := range tests {
t.Run(tt.search, func(t *testing.T) {
actualResult, actualTypes := index.Search(tt.search, tt.scope, "")
assert.Equal(t, tt.expectedResult, actualResult)
if !reflect.DeepEqual(tt.expectedTypes, actualTypes) {
t.Fatalf("\n got: %+v\n want: %+v", actualTypes, tt.expectedTypes)
}
assert.Equal(t, tt.expectedTypes, actualTypes)
})
}
}

@@ -186,6 +213,7 @@ func Test_scopedPathNameFilter(t *testing.T) {
want: "", // Update this with the expected result
},
}

for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := scopedPathNameFilter(tt.args.pathName, tt.args.scope, tt.args.isDir); got != tt.want {

@@ -194,103 +222,3 @@ func Test_scopedPathNameFilter(t *testing.T) {
})
}
}

func Test_isDoc(t *testing.T) {
type args struct {
extension string
}
tests := []struct {
name string
args args
want bool
}{
// TODO: Add test cases.
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := isDoc(tt.args.extension); got != tt.want {
t.Errorf("isDoc() = %v, want %v", got, tt.want)
}
})
}
}

func Test_getFileSize(t *testing.T) {
type args struct {
filepath string
}
tests := []struct {
name string
args args
want int64
}{
// TODO: Add test cases.
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := getFileSize(tt.args.filepath); got != tt.want {
t.Errorf("getFileSize() = %v, want %v", got, tt.want)
}
})
}
}

func Test_isArchive(t *testing.T) {
type args struct {
extension string
}
tests := []struct {
name string
args args
want bool
}{
// TODO: Add test cases.
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := isArchive(tt.args.extension); got != tt.want {
t.Errorf("isArchive() = %v, want %v", got, tt.want)
}
})
}
}

func Test_getLastPathComponent(t *testing.T) {
type args struct {
path string
}
tests := []struct {
name string
args args
want string
}{
// TODO: Add test cases.
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := getLastPathComponent(tt.args.path); got != tt.want {
t.Errorf("getLastPathComponent() = %v, want %v", got, tt.want)
}
})
}
}

func Test_generateRandomHash(t *testing.T) {
type args struct {
length int
}
tests := []struct {
name string
args args
want string
}{
// TODO: Add test cases.
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := generateRandomHash(tt.args.length); got != tt.want {
t.Errorf("generateRandomHash() = %v, want %v", got, tt.want)
}
})
}
}
@@ -1,7 +1,6 @@
package files

import (
"io/fs"
"log"
"time"

@@ -13,15 +12,10 @@ func (si *Index) UpdateFileMetadata(adjustedPath string, info FileInfo) bool {
si.mu.Lock()
defer si.mu.Unlock()
dir, exists := si.Directories[adjustedPath]
if !exists || exists && dir.Metadata == nil {
// Initialize the Metadata map if it is nil
if dir.Metadata == nil {
dir.Metadata = make(map[string]FileInfo)
}
si.Directories[adjustedPath] = dir
// Release the read lock before calling SetFileMetadata
if !exists {
si.Directories[adjustedPath] = FileInfo{}
}
return si.SetFileMetadata(adjustedPath, info)
return si.SetFileMetadata(adjustedPath, dir)
}

// SetFileMetadata sets the FileInfo for the specified directory in the index.

@@ -32,37 +26,45 @@ func (si *Index) SetFileMetadata(adjustedPath string, info FileInfo) bool {
return false
}
info.CacheTime = time.Now()
si.Directories[adjustedPath].Metadata[adjustedPath] = info
si.Directories[adjustedPath] = info
return true
}

// GetMetadataInfo retrieves the FileInfo from the specified directory in the index.
func (si *Index) GetMetadataInfo(adjustedPath string) (FileInfo, bool) {
fi := FileInfo{}
si.mu.RLock()
dir, exists := si.Directories[adjustedPath]
si.mu.RUnlock()
if exists {
// Initialize the Metadata map if it is nil
if dir.Metadata == nil {
dir.Metadata = make(map[string]FileInfo)
si.SetDirectoryInfo(adjustedPath, dir)
} else {
fi = dir.Metadata[adjustedPath]
}
if !exists {
return dir, exists
}
return fi, exists
// remove recursive items, we only want this directories direct files
cleanedItems := []ReducedItem{}
for _, item := range dir.Items {
cleanedItems = append(cleanedItems, ReducedItem{
Name: item.Name,
Size: item.Size,
IsDir: item.IsDir,
ModTime: item.ModTime,
Type: item.Type,
})
}
dir.Items = nil
dir.ReducedItems = cleanedItems
realPath, _, _ := GetRealPath(adjustedPath)
dir.Path = realPath
return dir, exists
}

// SetDirectoryInfo sets the directory information in the index.
func (si *Index) SetDirectoryInfo(adjustedPath string, dir Directory) {
func (si *Index) SetDirectoryInfo(adjustedPath string, dir FileInfo) {
si.mu.Lock()
si.Directories[adjustedPath] = dir
si.mu.Unlock()
}

// SetDirectoryInfo sets the directory information in the index.
func (si *Index) GetDirectoryInfo(adjustedPath string) (Directory, bool) {
func (si *Index) GetDirectoryInfo(adjustedPath string) (FileInfo, bool) {
si.mu.RLock()
dir, exists := si.Directories[adjustedPath]
si.mu.RUnlock()

@@ -106,7 +108,7 @@ func GetIndex(root string) *Index {
}
newIndex := &Index{
Root: rootPath,
Directories: make(map[string]Directory), // Initialize the map
Directories: map[string]FileInfo{},
NumDirs: 0,
NumFiles: 0,
inProgress: false,

@@ -116,36 +118,3 @@ func GetIndex(root string) *Index {
indexesMutex.Unlock()
return newIndex
}

func (si *Index) UpdateQuickList(files []fs.FileInfo) {
si.mu.Lock()
defer si.mu.Unlock()
si.quickList = []File{}
for _, file := range files {
newFile := File{
Name: file.Name(),
IsDir: file.IsDir(),
}
si.quickList = append(si.quickList, newFile)
}
}

func (si *Index) UpdateQuickListForTests(files []File) {
si.mu.Lock()
defer si.mu.Unlock()
si.quickList = []File{}
for _, file := range files {
newFile := File{
Name: file.Name,
IsDir: file.IsDir,
}
si.quickList = append(si.quickList, newFile)
}
}

func (si *Index) GetQuickList() []File {
si.mu.Lock()
defer si.mu.Unlock()
newQuickList := si.quickList
return newQuickList
}
@@ -1,92 +1,118 @@
package files

import (
"io/fs"
"os"
"testing"
"time"

"github.com/stretchr/testify/assert"
)

// Mock for fs.FileInfo
type mockFileInfo struct {
name string
isDir bool
}

func (m mockFileInfo) Name() string { return m.name }
func (m mockFileInfo) Size() int64 { return 0 }
func (m mockFileInfo) Mode() os.FileMode { return 0 }
func (m mockFileInfo) ModTime() time.Time { return time.Now() }
func (m mockFileInfo) IsDir() bool { return m.isDir }
func (m mockFileInfo) Sys() interface{} { return nil }

var testIndex Index

// Test for GetFileMetadata
//func TestGetFileMetadata(t *testing.T) {
// t.Parallel()
// tests := []struct {
// name string
// adjustedPath string
// fileName string
// expectedName string
// expectedExists bool
// }{
// {
// name: "testpath exists",
// adjustedPath: "/testpath",
// fileName: "testfile.txt",
// expectedName: "testfile.txt",
// expectedExists: true,
// },
// {
// name: "testpath not exists",
// adjustedPath: "/testpath",
// fileName: "nonexistent.txt",
// expectedName: "",
// expectedExists: false,
// },
// {
// name: "File exists in /anotherpath",
// adjustedPath: "/anotherpath",
// fileName: "afile.txt",
// expectedName: "afile.txt",
// expectedExists: true,
// },
// {
// name: "File does not exist in /anotherpath",
// adjustedPath: "/anotherpath",
// fileName: "nonexistentfile.txt",
// expectedName: "",
// expectedExists: false,
// },
// {
// name: "Directory does not exist",
// adjustedPath: "/nonexistentpath",
// fileName: "testfile.txt",
// expectedName: "",
// expectedExists: false,
// },
// }
//
// for _, tt := range tests {
// t.Run(tt.name, func(t *testing.T) {
// fileInfo, exists := testIndex.GetFileMetadata(tt.adjustedPath)
// if exists != tt.expectedExists || fileInfo.Name != tt.expectedName {
// t.Errorf("expected %v:%v but got: %v:%v", tt.expectedName, tt.expectedExists, //fileInfo.Name, exists)
// }
// })
// }
//}
// Test for GetFileMetadata
func TestGetFileMetadataSize(t *testing.T) {
t.Parallel()
tests := []struct {
name string
adjustedPath string
expectedName string
expectedSize int64
}{
{
name: "testpath exists",
adjustedPath: "/testpath",
expectedName: "testfile.txt",
expectedSize: 100,
},
{
name: "testpath exists",
adjustedPath: "/testpath",
expectedName: "directory",
expectedSize: 100,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
fileInfo, _ := testIndex.GetMetadataInfo(tt.adjustedPath)
// Iterate over fileInfo.Items to look for expectedName
for _, item := range fileInfo.ReducedItems {
// Assert the existence and the name
if item.Name == tt.expectedName {
assert.Equal(t, tt.expectedSize, item.Size)
break
}
}
})
}
}

// Test for GetFileMetadata
func TestGetFileMetadata(t *testing.T) {
t.Parallel()
tests := []struct {
name string
adjustedPath string
expectedName string
expectedExists bool
}{
{
name: "testpath exists",
adjustedPath: "/testpath",
expectedName: "testfile.txt",
expectedExists: true,
},
{
name: "testpath not exists",
adjustedPath: "/testpath",
expectedName: "nonexistent.txt",
expectedExists: false,
},
{
name: "File exists in /anotherpath",
adjustedPath: "/anotherpath",
expectedName: "afile.txt",
expectedExists: true,
},
{
name: "File does not exist in /anotherpath",
adjustedPath: "/anotherpath",
expectedName: "nonexistentfile.txt",
expectedExists: false,
},
{
name: "Directory does not exist",
adjustedPath: "/nonexistentpath",
expectedName: "",
expectedExists: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
fileInfo, _ := testIndex.GetMetadataInfo(tt.adjustedPath)
found := false
// Iterate over fileInfo.Items to look for expectedName
for _, item := range fileInfo.ReducedItems {
// Assert the existence and the name
if item.Name == tt.expectedName {
found = true
break
}
}
assert.Equal(t, tt.expectedExists, found)
})
}
}

// Test for UpdateFileMetadata
func TestUpdateFileMetadata(t *testing.T) {
index := &Index{
Directories: map[string]Directory{
Directories: map[string]FileInfo{
"/testpath": {
Metadata: map[string]FileInfo{
"testfile.txt": {Name: "testfile.txt"},
"anotherfile.txt": {Name: "anotherfile.txt"},
Path: "/testpath",
Name: "testpath",
IsDir: true,
ReducedItems: []ReducedItem{
{Name: "testfile.txt"},
{Name: "anotherfile.txt"},
},
},
},

@@ -100,7 +126,7 @@ func TestUpdateFileMetadata(t *testing.T) {
}

dir, exists := index.Directories["/testpath"]
if !exists || dir.Metadata["testfile.txt"].Name != "testfile.txt" {
if !exists || dir.ReducedItems[0].Name != "testfile.txt" {
t.Fatalf("expected testfile.txt to be updated in the directory metadata")
}
}

@@ -122,19 +148,29 @@ func TestGetDirMetadata(t *testing.T) {
// Test for SetDirectoryInfo
func TestSetDirectoryInfo(t *testing.T) {
index := &Index{
Directories: map[string]Directory{
Directories: map[string]FileInfo{
"/testpath": {
Metadata: map[string]FileInfo{
"testfile.txt": {Name: "testfile.txt"},
"anotherfile.txt": {Name: "anotherfile.txt"},
Path: "/testpath",
Name: "testpath",
IsDir: true,
Items: []*FileInfo{
{Name: "testfile.txt"},
{Name: "anotherfile.txt"},
},
},
},
}
dir := Directory{Metadata: map[string]FileInfo{"testfile.txt": {Name: "testfile.txt"}}}
dir := FileInfo{
Path: "/newPath",
Name: "newPath",
IsDir: true,
Items: []*FileInfo{
{Name: "testfile.txt"},
},
}
index.SetDirectoryInfo("/newPath", dir)
storedDir, exists := index.Directories["/newPath"]
if !exists || storedDir.Metadata["testfile.txt"].Name != "testfile.txt" {
if !exists || storedDir.Items[0].Name != "testfile.txt" {
t.Fatalf("expected SetDirectoryInfo to store directory info correctly")
}
}

@@ -143,7 +179,7 @@ func TestSetDirectoryInfo(t *testing.T) {
func TestGetDirectoryInfo(t *testing.T) {
t.Parallel()
dir, exists := testIndex.GetDirectoryInfo("/testpath")
if !exists || dir.Metadata["testfile.txt"].Name != "testfile.txt" {
if !exists || dir.Items[0].Name != "testfile.txt" {
t.Fatalf("expected GetDirectoryInfo to return correct directory info")
}

@@ -156,7 +192,7 @@ func TestGetDirectoryInfo(t *testing.T) {
// Test for RemoveDirectory
func TestRemoveDirectory(t *testing.T) {
index := &Index{
Directories: map[string]Directory{
Directories: map[string]FileInfo{
"/testpath": {},
},
}

@@ -194,27 +230,33 @@ func TestUpdateCount(t *testing.T) {

func init() {
testIndex = Index{
Root: "/",
NumFiles: 10,
NumDirs: 5,
inProgress: false,
Directories: map[string]Directory{
Directories: map[string]FileInfo{
"/testpath": {
Metadata: map[string]FileInfo{
"testfile.txt": {Name: "testfile.txt"},
"anotherfile.txt": {Name: "anotherfile.txt"},
Path: "/testpath",
Name: "testpath",
IsDir: true,
NumDirs: 1,
NumFiles: 2,
Items: []*FileInfo{
{Name: "testfile.txt", Size: 100},
{Name: "anotherfile.txt", Size: 100},
},
},
"/anotherpath": {
Metadata: map[string]FileInfo{
"afile.txt": {Name: "afile.txt"},
Path: "/anotherpath",
Name: "anotherpath",
IsDir: true,
NumDirs: 1,
NumFiles: 1,
Items: []*FileInfo{
{Name: "directory", IsDir: true, Size: 100},
{Name: "afile.txt", Size: 100},
},
},
},
}

files := []fs.FileInfo{
mockFileInfo{name: "file1.txt", isDir: false},
mockFileInfo{name: "dir1", isDir: true},
}
testIndex.UpdateQuickList(files)
}
@@ -15,11 +15,20 @@ type modifyRequest struct {
Which []string `json:"which"` // Answer to: which fields?
}

var (
store *storage.Storage
server *settings.Server
fileCache FileCache
)

func SetupEnv(storage *storage.Storage, s *settings.Server, cache FileCache) {
store = storage
server = s
fileCache = cache
}

func NewHandler(
imgSvc ImgService,
fileCache FileCache,
store *storage.Storage,
server *settings.Server,
assetsFs fs.FS,
) (http.Handler, error) {
server.Clean()
@@ -11,6 +11,7 @@ import (

"github.com/gtsteffaniak/filebrowser/settings"
"github.com/gtsteffaniak/filebrowser/share"
"github.com/gtsteffaniak/filebrowser/storage"
"github.com/gtsteffaniak/filebrowser/storage/bolt"
"github.com/gtsteffaniak/filebrowser/users"
)

@@ -73,8 +74,13 @@ func TestPublicShareHandlerAuthentication(t *testing.T) {
t.Errorf("failed to close db: %v", err)
}
})

storage, err := bolt.NewStorage(db)
authStore, userStore, shareStore, settingsStore, err := bolt.NewStorage(db)
storage := &storage.Storage{
Auth: authStore,
Users: userStore,
Share: shareStore,
Settings: settingsStore,
}
if err != nil {
t.Fatalf("failed to get storage: %v", err)
}
@@ -2,7 +2,6 @@ package http

import (
"encoding/json"
"log"
"net/http"
"reflect"
"sort"

@@ -14,6 +13,7 @@ import (

"github.com/gtsteffaniak/filebrowser/errors"
"github.com/gtsteffaniak/filebrowser/files"
"github.com/gtsteffaniak/filebrowser/storage"
"github.com/gtsteffaniak/filebrowser/users"
)

@@ -130,21 +130,7 @@ var userPostHandler = withAdmin(func(w http.ResponseWriter, r *http.Request, d *
return http.StatusBadRequest, errors.ErrEmptyPassword
}

newUser := users.ApplyDefaults(*req.Data)

userHome, err := d.settings.MakeUserDir(req.Data.Username, req.Data.Scope, d.server.Root)
if err != nil {
log.Printf("create user: failed to mkdir user home dir: [%s]", userHome)
return http.StatusInternalServerError, err
}
newUser.Scope = userHome
log.Printf("user: %s, home dir: [%s].", req.Data.Username, userHome)
_, _, err = files.GetRealPath(d.server.Root, req.Data.Scope)
if err != nil {
log.Println("user path is not valid", req.Data.Scope)
return http.StatusBadRequest, nil
}
err = d.store.Users.Save(&newUser)
err = storage.CreateUser(*req.Data, req.Data.Perm.Admin)
if err != nil {
return http.StatusInternalServerError, err
}
@@ -34,15 +34,14 @@ func loadConfigFile(configFile string) []byte {
// Open and read the YAML file
yamlFile, err := os.Open(configFile)
if err != nil {
log.Printf("ERROR: opening config file\n %v\n WARNING: Using default config only\n If this was a mistake, please make sure the file exists and is accessible by the filebrowser binary.\n\n", err)
Config = setDefaults()
return []byte{}
log.Println(err)
os.Exit(1)
}
defer yamlFile.Close()

stat, err := yamlFile.Stat()
if err != nil {
log.Fatalf("Error getting file information: %s", err.Error())
log.Fatalf("error getting file information: %s", err.Error())
}

yamlData := make([]byte, stat.Size())
@@ -39,3 +39,15 @@ func GenerateKey() ([]byte, error) {
func GetSettingsConfig(nameType string, Value string) string {
return nameType + Value
}

func AdminPerms() Permissions {
return Permissions{
Create: true,
Rename: true,
Modify: true,
Delete: true,
Share: true,
Download: true,
Admin: true,
}
}
@@ -28,5 +28,5 @@ func (s authBackend) Get(t string) (auth.Auther, error) {
}

func (s authBackend) Save(a auth.Auther) error {
return save(s.db, "auther", a)
return Save(s.db, "auther", a)
}
@@ -6,26 +6,14 @@ import (
"github.com/gtsteffaniak/filebrowser/auth"
"github.com/gtsteffaniak/filebrowser/settings"
"github.com/gtsteffaniak/filebrowser/share"
"github.com/gtsteffaniak/filebrowser/storage"
"github.com/gtsteffaniak/filebrowser/users"
)

// NewStorage creates a storage.Storage based on Bolt DB.
func NewStorage(db *storm.DB) (*storage.Storage, error) {
func NewStorage(db *storm.DB) (*auth.Storage, *users.Storage, *share.Storage, *settings.Storage, error) {
userStore := users.NewStorage(usersBackend{db: db})
shareStore := share.NewStorage(shareBackend{db: db})
settingsStore := settings.NewStorage(settingsBackend{db: db})
authStore := auth.NewStorage(authBackend{db: db}, userStore)

err := save(db, "version", 2) //nolint:gomnd
if err != nil {
return nil, err
}

return &storage.Storage{
Auth: authStore,
Users: userStore,
Share: shareStore,
Settings: settingsStore,
}, nil
return authStore, userStore, shareStore, settingsStore, nil
}
@@ -15,7 +15,7 @@ func (s settingsBackend) Get() (*settings.Settings, error) {
 }

 func (s settingsBackend) Save(set *settings.Settings) error {
-	return save(s.db, "settings", set)
+	return Save(s.db, "settings", set)
 }

 func (s settingsBackend) GetServer() (*settings.Server, error) {

@@ -27,5 +27,5 @@ func (s settingsBackend) GetServer() (*settings.Server, error) {
 }

 func (s settingsBackend) SaveServer(server *settings.Server) error {
-	return save(s.db, "server", server)
+	return Save(s.db, "server", server)
 }
@@ -15,6 +15,6 @@ func get(db *storm.DB, name string, to interface{}) error {
 	return err
 }

-func save(db *storm.DB, name string, from interface{}) error {
+func Save(db *storm.DB, name string, from interface{}) error {
 	return db.Set("config", name, from)
 }
@@ -1,10 +1,20 @@
 package storage

 import (
+	"fmt"
+	"log"
+	"os"
+	"path/filepath"
+
+	"github.com/asdine/storm/v3"
 	"github.com/gtsteffaniak/filebrowser/auth"
+	"github.com/gtsteffaniak/filebrowser/errors"
+	"github.com/gtsteffaniak/filebrowser/files"
 	"github.com/gtsteffaniak/filebrowser/settings"
 	"github.com/gtsteffaniak/filebrowser/share"
+	"github.com/gtsteffaniak/filebrowser/storage/bolt"
 	"github.com/gtsteffaniak/filebrowser/users"
+	"github.com/gtsteffaniak/filebrowser/utils"
 )

 // Storage is a storage powered by a Backend which makes the necessary
@@ -15,3 +25,112 @@ type Storage struct {
 	Auth     *auth.Storage
 	Settings *settings.Storage
 }
+
+var store *Storage
+
+func InitializeDb(path string) (*Storage, bool, error) {
+	exists, err := dbExists(path)
+	if err != nil {
+		panic(err)
+	}
+	db, err := storm.Open(path)
+
+	utils.CheckErr(fmt.Sprintf("storm.Open path %v", path), err)
+	authStore, userStore, shareStore, settingsStore, err := bolt.NewStorage(db)
+	if err != nil {
+		return nil, exists, err
+	}
+
+	err = bolt.Save(db, "version", 2) //nolint:gomnd
+	if err != nil {
+		return nil, exists, err
+	}
+	store = &Storage{
+		Auth:     authStore,
+		Users:    userStore,
+		Share:    shareStore,
+		Settings: settingsStore,
+	}
+	if !exists {
+		quickSetup(store)
+	}
+
+	return store, exists, err
+}
+
+func dbExists(path string) (bool, error) {
+	stat, err := os.Stat(path)
+	if err == nil {
+		return stat.Size() != 0, nil
+	}
+
+	if os.IsNotExist(err) {
+		d := filepath.Dir(path)
+		_, err = os.Stat(d)
+		if os.IsNotExist(err) {
+			if err := os.MkdirAll(d, 0700); err != nil { //nolint:govet,gomnd
+				return false, err
+			}
+			return false, nil
+		}
+	}
+
+	return false, err
+}
+
+func quickSetup(store *Storage) {
+	settings.Config.Auth.Key = utils.GenerateKey()
+	if settings.Config.Auth.Method == "noauth" {
+		err := store.Auth.Save(&auth.NoAuth{})
+		utils.CheckErr("store.Auth.Save", err)
+	} else {
+		settings.Config.Auth.Method = "password"
+		err := store.Auth.Save(&auth.JSONAuth{})
+		utils.CheckErr("store.Auth.Save", err)
+	}
+	err := store.Settings.Save(&settings.Config)
+	utils.CheckErr("store.Settings.Save", err)
+	err = store.Settings.SaveServer(&settings.Config.Server)
+	utils.CheckErr("store.Settings.SaveServer", err)
+	user := users.ApplyDefaults(users.User{})
+	user.Username = settings.Config.Auth.AdminUsername
+	user.Password = settings.Config.Auth.AdminPassword
+	user.Perm.Admin = true
+	user.Scope = "./"
+	user.DarkMode = true
+	user.ViewMode = "normal"
+	user.LockPassword = false
+	user.Perm = settings.AdminPerms()
+	err = store.Users.Save(&user)
+	utils.CheckErr("store.Users.Save", err)
+}
+
+// create new user
+func CreateUser(userInfo users.User, asAdmin bool) error {
+	// must have username or password to create
+	if userInfo.Username == "" || userInfo.Password == "" {
+		return errors.ErrInvalidRequestParams
+	}
+	newUser := users.ApplyDefaults(userInfo)
+	if asAdmin {
+		newUser.Perm = settings.AdminPerms()
+	}
+	// create new home directory
+	userHome, err := settings.Config.MakeUserDir(newUser.Username, newUser.Scope, settings.Config.Server.Root)
+	if err != nil {
+		log.Printf("create user: failed to mkdir user home dir: [%s]", userHome)
+		return err
+	}
+	newUser.Scope = userHome
+	log.Printf("user: %s, home dir: [%s].", newUser.Username, userHome)
+	_, _, err = files.GetRealPath(settings.Config.Server.Root, newUser.Scope)
+	if err != nil {
+		log.Println("user path is not valid", newUser.Scope)
+		return nil
+	}
+	err = store.Users.Save(&newUser)
+	if err != nil {
+		return err
+	}
+	return nil
+}
@@ -0,0 +1,19 @@
+package utils
+
+import (
+	"log"
+
+	"github.com/gtsteffaniak/filebrowser/settings"
+)
+
+func CheckErr(source string, err error) {
+	if err != nil {
+		log.Fatalf("%s: %v", source, err)
+	}
+}
+
+func GenerateKey() []byte {
+	k, err := settings.GenerateKey()
+	CheckErr("generateKey", err)
+	return k
+}
@@ -0,0 +1,2 @@
+# Contributing Guide
+
@@ -0,0 +1,2 @@
+# Getting Started using FileBrowser Quantum
+
@@ -0,0 +1,22 @@
+# Migration help
+
+It is possible to use the same database as filebrowser/filebrowser,
+but you will need to follow this process:
+
+1. Create a configuration file as mentioned above.
+2. Copy your database file from the original filebrowser to the path of
+   the new one.
+3. Update the configuration file to use the database (under server in
+   filebrowser.yml)
+4. If you are using docker, update the docker-compose file or docker run
+   command to use the config file as described in the install section
+   above.
+5. If you are not using docker, just make sure you run filebrowser -c
+   filebrowser.yml and have a valid filebrowser config.
+
+Note: share links will not work and will need to be re-created after migration.
+
+FileBrowser Quantum should run with the same users and rules you had in the
+original, and all user configuration should be available, but keep in mind
+that some behavior may differ.
+
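Step 3 above refers to the database entry under the `server` section of the config file. A minimal sketch of the relevant fragment of `filebrowser.yml` (the exact key names and paths here are assumptions based on this project's example config, adjust to your setup):

```yaml
server:
  port: 8080
  root: "/srv"              # root directory of the files to serve
  database: "database.db"   # path to the database file copied in step 2
```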
@@ -0,0 +1,24 @@
+# Planned Roadmap
+
+upcoming 0.2.x releases:
+
+- Replace http routes for gorilla/mux with stdlib
+- Theme configuration from settings
+- File synchronization improvements
+- more filetype previews
+
+next major 0.3.0 release:
+
+- multiple sources https://github.com/filebrowser/filebrowser/issues/2514
+- introduce jobs as replacement to runners.
+  - Add Job status to the sidebar
+  - index status.
+  - Job status from users
+  - upload status
+
+Unplanned Future releases:
+- Add tools to sidebar
+  - duplicate file detector.
+  - bulk rename https://github.com/filebrowser/filebrowser/issues/2473
+- metrics tracker - user access, file access, download count, last login, etc
+- support minio, s3, and backblaze sources https://github.com/filebrowser/filebrowser/issues/2544
@@ -19,18 +19,20 @@
         <input
-          v-model="gallerySize"
           type="range"
-          id="gallary-size"
-          name="gallary-size"
+          id="gallery-size"
+          name="gallery-size"
+          :value="gallerySize"
           min="0"
           max="10"
+          @input="updateGallerySize"
+          @change="commitGallerySize"
         />
       </div>
     </div>
 </template>

 <script>
-import { state, mutations, getters } from "@/store"; // Import mutations as well
+import { state, mutations, getters } from "@/store";
 import Action from "@/components/Action.vue";

 export default {

@@ -43,12 +45,6 @@ export default {
       gallerySize: state.user.gallerySize,
     };
   },
-  watch: {
-    gallerySize(newValue) {
-      this.gallerySize = parseInt(newValue, 0); // Update the user object
-      mutations.setGallerySize(this.gallerySize);
-    },
-  },
   props: ["base", "noLink"],
   computed: {
     isCardView() {

@@ -100,13 +96,16 @@ export default {
       return "router-link";
     },
     showShare() {
-      // Ensure user properties are accessed safely
       if (state.route.path.startsWith("/share")) {
         return false;
       }
-      return state.user?.perm && state.user?.perm.share; // Access from state directly
+      return state.user?.perm && state.user?.perm.share;
     },
   },
-  methods: { },
+  methods: {
+    updateGallerySize(event) {
+      this.gallerySize = parseInt(event.target.value, 10);
+    },
+    commitGallerySize() {
+      mutations.setGallerySize(this.gallerySize);
+    },
+  },
 };
 </script>
@@ -166,10 +166,6 @@
     <b>Multiple Search terms:</b> Additional terms separated by <code>|</code>,
     for example <code>"test|not"</code> searches for both terms independently.
   </p>
-  <p>
-    <b>File size:</b> Searching files by size may have significantly longer search
-    times.
-  </p>
 </div>
 <!-- List of search results -->
 <ul v-show="results.length > 0">

@@ -311,6 +307,9 @@ export default {
-      path = path.slice(1);
+      path = "./" + path.substring(path.indexOf("/") + 1);
+      path = path.replace(/\/+$/, "") + "/";
+      if (path == "./files/") {
+        path = "./";
+      }
       return path;
     },
   },

@@ -391,10 +390,10 @@ export default {
       return;
     }
     let searchTypesFull = this.searchTypes;
-    if (this.largerThan != "") {
+    if (this.largerThan != "" && !this.isTypeSelectDisabled) {
       searchTypesFull = searchTypesFull + "type:largerThan=" + this.largerThan + " ";
     }
-    if (this.smallerThan != "") {
+    if (this.smallerThan != "" && !this.isTypeSelectDisabled) {
       searchTypesFull = searchTypesFull + "type:smallerThan=" + this.smallerThan + " ";
     }
     let path = state.route.path;
@@ -1,7 +1,7 @@
 <template>
   <component
-    :is="isSelected || user.singleClick ? 'a' : 'div'"
-    :href="isSelected || user.singleClick ? url : undefined"
+    :is="quickNav ? 'a' : 'div'"
+    :href="quickNav ? url : undefined"
     :class="{
       item: true,
       activebutton: isMaximized && isSelected,

@@ -16,7 +16,7 @@
     :data-type="type"
     :aria-label="name"
     :aria-selected="isSelected"
-    @click="isSelected || user.singleClick ? toggleClick() : itemClick($event)"
+    @click="quickNav ? toggleClick() : itemClick($event)"
   >
     <div @click="toggleClick" :class="{ activetitle: isMaximized && isSelected }">
       <img

@@ -34,8 +34,7 @@

     <div class="text" :class="{ activecontent: isMaximized && isSelected }">
       <p class="name">{{ name }}</p>
-      <p v-if="isDir" class="size" data-order="-1">—</p>
-      <p v-else class="size" :data-order="humanSize()">{{ humanSize() }}</p>
+      <p class="size" :data-order="humanSize()">{{ humanSize() }}</p>
       <p class="modified">
         <time :datetime="modified">{{ humanTime() }}</time>
       </p>

@@ -93,6 +92,9 @@ export default {
     "path",
   ],
   computed: {
+    quickNav() {
+      return state.user.singleClick && !state.multiple;
+    },
     user() {
       return state.user;
     },

@@ -263,6 +265,7 @@ export default {
       action(overwrite, rename);
     },
     itemClick(event) {
+      console.log("should say something");
       if (this.singleClick && !state.multiple) this.open();
       else this.click(event);
     },

@@ -271,7 +274,7 @@ export default {

       setTimeout(() => {
         this.touches = 0;
-      }, 300);
+      }, 500);

       this.touches++;
       if (this.touches > 1) {
@@ -9,6 +9,7 @@ export const mutations = {
   setGallerySize: (value) => {
     state.user.gallerySize = value
     emitStateChanged();
+    users.update(state.user, ['gallerySize']);
   },
   setActiveSettingsView: (value) => {
     state.activeSettingsView = value;

@@ -195,19 +196,20 @@ export const mutations = {
     emitStateChanged();
   },
   setRoute: (value) => {
+    console.log("going...", value)
     state.route = value;
     emitStateChanged();
   },
   updateListingSortConfig: ({ field, asc }) => {
-    state.req.sorting.by = field;
-    state.req.sorting.asc = asc;
+    state.user.sorting.by = field;
+    state.user.sorting.asc = asc;
     emitStateChanged();
   },
   updateListingItems: () => {
     state.req.items.sort((a, b) => {
-      const valueA = a[state.req.sorting.by];
-      const valueB = b[state.req.sorting.by];
-      if (state.req.sorting.asc) {
+      const valueA = a[state.user.sorting.by];
+      const valueB = b[state.user.sorting.by];
+      if (state.user.sorting.asc) {
         return valueA > valueB ? 1 : -1;
       } else {
         return valueA < valueB ? 1 : -1;
@@ -7,24 +7,24 @@ export function getHumanReadableFilesize(fileSizeBytes) {
   switch (true) {
     case fileSizeBytes < 1024:
       break;
-    case fileSizeBytes < 1000 ** 2: // 1 KB - 1 MB
-      size = fileSizeBytes / 1000;
+    case fileSizeBytes < 1024 ** 2: // 1 KB - 1 MB
+      size = fileSizeBytes / 1024;
       unit = 'KB';
       break;
-    case fileSizeBytes < 1000 ** 3: // 1 MB - 1 GB
-      size = fileSizeBytes / (1000 ** 2);
+    case fileSizeBytes < 1024 ** 3: // 1 MB - 1 GB
+      size = fileSizeBytes / (1024 ** 2);
       unit = 'MB';
       break;
-    case fileSizeBytes < 1000 ** 4: // 1 GB - 1 TB
-      size = fileSizeBytes / (1000 ** 3);
+    case fileSizeBytes < 1024 ** 4: // 1 GB - 1 TB
+      size = fileSizeBytes / (1024 ** 3);
       unit = 'GB';
       break;
-    case fileSizeBytes < 1000 ** 5: // 1 TB - 1 PB
-      size = fileSizeBytes / (1000 ** 4);
+    case fileSizeBytes < 1024 ** 5: // 1 TB - 1 PB
+      size = fileSizeBytes / (1024 ** 4);
       unit = 'TB';
       break;
     default: // >= 1 PB
-      size = fileSizeBytes / (1000 ** 5);
+      size = fileSizeBytes / (1024 ** 5);
       unit = 'PB';
       break;
   }
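The hunk above switches the size formatter's thresholds and divisors from base 1000 to base 1024, matching the release note that sizes now use 1024-based units like Windows Explorer. The same conversion can be sketched in Go (the function name and output format are illustrative, not taken from the codebase):

```go
package main

import "fmt"

// humanReadableSize converts a byte count using 1024-based units,
// mirroring the switch statement above: each unit step divides by 1024.
func humanReadableSize(bytes float64) string {
	units := []string{"B", "KB", "MB", "GB", "TB", "PB"}
	i := 0
	for bytes >= 1024 && i < len(units)-1 {
		bytes /= 1024
		i++
	}
	return fmt.Sprintf("%.1f %s", bytes, units[i])
}

func main() {
	fmt.Println(humanReadableSize(1536))    // 1.5 KB
	fmt.Println(humanReadableSize(1048576)) // 1.0 MB
}
```

With base-1000 divisors, 1048576 bytes would instead display as roughly "1.0 MB" at 1000000 bytes; the 1024 base shifts each unit boundary up.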
@@ -51,13 +51,13 @@ export default {
       return state.selected;
     },
     nameSorted() {
-      return state.req.sorting.by === "name";
+      return state.user.sorting.by === "name";
     },
     sizeSorted() {
-      return state.req.sorting.by === "size";
+      return state.user.sorting.by === "size";
     },
     modifiedSorted() {
-      return state.req.sorting.by === "modified";
+      return state.user.sorting.by === "modified";
     },
     ascOrdered() {
       return state.req.sorting.asc;
@@ -297,7 +297,7 @@ export default {
       const currentIndex = this.viewModes.indexOf(state.user.viewMode);
       const nextIndex = (currentIndex + 1) % this.viewModes.length;
       const newView = this.viewModes[nextIndex];
-      mutations.updateCurrentUser({ "viewMode": newView });
+      mutations.updateCurrentUser({ viewMode: newView });
     },
     preventDefault(event) {
       // Wrapper around prevent default.
@@ -207,16 +207,16 @@ export default {
       return state.multiple;
     },
     nameSorted() {
-      return state.req.sorting.by === "name";
+      return state.user.sorting.by === "name";
     },
     sizeSorted() {
-      return state.req.sorting.by === "size";
+      return state.user.sorting.by === "size";
     },
     modifiedSorted() {
-      return state.req.sorting.by === "modified";
+      return state.user.sorting.by === "modified";
     },
     ascOrdered() {
-      return state.req.sorting.asc;
+      return state.user.sorting.asc;
     },
     items() {
       return getters.reqItems();

@@ -443,7 +443,7 @@ export default {
       return;
     }
     if (noModifierKeys && getters.currentPromptName() != null) {
-      return
+      return;
     }
     // Handle the space bar key
     if (key === " ") {
27
roadmap.md

@@ -1,27 +0,0 @@
-# Planned Roadmap
-
-next 0.2.x release:
-
-- Theme configuration from settings
-- File syncronization improvements
-- right-click context menu
-
-initial 0.3.0 release :
-
-- database changes
-- introduce jobs as replacement to runners.
-  - Add Job status to the sidebar
-  - index status.
-  - Job status from users
-  - upload status
-
-Future releases:
-- Replace http routes for gorilla/mux with pocketbase
-- Allow multiple volumes to show up in the same filebrowser container. https://github.com/filebrowser/filebrowser/issues/2514
-  - enable/disable indexing for certain mounts
-- Add tools to sidebar
-  - duplicate file detector.
-  - bulk rename https://github.com/filebrowser/filebrowser/issues/2473
-  - job manager - folder sync, copy, lifecycle operations
-- metrics tracker - user access, file access, download count, last login, etc
-- support minio s3 and backblaze sources https://github.com/filebrowser/filebrowser/issues/2544