diff --git a/CHANGELOG.md b/CHANGELOG.md index 2af53cea..13f053ea 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,23 @@ All notable changes to this project will be documented in this file. For commit guidelines, please refer to [Standard Version](https://github.com/conventional-changelog/standard-version). +## v0.2.10 + + **New Features**: + - Allows user creation via command line arguments https://github.com/gtsteffaniak/filebrowser/issues/196 + - Folder sizes are always shown, leveraging the index. https://github.com/gtsteffaniak/filebrowser/issues/138 + - Searching files by size is no longer slower than other searches. + + **Bugfixes**: + - Fixed file selection when in single-click mode https://github.com/gtsteffaniak/filebrowser/issues/214 + - Fixed displayed search context on root directory + - Fixed issue where searching "smaller than" actually returned files "larger than" + + **Notes**: + - Memory usage from index is reduced by ~40% + - Indexing time has increased 2x due to the extra processing time required to calculate directory sizes. + - File size calculations use base 1024 vs the previous base 1000 (matching Windows Explorer) + ## v0.2.9 This release focused on UI navigation experience. Improving keyboard navigation and adds right click context menu. diff --git a/README.md b/README.md index a31c33ff..3190d42a 100644 --- a/README.md +++ b/README.md @@ -6,7 +6,7 @@

FileBrowser Quantum - A modern web-based file manager

- +

> [!WARNING] @@ -18,9 +18,9 @@ FileBrowser Quantum is a fork of the filebrowser opensource project with the following changes: - 1. [x] Enhanced lightning fast indexed search - - Real-time results as you type - - Works with more type filters + 1. [x] Efficiently indexed files + - Real-time search results as you type + - Search works with more type filters - Enhanced interactive results page. 2. [x] Revamped and simplified GUI navbar and sidebar menu. - Additional compact view mode as well as refreshed view mode @@ -131,39 +131,30 @@ Not using docker (not recommended), download your binary from releases and run w ./filebrowser -c ``` +## Command Line Usage + +Only a few commands are available. Three actions are performed via the command line: + +1. Running the program, as shown in the install step. The only argument used is the config file, if you choose to override the default "filebrowser.yaml" +2. Checking the version info via `./filebrowser version` +3. Updating the DB, which currently only supports adding users via `./filebrowser set -u username,password [-a] [-s "example/scope"]` + ## Configuration All configuration is now done via a single configuration file: `filebrowser.yaml`, here is an example of minimal [configuration file](./backend/filebrowser.yaml). -View the [Configuration Help Page](./configuration.md) for available +View the [Configuration Help Page](./docs/configuration.md) for available configuration options and other help. ## Migration from filebrowser/filebrowser -If you currently use the original opensource filebrowser -but want to try using this. I recommend you start fresh without -reusing the database, but there are a few things you'll need to do if you -must migrate: - -1. Create a configuration file as mentioned above. -2. Copy your database file from the original filebrowser to the path of - the new one. -3. Update the configuration file to use the database (under server in - filebrowser.yml) -4. 
If you are using docker, update the docker-compose file or docker run - command to use the config file as described in the install section - above. -5. If you are not using docker, just make sure you run filebrowser -c - filebrowser.yml and have a valid filebrowser config. - - -The filebrowser Quantum application should run with the same user and rules that -you have from the original. But keep in mind the differences that are -mentioned at the top of this readme. - +If you currently use the original filebrowser but want to try this fork, +I recommend starting fresh without reusing the database. If you want to +migrate your existing database to FileBrowser Quantum, visit the [migration +readme](./docs/migration.md). ## Comparison Chart @@ -217,4 +208,4 @@ Chromecast support | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ## Roadmap -see [Roadmap Page](./roadmap.md) +see [Roadmap Page](./docs/roadmap.md) diff --git a/backend/benchmark_results.txt b/backend/benchmark_results.txt index 04a3563c..edfc14f3 100644 --- a/backend/benchmark_results.txt +++ b/backend/benchmark_results.txt @@ -7,26 +7,27 @@ PASS ok github.com/gtsteffaniak/filebrowser/diskcache 0.004s ? 
github.com/gtsteffaniak/filebrowser/errors [no test files] +2024/10/07 12:46:34 could not update unknown type: unknown goos: linux goarch: amd64 pkg: github.com/gtsteffaniak/filebrowser/files cpu: 11th Gen Intel(R) Core(TM) i5-11320H @ 3.20GHz -BenchmarkFillIndex-8 10 3559830 ns/op 274639 B/op 2026 allocs/op -BenchmarkSearchAllIndexes-8 10 31912612 ns/op 20545741 B/op 312477 allocs/op +BenchmarkFillIndex-8 10 3847878 ns/op 758424 B/op 5567 allocs/op +BenchmarkSearchAllIndexes-8 10 780431 ns/op 173444 B/op 2014 allocs/op PASS -ok github.com/gtsteffaniak/filebrowser/files 0.417s +ok github.com/gtsteffaniak/filebrowser/files 0.073s PASS -ok github.com/gtsteffaniak/filebrowser/fileutils 0.002s -2024/08/27 16:16:13 h: 401 -2024/08/27 16:16:13 h: 401 -2024/08/27 16:16:13 h: 401 -2024/08/27 16:16:13 h: 401 -2024/08/27 16:16:13 h: 401 -2024/08/27 16:16:13 h: 401 +ok github.com/gtsteffaniak/filebrowser/fileutils 0.003s +2024/10/07 12:46:34 h: 401 +2024/10/07 12:46:34 h: 401 +2024/10/07 12:46:34 h: 401 +2024/10/07 12:46:34 h: 401 +2024/10/07 12:46:34 h: 401 +2024/10/07 12:46:34 h: 401 PASS -ok github.com/gtsteffaniak/filebrowser/http 0.100s +ok github.com/gtsteffaniak/filebrowser/http 0.080s PASS -ok github.com/gtsteffaniak/filebrowser/img 0.124s +ok github.com/gtsteffaniak/filebrowser/img 0.137s PASS ok github.com/gtsteffaniak/filebrowser/rules 0.002s PASS @@ -38,4 +39,5 @@ ok github.com/gtsteffaniak/filebrowser/settings 0.004s ? github.com/gtsteffaniak/filebrowser/storage/bolt [no test files] PASS ok github.com/gtsteffaniak/filebrowser/users 0.002s +? github.com/gtsteffaniak/filebrowser/utils [no test files] ? 
github.com/gtsteffaniak/filebrowser/version [no test files] diff --git a/backend/cmd/root.go b/backend/cmd/root.go index 9214947e..f5a892fa 100644 --- a/backend/cmd/root.go +++ b/backend/cmd/root.go @@ -11,28 +11,28 @@ import ( "os" "os/signal" "strconv" + "strings" "syscall" "embed" - "github.com/spf13/pflag" - - "github.com/spf13/cobra" - - "github.com/gtsteffaniak/filebrowser/auth" "github.com/gtsteffaniak/filebrowser/diskcache" "github.com/gtsteffaniak/filebrowser/files" fbhttp "github.com/gtsteffaniak/filebrowser/http" "github.com/gtsteffaniak/filebrowser/img" "github.com/gtsteffaniak/filebrowser/settings" + "github.com/gtsteffaniak/filebrowser/storage" "github.com/gtsteffaniak/filebrowser/users" + "github.com/gtsteffaniak/filebrowser/utils" "github.com/gtsteffaniak/filebrowser/version" ) //go:embed dist/* var assets embed.FS -var nonEmbededFS = os.Getenv("FILEBROWSER_NO_EMBEDED") == "true" +var ( + nonEmbededFS = os.Getenv("FILEBROWSER_NO_EMBEDED") == "true" +) type dirFS struct { http.Dir @@ -42,102 +42,119 @@ func (d dirFS) Open(name string) (fs.File, error) { return d.Dir.Open(name) } -func init() { - // Define a flag for the config option (-c or --config) - configFlag := pflag.StringP("config", "c", "filebrowser.yaml", "Path to the config file") - // Bind the flags to the pflag command line parser - pflag.CommandLine.AddGoFlagSet(flag.CommandLine) - pflag.Parse() - log.Printf("Initializing FileBrowser Quantum (%v) with config file: %v \n", version.Version, *configFlag) - log.Println("Embeded Frontend:", !nonEmbededFS) - settings.Initialize(*configFlag) +func getStore(config string) (*storage.Storage, bool) { + // Use the config file (global flag) + log.Printf("Using Config file : %v", config) + settings.Initialize(config) + store, hasDB, err := storage.InitializeDb(settings.Config.Server.Database) + if err != nil { + log.Fatal("could not load db info: ", err) + } + return store, hasDB } -var rootCmd = &cobra.Command{ - Use: "filebrowser", - Run: 
python(func(cmd *cobra.Command, args []string, d pythonData) { - serverConfig := settings.Config.Server - if !d.hadDB { - quickSetup(d) - } - if serverConfig.NumImageProcessors < 1 { - log.Fatal("Image resize workers count could not be < 1") - } - imgSvc := img.New(serverConfig.NumImageProcessors) - - cacheDir := "/tmp" - var fileCache diskcache.Interface - - // Use file cache if cacheDir is specified - if cacheDir != "" { - var err error - fileCache, err = diskcache.NewFileCache(cacheDir) - if err != nil { - log.Fatalf("failed to create file cache: %v", err) - } - } else { - // No-op cache if no cacheDir is specified - fileCache = diskcache.NewNoOp() - } - // initialize indexing and schedule indexing ever n minutes (default 5) - go files.InitializeIndex(serverConfig.IndexingInterval, serverConfig.Indexing) - _, err := os.Stat(serverConfig.Root) - checkErr(fmt.Sprint("cmd os.Stat ", serverConfig.Root), err) - var listener net.Listener - address := serverConfig.Address + ":" + strconv.Itoa(serverConfig.Port) - switch { - case serverConfig.Socket != "": - listener, err = net.Listen("unix", serverConfig.Socket) - checkErr("net.Listen", err) - socketPerm, err := cmd.Flags().GetUint32("socket-perm") //nolint:govet - checkErr("cmd.Flags().GetUint32", err) - err = os.Chmod(serverConfig.Socket, os.FileMode(socketPerm)) - checkErr("os.Chmod", err) - case serverConfig.TLSKey != "" && serverConfig.TLSCert != "": - cer, err := tls.LoadX509KeyPair(serverConfig.TLSCert, serverConfig.TLSKey) //nolint:govet - checkErr("tls.LoadX509KeyPair", err) - listener, err = tls.Listen("tcp", address, &tls.Config{ - MinVersion: tls.VersionTLS12, - Certificates: []tls.Certificate{cer}}, - ) - checkErr("tls.Listen", err) - default: - listener, err = net.Listen("tcp", address) - checkErr("net.Listen", err) - } - sigc := make(chan os.Signal, 1) - signal.Notify(sigc, os.Interrupt, syscall.SIGTERM) - go cleanupHandler(listener, sigc) - if !nonEmbededFS { - assetsFs, err := fs.Sub(assets, "dist") - 
if err != nil { - log.Fatal("Could not embed frontend. Does backend/cmd/dist exist? Must be built and exist first") - } - handler, err := fbhttp.NewHandler(imgSvc, fileCache, d.store, &serverConfig, assetsFs) - checkErr("fbhttp.NewHandler", err) - defer listener.Close() - log.Println("Listening on", listener.Addr().String()) - //nolint: gosec - if err := http.Serve(listener, handler); err != nil { - log.Fatalf("Could not start server on port %d: %v", serverConfig.Port, err) - } - } else { - assetsFs := dirFS{Dir: http.Dir("frontend/dist")} - handler, err := fbhttp.NewHandler(imgSvc, fileCache, d.store, &serverConfig, assetsFs) - checkErr("fbhttp.NewHandler", err) - defer listener.Close() - log.Println("Listening on", listener.Addr().String()) - //nolint: gosec - if err := http.Serve(listener, handler); err != nil { - log.Fatalf("Could not start server on port %d: %v", serverConfig.Port, err) - } - } - - }, pythonConfig{allowNoDB: true}), +func generalUsage() { + fmt.Printf(`usage: ./filebrowser <command> [options] + commands: + (none) Run the filebrowser server; use -c to override the default "filebrowser.yaml" config file + version Print version information + set Update the database; currently supports adding users, e.g. 
+ ./filebrowser set -u username,password [-a] [-s "example/scope"] + ` + "\n") } func StartFilebrowser() { + // Global flags + var configPath string + var help bool + // Override the default usage output to use generalUsage() + flag.Usage = generalUsage + flag.StringVar(&configPath, "c", "filebrowser.yaml", "Path to the config file.") + flag.BoolVar(&help, "h", false, "Get help about commands") + + // Parse global flags (before subcommands) + flag.Parse() // print generalUsage on error + + // Show help if requested + if help { + generalUsage() + return + } + + // Create a new FlagSet for the 'set' subcommand + setCmd := flag.NewFlagSet("set", flag.ExitOnError) + var user, scope, dbConfig string + var asAdmin bool + + setCmd.StringVar(&user, "u", "", "Comma-separated username and password: \"set -u username,password\"") + setCmd.BoolVar(&asAdmin, "a", false, "Create user as admin user, used in combination with -u") + setCmd.StringVar(&scope, "s", "", "Specify a user scope, otherwise default user config scope is used") + setCmd.StringVar(&dbConfig, "c", "filebrowser.yaml", "Path to the config file.") + + // Parse subcommand flags only if a subcommand is specified + if len(os.Args) > 1 { + switch os.Args[1] { + case "set": + err := setCmd.Parse(os.Args[2:]) // skip the program name and the "set" subcommand + if err != nil { + setCmd.PrintDefaults() + os.Exit(1) + } + userInfo := strings.Split(user, ",") + if len(userInfo) < 2 { + fmt.Println("not enough info to create user: \"set -u username,password\"") + setCmd.PrintDefaults() + os.Exit(1) + } + username := userInfo[0] + password := userInfo[1] + getStore(dbConfig) + // Create the user logic + if asAdmin { + log.Printf("Creating user as admin: %s\n", username) + } else { + log.Printf("Creating user: %s\n", username) + } + newUser := users.User{ + Username: username, + Password: password, + } + if scope != "" { + newUser.Scope = scope + } + err = storage.CreateUser(newUser, asAdmin) + if err != 
nil { + log.Fatal("Could not create user: ", err) + } + return + case "version": + fmt.Println("FileBrowser Quantum - A modern web-based file manager") + fmt.Printf("Version : %v\n", version.Version) + fmt.Printf("Commit : %v\n", version.CommitSHA) + fmt.Printf("Release Info : https://github.com/gtsteffaniak/filebrowser/releases/tag/%v\n", version.Version) + return + } + } + store, dbExists := getStore(configPath) + indexingInterval := fmt.Sprint(settings.Config.Server.IndexingInterval, " minutes") + if !settings.Config.Server.Indexing { + indexingInterval = "disabled" + } + database := fmt.Sprintf("Using existing database : %v", settings.Config.Server.Database) + if !dbExists { + database = fmt.Sprintf("Creating new database : %v", settings.Config.Server.Database) + } + log.Printf("Initializing FileBrowser Quantum (%v)\n", version.Version) + log.Println("Embedded frontend :", !nonEmbededFS) + log.Println(database) + log.Println("Sources :", settings.Config.Server.Root) + log.Print("Indexing interval : ", indexingInterval) + + serverConfig := settings.Config.Server + // initialize indexing and schedule indexing every n minutes (default 5) + go files.InitializeIndex(serverConfig.IndexingInterval, serverConfig.Indexing) + if err := rootCMD(store, &serverConfig); err != nil { log.Fatal("Error starting filebrowser:", err) } } @@ -149,37 +166,77 @@ func cleanupHandler(listener net.Listener, c chan os.Signal) { //nolint:interfac os.Exit(0) } -func quickSetup(d pythonData) { - settings.Config.Auth.Key = generateKey() - if settings.Config.Auth.Method == "noauth" { - err := d.store.Auth.Save(&auth.NoAuth{}) - checkErr("d.store.Auth.Save", err) +func rootCMD(store *storage.Storage, serverConfig *settings.Server) error { + if serverConfig.NumImageProcessors < 1 { + log.Fatal("Image resize worker count cannot be less than 1") + } + imgSvc := img.New(serverConfig.NumImageProcessors) + + cacheDir := "/tmp" + var fileCache diskcache.Interface + + // Use file cache if cacheDir is 
specified + if cacheDir != "" { + var err error + fileCache, err = diskcache.NewFileCache(cacheDir) + if err != nil { + log.Fatalf("failed to create file cache: %v", err) + } } else { - settings.Config.Auth.Method = "password" - err := d.store.Auth.Save(&auth.JSONAuth{}) - checkErr("d.store.Auth.Save", err) + // No-op cache if no cacheDir is specified + fileCache = diskcache.NewNoOp() } - err := d.store.Settings.Save(&settings.Config) - checkErr("d.store.Settings.Save", err) - err = d.store.Settings.SaveServer(&settings.Config.Server) - checkErr("d.store.Settings.SaveServer", err) - user := users.ApplyDefaults(users.User{}) - user.Username = settings.Config.Auth.AdminUsername - user.Password = settings.Config.Auth.AdminPassword - user.Perm.Admin = true - user.Scope = "./" - user.DarkMode = true - user.ViewMode = "normal" - user.LockPassword = false - user.Perm = settings.Permissions{ - Create: true, - Rename: true, - Modify: true, - Delete: true, - Share: true, - Download: true, - Admin: true, + + fbhttp.SetupEnv(store, serverConfig, fileCache) + + _, err := os.Stat(serverConfig.Root) + utils.CheckErr(fmt.Sprint("cmd os.Stat ", serverConfig.Root), err) + var listener net.Listener + address := serverConfig.Address + ":" + strconv.Itoa(serverConfig.Port) + switch { + case serverConfig.Socket != "": + listener, err = net.Listen("unix", serverConfig.Socket) + utils.CheckErr("net.Listen", err) + err = os.Chmod(serverConfig.Socket, os.FileMode(0666)) // socket-perm + utils.CheckErr("os.Chmod", err) + case serverConfig.TLSKey != "" && serverConfig.TLSCert != "": + cer, err := tls.LoadX509KeyPair(serverConfig.TLSCert, serverConfig.TLSKey) //nolint:govet + utils.CheckErr("tls.LoadX509KeyPair", err) + listener, err = tls.Listen("tcp", address, &tls.Config{ + MinVersion: tls.VersionTLS12, + Certificates: []tls.Certificate{cer}}, + ) + utils.CheckErr("tls.Listen", err) + default: + listener, err = net.Listen("tcp", address) + utils.CheckErr("net.Listen", err) } - err = 
d.store.Users.Save(&user) - checkErr("d.store.Users.Save", err) + sigc := make(chan os.Signal, 1) + signal.Notify(sigc, os.Interrupt, syscall.SIGTERM) + go cleanupHandler(listener, sigc) + if !nonEmbededFS { + assetsFs, err := fs.Sub(assets, "dist") + if err != nil { + log.Fatal("Could not embed frontend. Does backend/cmd/dist exist? Must be built and exist first") + } + handler, err := fbhttp.NewHandler(imgSvc, assetsFs) + utils.CheckErr("fbhttp.NewHandler", err) + defer listener.Close() + log.Println("Listening on", listener.Addr().String()) + //nolint: gosec + if err := http.Serve(listener, handler); err != nil { + log.Fatalf("Could not start server on port %d: %v", serverConfig.Port, err) + } + } else { + assetsFs := dirFS{Dir: http.Dir("frontend/dist")} + handler, err := fbhttp.NewHandler(imgSvc, assetsFs) + utils.CheckErr("fbhttp.NewHandler", err) + defer listener.Close() + log.Println("Listening on", listener.Addr().String()) + //nolint: gosec + if err := http.Serve(listener, handler); err != nil { + log.Fatalf("Could not start server on port %d: %v", serverConfig.Port, err) + } + } + return nil } diff --git a/backend/cmd/rule_rm.go b/backend/cmd/rule_rm.go index cfa60761..90b3f787 100644 --- a/backend/cmd/rule_rm.go +++ b/backend/cmd/rule_rm.go @@ -6,7 +6,9 @@ import ( "github.com/spf13/cobra" "github.com/gtsteffaniak/filebrowser/settings" + "github.com/gtsteffaniak/filebrowser/storage" "github.com/gtsteffaniak/filebrowser/users" + "github.com/gtsteffaniak/filebrowser/utils" ) func init() { @@ -40,27 +42,27 @@ including 'index_end'.`, return nil }, - Run: python(func(cmd *cobra.Command, args []string, d pythonData) { + Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) { i, err := strconv.Atoi(args[0]) - checkErr("strconv.Atoi", err) + utils.CheckErr("strconv.Atoi", err) f := i if len(args) == 2 { //nolint:gomnd f, err = strconv.Atoi(args[1]) - checkErr("strconv.Atoi", err) + utils.CheckErr("strconv.Atoi", err) } user := func(u 
*users.User) { u.Rules = append(u.Rules[:i], u.Rules[f+1:]...) - err := d.store.Users.Save(u) - checkErr("d.store.Users.Save", err) + err := store.Users.Save(u) + utils.CheckErr("store.Users.Save", err) } global := func(s *settings.Settings) { s.Rules = append(s.Rules[:i], s.Rules[f+1:]...) - err := d.store.Settings.Save(s) - checkErr("d.store.Settings.Save", err) + err := store.Settings.Save(s) + utils.CheckErr("store.Settings.Save", err) } - runRules(d.store, cmd, user, global) - }, pythonConfig{}), + runRules(store, cmd, user, global) + }), } diff --git a/backend/cmd/rules.go b/backend/cmd/rules.go index 7638dd8d..bfa2e6a1 100644 --- a/backend/cmd/rules.go +++ b/backend/cmd/rules.go @@ -10,10 +10,10 @@ import ( "github.com/gtsteffaniak/filebrowser/settings" "github.com/gtsteffaniak/filebrowser/storage" "github.com/gtsteffaniak/filebrowser/users" + "github.com/gtsteffaniak/filebrowser/utils" ) func init() { - rootCmd.AddCommand(rulesCmd) rulesCmd.PersistentFlags().StringP("username", "u", "", "username of user to which the rules apply") rulesCmd.PersistentFlags().UintP("id", "i", 0, "id of user to which the rules apply") } @@ -33,7 +33,7 @@ func runRules(st *storage.Storage, cmd *cobra.Command, usersFn func(*users.User) id := getUserIdentifier(cmd.Flags()) if id != nil { user, err := st.Users.Get("", id) - checkErr("st.Users.Get", err) + utils.CheckErr("st.Users.Get", err) if usersFn != nil { usersFn(user) @@ -44,7 +44,7 @@ func runRules(st *storage.Storage, cmd *cobra.Command, usersFn func(*users.User) } s, err := st.Settings.Get() - checkErr("st.Settings.Get", err) + utils.CheckErr("st.Settings.Get", err) if globalFn != nil { globalFn(s) diff --git a/backend/cmd/rules_add.go b/backend/cmd/rules_add.go index c6a13a73..d7fa4064 100644 --- a/backend/cmd/rules_add.go +++ b/backend/cmd/rules_add.go @@ -7,7 +7,9 @@ import ( "github.com/gtsteffaniak/filebrowser/rules" "github.com/gtsteffaniak/filebrowser/settings" + "github.com/gtsteffaniak/filebrowser/storage" 
"github.com/gtsteffaniak/filebrowser/users" + "github.com/gtsteffaniak/filebrowser/utils" ) func init() { @@ -21,7 +23,7 @@ var rulesAddCmd = &cobra.Command{ Short: "Add a global rule or user rule", Long: `Add a global rule or user rule.`, Args: cobra.ExactArgs(1), - Run: python(func(cmd *cobra.Command, args []string, d pythonData) { + Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) { allow := mustGetBool(cmd.Flags(), "allow") regex := mustGetBool(cmd.Flags(), "regex") exp := args[0] @@ -43,16 +45,16 @@ var rulesAddCmd = &cobra.Command{ user := func(u *users.User) { u.Rules = append(u.Rules, rule) - err := d.store.Users.Save(u) - checkErr("d.store.Users.Save", err) + err := store.Users.Save(u) + utils.CheckErr("store.Users.Save", err) } global := func(s *settings.Settings) { s.Rules = append(s.Rules, rule) - err := d.store.Settings.Save(s) - checkErr("d.store.Settings.Save", err) + err := store.Settings.Save(s) + utils.CheckErr("store.Settings.Save", err) } - runRules(d.store, cmd, user, global) - }, pythonConfig{}), + runRules(store, cmd, user, global) + }), } diff --git a/backend/cmd/rules_ls.go b/backend/cmd/rules_ls.go index e0e5f8f8..a29f6098 100644 --- a/backend/cmd/rules_ls.go +++ b/backend/cmd/rules_ls.go @@ -1,6 +1,7 @@ package cmd import ( + "github.com/gtsteffaniak/filebrowser/storage" "github.com/spf13/cobra" ) @@ -13,7 +14,7 @@ var rulesLsCommand = &cobra.Command{ Short: "List global rules or user specific rules", Long: `List global rules or user specific rules.`, Args: cobra.NoArgs, - Run: python(func(cmd *cobra.Command, args []string, d pythonData) { - runRules(d.store, cmd, nil, nil) - }, pythonConfig{}), + Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) { + runRules(store, cmd, nil, nil) + }), } diff --git a/backend/cmd/users.go b/backend/cmd/users.go index 0d214456..9d8e4f40 100644 --- a/backend/cmd/users.go +++ b/backend/cmd/users.go @@ -11,10 +11,6 @@ import ( 
"github.com/gtsteffaniak/filebrowser/users" ) -func init() { - rootCmd.AddCommand(usersCmd) -} - var usersCmd = &cobra.Command{ Use: "users", Short: "Users management utility", diff --git a/backend/cmd/users_add.go b/backend/cmd/users_add.go index bd9e8b51..ccf9a119 100644 --- a/backend/cmd/users_add.go +++ b/backend/cmd/users_add.go @@ -3,7 +3,9 @@ package cmd import ( "github.com/spf13/cobra" + "github.com/gtsteffaniak/filebrowser/storage" "github.com/gtsteffaniak/filebrowser/users" + "github.com/gtsteffaniak/filebrowser/utils" ) func init() { @@ -15,26 +17,26 @@ var usersAddCmd = &cobra.Command{ Short: "Create a new user", Long: `Create a new user and add it to the database.`, Args: cobra.ExactArgs(2), //nolint:gomnd - Run: python(func(cmd *cobra.Command, args []string, d pythonData) { + Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) { user := &users.User{ Username: args[0], Password: args[1], LockPassword: mustGetBool(cmd.Flags(), "lockPassword"), } - servSettings, err := d.store.Settings.GetServer() - checkErr("d.store.Settings.GetServer()", err) + servSettings, err := store.Settings.GetServer() + utils.CheckErr("store.Settings.GetServer()", err) // since getUserDefaults() polluted s.Defaults.Scope // which makes the Scope not the one saved in the db // we need the right s.Defaults.Scope here - s2, err := d.store.Settings.Get() - checkErr("d.store.Settings.Get()", err) + s2, err := store.Settings.Get() + utils.CheckErr("store.Settings.Get()", err) userHome, err := s2.MakeUserDir(user.Username, user.Scope, servSettings.Root) - checkErr("s2.MakeUserDir", err) + utils.CheckErr("s2.MakeUserDir", err) user.Scope = userHome - err = d.store.Users.Save(user) - checkErr("d.store.Users.Save", err) + err = store.Users.Save(user) + utils.CheckErr("store.Users.Save", err) printUsers([]*users.User{user}) - }, pythonConfig{}), + }), } diff --git a/backend/cmd/users_export.go b/backend/cmd/users_export.go index f12eb778..d62cddfb 100644 --- 
a/backend/cmd/users_export.go +++ b/backend/cmd/users_export.go @@ -1,6 +1,8 @@ package cmd import ( + "github.com/gtsteffaniak/filebrowser/storage" + "github.com/gtsteffaniak/filebrowser/utils" "github.com/spf13/cobra" ) @@ -14,11 +16,11 @@ var usersExportCmd = &cobra.Command{ Long: `Export all users to a json or yaml file. Please indicate the path to the file where you want to write the users.`, Args: jsonYamlArg, - Run: python(func(cmd *cobra.Command, args []string, d pythonData) { - list, err := d.store.Users.Gets("") - checkErr("d.store.Users.Gets", err) + Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) { + list, err := store.Users.Gets("") + utils.CheckErr("store.Users.Gets", err) err = marshal(args[0], list) - checkErr("marshal", err) - }, pythonConfig{}), + utils.CheckErr("marshal", err) + }), } diff --git a/backend/cmd/users_find.go b/backend/cmd/users_find.go index 3f66f793..43ac9a5f 100644 --- a/backend/cmd/users_find.go +++ b/backend/cmd/users_find.go @@ -3,7 +3,9 @@ package cmd import ( "github.com/spf13/cobra" + "github.com/gtsteffaniak/filebrowser/storage" "github.com/gtsteffaniak/filebrowser/users" + "github.com/gtsteffaniak/filebrowser/utils" ) func init() { @@ -26,7 +28,7 @@ var usersLsCmd = &cobra.Command{ Run: findUsers, } -var findUsers = python(func(cmd *cobra.Command, args []string, d pythonData) { +var findUsers = cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) { var ( list []*users.User user *users.User @@ -36,16 +38,16 @@ var findUsers = python(func(cmd *cobra.Command, args []string, d pythonData) { if len(args) == 1 { username, id := parseUsernameOrID(args[0]) if username != "" { - user, err = d.store.Users.Get("", username) + user, err = store.Users.Get("", username) } else { - user, err = d.store.Users.Get("", id) + user, err = store.Users.Get("", id) } list = []*users.User{user} } else { - list, err = d.store.Users.Gets("") + list, err = store.Users.Gets("") } - 
checkErr("findUsers", err) + utils.CheckErr("findUsers", err) printUsers(list) -}, pythonConfig{}) +}) diff --git a/backend/cmd/users_import.go b/backend/cmd/users_import.go index a588f19f..b2984296 100644 --- a/backend/cmd/users_import.go +++ b/backend/cmd/users_import.go @@ -8,7 +8,9 @@ import ( "github.com/spf13/cobra" + "github.com/gtsteffaniak/filebrowser/storage" "github.com/gtsteffaniak/filebrowser/users" + "github.com/gtsteffaniak/filebrowser/utils" ) func init() { @@ -25,47 +27,47 @@ file. You can use this command to import new users to your installation. For that, just don't place their ID on the files list or set it to 0.`, Args: jsonYamlArg, - Run: python(func(cmd *cobra.Command, args []string, d pythonData) { + Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) { fd, err := os.Open(args[0]) - checkErr("os.Open", err) + utils.CheckErr("os.Open", err) defer fd.Close() list := []*users.User{} err = unmarshal(args[0], &list) - checkErr("unmarshal", err) + utils.CheckErr("unmarshal", err) if mustGetBool(cmd.Flags(), "replace") { - oldUsers, err := d.store.Users.Gets("") - checkErr("d.store.Users.Gets", err) + oldUsers, err := store.Users.Gets("") + utils.CheckErr("store.Users.Gets", err) err = marshal("users.backup.json", list) - checkErr("marshal users.backup.json", err) + utils.CheckErr("marshal users.backup.json", err) for _, user := range oldUsers { - err = d.store.Users.Delete(user.ID) - checkErr("d.store.Users.Delete", err) + err = store.Users.Delete(user.ID) + utils.CheckErr("store.Users.Delete", err) } } overwrite := mustGetBool(cmd.Flags(), "overwrite") for _, user := range list { - onDB, err := d.store.Users.Get("", user.ID) + onDB, err := store.Users.Get("", user.ID) // User exists in DB. 
if err == nil { if !overwrite { newErr := errors.New("user " + strconv.Itoa(int(user.ID)) + " is already registered") - checkErr("", newErr) + utils.CheckErr("", newErr) } // If the usernames mismatch, check if there is another one in the DB // with the new username. If there is, print an error and cancel the // operation if user.Username != onDB.Username { - if conflictuous, err := d.store.Users.Get("", user.Username); err == nil { //nolint:govet + if conflictuous, err := store.Users.Get("", user.Username); err == nil { //nolint:govet newErr := usernameConflictError(user.Username, conflictuous.ID, user.ID) - checkErr("usernameConflictError", newErr) + utils.CheckErr("usernameConflictError", newErr) } } } else { @@ -74,10 +76,10 @@ list or set it to 0.`, user.ID = 0 } - err = d.store.Users.Save(user) - checkErr("d.store.Users.Save", err) + err = store.Users.Save(user) + utils.CheckErr("store.Users.Save", err) } - }, pythonConfig{}), + }), } func usernameConflictError(username string, originalID, newID uint) error { diff --git a/backend/cmd/users_rm.go b/backend/cmd/users_rm.go index 63beb900..6cc89868 100644 --- a/backend/cmd/users_rm.go +++ b/backend/cmd/users_rm.go @@ -3,6 +3,8 @@ package cmd import ( "log" + "github.com/gtsteffaniak/filebrowser/storage" + "github.com/gtsteffaniak/filebrowser/utils" "github.com/spf13/cobra" ) @@ -15,17 +17,17 @@ var usersRmCmd = &cobra.Command{ Short: "Delete a user by username or id", Long: `Delete a user by username or id`, Args: cobra.ExactArgs(1), - Run: python(func(cmd *cobra.Command, args []string, d pythonData) { + Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) { username, id := parseUsernameOrID(args[0]) var err error if username != "" { - err = d.store.Users.Delete(username) + err = store.Users.Delete(username) } else { - err = d.store.Users.Delete(id) + err = store.Users.Delete(id) } - checkErr("usersRmCmd", err) + utils.CheckErr("usersRmCmd", err) log.Println("user deleted successfully") 
- }, pythonConfig{}), + }), } diff --git a/backend/cmd/users_update.go b/backend/cmd/users_update.go index 86352ecc..7dc75fe4 100644 --- a/backend/cmd/users_update.go +++ b/backend/cmd/users_update.go @@ -3,7 +3,9 @@ package cmd import ( "github.com/spf13/cobra" + "github.com/gtsteffaniak/filebrowser/storage" "github.com/gtsteffaniak/filebrowser/users" + "github.com/gtsteffaniak/filebrowser/utils" ) func init() { @@ -16,7 +18,7 @@ var usersUpdateCmd = &cobra.Command{ Long: `Updates an existing user. Set the flags for the options you want to change.`, Args: cobra.ExactArgs(1), - Run: python(func(cmd *cobra.Command, args []string, d pythonData) { + Run: cobraCmd(func(cmd *cobra.Command, args []string, store *storage.Storage) { username, id := parseUsernameOrID(args[0]) var ( @@ -25,14 +27,14 @@ options you want to change.`, ) if id != 0 { - user, err = d.store.Users.Get("", id) + user, err = store.Users.Get("", id) } else { - user, err = d.store.Users.Get("", username) + user, err = store.Users.Get("", username) } - checkErr("d.store.Users.Get", err) + utils.CheckErr("store.Users.Get", err) - err = d.store.Users.Update(user) - checkErr("d.store.Users.Update", err) + err = store.Users.Update(user) + utils.CheckErr("store.Users.Update", err) printUsers([]*users.User{user}) - }, pythonConfig{}), + }), } diff --git a/backend/cmd/utils.go b/backend/cmd/utils.go index 6b8b8fc9..43ac168b 100644 --- a/backend/cmd/utils.go +++ b/backend/cmd/utils.go @@ -3,113 +3,42 @@ package cmd import ( "encoding/json" "errors" - "fmt" - "log" "os" "path/filepath" - "github.com/asdine/storm/v3" "github.com/goccy/go-yaml" "github.com/spf13/cobra" "github.com/spf13/pflag" - "github.com/gtsteffaniak/filebrowser/settings" "github.com/gtsteffaniak/filebrowser/storage" - "github.com/gtsteffaniak/filebrowser/storage/bolt" + "github.com/gtsteffaniak/filebrowser/utils" ) -func checkErr(source string, err error) { - if err != nil { - log.Fatalf("%s: %v", source, err) - } -} - func mustGetString(flags 
*pflag.FlagSet, flag string) string { s, err := flags.GetString(flag) - checkErr("mustGetString", err) + utils.CheckErr("mustGetString", err) return s } func mustGetBool(flags *pflag.FlagSet, flag string) bool { b, err := flags.GetBool(flag) - checkErr("mustGetBool", err) + utils.CheckErr("mustGetBool", err) return b } func mustGetUint(flags *pflag.FlagSet, flag string) uint { b, err := flags.GetUint(flag) - checkErr("mustGetUint", err) + utils.CheckErr("mustGetUint", err) return b } -func generateKey() []byte { - k, err := settings.GenerateKey() - checkErr("generateKey", err) - return k -} - type cobraFunc func(cmd *cobra.Command, args []string) -type pythonFunc func(cmd *cobra.Command, args []string, data pythonData) - -type pythonConfig struct { - noDB bool - allowNoDB bool -} - -type pythonData struct { - hadDB bool - store *storage.Storage -} - -func dbExists(path string) (bool, error) { - stat, err := os.Stat(path) - if err == nil { - return stat.Size() != 0, nil - } - - if os.IsNotExist(err) { - d := filepath.Dir(path) - _, err = os.Stat(d) - if os.IsNotExist(err) { - if err := os.MkdirAll(d, 0700); err != nil { //nolint:govet,gomnd - return false, err - } - return false, nil - } - } - - return false, err -} - -func python(fn pythonFunc, cfg pythonConfig) cobraFunc { - return func(cmd *cobra.Command, args []string) { - data := pythonData{hadDB: true} - path := settings.Config.Server.Database - exists, err := dbExists(path) - - if err != nil { - panic(err) - } else if exists && cfg.noDB { - log.Fatal(path + " already exists") - } else if !exists && !cfg.noDB && !cfg.allowNoDB { - log.Fatal(path + " does not exist. 
Please run 'filebrowser config init' first.") - } - - data.hadDB = exists - db, err := storm.Open(path) - checkErr(fmt.Sprintf("storm.Open path %v", path), err) - - defer db.Close() - data.store, err = bolt.NewStorage(db) - checkErr("bolt.NewStorage", err) - fn(cmd, args, data) - } -} +type pythonFunc func(cmd *cobra.Command, args []string, store *storage.Storage) func marshal(filename string, data interface{}) error { fd, err := os.Create(filename) - checkErr("os.Create", err) + utils.CheckErr("os.Create", err) defer fd.Close() switch ext := filepath.Ext(filename); ext { @@ -127,7 +56,7 @@ func marshal(filename string, data interface{}) error { func unmarshal(filename string, data interface{}) error { fd, err := os.Open(filename) - checkErr("os.Open", err) + utils.CheckErr("os.Open", err) defer fd.Close() switch ext := filepath.Ext(filename); ext { @@ -152,3 +81,12 @@ func jsonYamlArg(cmd *cobra.Command, args []string) error { return errors.New("invalid format: " + ext) } } + +func cobraCmd(fn pythonFunc) cobraFunc { + return func(cmd *cobra.Command, args []string) { + // assumed helper: open the configured database and build the storage backend + store, err := storage.InitializeDb(settings.Config.Server.Database) + utils.CheckErr("storage.InitializeDb", err) + fn(cmd, args, store) + } +} diff --git a/backend/cmd/version.go b/backend/cmd/version.go deleted file mode 100644 index cc37f9b5..00000000 --- a/backend/cmd/version.go +++ /dev/null @@ -1,21 +0,0 @@ -package cmd - -import ( - "fmt" - - "github.com/spf13/cobra" - - "github.com/gtsteffaniak/filebrowser/version" -) - -func init() { - rootCmd.AddCommand(versionCmd) -} - -var versionCmd = &cobra.Command{ - Use: "version", - Short: "Print the version number", - Run: func(cmd *cobra.Command, args []string) { - fmt.Println("File Browser " + version.Version + "/" + version.CommitSHA) - }, -} diff --git a/backend/files/conditions.go b/backend/files/conditions.go index 7fec95b9..09d70b43 100644 --- a/backend/files/conditions.go +++ b/backend/files/conditions.go @@ -91,7 +91,7 @@ func ParseSearch(value string) *SearchOptions { opts.LargerThan = updateSize(size) } if strings.HasPrefix(filter, "smallerThan=") { - 
opts.Conditions["larger"] = true + opts.Conditions["smaller"] = true size := strings.TrimPrefix(filter, "smallerThan=") opts.SmallerThan = updateSize(size) } diff --git a/backend/files/conditions_test.go b/backend/files/conditions_test.go new file mode 100644 index 00000000..6bd08610 --- /dev/null +++ b/backend/files/conditions_test.go @@ -0,0 +1,154 @@ +package files + +import ( + "fmt" + "testing" + + "github.com/stretchr/testify/assert" +) + +// Helper function to create error messages dynamically +func errorMsg(extension, expectedType string, expectedMatch bool) string { + matchStatus := "to match" + if !expectedMatch { + matchStatus = "to not match" + } + return fmt.Sprintf("Expected %s %s type '%s'", extension, matchStatus, expectedType) +} + +func TestIsMatchingType(t *testing.T) { + // Test cases where IsMatchingType should return true + trueTestCases := []struct { + extension string + expectedType string + }{ + {".pdf", "pdf"}, + {".doc", "doc"}, + {".docx", "doc"}, + {".json", "text"}, + {".sh", "text"}, + {".zip", "archive"}, + {".rar", "archive"}, + } + + for _, tc := range trueTestCases { + assert.True(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, true)) + } + + // Test cases where IsMatchingType should return false + falseTestCases := []struct { + extension string + expectedType string + }{ + {".mp4", "doc"}, + {".mp4", "text"}, + {".mp4", "archive"}, + } + + for _, tc := range falseTestCases { + assert.False(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, false)) + } +} + +func TestUpdateSize(t *testing.T) { + // Helper function for size error messages + sizeErrorMsg := func(input string, expected, actual int) string { + return fmt.Sprintf("Expected size for input '%s' to be %d, got %d", input, expected, actual) + } + + // Test cases for updateSize + testCases := []struct { + input string + expected int + }{ + {"150", 150}, + {"invalid", 100}, + {"", 100}, + } + + 
for _, tc := range testCases { + actual := updateSize(tc.input) + assert.Equal(t, tc.expected, actual, sizeErrorMsg(tc.input, tc.expected, actual)) + } +} + +func TestIsDoc(t *testing.T) { + // Test cases where IsMatchingType should return true for document types + docTrueTestCases := []struct { + extension string + expectedType string + }{ + {".doc", "doc"}, + {".pdf", "doc"}, + } + + for _, tc := range docTrueTestCases { + assert.True(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, true)) + } + + // Test case where IsMatchingType should return false for document types + docFalseTestCases := []struct { + extension string + expectedType string + }{ + {".mp4", "doc"}, + } + + for _, tc := range docFalseTestCases { + assert.False(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, false)) + } +} + +func TestIsText(t *testing.T) { + // Test cases where IsMatchingType should return true for text types + textTrueTestCases := []struct { + extension string + expectedType string + }{ + {".json", "text"}, + {".sh", "text"}, + } + + for _, tc := range textTrueTestCases { + assert.True(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, true)) + } + + // Test case where IsMatchingType should return false for text types + textFalseTestCases := []struct { + extension string + expectedType string + }{ + {".mp4", "text"}, + } + + for _, tc := range textFalseTestCases { + assert.False(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, false)) + } +} + +func TestIsArchive(t *testing.T) { + // Test cases where IsMatchingType should return true for archive types + archiveTrueTestCases := []struct { + extension string + expectedType string + }{ + {".zip", "archive"}, + {".rar", "archive"}, + } + + for _, tc := range archiveTrueTestCases { + assert.True(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, 
tc.expectedType, true)) + } + + // Test case where IsMatchingType should return false for archive types + archiveFalseTestCases := []struct { + extension string + expectedType string + }{ + {".mp4", "archive"}, + } + + for _, tc := range archiveFalseTestCases { + assert.False(t, IsMatchingType(tc.extension, tc.expectedType), errorMsg(tc.extension, tc.expectedType, false)) + } +} diff --git a/backend/files/file.go b/backend/files/file.go index 2cf30832..6d362dc6 100644 --- a/backend/files/file.go +++ b/backend/files/file.go @@ -21,32 +21,42 @@ import ( "github.com/gtsteffaniak/filebrowser/errors" "github.com/gtsteffaniak/filebrowser/rules" "github.com/gtsteffaniak/filebrowser/settings" - "github.com/gtsteffaniak/filebrowser/users" ) var ( - bytesInMegabyte int64 = 1000000 - pathMutexes = make(map[string]*sync.Mutex) - pathMutexesMu sync.Mutex // Mutex to protect the pathMutexes map + pathMutexes = make(map[string]*sync.Mutex) + pathMutexesMu sync.Mutex // Mutex to protect the pathMutexes map ) +type ReducedItem struct { + Name string `json:"name"` + Size int64 `json:"size"` + ModTime time.Time `json:"modified"` + IsDir bool `json:"isDir,omitempty"` + Type string `json:"type"` +} + // FileInfo describes a file. 
+// ReducedItem is the non-recursive, flattened form of an entry in Items, used to return a flat items array. type FileInfo struct { - *Listing - Path string `json:"path,omitempty"` - Name string `json:"name"` - Size int64 `json:"size"` - Extension string `json:"-"` - ModTime time.Time `json:"modified"` - CacheTime time.Time `json:"-"` - Mode os.FileMode `json:"-"` - IsDir bool `json:"isDir,omitempty"` - IsSymlink bool `json:"isSymlink,omitempty"` - Type string `json:"type"` - Subtitles []string `json:"subtitles,omitempty"` - Content string `json:"content,omitempty"` - Checksums map[string]string `json:"checksums,omitempty"` - Token string `json:"token,omitempty"` + Items []*FileInfo `json:"-"` + ReducedItems []ReducedItem `json:"items,omitempty"` + Path string `json:"path,omitempty"` + Name string `json:"name"` + Size int64 `json:"size"` + Extension string `json:"-"` + ModTime time.Time `json:"modified"` + CacheTime time.Time `json:"-"` + Mode os.FileMode `json:"-"` + IsDir bool `json:"isDir,omitempty"` + IsSymlink bool `json:"isSymlink,omitempty"` + Type string `json:"type"` + Subtitles []string `json:"subtitles,omitempty"` + Content string `json:"content,omitempty"` + Checksums map[string]string `json:"checksums,omitempty"` + Token string `json:"token,omitempty"` + NumDirs int `json:"numDirs"` + NumFiles int `json:"numFiles"` } // FileOptions are the options when getting a file info. @@ -61,26 +71,11 @@ type FileOptions struct { Content bool } -// Sorting constants -const ( - SortingByName = "name" - SortingBySize = "size" - SortingByModified = "modified" -) - -// Listing is a collection of files. -type Listing struct { - Items []*FileInfo `json:"items"` - Path string `json:"path"` - NumDirs int `json:"numDirs"` - NumFiles int `json:"numFiles"` - Sorting users.Sorting `json:"sorting"` -} - -// NewFileInfo creates a File object from a path and a given user. This File -// object will be automatically filled depending on if it is a directory -// or a file. 
If it's a video file, it will also detect any subtitles. +// Legacy file info method, only called on non-indexed directories. +// Once indexing completes for the first time, NewFileInfo is never called. func NewFileInfo(opts FileOptions) (*FileInfo, error) { + + index := GetIndex(rootPath) if !opts.Checker.Check(opts.Path) { return nil, os.ErrPermission } @@ -93,6 +88,26 @@ func NewFileInfo(opts FileOptions) (*FileInfo, error) { if err = file.readListing(opts.Path, opts.Checker, opts.ReadHeader); err != nil { return nil, err } + cleanedItems := []ReducedItem{} + for _, item := range file.Items { + // This is particularly useful for root of index, while indexing hasn't finished. + // adds the directory sizes for directories that have been indexed already. + if item.IsDir { + adjustedPath := index.makeIndexPath(opts.Path+"/"+item.Name, true) + info, _ := index.GetMetadataInfo(adjustedPath) + item.Size = info.Size + } + cleanedItems = append(cleanedItems, ReducedItem{ + Name: item.Name, + Size: item.Size, + IsDir: item.IsDir, + ModTime: item.ModTime, + Type: item.Type, + }) + } + + file.Items = nil + file.ReducedItems = cleanedItems return file, nil } err = file.detectType(opts.Path, opts.Modify, opts.Content, true) @@ -102,6 +117,7 @@ func NewFileInfo(opts FileOptions) (*FileInfo, error) { } return file, err } + func FileInfoFaster(opts FileOptions) (*FileInfo, error) { // Lock access for the specific path pathMutex := getMutex(opts.Path) @@ -133,12 +149,11 @@ func FileInfoFaster(opts FileOptions) (*FileInfo, error) { file, err := NewFileInfo(opts) return file, err } - info, exists := index.GetMetadataInfo(adjustedPath) + info, exists := index.GetMetadataInfo(adjustedPath + "/" + filepath.Base(opts.Path)) if !exists || info.Name == "" { - return &FileInfo{}, errors.ErrEmptyKey + return NewFileInfo(opts) } return &info, nil - } func RefreshFileInfo(opts FileOptions) error { @@ -491,9 +506,8 @@ func (i *FileInfo) readListing(path string, checker rules.Checker, 
readHeader bo return err } - listing := &Listing{ + listing := &FileInfo{ Items: []*FileInfo{}, - Path: i.Path, NumDirs: 0, NumFiles: 0, } @@ -548,7 +562,7 @@ func (i *FileInfo) readListing(path string, checker rules.Checker, readHeader bo listing.Items = append(listing.Items, file) } - i.Listing = listing + i.Items = listing.Items return nil } diff --git a/backend/files/indexing.go b/backend/files/indexing.go index b6a1cc7e..505c8e47 100644 --- a/backend/files/indexing.go +++ b/backend/files/indexing.go @@ -1,7 +1,6 @@ package files import ( - "bytes" "log" "os" "path/filepath" @@ -12,23 +11,12 @@ import ( "github.com/gtsteffaniak/filebrowser/settings" ) -type Directory struct { - Metadata map[string]FileInfo - Files string -} - -type File struct { - Name string - IsDir bool -} - type Index struct { Root string - Directories map[string]Directory + Directories map[string]FileInfo NumDirs int NumFiles int inProgress bool - quickList []File LastIndexed time.Time mu sync.RWMutex } @@ -50,16 +38,12 @@ func indexingScheduler(intervalMinutes uint32) { rootPath = settings.Config.Server.Root } si := GetIndex(rootPath) - log.Printf("Indexing Files...") - log.Printf("Configured to run every %v minutes", intervalMinutes) - log.Printf("Indexing from root: %s", si.Root) for { startTime := time.Now() // Set the indexing flag to indicate that indexing is in progress si.resetCount() // Perform the indexing operation err := si.indexFiles(si.Root) - si.quickList = []File{} // Reset the indexing flag to indicate that indexing has finished si.inProgress = false // Update the LastIndexed time @@ -81,78 +65,114 @@ func indexingScheduler(intervalMinutes uint32) { // Define a function to recursively index files and directories func (si *Index) indexFiles(path string) error { - // Check if the current directory has been modified since the last indexing + // Ensure path is cleaned and normalized adjustedPath := si.makeIndexPath(path, true) + + // Open the directory dir, err := os.Open(path) 
if err != nil { - // Directory must have been deleted, remove it from the index + // If the directory can't be opened (e.g., deleted), remove it from the index si.RemoveDirectory(adjustedPath) + return err } + defer dir.Close() + dirInfo, err := dir.Stat() if err != nil { - dir.Close() return err } - // Compare the last modified time of the directory with the last indexed time - lastIndexed := si.LastIndexed - if dirInfo.ModTime().Before(lastIndexed) { - dir.Close() + // Check if the directory is already up-to-date + if dirInfo.ModTime().Before(si.LastIndexed) { return nil } - // Read the directory contents + // Read directory contents files, err := dir.Readdir(-1) if err != nil { return err } - dir.Close() - si.UpdateQuickList(files) - si.InsertFiles(path) - // done separately for memory efficiency on recursion - si.InsertDirs(path) + + // Recursively process files and directories + fileInfos := []*FileInfo{} + var totalSize int64 + var numDirs, numFiles int + + for _, file := range files { + parentInfo := &FileInfo{ + Name: file.Name(), + Size: file.Size(), + ModTime: file.ModTime(), + IsDir: file.IsDir(), + } + childInfo, err := si.InsertInfo(path, parentInfo) + if err != nil { + // Log error, but continue processing other files + continue + } + + // Accumulate directory size and items + totalSize += childInfo.Size + if childInfo.IsDir { + numDirs++ + } else { + numFiles++ + } + _ = childInfo.detectType(path, true, false, false) + fileInfos = append(fileInfos, childInfo) + } + + // Create FileInfo for the current directory + dirFileInfo := &FileInfo{ + Items: fileInfos, + Name: filepath.Base(path), + Size: totalSize, + ModTime: dirInfo.ModTime(), + CacheTime: time.Now(), + IsDir: true, + NumDirs: numDirs, + NumFiles: numFiles, + } + + // Add directory to index + si.mu.Lock() + si.Directories[adjustedPath] = *dirFileInfo + si.NumDirs += numDirs + si.NumFiles += numFiles + si.mu.Unlock() return nil } -func (si *Index) InsertFiles(path string) { - adjustedPath := 
si.makeIndexPath(path, true) - subDirectory := Directory{} - buffer := bytes.Buffer{} +// InsertInfo function to handle adding a file or directory into the index +func (si *Index) InsertInfo(parentPath string, file *FileInfo) (*FileInfo, error) { + filePath := filepath.Join(parentPath, file.Name) - for _, f := range si.GetQuickList() { - if !f.IsDir { - buffer.WriteString(f.Name + ";") - si.UpdateCount("files") + // Check if it's a directory and recursively index it + if file.IsDir { + // Recursively index directory + err := si.indexFiles(filePath) + if err != nil { + return nil, err } - } - // Use GetMetadataInfo and SetFileMetadata for safer read and write operations - subDirectory.Files = buffer.String() - si.SetDirectoryInfo(adjustedPath, subDirectory) -} -func (si *Index) InsertDirs(path string) { - for _, f := range si.GetQuickList() { - if f.IsDir { - adjustedPath := si.makeIndexPath(path, true) - if _, exists := si.Directories[adjustedPath]; exists { - si.UpdateCount("dirs") - // Add or update the directory in the map - if adjustedPath == "/" { - si.SetDirectoryInfo("/"+f.Name, Directory{}) - } else { - si.SetDirectoryInfo(adjustedPath+"/"+f.Name, Directory{}) - } - } - err := si.indexFiles(path + "/" + f.Name) - if err != nil { - if err.Error() == "invalid argument" { - log.Printf("Could not index \"%v\": %v \n", path, "Permission Denied") - } else { - log.Printf("Could not index \"%v\": %v \n", path, err) - } - } - } + // Return directory info from the index + adjustedPath := si.makeIndexPath(filePath, true) + si.mu.RLock() + dirInfo := si.Directories[adjustedPath] + si.mu.RUnlock() + return &dirInfo, nil } + + // Create FileInfo for regular files + fileInfo := &FileInfo{ + Path: filePath, + Name: file.Name, + Size: file.Size, + ModTime: file.ModTime, + IsDir: false, + } + + return fileInfo, nil } func (si *Index) makeIndexPath(subPath string, isDir bool) string { @@ -171,5 +191,8 @@ func (si *Index) makeIndexPath(subPath string, isDir bool) string { } 
else if !isDir { adjustedPath = filepath.Dir(adjustedPath) } + if !strings.HasPrefix(adjustedPath, "/") { + adjustedPath = "/" + adjustedPath + } return adjustedPath } diff --git a/backend/files/indexing_test.go b/backend/files/indexing_test.go index 890679a2..0758f1f4 100644 --- a/backend/files/indexing_test.go +++ b/backend/files/indexing_test.go @@ -2,6 +2,7 @@ package files import ( "encoding/json" + "fmt" "math/rand" "reflect" "testing" @@ -23,18 +24,26 @@ func BenchmarkFillIndex(b *testing.B) { func (si *Index) createMockData(numDirs, numFilesPerDir int) { for i := 0; i < numDirs; i++ { dirName := generateRandomPath(rand.Intn(3) + 1) - files := []File{} - // Append a new Directory to the slice + files := []*FileInfo{} // Slice of FileInfo + + // Simulating files and directories with FileInfo for j := 0; j < numFilesPerDir; j++ { - newFile := File{ - Name: "file-" + getRandomTerm() + getRandomExtension(), - IsDir: false, + newFile := &FileInfo{ + Name: "file-" + getRandomTerm() + getRandomExtension(), + IsDir: false, + Size: rand.Int63n(1000), // Random size + ModTime: time.Now().Add(-time.Duration(rand.Intn(100)) * time.Hour), // Random mod time } files = append(files, newFile) } - si.UpdateQuickListForTests(files) - si.InsertFiles(dirName) - si.InsertDirs(dirName) + + // Simulate inserting files into index + for _, file := range files { + _, err := si.InsertInfo(dirName, file) + if err != nil { + fmt.Println("Error inserting file:", err) + } + } } } diff --git a/backend/files/search.go b/backend/files/search.go index 13a0d3b5..24e45b74 100644 --- a/backend/files/search.go +++ b/backend/files/search.go @@ -2,7 +2,6 @@ package files import ( "math/rand" - "os" "path/filepath" "sort" "strings" @@ -30,12 +29,17 @@ func (si *Index) Search(search string, scope string, sourceSession string) ([]st continue } si.mu.Lock() - defer si.mu.Unlock() for dirName, dir := range si.Directories { isDir := true - files := strings.Split(dir.Files, ";") + files := []string{} + 
for _, item := range dir.Items { + if !item.IsDir { + files = append(files, item.Name) + } + } value, found := sessionInProgress.Load(sourceSession) if !found || value != runningHash { + si.mu.Unlock() return []string{}, map[string]map[string]bool{} } if count > maxSearchResults { @@ -46,7 +50,9 @@ func (si *Index) Search(search string, scope string, sourceSession string) ([]st continue // path not matched } fileTypes := map[string]bool{} - matches, fileType := containsSearchTerm(dirName, searchTerm, *searchOptions, isDir, fileTypes) + si.mu.Unlock() + matches, fileType := si.containsSearchTerm(dirName, searchTerm, *searchOptions, isDir, fileTypes) + si.mu.Lock() if matches { fileListTypes[pathName] = fileType matching = append(matching, pathName) @@ -67,8 +73,9 @@ func (si *Index) Search(search string, scope string, sourceSession string) ([]st } fullName := strings.TrimLeft(pathName+file, "/") fileTypes := map[string]bool{} - - matches, fileType := containsSearchTerm(fullName, searchTerm, *searchOptions, isDir, fileTypes) + si.mu.Unlock() + matches, fileType := si.containsSearchTerm(fullName, searchTerm, *searchOptions, isDir, fileTypes) + si.mu.Lock() if !matches { continue } @@ -77,6 +84,7 @@ func (si *Index) Search(search string, scope string, sourceSession string) ([]st count++ } } + si.mu.Unlock() } // Sort the strings based on the number of elements after splitting by "/" sort.Slice(matching, func(i, j int) bool { @@ -102,65 +110,88 @@ func scopedPathNameFilter(pathName string, scope string, isDir bool) string { return pathName } -func containsSearchTerm(pathName string, searchTerm string, options SearchOptions, isDir bool, fileTypes map[string]bool) (bool, map[string]bool) { +func (si *Index) containsSearchTerm(pathName string, searchTerm string, options SearchOptions, isDir bool, fileTypes map[string]bool) (bool, map[string]bool) { + largerThan := int64(options.LargerThan) * 1024 * 1024 + smallerThan := int64(options.SmallerThan) * 1024 * 1024 conditions 
:= options.Conditions - path := getLastPathComponent(pathName) - // Convert to lowercase once + fileName := filepath.Base(pathName) + adjustedPath := si.makeIndexPath(pathName, isDir) + + // Convert to lowercase if not exact match if !conditions["exact"] { - path = strings.ToLower(path) + fileName = strings.ToLower(fileName) searchTerm = strings.ToLower(searchTerm) } - if strings.Contains(path, searchTerm) { - // Calculate fileSize only if needed - var fileSize int64 - matchesAllConditions := true - extension := filepath.Ext(path) - for _, k := range AllFiletypeOptions { - if IsMatchingType(extension, k) { - fileTypes[k] = true + + // Check if the file name contains the search term + if !strings.Contains(fileName, searchTerm) { + return false, map[string]bool{} + } + + // Initialize file size and fileTypes map + var fileSize int64 + extension := filepath.Ext(fileName) + + // Collect file types + for _, k := range AllFiletypeOptions { + if IsMatchingType(extension, k) { + fileTypes[k] = true + } + } + fileTypes["dir"] = isDir + // Get file info if needed for size-related conditions + if largerThan > 0 || smallerThan > 0 { + fileInfo, exists := si.GetMetadataInfo(adjustedPath) + if !exists { + return false, fileTypes + } else if !isDir { + // Look for specific file in ReducedItems + for _, item := range fileInfo.ReducedItems { + lower := strings.ToLower(item.Name) + if strings.Contains(lower, searchTerm) { + if item.Size == 0 { + return false, fileTypes + } + fileSize = item.Size + break + } + } + } else { + fileSize = fileInfo.Size + } + if fileSize == 0 { + return false, fileTypes + } + } + + // Evaluate all conditions + for t, v := range conditions { + if t == "exact" { + continue + } + switch t { + case "larger": + if largerThan > 0 { + if fileSize <= largerThan { + return false, fileTypes + } + } + case "smaller": + if smallerThan > 0 { + if fileSize >= smallerThan { + return false, fileTypes + } + } + default: + // Handle other file type conditions + 
notMatchType := v != fileTypes[t] + if notMatchType { + return false, fileTypes } } - fileTypes["dir"] = isDir - for t, v := range conditions { - if t == "exact" { - continue - } - var matchesCondition bool - switch t { - case "larger": - if fileSize == 0 { - fileSize = getFileSize(pathName) - } - matchesCondition = fileSize > int64(options.LargerThan)*bytesInMegabyte - case "smaller": - if fileSize == 0 { - fileSize = getFileSize(pathName) - } - matchesCondition = fileSize < int64(options.SmallerThan)*bytesInMegabyte - default: - matchesCondition = v == fileTypes[t] - } - if !matchesCondition { - matchesAllConditions = false - } - } - return matchesAllConditions, fileTypes } - // Clear variables and return - return false, map[string]bool{} -} -func getFileSize(filepath string) int64 { - fileInfo, err := os.Stat(rootPath + "/" + filepath) - if err != nil { - return 0 - } - return fileInfo.Size() -} - -func getLastPathComponent(path string) string { - // Use filepath.Base to extract the last component of the path - return filepath.Base(path) + return true, fileTypes } func generateRandomHash(length int) string { diff --git a/backend/files/search_test.go b/backend/files/search_test.go index 5f727707..a78f0968 100644 --- a/backend/files/search_test.go +++ b/backend/files/search_test.go @@ -11,7 +11,7 @@ func BenchmarkSearchAllIndexes(b *testing.B) { InitializeIndex(5, false) si := GetIndex(rootPath) - si.createMockData(50, 3) // 1000 dirs, 3 files per dir + si.createMockData(50, 3) // 50 dirs, 3 files per dir // Generate 100 random search terms searchTerms := generateRandomSearchTerms(100) @@ -26,87 +26,90 @@ func BenchmarkSearchAllIndexes(b *testing.B) { } } -// loop over test files and compare output func TestParseSearch(t *testing.T) { - value := ParseSearch("my test search") - want := &SearchOptions{ - Conditions: map[string]bool{ - "exact": false, + tests := []struct { + input string + want *SearchOptions + }{ + { + input: "my test search", + want: 
&SearchOptions{ + Conditions: map[string]bool{"exact": false}, + Terms: []string{"my test search"}, + }, }, - Terms: []string{"my test search"}, - } - if !reflect.DeepEqual(value, want) { - t.Fatalf("\n got: %+v\n want: %+v", value, want) - } - value = ParseSearch("case:exact my|test|search") - want = &SearchOptions{ - Conditions: map[string]bool{ - "exact": true, + { + input: "case:exact my|test|search", + want: &SearchOptions{ + Conditions: map[string]bool{"exact": true}, + Terms: []string{"my", "test", "search"}, + }, }, - Terms: []string{"my", "test", "search"}, - } - if !reflect.DeepEqual(value, want) { - t.Fatalf("\n got: %+v\n want: %+v", value, want) - } - value = ParseSearch("type:largerThan=100 type:smallerThan=1000 test") - want = &SearchOptions{ - Conditions: map[string]bool{ - "exact": false, - "larger": true, + { + input: "type:largerThan=100 type:smallerThan=1000 test", + want: &SearchOptions{ + Conditions: map[string]bool{"exact": false, "larger": true, "smaller": true}, + Terms: []string{"test"}, + LargerThan: 100, + SmallerThan: 1000, + }, }, - Terms: []string{"test"}, - LargerThan: 100, - SmallerThan: 1000, - } - if !reflect.DeepEqual(value, want) { - t.Fatalf("\n got: %+v\n want: %+v", value, want) - } - value = ParseSearch("type:audio thisfile") - want = &SearchOptions{ - Conditions: map[string]bool{ - "exact": false, - "audio": true, + { + input: "type:audio thisfile", + want: &SearchOptions{ + Conditions: map[string]bool{"exact": false, "audio": true}, + Terms: []string{"thisfile"}, + }, }, - Terms: []string{"thisfile"}, } - if !reflect.DeepEqual(value, want) { - t.Fatalf("\n got: %+v\n want: %+v", value, want) + + for _, tt := range tests { + t.Run(tt.input, func(t *testing.T) { + value := ParseSearch(tt.input) + if !reflect.DeepEqual(value, tt.want) { + t.Fatalf("\n got: %+v\n want: %+v", value, tt.want) + } + }) } } func TestSearchWhileIndexing(t *testing.T) { InitializeIndex(5, false) si := GetIndex(rootPath) - // Generate 100 random 
search terms - // Generate 100 random search terms + searchTerms := generateRandomSearchTerms(10) for i := 0; i < 5; i++ { - // Execute the SearchAllIndexes function - go si.createMockData(100, 100) // 1000 dirs, 3 files per dir + go si.createMockData(100, 100) // Creating mock data concurrently for _, term := range searchTerms { - go si.Search(term, "/", "test") + go si.Search(term, "/", "test") // Search concurrently } } } func TestSearchIndexes(t *testing.T) { index := Index{ - Directories: map[string]Directory{ - "test": { - Files: "audio1.wav;", - }, - "test/path": { - Files: "file.txt;", - }, - "new": {}, - "new/test": { - Files: "audio.wav;video.mp4;video.MP4;", - }, - "new/test/path": { - Files: "archive.zip;", + Directories: map[string]FileInfo{ + "test": {Items: []*FileInfo{{Name: "audio1.wav"}}}, + "test/path": {Items: []*FileInfo{{Name: "file.txt"}}}, + "new/test": {Items: []*FileInfo{ + {Name: "audio.wav"}, + {Name: "video.mp4"}, + {Name: "video.MP4"}, + }}, + "new/test/path": {Items: []*FileInfo{{Name: "archive.zip"}}}, + "/firstDir": {Items: []*FileInfo{ + {Name: "archive.zip", Size: 100}, + {Name: "thisIsDir", IsDir: true, Size: 2 * 1024 * 1024}, + }}, + "/firstDir/thisIsDir": { + Items: []*FileInfo{ + {Name: "hi.txt"}, + }, + Size: 2 * 1024 * 1024, }, }, } + tests := []struct { search string scope string @@ -118,7 +121,7 @@ func TestSearchIndexes(t *testing.T) { scope: "/new/", expectedResult: []string{"test/audio.wav"}, expectedTypes: map[string]map[string]bool{ - "test/audio.wav": map[string]bool{"audio": true, "dir": false}, + "test/audio.wav": {"audio": true, "dir": false}, }, }, { @@ -126,16 +129,41 @@ func TestSearchIndexes(t *testing.T) { scope: "/", expectedResult: []string{"test/", "new/test/"}, expectedTypes: map[string]map[string]bool{ - "test/": map[string]bool{"dir": true}, - "new/test/": map[string]bool{"dir": true}, + "test/": {"dir": true}, + "new/test/": {"dir": true}, }, }, { search: "archive", scope: "/", - expectedResult: 
[]string{"new/test/path/archive.zip"}, + expectedResult: []string{"firstDir/archive.zip", "new/test/path/archive.zip"}, expectedTypes: map[string]map[string]bool{ - "new/test/path/archive.zip": map[string]bool{"archive": true, "dir": false}, + "new/test/path/archive.zip": {"archive": true, "dir": false}, + "firstDir/archive.zip": {"archive": true, "dir": false}, + }, + }, + { + search: "arch", + scope: "/firstDir", + expectedResult: []string{"archive.zip"}, + expectedTypes: map[string]map[string]bool{ + "archive.zip": {"archive": true, "dir": false}, + }, + }, + { + search: "isdir", + scope: "/", + expectedResult: []string{"firstDir/thisIsDir/"}, + expectedTypes: map[string]map[string]bool{ + "firstDir/thisIsDir/": {"dir": true}, + }, + }, + { + search: "dir type:largerThan=1", + scope: "/", + expectedResult: []string{"firstDir/thisIsDir/"}, + expectedTypes: map[string]map[string]bool{ + "firstDir/thisIsDir/": {"dir": true}, }, }, { @@ -146,18 +174,17 @@ func TestSearchIndexes(t *testing.T) { "new/test/video.MP4", }, expectedTypes: map[string]map[string]bool{ - "new/test/video.MP4": map[string]bool{"video": true, "dir": false}, - "new/test/video.mp4": map[string]bool{"video": true, "dir": false}, + "new/test/video.MP4": {"video": true, "dir": false}, + "new/test/video.mp4": {"video": true, "dir": false}, }, }, } + for _, tt := range tests { t.Run(tt.search, func(t *testing.T) { actualResult, actualTypes := index.Search(tt.search, tt.scope, "") assert.Equal(t, tt.expectedResult, actualResult) - if !reflect.DeepEqual(tt.expectedTypes, actualTypes) { - t.Fatalf("\n got: %+v\n want: %+v", actualTypes, tt.expectedTypes) - } + assert.Equal(t, tt.expectedTypes, actualTypes) }) } } @@ -186,6 +213,7 @@ func Test_scopedPathNameFilter(t *testing.T) { want: "", // Update this with the expected result }, } + for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { if got := scopedPathNameFilter(tt.args.pathName, tt.args.scope, tt.args.isDir); got != tt.want { @@ -194,103 
+222,3 @@ func Test_scopedPathNameFilter(t *testing.T) { }) } } - -func Test_isDoc(t *testing.T) { - type args struct { - extension string - } - tests := []struct { - name string - args args - want bool - }{ - // TODO: Add test cases. - } - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - if got := isDoc(tt.args.extension); got != tt.want { - t.Errorf("isDoc() = %v, want %v", got, tt.want) - } - }) - } -} - -func Test_getFileSize(t *testing.T) { - type args struct { - filepath string - } - tests := []struct { - name string - args args - want int64 - }{ - // TODO: Add test cases. - } - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - if got := getFileSize(tt.args.filepath); got != tt.want { - t.Errorf("getFileSize() = %v, want %v", got, tt.want) - } - }) - } -} - -func Test_isArchive(t *testing.T) { - type args struct { - extension string - } - tests := []struct { - name string - args args - want bool - }{ - // TODO: Add test cases. - } - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - if got := isArchive(tt.args.extension); got != tt.want { - t.Errorf("isArchive() = %v, want %v", got, tt.want) - } - }) - } -} - -func Test_getLastPathComponent(t *testing.T) { - type args struct { - path string - } - tests := []struct { - name string - args args - want string - }{ - // TODO: Add test cases. - } - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - if got := getLastPathComponent(tt.args.path); got != tt.want { - t.Errorf("getLastPathComponent() = %v, want %v", got, tt.want) - } - }) - } -} - -func Test_generateRandomHash(t *testing.T) { - type args struct { - length int - } - tests := []struct { - name string - args args - want string - }{ - // TODO: Add test cases. 
- } - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - if got := generateRandomHash(tt.args.length); got != tt.want { - t.Errorf("generateRandomHash() = %v, want %v", got, tt.want) - } - }) - } -} diff --git a/backend/files/sync.go b/backend/files/sync.go index cdd70d86..050e262f 100644 --- a/backend/files/sync.go +++ b/backend/files/sync.go @@ -1,7 +1,6 @@ package files import ( - "io/fs" "log" "time" @@ -13,15 +12,10 @@ func (si *Index) UpdateFileMetadata(adjustedPath string, info FileInfo) bool { si.mu.Lock() defer si.mu.Unlock() dir, exists := si.Directories[adjustedPath] - if !exists || exists && dir.Metadata == nil { - // Initialize the Metadata map if it is nil - if dir.Metadata == nil { - dir.Metadata = make(map[string]FileInfo) - } - si.Directories[adjustedPath] = dir - // Release the read lock before calling SetFileMetadata + if !exists { + si.Directories[adjustedPath] = FileInfo{} } - return si.SetFileMetadata(adjustedPath, info) + return si.SetFileMetadata(adjustedPath, dir) } // SetFileMetadata sets the FileInfo for the specified directory in the index. @@ -32,37 +26,45 @@ func (si *Index) SetFileMetadata(adjustedPath string, info FileInfo) bool { return false } info.CacheTime = time.Now() - si.Directories[adjustedPath].Metadata[adjustedPath] = info + si.Directories[adjustedPath] = info return true } // GetMetadataInfo retrieves the FileInfo from the specified directory in the index. 
func (si *Index) GetMetadataInfo(adjustedPath string) (FileInfo, bool) { - fi := FileInfo{} si.mu.RLock() dir, exists := si.Directories[adjustedPath] si.mu.RUnlock() - if exists { - // Initialize the Metadata map if it is nil - if dir.Metadata == nil { - dir.Metadata = make(map[string]FileInfo) - si.SetDirectoryInfo(adjustedPath, dir) - } else { - fi = dir.Metadata[adjustedPath] - } + if !exists { + return dir, exists + } + // remove recursive items; we only want this directory's direct files + cleanedItems := []ReducedItem{} + for _, item := range dir.Items { + cleanedItems = append(cleanedItems, ReducedItem{ + Name: item.Name, + Size: item.Size, + IsDir: item.IsDir, + ModTime: item.ModTime, + Type: item.Type, + }) + } + dir.Items = nil + dir.ReducedItems = cleanedItems + realPath, _, _ := GetRealPath(adjustedPath) + dir.Path = realPath + return dir, exists } // SetDirectoryInfo sets the directory information in the index. -func (si *Index) SetDirectoryInfo(adjustedPath string, dir Directory) { +func (si *Index) SetDirectoryInfo(adjustedPath string, dir FileInfo) { si.mu.Lock() si.Directories[adjustedPath] = dir si.mu.Unlock() } // SetDirectoryInfo sets the directory information in the index. 
-func (si *Index) GetDirectoryInfo(adjustedPath string) (Directory, bool) { +func (si *Index) GetDirectoryInfo(adjustedPath string) (FileInfo, bool) { si.mu.RLock() dir, exists := si.Directories[adjustedPath] si.mu.RUnlock() @@ -106,7 +108,7 @@ func GetIndex(root string) *Index { } newIndex := &Index{ Root: rootPath, - Directories: make(map[string]Directory), // Initialize the map + Directories: map[string]FileInfo{}, NumDirs: 0, NumFiles: 0, inProgress: false, @@ -116,36 +118,3 @@ func GetIndex(root string) *Index { indexesMutex.Unlock() return newIndex } - -func (si *Index) UpdateQuickList(files []fs.FileInfo) { - si.mu.Lock() - defer si.mu.Unlock() - si.quickList = []File{} - for _, file := range files { - newFile := File{ - Name: file.Name(), - IsDir: file.IsDir(), - } - si.quickList = append(si.quickList, newFile) - } -} - -func (si *Index) UpdateQuickListForTests(files []File) { - si.mu.Lock() - defer si.mu.Unlock() - si.quickList = []File{} - for _, file := range files { - newFile := File{ - Name: file.Name, - IsDir: file.IsDir, - } - si.quickList = append(si.quickList, newFile) - } -} - -func (si *Index) GetQuickList() []File { - si.mu.Lock() - defer si.mu.Unlock() - newQuickList := si.quickList - return newQuickList -} diff --git a/backend/files/sync_test.go b/backend/files/sync_test.go index ef1f6ac4..6727e4d7 100644 --- a/backend/files/sync_test.go +++ b/backend/files/sync_test.go @@ -1,92 +1,118 @@ package files import ( - "io/fs" - "os" "testing" - "time" + + "github.com/stretchr/testify/assert" ) -// Mock for fs.FileInfo -type mockFileInfo struct { - name string - isDir bool -} - -func (m mockFileInfo) Name() string { return m.name } -func (m mockFileInfo) Size() int64 { return 0 } -func (m mockFileInfo) Mode() os.FileMode { return 0 } -func (m mockFileInfo) ModTime() time.Time { return time.Now() } -func (m mockFileInfo) IsDir() bool { return m.isDir } -func (m mockFileInfo) Sys() interface{} { return nil } - var testIndex Index -// Test for 
GetFileMetadata -//func TestGetFileMetadata(t *testing.T) { -// t.Parallel() -// tests := []struct { -// name string -// adjustedPath string -// fileName string -// expectedName string -// expectedExists bool -// }{ -// { -// name: "testpath exists", -// adjustedPath: "/testpath", -// fileName: "testfile.txt", -// expectedName: "testfile.txt", -// expectedExists: true, -// }, -// { -// name: "testpath not exists", -// adjustedPath: "/testpath", -// fileName: "nonexistent.txt", -// expectedName: "", -// expectedExists: false, -// }, -// { -// name: "File exists in /anotherpath", -// adjustedPath: "/anotherpath", -// fileName: "afile.txt", -// expectedName: "afile.txt", -// expectedExists: true, -// }, -// { -// name: "File does not exist in /anotherpath", -// adjustedPath: "/anotherpath", -// fileName: "nonexistentfile.txt", -// expectedName: "", -// expectedExists: false, -// }, -// { -// name: "Directory does not exist", -// adjustedPath: "/nonexistentpath", -// fileName: "testfile.txt", -// expectedName: "", -// expectedExists: false, -// }, -// } -// -// for _, tt := range tests { -// t.Run(tt.name, func(t *testing.T) { -// fileInfo, exists := testIndex.GetFileMetadata(tt.adjustedPath) -// if exists != tt.expectedExists || fileInfo.Name != tt.expectedName { -// t.Errorf("expected %v:%v but got: %v:%v", tt.expectedName, tt.expectedExists, //fileInfo.Name, exists) -// } -// }) -// } -//} +// Test for GetFileMetadataSize +func TestGetFileMetadataSize(t *testing.T) { + t.Parallel() + tests := []struct { + name string + adjustedPath string + expectedName string + expectedSize int64 + }{ + { + name: "testpath exists", + adjustedPath: "/testpath", + expectedName: "testfile.txt", + expectedSize: 100, + }, + { + name: "directory exists in anotherpath", + adjustedPath: "/anotherpath", + expectedName: "directory", + expectedSize: 100, + }, + } + for _, tt := range tests { + t.Run(tt.search, func(t *testing.T) { + fileInfo, _ := testIndex.GetMetadataInfo(tt.adjustedPath)
+ // Iterate over fileInfo.ReducedItems to look for expectedName + for _, item := range fileInfo.ReducedItems { + // Assert the size once the expected name is found + if item.Name == tt.expectedName { + assert.Equal(t, tt.expectedSize, item.Size) + break + } + } + }) + } +} + +// Test for GetFileMetadata +func TestGetFileMetadata(t *testing.T) { + t.Parallel() + tests := []struct { + name string + adjustedPath string + expectedName string + expectedExists bool + }{ + { + name: "testpath exists", + adjustedPath: "/testpath", + expectedName: "testfile.txt", + expectedExists: true, + }, + { + name: "testpath not exists", + adjustedPath: "/testpath", + expectedName: "nonexistent.txt", + expectedExists: false, + }, + { + name: "File exists in /anotherpath", + adjustedPath: "/anotherpath", + expectedName: "afile.txt", + expectedExists: true, + }, + { + name: "File does not exist in /anotherpath", + adjustedPath: "/anotherpath", + expectedName: "nonexistentfile.txt", + expectedExists: false, + }, + { + name: "Directory does not exist", + adjustedPath: "/nonexistentpath", + expectedName: "", + expectedExists: false, + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + fileInfo, _ := testIndex.GetMetadataInfo(tt.adjustedPath) + found := false + // Iterate over fileInfo.ReducedItems to look for expectedName + for _, item := range fileInfo.ReducedItems { + // Mark the expected name as found + if item.Name == tt.expectedName { + found = true + break + } + } + assert.Equal(t, tt.expectedExists, found) + }) + } +} // Test for UpdateFileMetadata func TestUpdateFileMetadata(t *testing.T) { index := &Index{ - Directories: map[string]Directory{ + Directories: map[string]FileInfo{ "/testpath": { - Metadata: map[string]FileInfo{ - "testfile.txt": {Name: "testfile.txt"}, - "anotherfile.txt": {Name: "anotherfile.txt"}, + Path: "/testpath", + Name: "testpath", + IsDir: true, + ReducedItems: []ReducedItem{ + {Name: "testfile.txt"}, + {Name: "anotherfile.txt"}, },
}, }, @@ -100,7 +126,7 @@ func TestUpdateFileMetadata(t *testing.T) { } dir, exists := index.Directories["/testpath"] - if !exists || dir.Metadata["testfile.txt"].Name != "testfile.txt" { + if !exists || dir.ReducedItems[0].Name != "testfile.txt" { t.Fatalf("expected testfile.txt to be updated in the directory metadata") } } @@ -122,19 +148,29 @@ func TestGetDirMetadata(t *testing.T) { // Test for SetDirectoryInfo func TestSetDirectoryInfo(t *testing.T) { index := &Index{ - Directories: map[string]Directory{ + Directories: map[string]FileInfo{ "/testpath": { - Metadata: map[string]FileInfo{ - "testfile.txt": {Name: "testfile.txt"}, - "anotherfile.txt": {Name: "anotherfile.txt"}, + Path: "/testpath", + Name: "testpath", + IsDir: true, + Items: []*FileInfo{ + {Name: "testfile.txt"}, + {Name: "anotherfile.txt"}, }, }, }, } - dir := Directory{Metadata: map[string]FileInfo{"testfile.txt": {Name: "testfile.txt"}}} + dir := FileInfo{ + Path: "/newPath", + Name: "newPath", + IsDir: true, + Items: []*FileInfo{ + {Name: "testfile.txt"}, + }, + } index.SetDirectoryInfo("/newPath", dir) storedDir, exists := index.Directories["/newPath"] - if !exists || storedDir.Metadata["testfile.txt"].Name != "testfile.txt" { + if !exists || storedDir.Items[0].Name != "testfile.txt" { t.Fatalf("expected SetDirectoryInfo to store directory info correctly") } } @@ -143,7 +179,7 @@ func TestSetDirectoryInfo(t *testing.T) { func TestGetDirectoryInfo(t *testing.T) { t.Parallel() dir, exists := testIndex.GetDirectoryInfo("/testpath") - if !exists || dir.Metadata["testfile.txt"].Name != "testfile.txt" { + if !exists || dir.Items[0].Name != "testfile.txt" { t.Fatalf("expected GetDirectoryInfo to return correct directory info") } @@ -156,7 +192,7 @@ func TestGetDirectoryInfo(t *testing.T) { // Test for RemoveDirectory func TestRemoveDirectory(t *testing.T) { index := &Index{ - Directories: map[string]Directory{ + Directories: map[string]FileInfo{ "/testpath": {}, }, } @@ -194,27 +230,33 @@ func 
TestUpdateCount(t *testing.T) { func init() { testIndex = Index{ + Root: "/", NumFiles: 10, NumDirs: 5, inProgress: false, - Directories: map[string]Directory{ + Directories: map[string]FileInfo{ "/testpath": { - Metadata: map[string]FileInfo{ - "testfile.txt": {Name: "testfile.txt"}, - "anotherfile.txt": {Name: "anotherfile.txt"}, + Path: "/testpath", + Name: "testpath", + IsDir: true, + NumDirs: 1, + NumFiles: 2, + Items: []*FileInfo{ + {Name: "testfile.txt", Size: 100}, + {Name: "anotherfile.txt", Size: 100}, }, }, "/anotherpath": { - Metadata: map[string]FileInfo{ - "afile.txt": {Name: "afile.txt"}, + Path: "/anotherpath", + Name: "anotherpath", + IsDir: true, + NumDirs: 1, + NumFiles: 1, + Items: []*FileInfo{ + {Name: "directory", IsDir: true, Size: 100}, + {Name: "afile.txt", Size: 100}, }, }, }, } - - files := []fs.FileInfo{ - mockFileInfo{name: "file1.txt", isDir: false}, - mockFileInfo{name: "dir1", isDir: true}, - } - testIndex.UpdateQuickList(files) } diff --git a/backend/http/http.go b/backend/http/http.go index 2bd4345f..e6e566a5 100644 --- a/backend/http/http.go +++ b/backend/http/http.go @@ -15,11 +15,20 @@ type modifyRequest struct { Which []string `json:"which"` // Answer to: which fields? 
} +var ( + store *storage.Storage + server *settings.Server + fileCache FileCache +) + +func SetupEnv(storage *storage.Storage, s *settings.Server, cache FileCache) { + store = storage + server = s + fileCache = cache +} + func NewHandler( imgSvc ImgService, - fileCache FileCache, - store *storage.Storage, - server *settings.Server, assetsFs fs.FS, ) (http.Handler, error) { server.Clean() diff --git a/backend/http/public_test.go b/backend/http/public_test.go index 03b44176..3648c6ab 100644 --- a/backend/http/public_test.go +++ b/backend/http/public_test.go @@ -11,6 +11,7 @@ import ( "github.com/gtsteffaniak/filebrowser/settings" "github.com/gtsteffaniak/filebrowser/share" + "github.com/gtsteffaniak/filebrowser/storage" "github.com/gtsteffaniak/filebrowser/storage/bolt" "github.com/gtsteffaniak/filebrowser/users" ) @@ -73,8 +74,13 @@ func TestPublicShareHandlerAuthentication(t *testing.T) { t.Errorf("failed to close db: %v", err) } }) - - storage, err := bolt.NewStorage(db) + authStore, userStore, shareStore, settingsStore, err := bolt.NewStorage(db) + storage := &storage.Storage{ + Auth: authStore, + Users: userStore, + Share: shareStore, + Settings: settingsStore, + } if err != nil { t.Fatalf("failed to get storage: %v", err) } diff --git a/backend/http/users.go b/backend/http/users.go index 16e8182a..7bfdd50a 100644 --- a/backend/http/users.go +++ b/backend/http/users.go @@ -2,7 +2,6 @@ package http import ( "encoding/json" - "log" "net/http" "reflect" "sort" @@ -14,6 +13,7 @@ import ( "github.com/gtsteffaniak/filebrowser/errors" "github.com/gtsteffaniak/filebrowser/files" + "github.com/gtsteffaniak/filebrowser/storage" "github.com/gtsteffaniak/filebrowser/users" ) @@ -130,21 +130,7 @@ var userPostHandler = withAdmin(func(w http.ResponseWriter, r *http.Request, d * return http.StatusBadRequest, errors.ErrEmptyPassword } - newUser := users.ApplyDefaults(*req.Data) - - userHome, err := d.settings.MakeUserDir(req.Data.Username, req.Data.Scope, d.server.Root) - if 
err != nil { - log.Printf("create user: failed to mkdir user home dir: [%s]", userHome) - return http.StatusInternalServerError, err - } - newUser.Scope = userHome - log.Printf("user: %s, home dir: [%s].", req.Data.Username, userHome) - _, _, err = files.GetRealPath(d.server.Root, req.Data.Scope) - if err != nil { - log.Println("user path is not valid", req.Data.Scope) - return http.StatusBadRequest, nil - } - err = d.store.Users.Save(&newUser) + err = storage.CreateUser(*req.Data, req.Data.Perm.Admin) if err != nil { return http.StatusInternalServerError, err } diff --git a/backend/settings/config.go b/backend/settings/config.go index 0cb12e24..479a1c40 100644 --- a/backend/settings/config.go +++ b/backend/settings/config.go @@ -34,15 +34,14 @@ func loadConfigFile(configFile string) []byte { // Open and read the YAML file yamlFile, err := os.Open(configFile) if err != nil { - log.Printf("ERROR: opening config file\n %v\n WARNING: Using default config only\n If this was a mistake, please make sure the file exists and is accessible by the filebrowser binary.\n\n", err) - Config = setDefaults() - return []byte{} + log.Println(err) + os.Exit(1) } defer yamlFile.Close() stat, err := yamlFile.Stat() if err != nil { - log.Fatalf("Error getting file information: %s", err.Error()) + log.Fatalf("error getting file information: %s", err.Error()) } yamlData := make([]byte, stat.Size()) diff --git a/backend/settings/settings.go b/backend/settings/settings.go index 0f7c3e94..ae04b3d3 100644 --- a/backend/settings/settings.go +++ b/backend/settings/settings.go @@ -39,3 +39,15 @@ func GenerateKey() ([]byte, error) { func GetSettingsConfig(nameType string, Value string) string { return nameType + Value } + +func AdminPerms() Permissions { + return Permissions{ + Create: true, + Rename: true, + Modify: true, + Delete: true, + Share: true, + Download: true, + Admin: true, + } +} diff --git a/backend/storage/bolt/auth.go b/backend/storage/bolt/auth.go index e3f6977b..c843d825 100644 
--- a/backend/storage/bolt/auth.go +++ b/backend/storage/bolt/auth.go @@ -28,5 +28,5 @@ func (s authBackend) Get(t string) (auth.Auther, error) { } func (s authBackend) Save(a auth.Auther) error { - return save(s.db, "auther", a) + return Save(s.db, "auther", a) } diff --git a/backend/storage/bolt/bolt.go b/backend/storage/bolt/bolt.go index cc8c37fe..f771785f 100644 --- a/backend/storage/bolt/bolt.go +++ b/backend/storage/bolt/bolt.go @@ -6,26 +6,14 @@ import ( "github.com/gtsteffaniak/filebrowser/auth" "github.com/gtsteffaniak/filebrowser/settings" "github.com/gtsteffaniak/filebrowser/share" - "github.com/gtsteffaniak/filebrowser/storage" "github.com/gtsteffaniak/filebrowser/users" ) // NewStorage creates a storage.Storage based on Bolt DB. -func NewStorage(db *storm.DB) (*storage.Storage, error) { +func NewStorage(db *storm.DB) (*auth.Storage, *users.Storage, *share.Storage, *settings.Storage, error) { userStore := users.NewStorage(usersBackend{db: db}) shareStore := share.NewStorage(shareBackend{db: db}) settingsStore := settings.NewStorage(settingsBackend{db: db}) authStore := auth.NewStorage(authBackend{db: db}, userStore) - - err := save(db, "version", 2) //nolint:gomnd - if err != nil { - return nil, err - } - - return &storage.Storage{ - Auth: authStore, - Users: userStore, - Share: shareStore, - Settings: settingsStore, - }, nil + return authStore, userStore, shareStore, settingsStore, nil } diff --git a/backend/storage/bolt/config.go b/backend/storage/bolt/config.go index 3cf1c02b..93e896e9 100644 --- a/backend/storage/bolt/config.go +++ b/backend/storage/bolt/config.go @@ -15,7 +15,7 @@ func (s settingsBackend) Get() (*settings.Settings, error) { } func (s settingsBackend) Save(set *settings.Settings) error { - return save(s.db, "settings", set) + return Save(s.db, "settings", set) } func (s settingsBackend) GetServer() (*settings.Server, error) { @@ -27,5 +27,5 @@ func (s settingsBackend) GetServer() (*settings.Server, error) { } func (s 
settingsBackend) SaveServer(server *settings.Server) error { - return save(s.db, "server", server) + return Save(s.db, "server", server) } diff --git a/backend/storage/bolt/utils.go b/backend/storage/bolt/utils.go index fa84e3c3..40e3a1eb 100644 --- a/backend/storage/bolt/utils.go +++ b/backend/storage/bolt/utils.go @@ -15,6 +15,6 @@ func get(db *storm.DB, name string, to interface{}) error { return err } -func save(db *storm.DB, name string, from interface{}) error { +func Save(db *storm.DB, name string, from interface{}) error { return db.Set("config", name, from) } diff --git a/backend/storage/storage.go b/backend/storage/storage.go index 019c6e89..46431318 100644 --- a/backend/storage/storage.go +++ b/backend/storage/storage.go @@ -1,10 +1,20 @@ package storage import ( + "fmt" + "log" + "os" + "path/filepath" + + "github.com/asdine/storm/v3" "github.com/gtsteffaniak/filebrowser/auth" + "github.com/gtsteffaniak/filebrowser/errors" + "github.com/gtsteffaniak/filebrowser/files" "github.com/gtsteffaniak/filebrowser/settings" "github.com/gtsteffaniak/filebrowser/share" + "github.com/gtsteffaniak/filebrowser/storage/bolt" "github.com/gtsteffaniak/filebrowser/users" + "github.com/gtsteffaniak/filebrowser/utils" ) // Storage is a storage powered by a Backend which makes the necessary @@ -15,3 +25,112 @@ type Storage struct { Auth *auth.Storage Settings *settings.Storage } + +var store *Storage + +func InitializeDb(path string) (*Storage, bool, error) { + exists, err := dbExists(path) + if err != nil { + panic(err) + } + db, err := storm.Open(path) + + utils.CheckErr(fmt.Sprintf("storm.Open path %v", path), err) + authStore, userStore, shareStore, settingsStore, err := bolt.NewStorage(db) + if err != nil { + return nil, exists, err + } + + err = bolt.Save(db, "version", 2) //nolint:gomnd + if err != nil { + return nil, exists, err + } + store = &Storage{ + Auth: authStore, + Users: userStore, + Share: shareStore, + Settings: settingsStore, + } + if !exists { + 
quickSetup(store) + } + + return store, exists, err +} + +func dbExists(path string) (bool, error) { + stat, err := os.Stat(path) + if err == nil { + return stat.Size() != 0, nil + } + + if os.IsNotExist(err) { + d := filepath.Dir(path) + _, err = os.Stat(d) + if os.IsNotExist(err) { + if err := os.MkdirAll(d, 0700); err != nil { //nolint:govet,gomnd + return false, err + } + return false, nil + } + } + + return false, err +} + +func quickSetup(store *Storage) { + settings.Config.Auth.Key = utils.GenerateKey() + if settings.Config.Auth.Method == "noauth" { + err := store.Auth.Save(&auth.NoAuth{}) + utils.CheckErr("store.Auth.Save", err) + } else { + settings.Config.Auth.Method = "password" + err := store.Auth.Save(&auth.JSONAuth{}) + utils.CheckErr("store.Auth.Save", err) + } + err := store.Settings.Save(&settings.Config) + utils.CheckErr("store.Settings.Save", err) + err = store.Settings.SaveServer(&settings.Config.Server) + utils.CheckErr("store.Settings.SaveServer", err) + user := users.ApplyDefaults(users.User{}) + user.Username = settings.Config.Auth.AdminUsername + user.Password = settings.Config.Auth.AdminPassword + user.Perm.Admin = true + user.Scope = "./" + user.DarkMode = true + user.ViewMode = "normal" + user.LockPassword = false + user.Perm = settings.AdminPerms() + err = store.Users.Save(&user) + utils.CheckErr("store.Users.Save", err) +} + +// create new user +func CreateUser(userInfo users.User, asAdmin bool) error { + // must have username or password to create + if userInfo.Username == "" || userInfo.Password == "" { + return errors.ErrInvalidRequestParams + } + newUser := users.ApplyDefaults(userInfo) + if asAdmin { + newUser.Perm = settings.AdminPerms() + } + // create new home directory + userHome, err := settings.Config.MakeUserDir(newUser.Username, newUser.Scope, settings.Config.Server.Root) + if err != nil { + log.Printf("create user: failed to mkdir user home dir: [%s]", userHome) + return err + } + newUser.Scope = userHome + 
log.Printf("user: %s, home dir: [%s].", newUser.Username, userHome) + _, _, err = files.GetRealPath(settings.Config.Server.Root, newUser.Scope) + if err != nil { + log.Println("user path is not valid", newUser.Scope) + return nil + } + err = store.Users.Save(&newUser) + if err != nil { + return err + } + return nil +} diff --git a/backend/utils/main.go b/backend/utils/main.go new file mode 100644 index 00000000..3c44aff4 --- /dev/null +++ b/backend/utils/main.go @@ -0,0 +1,19 @@ +package utils + +import ( + "log" + + "github.com/gtsteffaniak/filebrowser/settings" +) + +func CheckErr(source string, err error) { + if err != nil { + log.Fatalf("%s: %v", source, err) + } +} + +func GenerateKey() []byte { + k, err := settings.GenerateKey() + CheckErr("generateKey", err) + return k +} diff --git a/configuration.md b/docs/configuration.md similarity index 100% rename from configuration.md rename to docs/configuration.md diff --git a/docs/contributing.md b/docs/contributing.md new file mode 100644 index 00000000..a320bea6 --- /dev/null +++ b/docs/contributing.md @@ -0,0 +1,2 @@ +# Contributing Guide + diff --git a/docs/getting_started.md b/docs/getting_started.md new file mode 100644 index 00000000..eb6de3c9 --- /dev/null +++ b/docs/getting_started.md @@ -0,0 +1,2 @@ +# Getting Started using FileBrowser Quantum + diff --git a/docs/migration.md b/docs/migration.md new file mode 100644 index 00000000..45a5ffa1 --- /dev/null +++ b/docs/migration.md @@ -0,0 +1,22 @@ +# Migration help + +It is possible to use the same database as used by filebrowser/filebrowser, +but you will need to follow this process: + +1. Create a configuration file as described in [configuration.md](./configuration.md). +2. Copy your database file from the original filebrowser to the path of + the new one. +3. Update the configuration file to use the database (under server in + filebrowser.yaml) +4. 
If you are using docker, update the docker-compose file or docker run + command to use the config file as described in the install section + above. +5. If you are not using docker, just make sure you run filebrowser -c + filebrowser.yaml and have a valid filebrowser config. + + +Note: share links will not work and will need to be re-created after migration. + +The FileBrowser Quantum application should run with the same users and rules +you had in the original, and all user configuration should carry over. Keep in +mind, however, that some features may not behave exactly the same way. \ No newline at end of file diff --git a/docs/roadmap.md b/docs/roadmap.md new file mode 100644 index 00000000..bb25f5fd --- /dev/null +++ b/docs/roadmap.md @@ -0,0 +1,24 @@ +# Planned Roadmap + +Upcoming 0.2.x releases: + +- Replace gorilla/mux http routes with stdlib +- Theme configuration from settings +- File synchronization improvements +- More filetype previews + +Next major 0.3.0 release: + +- Multiple sources https://github.com/filebrowser/filebrowser/issues/2514 +- Introduce jobs as a replacement for runners. +- Add Job status to the sidebar + - index status. + - Job status from users + - upload status + +Unplanned future releases: + - Add tools to sidebar + - duplicate file detector.
+ - bulk rename https://github.com/filebrowser/filebrowser/issues/2473 + - metrics tracker: user access, file access, download count, last login, etc. + - support MinIO, S3, and Backblaze sources https://github.com/filebrowser/filebrowser/issues/2544 diff --git a/frontend/src/components/Breadcrumbs.vue b/frontend/src/components/Breadcrumbs.vue index 2e1bc2a9..b8a05072 100644 --- a/frontend/src/components/Breadcrumbs.vue +++ b/frontend/src/components/Breadcrumbs.vue @@ -19,18 +19,20 @@ diff --git a/frontend/src/components/Search.vue b/frontend/src/components/Search.vue index 35433eb7..3b9d529d 100644 --- a/frontend/src/components/Search.vue +++ b/frontend/src/components/Search.vue @@ -166,10 +166,6 @@ Multiple Search terms: Additional terms separated by |, for example "test|not" searches for both terms independently.

-

- File size: Searching files by size may have significantly longer search - times. -

    @@ -311,6 +307,9 @@ export default { path = path.slice(1); path = "./" + path.substring(path.indexOf("/") + 1); path = path.replace(/\/+$/, "") + "/"; + if (path == "./files/") { + path = "./"; + } return path; }, }, @@ -391,10 +390,10 @@ export default { return; } let searchTypesFull = this.searchTypes; - if (this.largerThan != "") { + if (this.largerThan != "" && !this.isTypeSelectDisabled) { searchTypesFull = searchTypesFull + "type:largerThan=" + this.largerThan + " "; } - if (this.smallerThan != "") { + if (this.smallerThan != "" && !this.isTypeSelectDisabled) { searchTypesFull = searchTypesFull + "type:smallerThan=" + this.smallerThan + " "; } let path = state.route.path; diff --git a/frontend/src/components/files/ListingItem.vue b/frontend/src/components/files/ListingItem.vue index df57f5a8..d7440afa 100644 --- a/frontend/src/components/files/ListingItem.vue +++ b/frontend/src/components/files/ListingItem.vue @@ -1,7 +1,7 @@