util

package module
v1.91.0
Published: Jan 17, 2024 License: GPL-3.0 Imports: 34 Imported by: 0

README

util

The code in this project is designed to be reusable across many different Go projects: it is completely general and can be reused without any adaptation.

This library calls panic in a number of places, so you may want to capture both stdout and stderr, like this:

go run ./fileserver/main.go 2>stderr.log 1>stdout.log

To run all tests:

go test $(go list ./... | grep -v manual)

For an overview of this package see https://pkg.go.dev/github.com/1f604/util

The AWS dependencies are only needed for the Cloudflare functionality.

Documentation

Overview

Map with expiring entries. For slightly better performance, replace map[string]string with map[int64]string; see https://www.komu.engineer/blogs/01/go-gc-maps. Memory usage can be more than double what you actually store in it: based on my own testing, storing 10 million 128-byte URLs takes around 3.6GB of RAM, so each 128-byte URL took around 360 bytes of RAM.

Entries can only be inserted; they cannot be updated or deleted before they expire. Uses sync.Mutex to protect concurrent access: adding, getting, and removing entries requires obtaining the mutex first. TODO: Benchmark switching to an RWMutex or a sync.Map for improved performance. I tested sync.Map; it apparently has no reserve feature, and bulk load is slow (7.8 seconds).

Heap-based implementation, for performance and simplicity. Benchmarks show that Remove_All_Expired takes 3 seconds to remove 10 million expired entries, and that NewConcurrentExpiringMapFromSlice takes 3.5 seconds to load 10 million entries. There is no requirement for entries to have the same TTL duration, and no support for updating expiry times, though this functionality can be added later if necessary. The map returns an error for expired entries.

Example use cases:

1. Expiring short URLs: short URL -> long URL map
2. Expiring pastebins: short URL -> file path map
3. Expiring tokens: token -> expiry time map
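For illustration, here is a minimal usage sketch based on the signatures documented below (NewEmptyConcurrentExpiringMap, Put_New_Entry, Get_Entry); the key and URL are made-up example values:

	package main

	import (
		"fmt"
		"time"

		util "github.com/1f604/util"
	)

	func main() {
		// The expiry callback runs when an entry is removed after expiring.
		cem := util.NewEmptyConcurrentExpiringMap(func(key string, item util.MapItem) {
			fmt.Println("expired:", key)
		})

		// Insert a short URL -> long URL entry that expires in one hour.
		// Put_New_Entry will only return an error if the key already exists.
		err := cem.Put_New_Entry("abc12", "https://example.com/very/long/path",
			time.Now().Unix()+3600, util.TYPE_MAP_ITEM_URL)
		if err != nil {
			panic(err)
		}

		// Get_Entry returns an error for missing or expired keys.
		if item, err := cem.Get_Entry("abc12"); err == nil {
			fmt.Println("value:", item.GetValue())
		}
	}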

It provides an API that has 3 methods:

1. Update map size rounded
2. Append new entry to log file
3. Delete expired log files

It also provides an "expiring" LogStructuredStorage, where you can remove expired entries from old log files and rewrite them into new log files. The reason this works is that it's okay to see the same entry multiple times: we simply ignore an entry when we see it again, we ignore entries that expire earlier than the current entry we have, and we overwrite the current entry as soon as we see another entry with a later expiration time.

This file provides 2 structs: a "base" LogStructuredStorage for storing permanent data, as well as an "expiring" LogStructuredStorage where you can remove expired entries from old log files and rewrite them into new log files. The reason this works is that it's okay to see the same entry multiple times: we simply ignore an entry when we see it again, we ignore entries that expire earlier than the current entry we have, and we overwrite the current entry as soon as we see another entry with a later expiration time.

Deletes the oldest files until the size of the directory is back within the limit

The purpose of this file is to store the map size on disk. The map size file makes it faster to load entries from disk into the map: by storing the size of the map in the file, the make() function can be given the correct size the next time the program creates the map on startup. It doesn't matter if the stored size is slightly too big or too small, since it won't affect performance much; better too big than too small, to avoid costly resizing.
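The idea, as a sketch (read_size_from_file is a hypothetical helper, not part of this package):

	// Pre-sizing the map avoids repeated, costly rehashing while
	// entries are loaded from disk on startup.
	stored_size := read_size_from_file() // hypothetical helper
	m := make(map[string]string, stored_size)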

This is a horrible hack to work around Go's lack of built-in support for type-safe enums. True type-safe enums are not possible in Go due to its lack of sum types; this is a "best effort" workaround that will prevent some kinds of bugs, but it is not perfect.
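A minimal sketch of the sealed-interface pattern this refers to (compare HandlerTypeEnum and the EXACT_MATCH_HANDLER_t/LONGEST_PREFIX_HANDLER_t types below); all names here are illustrative, not the package's own:

	package main

	import "fmt"

	// The interface is "sealed": isHandlerType is unexported, so only this
	// package can add variants, which closes the set of possible values.
	type HandlerType interface {
		isHandlerType()
	}

	type exactMatch struct{}
	type longestPrefix struct{}

	func (exactMatch) isHandlerType()    {}
	func (longestPrefix) isHandlerType() {}

	var (
		EXACT_MATCH    HandlerType = exactMatch{}
		LONGEST_PREFIX HandlerType = longestPrefix{}
	)

	func describe(h HandlerType) string {
		switch h.(type) {
		case exactMatch:
			return "exact match"
		case longestPrefix:
			return "longest prefix"
		default:
			return "unknown" // unreachable while all variants live in this package
		}
	}

	func main() {
		fmt.Println(describe(EXACT_MATCH))
	}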

We use byte instead of rune because this alphabet contains only printable ASCII characters.

Index

Constants

const BASE53_ALPHABET_SIZE = 53

const XATTR_1F604_FILESERVER_CAN_BE_SERVED = "user.1f604.fileserver.can_be_served"

Variables

var IsAlnum = [256]byte{}/* 256 elements not displayed */

Character class lookup table

Functions

func Assert_error_equals

func Assert_error_equals(t *testing.T, err error, expected string, skip_level int)

func Assert_no_error

func Assert_no_error(t *testing.T, err error, skip_level int)

func Assert_result_equals_bool

func Assert_result_equals_bool(t *testing.T, actual bool, err error, expected bool, skip_level int)

func Assert_result_equals_bytes

func Assert_result_equals_bytes(t *testing.T, actual []byte, err error, expected string, skip_level int)

func Assert_result_equals_interface

func Assert_result_equals_interface(t *testing.T, actual interface{}, err error, expected interface{}, skip_level int)

func Assert_result_equals_string_slice

func Assert_result_equals_string_slice(t *testing.T, actual []string, err error, expected []string, skip_level int)

func Assert_result_equals_time

func Assert_result_equals_time(t *testing.T, actual time.Time, err error, expected time.Time, line_number int)

func BuildStruct

func BuildStruct[T any]() *T

func Check_err

func Check_err(err error)

func Check_no_other_instances_running

func Check_no_other_instances_running(socket_addr string) net.Listener

Users MUST keep a reference to the returned listener to ensure it does not get garbage-collected!!! This caveat applies regardless of whether you're using TCP, Unix, or some other socket!!
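A sketch of the intended usage, assuming a Unix socket path (the path is illustrative):

	package main

	import (
		"net"

		util "github.com/1f604/util"
	)

	// A package-level variable keeps the listener referenced for the
	// whole life of the process, so it can never be garbage-collected.
	var instance_lock net.Listener

	func main() {
		instance_lock = util.Check_no_other_instances_running("/tmp/myapp.sock")
		// ... rest of the program ...
	}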

func Compute_String_Checksum added in v1.0.0

func Compute_String_Checksum(str string) string

func Convert_str_to_uint64 added in v1.0.0

func Convert_str_to_uint64(input_str string) uint64

func Convert_uint64_to_str added in v1.0.0

func Convert_uint64_to_str(bigendian_uint64 uint64, length int) string

func Copy_Slice_Into_150_Arr

func Copy_Slice_Into_150_Arr(slice []byte, arr [150]byte)

func Crypto_Rand_Alnum_String added in v1.7.0

func Crypto_Rand_Alnum_String(length int) string

Returns random string consisting of letters and numbers

func Crypto_Rand_Base64String added in v1.7.0

func Crypto_Rand_Base64String(length int) string

Returns a random base64 string of the given length

func Crypto_Randint

func Crypto_Randint(max int) (int, error)

This function works, I've manually tested it. Returns integers from 0 up to AND NOT INCLUDING max

func Crypto_Random_Choice

func Crypto_Random_Choice[T any](arr *[]T) (T, error)

This function works, I've manually tested it.

func Divmod

func Divmod(numerator, denominator int) (int, int)

func GetPasteFileName_Common added in v1.11.1

func GetPasteFileName_Common(prefix string, file_contents []byte, timestamp int64) string

func Get_file_size

func Get_file_size(f *os.File) int64

this function assumes file pointer is valid. We could probably make this more efficient by calculating the file size in-process instead of making syscall each time.

func Getxattr

func Getxattr(path string, name string, data []byte) (int, error)

func Int64_to_string

func Int64_to_string(num int64) string

func IsSameType added in v1.0.0

func IsSameType(a, b interface{}) bool

This function works, I've manually tested it.

func LBSES_Get_bucket_filename added in v1.0.0

func LBSES_Get_bucket_filename(timestamp int64) string

func LBSES_Parse_bucket_filename_to_timestamp added in v1.0.0

func LBSES_Parse_bucket_filename_to_timestamp(filename string) (int64, error)

func LSPS_Parse_log_filename_to_number added in v1.0.0

func LSPS_Parse_log_filename_to_number(filename string) (int64, error)

func LoadStoredRecordsFromDisk added in v1.0.0

func LoadStoredRecordsFromDisk(params *LSRFD_Params) (ConcurrentMap, *MapSizeFileManager)

This is the one you want to use in production

func Power_Naive

func Power_Naive(a, b int) int

Naive algorithm, only suitable for small b.

func Power_Slow

func Power_Slow(a, b, m int) int

Calculates a to the power of b mod m. If m is 0, then it just returns a to the power of b. This function seems to create a memory leak, but it doesn't. Anyway, it's better to use a custom power function.
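For reference, a standard square-and-multiply modular exponentiation looks like the sketch below; this is illustrative, not this package's implementation:

	// powMod computes a^b mod m using O(log b) multiplications.
	// If m is 0, it returns plain a^b, mirroring the documented behavior.
	func powMod(a, b, m int) int {
		result := 1
		if m == 0 {
			for ; b > 0; b-- {
				result *= a
			}
			return result
		}
		a %= m
		for b > 0 {
			if b&1 == 1 {
				result = result * a % m
			}
			a = a * a % m
			b >>= 1
		}
		return result
	}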

func PrintMemUsage

func PrintMemUsage()

PrintMemUsage outputs the current, total, and OS memory being used, as well as the number of garbage collection cycles completed.

func PutEntry_Common added in v1.0.0

func PutEntry_Common(requested_length int, long_url string, value_type MapItemValueType, timestamp int64, generate_strings_up_to int,
	slice_storage map[int]*RandomBag64, urlmap URLMap, b53m *Base53IDManager, log_storage LogStorage, paste_storage PasteStorage, map_size_persister *MapSizeFileManager,
	xattr_params *XattrParams) (string, error)

Shorten long URL into short URL and return the short URL and store the entry both in map and on disk

func ReplaceString

func ReplaceString(str string, replacement rune, index int) string

func Retryfunc

func Retryfunc(taskname string, dotask retrylib_task, expected_duration time.Duration, max_wait time.Duration)

func Retryproc

func Retryproc(procname string, expected_duration time.Duration, max_wait time.Duration)

func ReverseString

func ReverseString(s string) string

func RunFuncEveryXSeconds added in v1.0.0

func RunFuncEveryXSeconds(fn fn_type, run_interval_seconds int)

Does not tick shift - will run the function precisely every X seconds even if the function takes some time to run - as long as the function doesn't take too long, of course.

Synchronous - next call cannot start until previous call has finished.
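That behavior matches fixed-schedule ticking, as in this illustrative sketch (not necessarily this package's implementation):

	// The ticker fires on a fixed schedule, so start times do not drift
	// as long as fn returns before the next tick.
	func runEveryXSeconds(fn func(), run_interval_seconds int) {
		ticker := time.NewTicker(time.Duration(run_interval_seconds) * time.Second)
		defer ticker.Stop()
		for range ticker.C {
			fn() // synchronous: the next tick is only consumed after fn returns
		}
	}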

func Setxattr

func Setxattr(path string, name string, data []byte, flags int) error

func String_to_int64

func String_to_int64(s string) (int64, error)

func Validate_Timestamp_Common added in v1.0.0

func Validate_Timestamp_Common(timestamp_unix int64) error

Returns error if unix timestamp is before 2023 or after the year 20,000

Otherwise returns nil

func Write_Entry_To_File added in v1.0.0

func Write_Entry_To_File(key string, value string, value_type MapItemValueType, timestamp int64, file_handle *os.File) error

IMPORTANT: This function DOES NOT close the file handle!!!
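A usage sketch of this caveat; the path and values are illustrative:

	// The caller owns the file handle and is responsible for closing it.
	f, err := os.OpenFile("/srv/urls/current.log",
		os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close() // Write_Entry_To_File will NOT do this for us

	err = util.Write_Entry_To_File("abc12", "https://example.com",
		util.TYPE_MAP_ITEM_URL, time.Now().Unix(), f)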

Types

type BandwidthMonitor

type BandwidthMonitor struct {
	// contains filtered or unexported fields
}

func (*BandwidthMonitor) GetTotalAllBytes

func (bm *BandwidthMonitor) GetTotalAllBytes() int64

func (*BandwidthMonitor) GetTotalTXBytes

func (bm *BandwidthMonitor) GetTotalTXBytes() int64

func (*BandwidthMonitor) RunThread

func (bm *BandwidthMonitor) RunThread(time_interval_secs int)

type Base53ErrorChecksumMismatch

type Base53ErrorChecksumMismatch struct{}

func (Base53ErrorChecksumMismatch) Error

type Base53ErrorIllegalCharacter

type Base53ErrorIllegalCharacter struct{}

func (Base53ErrorIllegalCharacter) Error

type Base53ErrorIllegalPair

type Base53ErrorIllegalPair struct{}

func (Base53ErrorIllegalPair) Error

func (e Base53ErrorIllegalPair) Error() string

type Base53ErrorStrWithoutCsumTooLong

type Base53ErrorStrWithoutCsumTooLong struct{}

func (Base53ErrorStrWithoutCsumTooLong) Error

type Base53ErrorStrWithoutCsumTooShort

type Base53ErrorStrWithoutCsumTooShort struct{}

Custom error types

func (Base53ErrorStrWithoutCsumTooShort) Error

type Base53ID

type Base53ID interface {
	GetStrWithoutCsum() string
	GetCsum() byte
	GetCombinedString() string
	Length() int
}

See https://stackoverflow.com/questions/57993809/how-to-hide-the-default-type-constructor-in-golang

type Base53IDManager

type Base53IDManager struct {
	// contains filtered or unexported fields
}

func NewBase53IDManager

func NewBase53IDManager() *Base53IDManager

pregenerate means strings up to n characters will be pre-generated and stored in RandomBags for fast PopRandom and Push later.

func (*Base53IDManager) B53_generate_all_Base53IDs

func (b53m *Base53IDManager) B53_generate_all_Base53IDs(n int) ([]Base53ID, error)

Generate all IDs of length n

func (*Base53IDManager) B53_generate_all_Base53IDs_int64

func (b53m *Base53IDManager) B53_generate_all_Base53IDs_int64(n int) ([]uint64, error)

func (*Base53IDManager) B53_generate_all_Base53IDs_int64_optimized

func (b53m *Base53IDManager) B53_generate_all_Base53IDs_int64_optimized(n int, should_be_added_fn ShouldBase53IDBePlacedIntoSliceFn) ([]uint64, error)

Doesn't push it into slice if it's already in map.

func (*Base53IDManager) B53_generate_all_Base53IDs_int64_test

func (b53m *Base53IDManager) B53_generate_all_Base53IDs_int64_test(n int) ([]uint64, error)

func (*Base53IDManager) B53_generate_next_Base53ID

func (b53m *Base53IDManager) B53_generate_next_Base53ID(old_id Base53ID) (Base53ID, error)

func (*Base53IDManager) B53_generate_random_Base53ID

func (b53m *Base53IDManager) B53_generate_random_Base53ID(n int) (Base53ID, error)

func (*Base53IDManager) Convert_uint64_to_Base53ID

func (b53m *Base53IDManager) Convert_uint64_to_Base53ID(bigendian_uint64 uint64, length int) (*_base53ID_impl, error)

func (*Base53IDManager) Convert_uint64_to_byte_array

func (b53m *Base53IDManager) Convert_uint64_to_byte_array(bigendian_uint64 uint64) []byte

func (*Base53IDManager) NewBase53ID

func (b53m *Base53IDManager) NewBase53ID(str_without_csum string, csum byte, remap bool) (*_base53ID_impl, error)

Construction is validation.

type CEMItem

type CEMItem struct {
	Key              string
	Value            string
	Expiry_time_unix int64
}

type CEMNonExistentKeyError added in v1.0.0

type CEMNonExistentKeyError struct{}

func (CEMNonExistentKeyError) Error added in v1.0.0

func (e CEMNonExistentKeyError) Error() string

type CEPUMParams added in v1.0.0

type CEPUMParams struct {
	Expiry_check_interval_seconds_ram    int
	Expiry_check_interval_seconds_disk   int
	Extra_keeparound_seconds_ram         int64
	Extra_keeparound_seconds_disk        int64
	Bucket_interval                      int64
	Bucket_directory_path_absolute       string
	Paste_bucket_directory_path_absolute string
	Size_file_path_absolute              string
	B53m                                 *Base53IDManager
	Size_file_rounded_multiple           int64
	Generate_strings_up_to               int
	Xattr_params                         *XattrParams
}

type CPMNonExistentKeyError added in v1.0.0

type CPMNonExistentKeyError struct{}

func (CPMNonExistentKeyError) Error added in v1.0.0

func (e CPMNonExistentKeyError) Error() string

type CPPUMParams added in v1.0.0

type CPPUMParams struct {
	Log_directory_path_absolute    string
	Bucket_directory_path_absolute string
	B53m                           *Base53IDManager
	Generate_strings_up_to         int
	Log_file_max_size_bytes        int64
	Size_file_rounded_multiple     int64
	Size_file_path_absolute        string
	Xattr_params                   *XattrParams
}

type ConcurrentExpiringMap

type ConcurrentExpiringMap struct {
	// contains filtered or unexported fields
}

keys are strings

func NewConcurrentExpiringMapFromSlice

func NewConcurrentExpiringMapFromSlice(expiry_callback ExpiryCallback, kv_pairs []CEMItem) *ConcurrentExpiringMap

func NewEmptyConcurrentExpiringMap

func NewEmptyConcurrentExpiringMap(expiry_callback ExpiryCallback) *ConcurrentExpiringMap

func (*ConcurrentExpiringMap) BeginConstruction added in v1.0.0

func (*ConcurrentExpiringMap) BeginConstruction(stored_map_length int64, expiry_callback ExpiryCallback) ConcurrentMap

This method properly constructs the object

func (*ConcurrentExpiringMap) ContinueConstruction added in v1.0.0

func (cem *ConcurrentExpiringMap) ContinueConstruction(key_str string, value_str string, expiry_time int64, item_value_type MapItemValueType)

Caller must check that the key_str is not already in the map.

func (*ConcurrentExpiringMap) FinishConstruction added in v1.0.0

func (cem *ConcurrentExpiringMap) FinishConstruction()

func (*ConcurrentExpiringMap) Get_Entry

func (cem *ConcurrentExpiringMap) Get_Entry(key string) (MapItem, error)

func (*ConcurrentExpiringMap) NumItems

func (cem *ConcurrentExpiringMap) NumItems() int

func (*ConcurrentExpiringMap) NumPastes added in v1.8.0

func (cem *ConcurrentExpiringMap) NumPastes() int

func (*ConcurrentExpiringMap) Put_New_Entry

func (cem *ConcurrentExpiringMap) Put_New_Entry(key string, value string, expiry_time int64, value_type MapItemValueType) error

Will only return an error if the key already exists.

func (*ConcurrentExpiringMap) Remove_All_Expired

func (cem *ConcurrentExpiringMap) Remove_All_Expired(extra_keeparound_seconds int64)

Keeps links around for extra_keeparound_seconds, just to tell people that the link has expired. This function will remove 10 million entries in 3 seconds.
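An illustrative way to run this periodically, combining it with RunFuncEveryXSeconds from this package (assuming cem is a *ConcurrentExpiringMap and that fn_type is an ordinary func()):

	go util.RunFuncEveryXSeconds(func() {
		// Keep expired links visible for an extra 5 minutes before removal.
		cem.Remove_All_Expired(300)
	}, 60)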

type ConcurrentExpiringPersistentURLMap added in v1.0.0

type ConcurrentExpiringPersistentURLMap struct {
	// contains filtered or unexported fields
}

func CreateConcurrentExpiringPersistentURLMapFromDisk added in v1.0.0

func CreateConcurrentExpiringPersistentURLMapFromDisk(cepum_params *CEPUMParams) *ConcurrentExpiringPersistentURLMap

This is the one you want to use in production

func (*ConcurrentExpiringPersistentURLMap) GetEntry added in v1.0.0

func (manager *ConcurrentExpiringPersistentURLMap) GetEntry(short_url string) (MapItem, error)

func (*ConcurrentExpiringPersistentURLMap) NumItems added in v1.3.1

func (manager *ConcurrentExpiringPersistentURLMap) NumItems() int

func (*ConcurrentExpiringPersistentURLMap) PrintInternalState

func (manager *ConcurrentExpiringPersistentURLMap) PrintInternalState()

Locks the map, then logs the contents of slice_storage and every entry of map_storage sorted by expiry time, together with the current Unix time.

func (*ConcurrentExpiringPersistentURLMap) NumPastes added in v1.9.0

func (manager *ConcurrentExpiringPersistentURLMap) NumPastes() int

func (*ConcurrentExpiringPersistentURLMap) PutEntry added in v1.0.0

func (manager *ConcurrentExpiringPersistentURLMap) PutEntry(requested_length int, long_url string, expiry_time int64, value_type MapItemValueType) (string, error)

Shorten long URL into short URL and return the short URL and store the entry both in map and on disk

func (*ConcurrentExpiringPersistentURLMap) RemoveAllExpiredURLsFromDisk added in v1.0.0

func (manager *ConcurrentExpiringPersistentURLMap) RemoveAllExpiredURLsFromDisk()

Removes expired URLs from disk every X seconds

func (*ConcurrentExpiringPersistentURLMap) RemoveAllExpiredURLsFromRAM added in v1.0.0

func (manager *ConcurrentExpiringPersistentURLMap) RemoveAllExpiredURLsFromRAM()

Removes expired URLs from the map in RAM every X seconds
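A hedged end-to-end sketch of this type; all paths, intervals, and sizes are made-up example values:

	params := &util.CEPUMParams{
		Expiry_check_interval_seconds_ram:    60,
		Expiry_check_interval_seconds_disk:   3600,
		Extra_keeparound_seconds_ram:         300,
		Extra_keeparound_seconds_disk:        86400,
		Bucket_interval:                      86400,
		Bucket_directory_path_absolute:       "/srv/urls/buckets",
		Paste_bucket_directory_path_absolute: "/srv/urls/pastes",
		Size_file_path_absolute:              "/srv/urls/map_size.txt",
		B53m:                                 util.NewBase53IDManager(),
		Size_file_rounded_multiple:           1024,
		Generate_strings_up_to:               3,
		Xattr_params:                         &util.XattrParams{SetXattr: false},
	}
	manager := util.CreateConcurrentExpiringPersistentURLMapFromDisk(params)

	// Shorten a URL that expires in one hour, then look it up again.
	short_url, err := manager.PutEntry(5, "https://example.com/some/long/path",
		time.Now().Unix()+3600, util.TYPE_MAP_ITEM_URL)
	if err == nil {
		item, _ := manager.GetEntry(short_url)
		fmt.Println(item.GetValue())
	}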

type ConcurrentMap added in v1.0.0

type ConcurrentMap interface {
	Get_Entry(string) (MapItem, error)
	BeginConstruction(int64, ExpiryCallback) ConcurrentMap
	ContinueConstruction(string, string, int64, MapItemValueType)
	FinishConstruction()
	NumItems() int
	NumPastes() int
}

type ConcurrentPermanentMap

type ConcurrentPermanentMap struct {
	// contains filtered or unexported fields
}

keys are strings

func NewEmptyConcurrentPermanentMap

func NewEmptyConcurrentPermanentMap() *ConcurrentPermanentMap

func (*ConcurrentPermanentMap) BeginConstruction added in v1.0.0

func (*ConcurrentPermanentMap) BeginConstruction(stored_map_length int64, expiry_callback ExpiryCallback) ConcurrentMap

You can call this on nil receiver

func (*ConcurrentPermanentMap) ContinueConstruction added in v1.0.0

func (cpm *ConcurrentPermanentMap) ContinueConstruction(key_str string, value_str string, expiry_time int64, item_value_type MapItemValueType)

Caller must check that the key_str is not already in the map.

func (*ConcurrentPermanentMap) FinishConstruction added in v1.0.0

func (cpm *ConcurrentPermanentMap) FinishConstruction()

func (*ConcurrentPermanentMap) Get_Entry

func (cpm *ConcurrentPermanentMap) Get_Entry(key string) (MapItem, error)

func (*ConcurrentPermanentMap) NumItems

func (cpm *ConcurrentPermanentMap) NumItems() int

func (*ConcurrentPermanentMap) NumPastes added in v1.8.0

func (cpm *ConcurrentPermanentMap) NumPastes() int

func (*ConcurrentPermanentMap) Put_New_Entry added in v1.0.0

func (cpm *ConcurrentPermanentMap) Put_New_Entry(key string, value string, _ int64, item_value_type MapItemValueType) error

Returns an error if the entry already exists, otherwise returns nil.

type ConcurrentPersistentPermanentURLMap added in v1.0.0

type ConcurrentPersistentPermanentURLMap struct {
	// contains filtered or unexported fields
}

func CreateConcurrentPersistentPermanentURLMapFromDisk added in v1.0.0

func CreateConcurrentPersistentPermanentURLMapFromDisk(cppum_params *CPPUMParams) *ConcurrentPersistentPermanentURLMap

This is the one you want to use in production

func (*ConcurrentPersistentPermanentURLMap) GetEntry added in v1.0.0

func (manager *ConcurrentPersistentPermanentURLMap) GetEntry(short_url string) (MapItem, error)

func (*ConcurrentPersistentPermanentURLMap) NumItems added in v1.3.1

func (manager *ConcurrentPersistentPermanentURLMap) NumItems() int

func (*ConcurrentPersistentPermanentURLMap) NumPastes added in v1.9.0

func (manager *ConcurrentPersistentPermanentURLMap) NumPastes() int

func (*ConcurrentPersistentPermanentURLMap) PrintInternalState added in v1.0.0

func (manager *ConcurrentPersistentPermanentURLMap) PrintInternalState()

func (*ConcurrentPersistentPermanentURLMap) PutEntry added in v1.0.0

func (manager *ConcurrentPersistentPermanentURLMap) PutEntry(requested_length int, long_url string, _ int64, value_type MapItemValueType) (string, error)

Shorten long URL into short URL and return the short URL and store the entry both in map and on disk

type CryptoRandomChoiceEmptySliceError

type CryptoRandomChoiceEmptySliceError struct{}

Custom error types

func (CryptoRandomChoiceEmptySliceError) Error

type ERROR

type ERROR struct {
	S string
}

func Error

func Error(s string) ERROR

type EXACT_MATCH_HANDLER_t

type EXACT_MATCH_HANDLER_t struct{}
var EXACT_MATCH_HANDLER EXACT_MATCH_HANDLER_t = EXACT_MATCH_HANDLER_t{}

type ExpiringBucketStorage added in v1.7.0

type ExpiringBucketStorage struct {
	// contains filtered or unexported fields
}

func NewExpiringBucketStorage added in v1.7.0

func NewExpiringBucketStorage(bucket_directory_path_absolute string) *ExpiringBucketStorage

The bucket interval is the all-important parameter: it determines the number of buckets and when buckets will be deleted. The bucket interval is in Unix time (seconds); all entries between two time points go into one bucket, and when that bucket expires, it is deleted. For example, with an interval of 100, timestamps 0-99 go into one bucket, timestamps 100-199 into the next, and so on.

Bucketing is done simply by round-to-zero division: the expiry time is divided by the bucket interval and the entry is placed into the appropriate bucket (log file). So with a bucket interval of 200, bucket 200 holds all timestamps 0-199, bucket 400 holds all timestamps 200-399, bucket 600 holds 400-599, and so on. Bucket files are named like "expires_before_18400", where the trailing number is a Unix timestamp.
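Illustrative arithmetic for the convention above (a sketch, not necessarily the package's exact code):

	const bucket_interval = 200

	// bucketUpperBound maps an expiry time to the upper bound of its bucket:
	// with interval 200, timestamps 0-199 map to 200, 200-399 to 400, etc.
	func bucketUpperBound(expiry_time int64) int64 {
		return (expiry_time/bucket_interval + 1) * bucket_interval
	}

	// Bucket log files are named after that upper bound.
	func bucketFilename(expiry_time int64) string {
		return fmt.Sprintf("expires_before_%d", bucketUpperBound(expiry_time))
	}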

func (*ExpiringBucketStorage) InsertFile added in v1.7.0

func (ebs *ExpiringBucketStorage) InsertFile(file_contents []byte, expiry_time int64, xattr_params *XattrParams) string

Adds a new entry to the log file

Also important: Make sure the input does not contain carriage return or newline.

type ExpiringHeapItem

type ExpiringHeapItem struct {
	// contains filtered or unexported fields
}

func (ExpiringHeapItem) String

func (p ExpiringHeapItem) String() string

type ExpiringHeapQueue

type ExpiringHeapQueue []*ExpiringHeapItem

============= All this stuff is just to implement the interface required by heap ===================

func (ExpiringHeapQueue) Len

func (pq ExpiringHeapQueue) Len() int

func (ExpiringHeapQueue) Less

func (pq ExpiringHeapQueue) Less(i, j int) bool

func (*ExpiringHeapQueue) Pop

func (pq *ExpiringHeapQueue) Pop() any

func (*ExpiringHeapQueue) Push

func (pq *ExpiringHeapQueue) Push(x any)

func (ExpiringHeapQueue) Swap

func (pq ExpiringHeapQueue) Swap(i, j int)

type ExpiringMapItem

type ExpiringMapItem struct {
	// contains filtered or unexported fields
}

func NewTestExpiringMapItem added in v1.8.0

func NewTestExpiringMapItem(value string, valuetype MapItemValueType, timestamp int64) *ExpiringMapItem

func (*ExpiringMapItem) GetExpiryTime added in v1.2.0

func (emi *ExpiringMapItem) GetExpiryTime() int64

func (*ExpiringMapItem) GetType added in v1.3.0

func (emi *ExpiringMapItem) GetType() MapItemType

func (*ExpiringMapItem) GetValue added in v1.0.0

func (emi *ExpiringMapItem) GetValue() string

func (*ExpiringMapItem) MapItemToString added in v1.0.0

func (emi *ExpiringMapItem) MapItemToString() string

func (ExpiringMapItem) String

func (p ExpiringMapItem) String() string

type ExpiryCallback

type ExpiryCallback func(string, MapItem)

type FileEntry

type FileEntry struct {
	FilePath    string
	FileInfo    fs.FileInfo
	TimeCreated time.Time
}

type GenericConcurrentPersistentMap added in v1.9.0

type GenericConcurrentPersistentMap interface {
	GetEntry(short_url string) (MapItem, error)
	PutEntry(requested_length int, long_url string, expiry_time int64, value_type MapItemValueType) (string, error)
	NumItems() int
	NumPastes() int
}

type HandlerTypeEnum

type HandlerTypeEnum interface {
	// contains filtered or unexported methods
}

type KeyAlreadyExistsError

type KeyAlreadyExistsError struct{}

func (KeyAlreadyExistsError) Error

func (e KeyAlreadyExistsError) Error() string

type KeyExpiredError

type KeyExpiredError struct {
	// contains filtered or unexported fields
}

func (KeyExpiredError) Error

func (e KeyExpiredError) Error() string

type LONGEST_PREFIX_HANDLER_t

type LONGEST_PREFIX_HANDLER_t struct{}
var LONGEST_PREFIX_HANDLER LONGEST_PREFIX_HANDLER_t = LONGEST_PREFIX_HANDLER_t{}

type LSRFD_Params added in v1.0.0

type LSRFD_Params struct {
	B53m                        *Base53IDManager
	Log_directory_path_absolute string
	Size_file_path_absolute     string
	Entry_should_be_deleted_fn  func(int64) bool
	Lss                         LogStructuredStorage
	Expiry_callback             ExpiryCallback
	Slice_storage               map[int]*RandomBag64
	Nil_ptr                     ConcurrentMap
	Size_file_rounded_multiple  int64
	Generate_strings_up_to      int
}

type LogBucketStructuredExpiringStorage added in v1.0.0

type LogBucketStructuredExpiringStorage struct {
	// contains filtered or unexported fields
}

func NewLogBucketStructuredExpiringStorage added in v1.0.0

func NewLogBucketStructuredExpiringStorage(bucket_interval int64, bucket_directory_path_absolute string) *LogBucketStructuredExpiringStorage

See NewExpiringBucketStorage above for the explanation of the bucket interval and the bucket file naming scheme.

func (*LogBucketStructuredExpiringStorage) AppendNewEntry added in v1.0.0

func (lbses *LogBucketStructuredExpiringStorage) AppendNewEntry(key string, value string, value_type MapItemValueType, expiry_time int64) error

Adds a new entry to the log file

Also important: Make sure the input does not contain carriage return or newline.

func (*LogBucketStructuredExpiringStorage) DeleteExpiredLogFiles added in v1.0.0

func (lbses *LogBucketStructuredExpiringStorage) DeleteExpiredLogFiles(extra_keeparound_seconds_disk int64)

Deletes expired buckets (log files). extra_keeparound_seconds_disk defines how long to keep log files around after they have expired.

func (*LogBucketStructuredExpiringStorage) ValidateLogFilename added in v1.0.0

func (*LogBucketStructuredExpiringStorage) ValidateLogFilename(filename string) error

Can be called with nil receiver.

type LogFileDeleter

type LogFileDeleter struct {
	AbsoluteDirectoryPath   string
	CurrentLogFileName      string
	DirectorySizeLimitBytes int64
}

func NewLogFileDeleter

func NewLogFileDeleter(directory_path string, size_limit int64, log_file_name string) *LogFileDeleter

func (*LogFileDeleter) Delete_Excess_Files

func (lfd *LogFileDeleter) Delete_Excess_Files()

func (*LogFileDeleter) RunThread

func (lfd *LogFileDeleter) RunThread(time_interval_secs int)

type LogStorage added in v1.0.0

type LogStorage interface {
	AppendNewEntry(string, string, MapItemValueType, int64) error
}

type LogStructuredPermanentStorage added in v1.0.0

type LogStructuredPermanentStorage struct {
	// contains filtered or unexported fields
}

func NewLogStructuredPermanentStorage added in v1.0.0

func NewLogStructuredPermanentStorage(log_file_max_size int64, log_directory_path_absolute string) *LogStructuredPermanentStorage

Works just like a log rotation library: once the log file reaches the max size, a new log file is created. Except we don't need any clever naming scheme; just an increasing number will do, since we're going to read in every file on startup anyway. The increasing-number naming scheme is actually good for cloud backups, since we can just send the highest-numbered file every time.

func (*LogStructuredPermanentStorage) AppendNewEntry added in v1.0.0

func (lsps *LogStructuredPermanentStorage) AppendNewEntry(key string, value string, value_type MapItemValueType, generation_time_unix int64) error

Adds a new entry to the log file

Also important: Make sure the input does not contain carriage return or newline.

func (*LogStructuredPermanentStorage) ValidateLogFilename added in v1.0.0

func (*LogStructuredPermanentStorage) ValidateLogFilename(filename string) error

Can be called with nil receiver.

type LogStructuredStorage added in v1.0.0

type LogStructuredStorage interface {
	ValidateLogFilename(filename string) error
}

type MapItem added in v1.0.0

type MapItem interface {
	MapItemToString() string
	GetValue() string
	GetExpiryTime() int64
	GetType() MapItemType
}

func GetEntryCommon added in v1.0.0

func GetEntryCommon(cm ConcurrentMap, short_url string) (MapItem, error)

type MapItem2 added in v1.0.0

type MapItem2 struct {
	// contains filtered or unexported fields
}

type MapItemType added in v1.7.0

type MapItemType struct {
	IsTemporary bool // if it's not temporary then it's permanent
	ValueType   MapItemValueType
}

type MapItemValueType added in v1.7.0

type MapItemValueType interface {
	ToString() string
	// contains filtered or unexported methods
}

type MapSizeFileManager added in v1.0.0

type MapSizeFileManager struct {
	// contains filtered or unexported fields
}

func NewMapSizeFileManager added in v1.0.0

func NewMapSizeFileManager(size_file_path_absolute string, size_multiple int64) *MapSizeFileManager

func (*MapSizeFileManager) UpdateMapSizeRounded added in v1.0.0

func (msfm *MapSizeFileManager) UpdateMapSizeRounded(actual_size int64)

rounds to the nearest size

You can call this function as many times as you like, since it only updates the file if the rounded size has changed.

type MapWithPastesCount added in v1.8.0

type MapWithPastesCount[T MapItem] interface {
	InsertNew(key string, value T) error
	GetKey(key string) (T, error)
	DeleteKey(key string)
	NumPastes() int
	NumItems() int
}

Users of this map are expected to access it with a mutex.

func NewMapWithPastesCount added in v1.8.0

func NewMapWithPastesCount[T MapItem](size int64) MapWithPastesCount[T]
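A small usage sketch, assuming *ExpiringMapItem (documented below) satisfies MapItem, and guarding access with a mutex as required:

	var mu sync.Mutex
	m := util.NewMapWithPastesCount[*util.ExpiringMapItem](100)

	item := util.NewTestExpiringMapItem("some/file/path", util.TYPE_MAP_ITEM_PASTE,
		time.Now().Unix()+3600)

	mu.Lock()
	err := m.InsertNew("abc12", item)
	num_items, num_pastes := m.NumItems(), m.NumPastes()
	mu.Unlock()
	if err == nil {
		fmt.Println(num_items, num_pastes)
	}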

type MapWithPastesCount_impl added in v1.8.0

type MapWithPastesCount_impl[T MapItem] struct {
	// contains filtered or unexported fields
}

func (*MapWithPastesCount_impl[T]) DeleteKey added in v1.8.0

func (mwpc *MapWithPastesCount_impl[T]) DeleteKey(key string)

func (*MapWithPastesCount_impl[T]) GetKey added in v1.8.0

func (mwpc *MapWithPastesCount_impl[T]) GetKey(key string) (T, error)

func (*MapWithPastesCount_impl[T]) InsertNew added in v1.8.0

func (mwpc *MapWithPastesCount_impl[T]) InsertNew(key string, value T) error

func (*MapWithPastesCount_impl[T]) NumItems added in v1.8.0

func (mwpc *MapWithPastesCount_impl[T]) NumItems() int

func (*MapWithPastesCount_impl[T]) NumPastes added in v1.8.0

func (mwpc *MapWithPastesCount_impl[T]) NumPastes() int

type NewBase53IDParams

type NewBase53IDParams struct {
	Str_without_csum string
	Csum             byte
	Remap            bool
}

inspired by StripeIntentParams

type NonExistentKeyError

type NonExistentKeyError interface {
	NonExistentKeyError() string
}

type PASTE_TYPE_t added in v1.7.0

type PASTE_TYPE_t struct{}
var TYPE_MAP_ITEM_PASTE PASTE_TYPE_t = PASTE_TYPE_t{}

func (PASTE_TYPE_t) ToString added in v1.7.0

func (PASTE_TYPE_t) ToString() string

type PasteStorage added in v1.7.2

type PasteStorage interface {
	InsertFile([]byte, int64, *XattrParams) string
}

type People added in v1.0.0

type People []MapItem2

Define a slice of MapItem2 structs

func (People) Len added in v1.0.0

func (p People) Len() int

Implement the Len method required by sort.Interface

func (People) Less added in v1.0.0

func (p People) Less(i, j int) bool

Implement the Less method required by sort.Interface

func (People) Swap added in v1.0.0

func (p People) Swap(i, j int)

Implement the Swap method required by sort.Interface

type PermanentBucketStorage added in v1.7.2

type PermanentBucketStorage struct {
	// contains filtered or unexported fields
}

func NewPermanentBucketStorage added in v1.7.2

func NewPermanentBucketStorage(bucket_directory_path_absolute string) *PermanentBucketStorage

See NewExpiringBucketStorage above for the explanation of the bucket interval and the bucket file naming scheme.

func (*PermanentBucketStorage) InsertFile added in v1.7.2

func (pbs *PermanentBucketStorage) InsertFile(file_contents []byte, _ int64, xattr_params *XattrParams) string

type PermanentMapItem added in v1.0.0

type PermanentMapItem struct {
	// contains filtered or unexported fields
}

func (*PermanentMapItem) GetExpiryTime added in v1.2.0

func (emi *PermanentMapItem) GetExpiryTime() int64

func (*PermanentMapItem) GetType added in v1.3.0

func (pmi *PermanentMapItem) GetType() MapItemType

func (*PermanentMapItem) GetValue added in v1.0.0

func (pmi *PermanentMapItem) GetValue() string

func (*PermanentMapItem) MapItemToString added in v1.0.0

func (pmi *PermanentMapItem) MapItemToString() string

type RESULT

type RESULT struct {
	S interface{}
}

func Result

func Result(s interface{}) RESULT

type RandomBag64

type RandomBag64 struct {
	// contains filtered or unexported fields
}

func CreateRandomBagFromSlice

func CreateRandomBagFromSlice(items []uint64) *RandomBag64

The RandomBag steals the slice that you pass to it. You should not use the slice anywhere afterwards.

func (*RandomBag64) PopRandom

func (rb *RandomBag64) PopRandom() (uint64, error)

Removes a random element from the array and swaps the last element into its place

Will only return an error if the bag is empty.

func (*RandomBag64) Push

func (rb *RandomBag64) Push(item uint64)

Push should always succeed

func (*RandomBag64) Size

func (rb *RandomBag64) Size() int
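A minimal usage sketch following the semantics documented above:

	bag := util.CreateRandomBagFromSlice([]uint64{101, 102, 103}) // bag now owns the slice

	// PopRandom removes a uniformly random element.
	if v, err := bag.PopRandom(); err == nil {
		fmt.Println("popped:", v)
		bag.Push(v) // items can be returned to the bag later
	}
	fmt.Println("size:", bag.Size())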

type RandomBagEmptyError

type RandomBagEmptyError struct{}

func (RandomBagEmptyError) Error

func (e RandomBagEmptyError) Error() string

type SafeTLSAutoCertManager

type SafeTLSAutoCertManager struct {
	// contains filtered or unexported fields
}

func NewSafeAutoCertManager

func NewSafeAutoCertManager(tls_email_address string, ssl_cache_dir string, hostnames_whitelist []string) *SafeTLSAutoCertManager

func (*SafeTLSAutoCertManager) GetSecureTLSConfig

func (m *SafeTLSAutoCertManager) GetSecureTLSConfig() *tls.Config

GetSecureTLSConfig creates a new secure TLS config suitable for net/http.Server servers, supporting HTTP/2 and the tls-alpn-01 ACME challenge type.
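A hedged sketch of wiring this into net/http; the email address, cache directory, and hostname are illustrative:

	m := util.NewSafeAutoCertManager("admin@example.com", "/var/cache/ssl",
		[]string{"example.com"})

	srv := &http.Server{
		Addr:      ":443",
		TLSConfig: m.GetSecureTLSConfig(),
	}

	// Empty cert/key paths: certificates are supplied by the TLS config.
	log.Fatal(srv.ListenAndServeTLS("", ""))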

type ShouldBase53IDBePlacedIntoSliceFn added in v1.0.0

type ShouldBase53IDBePlacedIntoSliceFn func(string) bool

type URLMap added in v1.0.0

type URLMap interface {
	Put_New_Entry(string, string, int64, MapItemValueType) error
	NumItems() int
}

type URL_TYPE_t added in v1.7.0

type URL_TYPE_t struct{}
var TYPE_MAP_ITEM_URL URL_TYPE_t = URL_TYPE_t{}

func (URL_TYPE_t) ToString added in v1.7.0

func (URL_TYPE_t) ToString() string

type ValidationResult

type ValidationResult struct {
	Success bool
	Message string
}

type XattrParams added in v1.12.0

type XattrParams struct {
	SetXattr   bool
	XattrName  string
	Xattrvalue string
}

Directories

Path Synopsis
json_internals
Checks that JSON file does not contain any fields that are not in the struct
Create 200k files in one directory vs 200k files in 100 separate directories See if speed of accessing files is affected
Defends against directory traversal attacks
