Pocketbase is already the poor man's BaaS, and is minimalist compared to the two others mentioned.
> Data stored in human-readable CSVs
The choice to not use a database when two near-perfect tiny candidates exist, and furthermore to choose the notorious CSV format for storing data, is absolutely mystifying. One can use their Wasm builds if platform-specific binaries offend.
I just deployed a Wasm-built SQLite with FTS5 enabled and it’s insane what it is capable of. It’s basically Elasticsearch entirely on the client. It’s not quite as robust as ES, but it’s like 80% of the way there, and, I repeat, it runs client-side on your phone or any other SQLite-supported device.
How large is the Wasm package for an empty SQLite, together with the client library to access it?
How large a bundle is it? And are we talking about Wikipedia stuffed into SQLite, or only a few hundred pages of internal docs?
In 2025, pretending that a CSV can be a reasonable alternative to a database because it is "smaller" is just wild. Totally unconscionable.
I use CSV files to run multiple sites with 40,000+ pages each. Close to 1mil pages total
Super fast
Can’t hack me because those CSV files are stored elsewhere and only pulled on build
Free, ultra fast, no latency. Every alternative I’ve tried is slower and eventually costs money.
CSV files stored on GitHub/vercel/netlify/cloudflare pages can scale to millions of rows for free if divided properly
Can't argue with what works, but...
All these benefits also apply to SQLite, but SQLite is also typed, indexed, and works with tons of tools and libraries.
It can even be stored as a static file on various serving options mentioned above. Even better, it can be served on a per-page basis, so you can download just the index to the client, who can query for specific chunks of the database, further reducing the bandwidth required to serve.
Just to be pedantic, SQLite is not really typed. I'd call them type-hints, like in Python. Their (bad IMHO) arguments for it: https://www.sqlite.org/flextypegood.html
https://www.sqlite.org/stricttables.html
Don’t you think it’s better in this dimension than CSV, though? It seems to me strictly better than the other option discussed.
A sibling comment posted a blind link whose contents address this, but (for the benefit of people who aren't likely to follow such links): recent versions of SQLite support STRICT tables, which are rigidly typed, if you have a need for that instead of the default loose type affinity system.
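To make the STRICT behavior concrete, here is a minimal sketch. It assumes the pure-Go modernc.org/sqlite driver (which registers itself as "sqlite" with database/sql); any other driver would work the same way:

    package main

    import (
        "database/sql"
        "fmt"

        _ "modernc.org/sqlite" // pure-Go driver, registers as "sqlite"
    )

    func main() {
        db, err := sql.Open("sqlite", ":memory:")
        if err != nil {
            panic(err)
        }
        defer db.Close()
        db.SetMaxOpenConns(1) // each pooled conn would otherwise get its own :memory: DB

        // STRICT enforces the declared column types instead of loose affinity.
        if _, err := db.Exec(`CREATE TABLE users (id INTEGER, name TEXT) STRICT`); err != nil {
            panic(err)
        }

        // On a non-STRICT table this TEXT value would be silently stored in the
        // INTEGER column; with STRICT the insert fails with a type error.
        _, err = db.Exec(`INSERT INTO users (id, name) VALUES ('oops', 'alice')`)
        fmt.Println(err)
    }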
TBH this is why I've never messed with SQLite.
If I want to bother with a SQL database, I at least want the benefit of the physical layer compressing data to the declared types, and PostgreSQL scales down surprisingly well to lower-resource (by 2025 standards) environments.
How exactly do you anticipate using Postgres on the client? Or are you ignoring the problem statement and saying it’s better to run a backend?
https://pglite.dev/
It sounds like you use CSVs to build static websites, not store or update any dynamic data. That's not even remotely comparable.
So... SQLite with less features basically.
Every file format is SQLite with fewer features.
Unless it's Apache Arrow or Parquet.
For both fun and profit I’ve used the Parquet extension for SQLite to have the “Yes” answer to the question of “SQLite or Parquet?”
Is this a static website? If yes, what do you use to build?
In 2020 Tailscale used a JSON file.
https://tailscale.com/blog/an-unlikely-database-migration
If you continue reading, you'll see that they were forced to ditch JSON for a proper key-value database.
I know. Now see how far JSON got them.
So why wouldn't you just use a text format to persist a personal website a handful of people might use?
I created one of the SQLite drivers, but why would you bring in a dependency that might not be available in a decade unless you really need it? (SQLite will be there in 2035, but maybe not the current Go drivers)
It's self-restriction, like driving a car without using the rear-view mirror, or always using "while" loops instead of "for" loops.
It's great for an extra challenge. Or for writing good literature.
You didn't really answer the dependency argument though.
Until the data for a static website becomes large enough to make JSON parsing a bottleneck, where is the problem?
I know, it's not generally suitable to store data for quick access of arbitrary pieces without parsing the whole file.
But if you use it at build time anyway (that's how I read the argument), it's pretty likely that you will never reach the bottleneck that would make you require any DBMS. Your site is static; you don't need to serve any database requests.
There is also huge overhead in powering static websites with a full-blown DBMS, in the worst case serving predictable requests without caching.
So many websites are powered by MySQL while essentially being static... and there are often unnecessarily complicated layers of caching to allow that.
But I'm not arguing against these layers per se (the end result is the same); it's just that, if your ecosystem is already built on JSON as data storage, pulling in another dependency might be completely unneeded.
Not the same as restricting syntax within one programming language.
Not so sure about this. At scale, sure, but how many apps are out there that perform basic CRUD for a few thousand records max and don't need the various benefits and guarantees a DB provides?
I assume the parent's despair is about CSV's many traps and parsing quirks.
I'd also be hard-pressed to find any real reason to choose CSV over JSONL, for instance. Parsing is fast and utterly standard, it's predictable, and if your data is really simple, JSONL files will be super simple.
At its simplest, the difference between a CSV line and a JSON array is 4 characters.
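For a concrete sense of how close the two formats are, here is a tiny sketch using only the Go stdlib (the three-field record is invented for illustration):

    package main

    import (
        "bytes"
        "encoding/csv"
        "encoding/json"
        "fmt"
    )

    func main() {
        record := []string{"1", "alice", "admin"}

        // CSV: one line per record, via stdlib encoding/csv.
        var buf bytes.Buffer
        w := csv.NewWriter(&buf)
        w.Write(record)
        w.Flush()
        fmt.Print(buf.String()) // 1,alice,admin

        // JSONL: one JSON array per line, via stdlib encoding/json.
        line, _ := json.Marshal(record)
        fmt.Println(string(line)) // ["1","alice","admin"]
    }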
I agree on both your main points. It's not like PB has a bunch of cruft and fat to trim. The BD of the project is very aggressive in constraining scope, which is one of the reasons it's so good.
The CSV thing feels like an academic exercise. The fact that I can't open an SQLite database in my text editor is a little thin, considering many tools are lighter-weight than text editors, and "reading" a database (any format) is seldom the goal. You probably want to query it, so the first thing you need to do here is import the CSV into DuckDB and write a bunch of queries with "WHERE active=1".
An append-only text CSV format that you can concatenate to from a script, or edit or query in a spreadsheet, and that's still fast because of the in-memory pointer cache, seems like a big win (assuming you're in the target scaling category).
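As a sketch of that append-only pattern (illustrative, not Pennybase's actual code; the file name and record layout are made up):

    package main

    import (
        "encoding/csv"
        "os"
    )

    // appendRecord illustrates the append-only pattern: records are only
    // ever added at the end of the file, never rewritten in place.
    func appendRecord(path string, record []string) error {
        f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
        if err != nil {
            return err
        }
        defer f.Close()

        w := csv.NewWriter(f)
        if err := w.Write(record); err != nil {
            return err
        }
        w.Flush()
        return w.Error()
    }

    func main() {
        // Any script (or a plain `echo ... >> data.csv`) can do the same.
        appendRecord("data.csv", []string{"42", "1", "bob", "user"})
    }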
For local use cases this could be useful. Run locally. Do your thing. Edit with Excel or tool of choice.
Also one less dependency.
What’s the other candidate besides pocketbase?
Apologies to anyone who found this unclear — the two near-perfect tiny candidate databases are SQLite and DuckDB.
My understanding is that SQLite is OLTP and DuckDB is OLAP. DuckDB is column-based, so it's not a great fit for a traditional backend DB.
Firebase, Supabase, Pocketbase
Do we still need a back-end, now that Chrome supports the File System Access API on both desktop and mobile?
I have started writing web apps that simply store the user data as a file, and I am very pleased with this approach.
It works perfectly for Desktop and Android.
iOS does not allow real Chrome everywhere (only in Europe, I think), so I also offer to store the data in the "origin private file system", which all browsers support. Fortunately it has the same API, so implementing it was no additional work. The only downside is that it cannot put files in a user-selected directory. So in that mode, I support a backup via an old-fashioned download link.
This way, users do not have to put their data into the cloud. It all stays on their own device.
What about those of us who use multiple devices, or multiple browsers? I've been using local storage for years and it's definitely hampering adoption, especially for multiplayer.
One approach might be to save the file to a shared drive like Google Drive?
Not sure I trust Dropbox to merge data. What happens when I want to migrate my data structures to a new schema?
As far as I know, Dropbox does not merge data.
I never tried it, but from the descriptions I have read, Dropbox detects conflicting file saves (if you save on two devices while they are offline) and stores them as "conflicting copies". So the user can handle the conflict.
As a developer, you would do this in the application: "Hey, you are trying to save your data but the data on disk is newer than when you loaded it ... Here are the differences and your options for merging.".
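A minimal sketch of that check in Go, using the file's modification time as the staleness signal; the function name and error are hypothetical, and a real app would need locking, since the stat-then-write below is not atomic:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    var ErrConflict = errors.New("file changed on disk since it was loaded")

    // saveIfUnchanged refuses to overwrite the file if its modification
    // time is newer than what the app saw at load time, so the UI can
    // offer a diff-and-merge flow instead of silently clobbering data.
    func saveIfUnchanged(path string, loadedAt time.Time, data []byte) error {
        info, err := os.Stat(path)
        if err == nil && info.ModTime().After(loadedAt) {
            return ErrConflict // surface the differences to the user
        }
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        loadedAt := time.Now()
        if err := saveIfUnchanged("notes.json", loadedAt, []byte(`{}`)); err != nil {
            fmt.Println(err)
        }
    }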
> Hey, you are trying to save your data but the data on disk is newer than when you loaded it
You're suggesting an actual API-facilitated data sync via Dropbox? Sure, but at that point why? Unless the data also needs to be read by 3rd party applications, might as well host it myself.
Sure. You brought up Dropbox. Not me.
Syncthing, please. Please try to use open-source alternatives whenever possible; even though they are not as developed as the closed-source ones, it works out better for the public.
TIL! I enjoy building cloudless apps and have been relying on localStorage for persistence with an "export" button. This is exactly what I've been looking for.
A lot of what I've read about local-first apps included solving for data syncing for collaborative features. I had no idea it could be this simple if all you need is local persistence.
At least on the Android front, I'd prefer the app allow me to write to my own storage target. The reason is that I already use Syncthing-Fork to monitor a parent Sync directory of stuff (Obsidian, OpenTracks, etc.) and send it to my backup system. In effect it allows apps to be local-first, potentially even without network access, while still letting me have automatic backups.
If there were something that formalized this a little more, developers could even make their apps in a... Bring Your Own Network... kinda way. Maybe there's already someone doing this?
What do you mean by "storage target"?
Since the File Access API lets web apps simply use the file system, I guess you could just write the file to a shared drive.
I may have misunderstood. Does that mean with this API, on both desktop and phone, I can point to an arbitrary drive on the system without restriction? If so, it does indeed do what I'd like.
That's basically how the File System Access API works, yes.
Technically probably not completely "without restriction". But for all practical purposes, it works just fine for me.
> Do we still need a back-end, now that Chrome supports the File System Access API on both desktop and mobile?
Could this allow accessing a local db as well? Would love something that could allow an app to talk directly to a db that lives locally in my devices, and that the db could sync across the devices - that way I still get my data in all of my devices, but it always stays only in my devices
Of course this would be relatively straightforward to do with native applications, but it would be great to be able to do it with web applications that run on the browser
Btw, does Chrome sync local storage across devices when logged in?
> Could this allow accessing a local db as well?
Like IndexedDB? It’s a browser API for an internal key-value storage database.
> Btw, does Chrome sync local storage across devices when logged in?
Syncing across devices still requires some amount of traffic through Google’s servers, if I’m not mistaken. Maybe you could cook something up with WebRTC, but I can’t imagine you could make something seamless.
> Btw, does Chrome sync local storage across devices when logged in?
No, but extensions have an API for storage that syncs itself across logged-in devices. So potentially you could have a setup where you create a website and an extension, and the extension reads the website's localStorage and copies it to `chrome.storage.sync`.
Sounds like an interesting idea actually.
That's a clever solution
I've been playing with Chrome extensions recently, and have made them talk directly to a local server with a DB. So using extensions, it's relatively easy to store data locally and potentially sync it across devices.
I like the idea of leveraging chrome.storage.sync though, I wonder what the limitations are
> I wonder what the limitations are
https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...
says that there is a 100 KB limit, and a 512 KV-pair limit per extension.
Quite limiting, but if this pattern becomes popular I don't see why it can't be expanded to have the same limit as localStorage (5 MB).
Any examples?
Why CSV instead of newline-separated JSON arrays?
Ambiguity in your storage format isn’t good in the long run… JSON lines can be trivially parsed anywhere without a second thought.
In hindsight, JSONL would have been much easier to deal with as a developer.
But I still don't regret picking CSV -- the DB interface is pluggable (so one can use JSONL if needed), and I deliberately wanted to have different formats for data storage (models) and data transfer objects (DTOs) in the API layer, just like with real databases.
I agree, CSV is very limited and fragile, but it made the data conversion/validation part more explicit.
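As a hypothetical sketch of what such a pluggable interface can look like (the real pennybase.DB has five methods; this cut-down stand-in has two, with a JSONL backend slotted behind it):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "io"
        "os"
    )

    // Store is a hypothetical, cut-down stand-in for a pluggable DB
    // interface; any backend (CSV, JSONL, SQLite) could satisfy it.
    type Store interface {
        Append(record []string) error
        All() ([][]string, error)
    }

    // jsonlStore persists each record as one JSON array per line.
    type jsonlStore struct{ path string }

    func (s jsonlStore) Append(record []string) error {
        f, err := os.OpenFile(s.path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
        if err != nil {
            return err
        }
        defer f.Close()
        return json.NewEncoder(f).Encode(record) // Encode appends a newline
    }

    func (s jsonlStore) All() ([][]string, error) {
        f, err := os.Open(s.path)
        if err != nil {
            return nil, err
        }
        defer f.Close()
        var out [][]string
        dec := json.NewDecoder(bufio.NewReader(f))
        for {
            var rec []string
            if err := dec.Decode(&rec); err == io.EOF {
                break
            } else if err != nil {
                return nil, err
            }
            out = append(out, rec)
        }
        return out, nil
    }

    func main() {
        var db Store = jsonlStore{path: "records.jsonl"}
        db.Append([]string{"1", "alice"})
        recs, _ := db.All()
        fmt.Println(recs)
    }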
or sqlite?
A CSV database is interesting; probably as trivially debuggable as a database can possibly be. Although, why not SQLite? CSV is not very resistant to corruption if the host crashes midway through a write.
No dependencies apart from the standard library is my guess.
Go doesn’t have sqlite in the stdlib?
It doesn’t, and using the standard third-party package requires compiling with CGO, which is a pain for cross-platform builds :(
Theoretically there is a Go port of SQLite: either by using wazero[1] (a Wasm runtime) and running SQLite from there, or via the modernc package[2] (I am not sure what that website is apart from the SQLite part, so maybe someone can clarify). There was also a wrapper of sorts that makes the wazero approach genuinely easier; it was on r/golang and I don't remember its name, but I do think it is semi-popular.
[1] wazero: https://wazero.io/
[2] https://pkg.go.dev/modernc.org/sqlite
For modernc you gave the correct link.
For the wazero based driver, it's this package (I'm the author): https://github.com/ncruces/go-sqlite3
Yes. Btw, if I may ask: how does the modernc code actually work? And if I wanted my code to be minimalist, which should I rather pick?
Also didn't expect that I would be talking to the author of wazero myself haha. I really admire your project.
There is a CGO-free package for the basics: https://gitlab.com/cznic/sqlite
Not 100% drop-in though. I’ve hit some snags around VFS support.
> Another important file is _users.csv which contains user credentials and roles. It has the same format as other resources, but with a special _users collection name. There is no way to add new users via API, they must be created manually by editing this file:
> Here we have user ID which is user name, version number (always 1), salt for password hashing, and the password itself (hashed with SHA-256 and encoded as Base32). The last column is a list of roles assigned to the user.
I haven't had to handle password hashing in like a decade (thanks, SSO), but isn't fast hashing like SHA-256 bad for it? Bcrypt was the standard last I did it. Or is this just an example and not what is actually used in the code?
Like others have guessed, I limited myself to what the Go stdlib offers. Since it's a personal/educational project, I only wanted to play around with this sort of architecture (similar to the k8s apiserver and various popular BaaSes). It was never meant to run outside of my localhost, so password security or choice of database was never a concern: whatever is in the stdlib and is "good enough" would work.
I also tried to make it a bit more flexible: to use `bcrypt` one can provide their own `pennybase.HashPasswd` function. To use SQLite one can implement the five methods of the `pennybase.DB` interface. It's not perfect, but at a code size of 700 lines it should be possible to customise any part of it without much cognitive difficulty.
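For reference, a sketch of the stdlib-only scheme the quoted README describes (salted password hashed with SHA-256, digest encoded as Base32); the exact salt/password concatenation order here is an assumption, as is the function shape one would swap out via `pennybase.HashPasswd`:

    package main

    import (
        "crypto/sha256"
        "encoding/base32"
        "fmt"
    )

    // hashPasswd sketches the described scheme: SHA-256 over the salt
    // and password, with the digest encoded as Base32. The concatenation
    // order is an assumption, not Pennybase's actual code.
    func hashPasswd(salt, password string) string {
        sum := sha256.Sum256([]byte(salt + password))
        return base32.StdEncoding.EncodeToString(sum[:])
    }

    func main() {
        fmt.Println(hashPasswd("random-salt", "hunter2"))
    }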
I think adding `golang.org/x/crypto` as a second dependency is fine. It's basically stdlib at this point (though with slightly weaker stability guarantees).
> isn't fast hashing like SHA-256 bad for it
Fast hashing is only a concern if your database becomes compromised and your users are incapable of using unique passwords on different sites. The hashing taking forever is entirely about protecting users from themselves in the case of an offline attack scenario. You are burning your own CPU time on their behalf.
In an online attack context, it is trivial to prevent an attacker from cranking through a billion attempts per second and/or to make the hashing operation appear to take a constant amount of time.
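A minimal sketch of both mitigations in Go (the function name and the 250 ms budget are illustrative): compare digests in constant time, and pad every verification to a fixed wall-clock duration, which both hides timing and caps attempt throughput:

    package main

    import (
        "crypto/sha256"
        "crypto/subtle"
        "fmt"
        "time"
    )

    // verifyThrottled checks a password attempt against a stored SHA-256
    // digest in constant time, then pads the whole call to a fixed
    // duration so its timing reveals nothing and throughput is capped.
    func verifyThrottled(storedDigest [32]byte, attempt string) bool {
        deadline := time.Now().Add(250 * time.Millisecond)
        digest := sha256.Sum256([]byte(attempt))
        ok := subtle.ConstantTimeCompare(storedDigest[:], digest[:]) == 1
        time.Sleep(time.Until(deadline)) // pad to a constant duration
        return ok
    }

    func main() {
        stored := sha256.Sum256([]byte("hunter2"))
        fmt.Println(verifyThrottled(stored, "hunter2"), verifyThrottled(stored, "guess"))
    }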
Users don’t use unique passwords. Don’t expect them to.
Indeed bcrypt is preferred but this is just a simple backend. My first ick was using CSV as storage as opposed to golang’s builtin SQLite support.
A SQLite connection can be made with just a sqlite://data.db connection string.
Golang does not have built-in SQLite. It has a SQL database abstraction in the stdlib, but you must supply a SQLite driver, for example one of these: https://github.com/cvilsmeier/go-sqlite-bench
However, using the stdlib abstraction adds a lot of performance overhead, although it’ll still be competitive with CSV files.
Ok, one additional dependency to your go.mod - big deal. And by builtin I was referring to the database/sql module which was designed for this.
Most of the more common SQLite implementations for Go require CGO, and that is a pretty steep request; it's definitely more than a line in go.mod.
Well the project goal seems to be extreme minimalism and stdlib only, and the choice of human readable data stores and manually editing the user list suggests a goal is to only need `vim` and `sha256sum` for administration
Maybe this is why they used SHA-256 too: it's in the stdlib, whereas bcrypt is a package (even if an "official" one).
The standard lib has pbkdf2 though.
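A sketch using golang.org/x/crypto/pbkdf2, which the newer stdlib crypto/pbkdf2 (Go 1.24+) mirrors closely; the iteration count is the commonly cited OWASP baseline for PBKDF2-HMAC-SHA256 and the password is a placeholder:

    package main

    import (
        "crypto/rand"
        "crypto/sha256"
        "fmt"

        "golang.org/x/crypto/pbkdf2"
    )

    func main() {
        salt := make([]byte, 16)
        if _, err := rand.Read(salt); err != nil {
            panic(err)
        }
        // Derive a 32-byte key with 600k iterations of PBKDF2-HMAC-SHA256.
        key := pbkdf2.Key([]byte("hunter2"), salt, 600_000, 32, sha256.New)
        fmt.Printf("%x\n", key)
    }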
I'm guessing the goal is that the file can be managed more easily with a text editor and some shell utils.
If it's in the examples, it WILL make it to someone's production code
I like the simplicity of the approach. I've been following trailbase for this purpose as well: https://trailbase.io
I appreciate how it seems like we have a spectrum of similar options emerging now for simple backends, ranging from pennybase to trailbase to pocketbase. I do hope one of them eventually implements postgres as an alternative to sqlite at some point though.
My biggest issue with "cheaper" alternatives is the same pathway they all take.
Start cheap, gather market, then crank the costs after lock-in.
Even "open-source" is abused. First everything is open-source, and then reasons come out for why premium services will be closed source.
>Start cheap, gather market, then crank the costs after lock-in.
"cost"/"price" vocabulary clarification, should you ever want to read or write business plans, communicate with accountants, CFO's, etc.
"costs" are what companies pay for supplies/inputs that the company purchases.
"prices" are what those same companies offer to charge buyers for the products the company sells.
companies want to keep costs down, and companies benefit from high prices. (when you said "crank the costs", it thunks)
since people don't generally operate their lives as companies, it tends to seem like "costs" and "prices" are the same thing, but in addition to the above, "costs" to a company reflect actual expenditures in total, and "prices" represent an advertisement for each of something pending that has not transacted yet.
"cost" is an accounting term, total revenues - total costs = total profits
"price" is a marketing term, $1 each, $10 for a dozen!
(of course this could be quibbled into incomprehensibility, which is another thing you should not do in "business communication"; always streamline communication to get to the takeaway as quickly as possible)
This one isn't quite saying it's cheaper, or even charging; I think you might get a laugh if you click through. I don't think we'll need to worry about the costs being cranked after lock-in.
Is the word BaaS so horrifying to people that they won't even click the link to see that the code is a MASSIVE 1k LOC? (just using irony hehe)
But in all seriousness, I may be going off on a tangent, but I don't think anybody can monetize code of less than 1k LOC. Are there any cool examples anybody wants to share?
Maybe "simple" APIs would generally be the only thing that would be monetizable and still fall under 1k LOC. But I would still love hearing more about this kind of thing.
There are smart contracts on the Ethereum and Binance networks making millions a day extracting transaction fees with much less than 1k LOC. The code is even public.
Woah. Didn't expect it.
Can you give me some examples? I personally feel as if most of these are saturated, and I don't think I could earn a million with so few LOC. Maybe it's me, but I don't like touching most crypto since it's grift. The only one I'd like is Monero, for privacy, but I doubt how much I need it anyway.
This is neat. Ruby on Rails -> Serverless (Firebase/Heroku/etc) -> Pocketbase etc?
What kinds of apps are folks building with this? Are there any decently sized websites running on Pocketbase/trailbase?
I don't understand why create a new project instead of contributing to Pocketbase, which looks very similar. What does it bring that is not already there in Pocketbase?
I always used to wonder about the "why?" too. But I've always felt it's the author's choice as to why they are doing this. We think of it as a binary option (contribute to some other project, or build this), but maybe the real binary option was this or absolutely nothing.
Now, what I do like is the second line of your post. What are the comparisons... IMO, the biggest thing is that this project is genuinely tiny (less than 1k LOC is wild), and maybe they really followed Occam's razor and ditched SQL, even the simplest SQL (SQLite), altogether for the sweet CSV.
I never thought there would be a day when I would have to call SQLite the complex option, given that in every sense SQLite is about the simplest embeddable SQL database, or database in general. Maybe I am going off on a tangent, but I love SQLite and what PocketBase does, tbh. I think of SQLite + a per-user DB and I just get so happy thinking about that architecture. I love SQLite.
To be fair, the project is linked from the blog post I recently wrote, so it's merely a tiny personal/educational project.
I tried to experiment with an API similar to what the k8s API server offers: dynamic schemas for custom resources, a generated uniform REST API with well-defined RBAC rules, watch/real-time notifications, customisation of business logic with admission hooks, etc.
I also attempted to make it as small as possible. So yeah, I'm not trying to compete with Pocketbase and the others, just trying to see what it takes to build a minimally viable backend with a similar architecture.
The choice of the "database" is dictated by the very same goals. I deliberately made it an interface; better databases exist and can be plugged in with few code changes. But for starters I went with what the Go stdlib offers, and CSV is easy enough to debug.
Minimalism. It’s tiny and its data can be completely administered with `vim` and `sha256sum`
Given the setup I’d guess it makes sense for little household scale apps w/ a user list in the low tens of people.
>> What does it bring that is not already there in Pocketbase?
NIH mostly.
A big part of why PocketBase is so good is because the project is aggressively constrained, both in features and contributions. I'd suggest people contribute to the ecosystem, which is big and growing.
Why do you buy/rent your own house instead of joining a house share?
Self host Convex https://convex.dev ?
Mentioned on this week's Cup o' Go episode. Cool, fun toy project.
BEasS
Why would you not use SQLite?
Can it do joins? I may have missed it in the doc.
"Back End-as-a-Service" in the title is confusing. Consider adhering to the original project description.
I find Manifest.js really gives me that minimalist feel that you are going for here.
I would find it more palatable if the format were JSON or YAML, or ideally TOML, right...
It is definitely interesting what you are doing and you know, thanks for sharing. Not for everyone but good to see what and how people are thinking.
Or use something that has been used in production for decades like Laravel, Rails, Django or even Spring.
This screams regex injection
You might be right, but the only place where regexps are applied in code is for validating resource text fields (which is optional). Those regexps are defined in read-only schemas by the developer (if needed). Schemas are immutable. There seems to be absolutely no connection between the data transmitted over the API (i.e. what user can inject) and regexps. I'm not dismissing the idea that there might be plenty of other possible vulnerabilities in other areas of this toy project.
Sorry, that acronym is already in use for banking as a service. Try again.
So let me get this straight. You read "back end as a service" and your mind went to BANKING?
Alternatively, you could use nostr, have your users pay for the database, and get access to rich content types, an existing social graph, and application interoperability.
Calling this a Poor Man’s backend isn’t even the wrong name for it. Admittedly, this is what I’d expect from a Sophomore in University.
To the others arguing you should’ve stored the data as a binary, might as well have created an API wrapper around SQLite at that rate and called it “JASW - Just Another Sqlite Wrapper”.
@ OP - what was the inspiration for the project? Were you learning DBs or intending to use this in a production environment for a chat session with GPT or something? Would love to help you improve this, but we’d have to understand the problem we’re trying to solve better.
zserge is one of my favorite authors and programmers.