Nanogram is designed for the enthusiast who wants complete data sovereignty over their social media platform.
Spin up your own instance on Termux for Android.
Demo here.
Install instructions are at the bottom of the README.
Curious, is that just out of principle, basically? Because it’s actually a pretty good way to stay away from AI now that it exists. That’s why I made it.
For the same reason I wouldn’t trust a car designed with the help of AI:
I would be concerned that the internals have the equivalent of a sixth finger. In a picture, that’s fairly harmless, but I’m not giving my personal information to a six-fingered hand if I don’t have to.
Maybe if the designer has a solid track record independent of AI, and the AI’s contributions were strictly monitored and checked by humans. But then… why would you use AI?
The backbone and internals were made by great developers…not me. That’s a good thing. Each time you run the script, these packages are updated to the latest and greatest.
What I’ve done is bring it all together and generate some harmless HTML, CSS, and a Python app to bring it all to life.
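To give a rough idea of what that wiring can look like, here is a hypothetical sketch of a Flask app using SQLAlchemy for storage and Pillow for image handling. It is not Nanogram’s actual code; the route, model, and database path are made up purely for illustration.

```python
# Hypothetical sketch only; not Nanogram's actual code.
# Illustrates how Flask, SQLAlchemy, and Pillow typically fit together.
from io import BytesIO

from flask import Flask, request, abort
from sqlalchemy import Column, Integer, LargeBinary, String, create_engine
from sqlalchemy.orm import Session, declarative_base
from PIL import Image

Base = declarative_base()
engine = create_engine("sqlite:///nanogram.db")  # assumed database path

class Post(Base):  # made-up model, for illustration only
    __tablename__ = "posts"
    id = Column(Integer, primary_key=True)
    caption = Column(String(280))
    image = Column(LargeBinary)

app = Flask(__name__)

@app.route("/post", methods=["POST"])  # made-up route, for illustration only
def create_post():
    upload = request.files.get("image")
    if upload is None:
        abort(400)
    # Re-encode the upload with Pillow: strips metadata and keeps it lightweight.
    img = Image.open(upload.stream).convert("RGB")
    img.thumbnail((1280, 1280))
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=80)
    with Session(engine) as session:
        session.add(Post(caption=request.form.get("caption", ""),
                         image=buf.getvalue()))
        session.commit()
    return {"status": "ok"}

if __name__ == "__main__":
    Base.metadata.create_all(engine)
    app.run()
```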
Things I didn’t make:
tor - networking backbone
clang - compiler infrastructure
libjpeg-turbo - server-side image compression to keep it all lightweight
openssl - open library for encrypted internet communications over Tor
gnupg - encrypted backups
flask - lightweight web framework
sqlalchemy - the database backbone
pillow - image processing
itsdangerous - secure signing of session data
werkzeug - WSGI utility library underlying Flask
gunicorn - WSGI-compliant server for performance and efficient handling of requests
If any of these packages gets a new security update or performance improvement, Nanogram benefits and patches instantly, because it fetches the most up-to-date version of these utilities on each run.
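As a rough illustration of that update-on-each-run idea (this is not the actual Nanogram script), a setup step could simply ask pip for the newest release of each Python-level dependency every time it starts; the system-level pieces (tor, clang, libjpeg-turbo, openssl, gnupg) would come from the Termux package manager instead and aren’t shown here.

```python
# Hypothetical illustration; not the actual Nanogram setup script.
# Upgrades the Python-level dependencies to their latest releases on every run.
import subprocess
import sys

PYTHON_DEPS = [
    "flask", "sqlalchemy", "pillow",
    "itsdangerous", "werkzeug", "gunicorn",
]

def refresh_dependencies():
    """Ask pip for the newest version of each package before the app starts."""
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "--upgrade", *PYTHON_DEPS],
        check=True,
    )

if __name__ == "__main__":
    refresh_dependencies()
```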
If a single exploit was discovered in what you have here, would you know how to go in and fix it and then verify the fix yourself outside of the dubious words of an LLM?
I’m not interested in entrusting my data/software/device to your faith in some models instead of the wisdom of a human being.
This is why I would not use it.
No, not without an LLM, but I’m pretty sure I could patch it with one.
If an exploit is discovered, it’s going to be something that gets past the login, in which case the attacker already has the .onion address that was leaked by a user. I tried every possible way to penetrate the login without credentials and made it as bulletproof as I could. I also implemented a function in the manager to rotate the onion address and discard the old one. That brings it back to square one: distributing the address securely.
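For anyone curious how the rotation part can work: Tor’s control port lets you publish a new onion service and drop the old one. The sketch below uses the stem library’s ephemeral hidden-service calls as one possible way to do it; it’s an illustration of the idea, not the actual manager function, and the ports and control-port settings are assumptions.

```python
# Hypothetical sketch of onion-address rotation via Tor's control port.
# Not Nanogram's actual manager code; ports and auth settings are assumed.
from stem.control import Controller

def rotate_onion(old_service_id=None, app_port=8080, control_port=9051):
    """Publish a fresh v3 onion service and discard the old one."""
    with Controller.from_port(port=control_port) as controller:
        controller.authenticate()  # assumes cookie auth is enabled in torrc
        if old_service_id:
            # Drop the leaked/old address so it stops resolving.
            controller.remove_ephemeral_hidden_service(old_service_id)
        # Map port 80 of the new onion address to the local web app,
        # keeping the service alive after this controller connection closes.
        response = controller.create_ephemeral_hidden_service(
            {80: app_port}, key_type="NEW", key_content="ED25519-V3",
            detached=True, await_publication=True,
        )
        return response.service_id  # new onion hostname, without ".onion"

if __name__ == "__main__":
    new_id = rotate_onion()
    print(f"New address: {new_id}.onion")
```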
This is totally fair and I respect your opinion; I just think it’s a little naive.
Under ISO 27002, 90003, 25000, and 9001, and their requirements for software pedigree and sustainability, it’s just best practice.
Is it ironic that you’re calling best-practice “naive”?
It’s naive because we are all running AI-generated code on our machines by now and you would never know it. That doesn’t mean it’s no longer infused with human wisdom.
I’m not criticizing you for wanting to stay away from AI…like I said, that’s why I made this. I don’t want my private photos and conversations fed into AI. This was my best attempt, with the tools I have today, to achieve that.
Fair enough, in that case we think the same of each other.