

It’s a gimmick to get publicity. But if it happens, the company has to generate revenue to pay it off.
Guess how browser makers make money off a ‘free’ product?
Ran the 20B on a Mac under LM Studio. Pretty zippy and did OK on basic coding tasks.
By all means, run MCPs that give full access to your desktop. Nothing can go wrong.
Others have already explained the secure boot process. But one thing that might impact gaming is that TPMs also implement cryptographic operations in hardware. Not only can that speed things up, it guarantees that the binary code running on the chip hasn’t been modified.
Some anti-cheat libraries might require a TPM, and having secure boot on guarantees that feature exists.
A few suggestions:
Some of those components may end up costing a lot to operate. You said you’re doing it as a portfolio piece. You may want to create a spreadsheet with all the services, then run a cost simulation. You can use the AWS Pricing Calculator, but it won’t be as flexible for ‘what if’ scenarios. Any prospective employer will appreciate that you’ve given some thought to runtime pricing.
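A toy version of that ‘what if’ simulation can live in a short script. Every rate and service name below is a placeholder, not a real AWS price — the point is just to compare scenarios quickly:

```python
# Toy monthly cost 'what if' model. All rates are PLACEHOLDERS, not real
# AWS prices -- plug in numbers from the AWS Pricing Calculator instead.
HOURS_PER_MONTH = 730

hourly_rates = {          # hypothetical $/hour per always-on service
    "aurora": 0.10,
    "alb": 0.03,
    "nat_gateway": 0.045,
}
per_gb_rates = {          # hypothetical $/GB charges
    "s3_storage": 0.023,
    "egress": 0.09,
}

def monthly_cost(traffic_gb: float, storage_gb: float, az_count: int = 2) -> float:
    """Estimate one month's bill for a given traffic/storage/AZ scenario."""
    compute = sum(hourly_rates.values()) * HOURS_PER_MONTH * az_count
    data = per_gb_rates["s3_storage"] * storage_gb + per_gb_rates["egress"] * traffic_gb
    return round(compute + data, 2)

# Compare scenarios before committing to an architecture:
#   for az in (1, 2):
#       print(az, monthly_cost(traffic_gb=50, storage_gb=100, az_count=az))
```

Re-running the function with different traffic, storage, or AZ counts is the same ‘what if’ exercise the spreadsheet would do.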
You may want to bifurcate static media out into S3 buckets, plus put a CloudFront CDN in front for regional scaling (and cost). Static media served from a local server uses up processing power, bandwidth, storage, and memory. S3/CloudFront is designed for exactly this and is a lot cheaper. All fonts, JS scripts, images, CSS stylesheets, videos, etc. can be moved out.
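Moving those assets can be scripted. A sketch, assuming boto3 and a hypothetical bucket name (the S3 client is passed in, so the loop can be exercised without credentials); long-lived cache headers let CloudFront serve everything from the edge:

```python
# Sketch: upload everything under a static-assets folder to S3 with
# long-lived Cache-Control headers for CloudFront. Bucket name and folder
# are hypothetical. Real usage: upload_static(boto3.client("s3")).
import mimetypes
from pathlib import Path

def upload_static(s3_client, root="public", bucket="my-static-assets"):
    uploaded = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        content_type, _ = mimetypes.guess_type(path.name)
        key = str(path.relative_to(root))
        s3_client.upload_file(
            str(path), bucket, key,
            ExtraArgs={
                "ContentType": content_type or "application/octet-stream",
                # Versioned/fingerprinted assets can safely be cached for a year.
                "CacheControl": "public, max-age=31536000, immutable",
            },
        )
        uploaded.append(key)
    return uploaded
```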
Definitely expire your CloudWatch log records (maybe no more than a week), otherwise they’ll pile up and end up costing a lot.
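Setting that expiry can also be scripted. A sketch, assuming boto3 and a 7-day window (a valid CloudWatch retention value); the client is injected so the pagination loop can be tested with a stub:

```python
# Sketch: apply a 7-day retention policy to every CloudWatch log group so
# old records expire instead of piling up. Real usage (needs credentials):
#   expire_all_log_groups(boto3.client("logs"))

def expire_all_log_groups(logs_client, retention_days=7):
    """Walk all log groups (paginated) and set a retention policy on each."""
    applied = []
    kwargs = {}
    while True:
        page = logs_client.describe_log_groups(**kwargs)
        for group in page.get("logGroups", []):
            logs_client.put_retention_policy(
                logGroupName=group["logGroupName"],
                retentionInDays=retention_days,
            )
            applied.append(group["logGroupName"])
        token = page.get("nextToken")
        if not token:
            break
        kwargs = {"nextToken": token}
    return applied
```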
Consider where backups and logs may go. Backups should also account for Disaster Recovery (DR). Is the purpose of multiple AZs for scaling or DR? If for DR, you should think about different recovery strategies and how much down-time is acceptable.
Using Pulumi is good if the goal is to go multi-cloud. But if you’ve hardcoded Aurora or ALBs into the stack, you’re stuck with AWS. If that’s the case, maybe consider going with AWS CDK in a language you like. It would get you farther and let you do more native DevOps.
Consider how updates and revisions might work, especially once rolled out. What scripts will you need to run to upgrade the NextCloud stack? What are the implications if only one AZ is updated but not the other? Etc.
If this is meant for business or multiple users, consider where user accounts would go. What about OAuth or 2FA? If it’s a business, they may already have an Identity Provider (IDP), and now you need to tie into it.
If tire-kicking, may want to also script switching to plain old RDS/Postgres so you can stay under the free tier.
To make this all reusable, you want to take whatever is generated (e.g. Aurora endpoints) and save everything to a JSON or .env file. This way, the whole thing can be zapped and re-created and should work without having to manually do much in the console or CLI.
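A minimal sketch of that save step, stdlib only; the output names are hypothetical — use whatever your Pulumi/CDK stack actually exports:

```python
# Sketch: persist generated stack outputs (e.g. an Aurora endpoint) to both
# JSON and .env so a zapped-and-recreated stack can be rewired without
# touching the console. Output keys below are hypothetical examples.
import json
from pathlib import Path

def save_outputs(outputs: dict, json_path="stack-outputs.json", env_path=".env"):
    # JSON copy for scripts and other tooling.
    Path(json_path).write_text(json.dumps(outputs, indent=2))
    # .env convention: one UPPER_SNAKE KEY=value per line.
    lines = [f"{k.upper().replace('-', '_')}={v}" for k, v in sorted(outputs.items())]
    Path(env_path).write_text("\n".join(lines) + "\n")

# e.g. save_outputs({"aurora-endpoint": "mydb.cluster-abc.us-east-1.rds.amazonaws.com"})
```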
Any step that uses the console or CLI adds friction and risk. Either automate them, or document the crap out of them as a favor to your future self.
All secrets could go in .env files (which should be in .gitignore). Aurora/RDS Database passwords could also be auto-generated and kept in SecretsManager and periodically rotated. Hardcoded DB passwords are a risk.
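Fetching the rotated password at startup is a few lines. A sketch, assuming boto3 and an RDS-managed secret stored as JSON (the secret name is hypothetical; the client is injected so the parsing can be tested without AWS):

```python
# Sketch: read an auto-generated DB password from Secrets Manager instead
# of hardcoding it. Secret name is hypothetical. Real usage:
#   get_db_credentials(boto3.client("secretsmanager"))
import json

def get_db_credentials(sm_client, secret_id="nextcloud/aurora"):
    resp = sm_client.get_secret_value(SecretId=secret_id)
    # RDS-managed secrets store a JSON blob with username/password keys.
    creds = json.loads(resp["SecretString"])
    return creds["username"], creds["password"]
```

Because the app re-reads the secret rather than baking it into config, rotation doesn’t require a redeploy.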
Think about putting WAF in front of everything with web access to help fend off DDoS attacks.
This is a great learning exercise. Hope you don’t find these suggestions overwhelming. They only apply if you want to show it off to future employers. If it’s just for personal use, ignore all the rest I said and just think about operating costs. See if you can find an AWS sales or support person and get some freebie credits.
Best of luck!
There’s more: https://lemmy.world/comment/18699894
“If we stop testing right now, we’d have very few cases, if any.”
OpenAI and the California State University system bring AI to 500,000 students and faculty: https://openai.com/index/openai-and-the-csu-system/
What can go wrong?
Management is on 1034.
The one guy hand-soldering and fumes with no PPE or vent 😱
Totally understandable.
If scanning helps send traffic to your website, that’s cool. If scanning just generates summaries that won’t send any traffic your way, no bueno.
Ultimately, it should be whatever most benefits users.
If nginx, here’s an open-source blocker/honeypot: https://github.com/raminf/RoboNope-nginx
If you have it set up to be proxied or hosted by Cloudflare, they have their own solution: https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/
Absolute horseshit. Bulbs don’t have microphones. If they did, any junior security hacker could sniff out the traffic and post about it for cred.
The article quickly pivots to TP-Link and other devices exposing certificates. That has nothing to do with surveillance and everything to do with incompetent programming. Then it swings over to Matter and makes a bunch of incorrect assertions I don’t even care to correct. Also, all the links are to articles on the same site, every single one of which is easily refutable crap.
Yes, there are privacy tradeoffs with connected devices, but this article is nothing but hot clickbait garbage.
They’ve already got it mapped out.
My car needed some repair work. Insurance set me up with a rental from Hertz, who told me not to pay for bridge tolls with my own car’s transponder. When I took the car back, they told me I’d be invoiced later for the tolls. I had 4 toll crossings, which ordinarily would come to less than $30 (even less if I had used the transponder).
A month later, the Hertz charges show up: $77 (including ‘processing fee’). Called and complained. They said they’d look into it. Never heard back.
Not using them again.
On-device AI is the way to go. No privacy leak. Doesn’t have server and networking costs.
This specific use case (looking things up in Start menu and settings) is a good one, since finding out which setting to tweak is a major PITA.
Apple just announced at WWDC that it’s embedding Foundation Models on phones, except it will allow apps to access them and give them custom prompts. This doesn’t go quite as far.
Similar to other apps, CoverDrop only provides limited protection on smartphones that are fully compromised by malware, e.g., Pegasus, which can record the screen content and user actions.
A lot of the Javascript attributes used for fingerprinting are used to decide WHAT to render and to cache settings so things work smoothly the next time you come back.
For example, the amount of RAM, your WebGL settings and version, presence of audio, mic, and camera, and screen dimensions are all relevant to a game, a browser-based video-conferencing app, or WebASM based tools like Figma.
And unless you want an app to do a full check each time it returns to the foreground, it will likely cache those settings in a local store so it can look them up quickly.
If the app needs to send some of this data to the cloud so the server can change what it sends back, it now also has your IP address, rough reverse-IP coordinates via your ISP, and time. You can use a VPN or Tor to obfuscate IP addresses, but you have to remember to turn that on each time you use the app, and in the case of a VPN, to disconnect/reconnect to a random server to semi-randomize your IP (or use Tor, which does this for you).
But to answer the first question, changing or disabling those settings could break a bunch of features, especially Single-Page Applications, those using embedded analytics, or any amount of on-device graphics.
Grafana: https://tomerklein.dev/visualizing-traefik-metrics-with-grafana-and-prometheus-step-by-step-a6a1e9b5fb2c