

I’d like to understand how self managing all the lower level components abstracted by the cloud is saving on headcount. Care to math that out for us?
🌨️ 💻
I get that it’s easy to over-provision in the cloud, but you can’t return an on-prem server. A cloud VM? Just shut it down and you’re done.
AWS talks about minimizing undifferentiated heavy lifting as a reason to adopt managed services, and I find that largely to be true. The majority of companies aren’t differentiating their services via some low-level technology advantage that lets them undercut on cost. It’s a different purchasing model, a smoother workflow, or a unique insight into data. The value an organization provides to customers should be the primary focus of the business; the rest is a means to sharpen that focus.
😎 If you’re not compiling your own kernel 🏋🏼♂️ you’re missing out on substantial gains 💪🏼. Just drink a little Nix shake in the morning 🤙🏼, then hit your favorite systemd alternative GitHub pages and pull the nightly 🤘🏼. You’ll be so swole bro, you’ll be Godify. 🎶
They meant pooping
TF2 bots
I feel this. That Valve guy talking about ‘Nvidia you’re doing lighting wrong’ because Nvidia adopted the same logarithmic formula for gamma as photography. Like holy shit I’m sorry Carmack got this wrong, but fr get your shit together Valve. Y’all didn’t just invent how the world models light.
I wish it was hypothetical. Two slightly awkward conversations prompted this.
Touché…is also the name of a fetish community.
Blue Iris and some Hikvision cameras. It’s not fancy, but it’s pretty straightforward to get running. I’m not super concerned with alerting and just run continuous recording, looping after a few days.
I dunno I RMA’d my Nomad so many times.
If budget is no object, it’s only kind of a pain in the ass with Nvidia’s vGPU solutions for data centers. Even with $10k spent, there are hypervisor compatibility issues, license servers, driver compatibility challenges for games/consumer OSes on hypervisors, and other inane garbage.
Consumer-wise, it’s technically the easiest it’s ever been, with SR-IOV support for hardware-accelerating VMs on Intel 13th & 14th gen procs with iGPUs. However, iGPU performance is kinda dogshit, drivers are wonky, and passing multiple display heads through to VMs is weird for hypervisors.
On the Docker side of things, YMMV based on what you’re trying to accomplish. Technically the NVIDIA Container Toolkit does support CUDA & display heads for containers: https://hub.docker.com/r/nvidia/vulkan/tags. I haven’t gotten it working yet, but this is the basis for my next set of experiments.
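For anyone poking at the same thing, here’s a rough sketch of wiring the toolkit into Compose. The image tag and command are placeholders (grab a real tag from the hub page above), and it assumes the NVIDIA Container Toolkit is already installed on the host:

```yaml
# docker-compose.yml -- hand the container one NVIDIA GPU
services:
  vulkan-test:
    image: nvidia/vulkan:latest    # placeholder tag; pick one from the hub page
    command: vulkaninfo            # assumes the image ships vulkaninfo
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

The `deploy.resources.reservations.devices` stanza is how Compose exposes GPUs; no luck yet on my end, so treat it as a starting point, not a working recipe.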
Are you running redundant routers, connections, ISPs, etc.? Compromise is part of the design process. If you have resiliency requirements, redundancy will help, but it ratchets up complexity and cost.
Security has the same kinds of compromises. I prefer to build security from the network up, leveraging tools like VLANs to start building the moat. Realistically, your reverse proxy is likely battle tested if it’s configured correctly and updated. It’ll probably be the most secure component in your stack. If that’s configured correctly and gets popped, half the Internet is already a wasteland.
If you’re running containers, yeah technically there are escape vectors, but again your attacker would need to pop the proxy software. It’d probably be way easier to go after the apps themselves.
Do something like this with NICs on each subnet:
DMZ VLAN <-> Proxy <-> Services VLAN
Double NIC on the proxy. One in each VLAN.
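On Linux, that dual-homed proxy box can be sketched with netplan. Interface names and addresses here are made up for illustration; match them to your own VLANs:

```yaml
# /etc/netplan/01-proxy.yaml -- one NIC per VLAN (example addresses)
network:
  version: 2
  ethernets:
    eth0:                          # DMZ-facing NIC
      addresses: [192.168.10.2/24]
      routes:
        - to: default
          via: 192.168.10.1        # gateway in the DMZ VLAN
    eth1:                          # services-facing NIC
      addresses: [192.168.20.2/24]
```

Then your firewall only has to allow the proxy’s listen ports into the DMZ and its upstream ports into the services VLAN; nothing else crosses.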
Rebuild time? Yeah it’ll take about that long.
This comment needs to load after a paywall, 2 ads to subscribe, and some auto play videos.