

I’m going to guess that you don’t work in tech, because good luck accomplishing that.
The overwhelming majority of businesses that allow remote work do so via VPN. Some companies can get away with just Office 365 and some basic collaboration tools, but most can’t. Some businesses even set up secure VPN tunnels between sections of their own network and a section belonging to a contracted partner: a software vendor, data processing firm, auditors, etc.
So an outright ban is absurdly unlikely. But let’s say they do it anyway, or restrict “allowed” VPNs to government-approved software or something.
There are countless ways to disguise VPN traffic as other legitimate, normal-looking web traffic. Plenty of people in war-torn areas, journalists reporting from countries with locked-down internet, and others in similar situations have effectively proven that you can’t completely lock things down.
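To make the point concrete, here’s a toy sketch of the simplest kind of traffic obfuscation (the signature bytes and the hard-coded key below are made up for illustration, don’t use anything like this for real): XOR the stream with a repeating key so the fixed byte fingerprint that naive deep-packet inspection matches on never appears on the wire. Real tools (obfs4-style pluggable transports, tunneling over TLS on port 443, etc.) are far more sophisticated, but the principle is the same.

```python
def obfuscate(payload: bytes, key: bytes) -> bytes:
    # XOR with a repeating key. XOR is its own inverse, so the same
    # function both obfuscates and de-obfuscates.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

# Hypothetical stand-in for a recognizable protocol fingerprint
# (real DPI matches patterns like the start of a VPN handshake).
signature = b"\x38\x01\x00\x00\x00\x00\x00\x00"
key = bytes(range(1, 17))  # toy fixed key, purely for demonstration

wire = obfuscate(signature, key)
assert wire != signature                   # fingerprint is gone from the wire
assert obfuscate(wire, key) == signature   # receiver recovers the original
```

A censor can still try statistical or behavioral analysis, which is exactly the cat-and-mouse game the circumvention tools mentioned above keep winning.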
So they could ban it, but that’s absurdly unlikely, and even if they do there will be plenty of relatively safe ways around it.
Hopefully an infinite distance away. Put simply, we don’t have chatbots capable of what counseling requires. We don’t have the technology for this, even if we could cherry-pick the best parts of everything available to us right now and somehow eliminate the downsides and the added complexity of combining all those technologies and systems.
LLMs are amazing, but they are just word association engines. Sure, association across an almost incomprehensible number of dimensions, but they don’t have real understanding, or even a concept of true versus false. Even the ones built to cite sources and “reason” can easily be shown to do neither reliably, citing non-existent sources and clearly constructing their “thoughts” post-hoc. And because of those foundational limitations, inherent to their construction and function, they can’t be guided to stay within specific bounds in a conversation in any fully reliable way. They can’t be made to analyze someone over long conversations, map that to their issues, and then guide them toward even a widely accepted standard of positive growth. They may be able to approximate that in an incredibly lossy, error-prone, non-deterministic way… but that isn’t good enough.
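As a loose illustration of “word association engine,” here’s a toy bigram babbler (the mini-corpus is invented, and this is nothing like a real LLM’s scale or architecture): it produces plausible-looking word sequences purely from “what tends to follow what,” with no concept of truth or meaning. An LLM is incomparably larger and more capable, but the limitation described above is the same in kind.

```python
import random
from collections import defaultdict

# Tiny made-up corpus; a real model trains on trillions of tokens.
corpus = ("i feel sad today . i feel anxious . "
          "talking helps . i feel better after talking .").split()

# Bigram table: for each word, every word that has followed it.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(word: str, n: int = 8, seed: int = 0) -> str:
    # Emit up to n more words by pure association: pick any word that
    # ever followed the current one. No truth values, no understanding.
    rng = random.Random(seed)
    out = [word]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(babble("i"))  # grammatical-ish output with zero comprehension behind it
```

Every word it emits is locally plausible given the previous one, which is exactly why the output can look like a conversation while meaning nothing.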
Yes, there is potential in some combination of expert systems, flowcharts, fuzzy logic, LLMs, etc. But you can’t just slap them together and get something perfect with none of the flaws. Context switching between the underlying “engines” is a lossy, non-deterministic process that introduces more room for error. What’s most likely is that you end up with something that has all the drawbacks of each underlying system piled together: an inconsistent, non-deterministic mess that doesn’t function reliably.
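A rough back-of-the-envelope sketch of why chaining engines compounds error: if each handoff or step is independently right some fraction of the time (the 95% below is an invented, optimistic number), end-to-end reliability decays geometrically with pipeline depth.

```python
def end_to_end(p_step: float, steps: int) -> float:
    # If each step independently succeeds with probability p_step,
    # the whole chain succeeds only when every step does.
    return p_step ** steps

for n in (1, 3, 5, 10):
    print(f"{n:2d} steps -> {end_to_end(0.95, n):.0%}")
```

Ten optimistic 95% steps already leaves the whole chain right only about 60% of the time, and real handoffs between dissimilar engines are unlikely to be anywhere near that clean or that independent.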
It’s not terribly difficult to understand the underlying functionality of these systems, and their drawbacks, from a 1,000-foot conceptual level. The never-ending tide of absolutely knowledgeless articles about this shit is fucking infuriating, all because people can’t think past “it appears to be able to carry a conversation”.