These ML models can't reason semantically and therefore are not real AI. They're prone to hallucinations (or "fake bullshit" or whatever your preferred term is). Microsoft's warning is wise: you absolutely need a human to read through all of the output very carefully to verify accuracy and correctness. Yet in Microsoft's own demo, the user proceeds to send the email far too quickly to have read through the entire message. If even the marketing doesn't heed the warnings, then how can anyone expect that actual users will do so?

Somewhat orthogonally, it seems that these ML models are ironically highlighting the absurdity of "professional" communication. "Lily, can you be a last-minute presenter for the supply-chain all-hands meeting tomorrow?" is far more direct, clear, and understandable than the fluffed-up bullshit that GPT-4 spat out. There are many scenarios in which verbose prose is vital to convey sufficient precision when describing complex, nuanced concepts, but people too often misuse verbosity to achieve a vacuous form of "politeness". Using LLMs in this manner will only exacerbate that problem.

Perhaps, if the emphasis is on "arguably." It certainly didn't feel "beloved" at the time! To me, it seemed that there was much Clippy-hate, and it was a guarantee that, introducing myself as a Microsoft employee (as I still am), I'd always get teased about Clippy at any gathering I attended during that timeframe. (For my own part, I didn't mind it, although I always changed it to the "paper cat" avatar instead; it mostly sat in the upper corner of my monitor like some sort of Tamagotchi, the cat calmly watching butterflies flit through its own punch holes.)

The avatar API was fun to play with, though. We'd initially intended to use those avatars in other places, and at one all-hands where we were showing off some of the app-building various people had done to test the product, I saw a proof-of-concept demo of it working with Visual Studio using the "genie" avatar, with the avatar saying the most outrageous things. (I mean outrageous in the vernacular sense - the PM in question had populated it with some very salty language - not outrageous in its responses, which were correct if somewhat narrow.) No real AI was involved, of course, beyond some very simple grammar parsing.

There is something to that, but there's a very good reason veterans in the field get leery whenever MS has a new and brilliant idea. It usually means the consumer gets to be the unpaid beta tester of products which shouldn't even be in alpha yet. It's why you take the security patches on day one but keep any actual version upgrades in the queue until enough time has passed that you can be reasonably sure your PC will still work. And it's why on every new install you have to explicitly tell Windows to opt out of giving MS your entire online history for "calibration and error correction purposes". None of that means I trust them not to repeat their old pattern of every third version of Windows, or every important update, being a repeat of Win98, Vista, Windows 8, or that bloody "creator's fall update", which I still maintain was for once honestly and aptly named. The old joke that the only time MS will build a product that doesn't suck is when they start building vacuum cleaners still holds true for a scary number of their products. And in this particular case they're literally resurrecting the unholy abomination of Clippy and telling us the noxious "help" function will be better with an AI.