I see a lot of discussion here about over-hyped AI, and then I see the huge AI bubble at my workplace, in news, in PR statements, etc.

Are there folks who work at companies – especially interested in those in tech – that have a reasonable handle on AI’s practical uses and its limitations?

Where I work, there’s:

  • a dashboard of AI usage by team and individual, which will definitely not affect performance review in any way
  • a mandate last month to use one AI tool, and a new mandate this month to abandon that tool and adopt a different one
  • quarterly goals where almost every one has some amount of “with AI” in it
  • letters from the CEO asking which teams are using AI to implement features from ticket descriptions, or (inspired by the news) to run flocks of agents — asking for success stories but never for failures
  • a team creating a review pipeline for AI-generated output in our product, planning to review the quality of the output… using AI
  • teammates writing code and designs and sending them for review without verifying functionality or pruning irrelevant portions, despite a stated policy that everyone is responsible for reviewing AI output

Is all the resistance to overuse of AI grassroots, and is the pressure for rampant adoption uniform among executives/investors? Or are some companies or verticals not drinking the Kool-Aid?

  • LifeInMultipleChoice@lemmy.world · 3 days ago

    We have AI built into some tools, I believe, but I have never been told I had to use them. The truth is they don’t work all the time for every situation, and the client is more worried about user data accidentally getting scooped up. They spend their time warning us never to enter any user’s information anywhere; even notating that a user has a limitation that explains why we performed a task in a non-standard fashion is completely off the table.

    So if someone said, “I am vision impaired,” someone reading our notes would probably wonder: why the f didn’t they just do a, b, c? It would have been much easier. But the client is worried that if those notes get integrated into something an AI gobbles up in the future, they could get sued for that user’s information somehow being linked back to them, since it could be considered medical data, I guess.

    The funny part is, if an AI does use that data for learning now, it may start trying to instruct or perform tasks based on highly inefficient solutions designed to assist a specific disability.