Halfway through 2023 we are saturated with AI machine-learning systems producing text and images. The ethics of how these systems were trained, and continue to be trained, raise serious concerns about privacy and the value of labour. If a system appears to be "good enough", is that a good enough reason to replace people with advanced skills and knowledge? As more generated text and images flood the internet, it will become increasingly hard to believe anything you see or read, because a feedback loop of generated content being used to generate new content can only compound errors. And that's before anyone with an agenda gets involved.

So what do we do? We recommend that those using, or considering using, these systems get an honest understanding of the technology: not from a salesperson, and not from someone invested in the technology. Find out what the system has been trained on, what happens to any data you might put into it, whether the interface is secure, and what the estimated output accuracy is. And understand that AI probably doesn't work, or produce content, in the way you think it does.

There's nothing wrong with using great tools, but a lot can go wrong by not applying the due diligence you would apply when working with any other suppliers or clients. We'll be extra pleased if you declare any use of AI programmes, as transparency helps everyone.