However, can we really trust this technology as it stands?

Could we, in principle, ever trust it?

Established search providers, notably Microsoft's Bing and Google, have added these technologies to their existing search engines.

[Image: A robot librarian searching through a filing cabinet for files. Sydney Louw Butler / How-To Geek / MidJourney]

With traditional search results, we know how the information was produced, and we know who was responsible for it.

Of course, none of these systems are foolproof, but someone is always responsible.

Generative AI systems, by contrast, are so far prone to behaving in ways that even their creators can't predict.

Even if something like Google's Gemini were 99% accurate and trustworthy, we would still have a problem.

Consider that Google handles around 8.5 billion searches per day. Even a 1% error rate would translate to roughly 85 million unreliable answers every single day.

Many promising technologies have been mothballed because of a lack of public trust.

Nuclear accidents have made people wary of nuclear energy.

We tend to focus on the exceptions rather than the rule, which is why people play the lottery!

Ultimately, creators of systems like these can’t wash their hands of responsibility when those systems cause harm.

The buck has to stop somewhere, and humans will have to be responsible in some way.