Who is sourcing your crowd, and do you trust them?

If you’ve ever heard of The Wisdom of Crowds (Surowiecki, 2004), it argues that many reasonably informed people will collectively outperform a few experts: crowdsourcing, in modern vernacular. The many are smarter than the few. But what if the many were almost infinite? What if the few represented the majority of the population? The dynamics change. How can the infinite be more informed than the majority it relies upon?

This is the metaphor for artificial intelligence, where facts are processed almost infinitely for our use. But how accurate is the intelligence we are relying upon, and are we placing artificial trust in it for important decisions?

AI relies on available facts and machine learning (learned transactions) to process them for our use: in a domestic setting, our personal banking or insurance data for self-service; in our personal lives, the images available to Adobe tools to suggest and enhance photos; and in business, the data available to make real-time decisions. This is different from something like “Waze”, which relies on crowds of people updating and informing others about traffic, an example of crowdsourcing in its purest form. We place our trust in AI to give us answers, or “outcomes”, so that we can make decisions. I say outcomes because these are all examples where we humans are presented with an outcome upon which to make a decision. AI is not actually making any decisions at all unless we allow it to, or ask it to return a value based on an input. We people, the majority, are the final arbiters of what we want, approve or agree to. Currently.

Depending on which study you read, AI accuracy is currently around 80% (Popular Science, IGI Global, et al., 2023), but it is falling, having been around 95% a year ago. Along with it, consumer confidence in AI has dropped to around 50% (Digiday, 2024), and a growing number of people are as concerned by it as they are excited (Boston Consulting Group, 2023). Interpreting the data and looking at real-world experience, we know it is less accurate because we, the arbiters, have to make more decisions to obtain the correct outcome, and this in turn makes us less confident in the responses. One feeds directly into the other.

Don’t believe me? Try using an AI web chat for a non-standard question. After the quirky, funky introduction and the friendly offer of help, you’ll quickly see the limit of its resources and be directed to log in and self-serve, review the FAQs or speak to someone. Try asking it to edit a photo. It will provide a number of different options, each with slight imperfections. Those imperfections are visual triggers of fallibility. This happens because non-standard requests are just that: non-standard. There isn’t a defined outcome for AI to provide, so it provides options based on what it knows, what it has learned or what it has sourced. Check Instagram now for your favourite photographers: their posts carry an AI watermark showing what is a true image and what has been manipulated. This is because trust is falling and confidence is being lost. I should say here that this isn’t an article about machine breaking (Luddism, 1811) versus machine learning. Quite the opposite.

But consider for a moment: what if the responses from AI were consistently accurate, yet the data it used was false or deliberately compromised, by design? We’d spot it. Wouldn’t we? We’d reject it, unless it was giving us the responses we wanted to hear or see, at which point we’d continue.

At this point I’m talking about the security of the data that AI uses and our confidence in it. How confident are we in the integrity of our crowd? I recently saw an article (Rubrik, 2024) about the number of cyber threats and intrusions that happen on a daily basis, and it was mind-boggling: a ransomware attack every 11 seconds. These threats happen repeatedly, and the reason we largely don’t panic about them is that we don’t know about them until something public happens, such as a data breach or a distributed denial of service (DDoS) attack. Access to a service is denied and it falls over, or it is taken down either by the hacker or by the company that runs the service.

Consider, if you will, the reverse happening. Call it a “Distributed Allowance of Service”: based on false data maliciously fed to AI, and in turn to us, customers and web services are freely allowed to make false use of a protected service that appears healthy. It could do far more damage to a company, public service or personal reputation in seconds than we have ever seen before, and far quicker than we could react. I’ll happily debate this point, but if you have a service background, check your response times for a significant outage; if you’re an IT director, think of the one or two people you rely upon at times such as these. Services won’t be able to react quickly enough. Remember Barings Bank, and more recently Lehman Brothers? They disappeared overnight.

Threat intrusions have risen by 27% (ServiceNow, 2024) in the past 12 months, while investment in AI is set to rise by 37% (Forbes, 2023) by the end of the decade; together, that points to a significant increase in cyber security events on top of those currently anticipated. ServiceNow recently surveyed organisations about their confidence in repelling a cyber attack, as did other respected organisations, with 60% (Harvard, Capgemini, 2023) conceding that they were unprepared to deal with one or, even worse, did not understand the security risks posed by AI (Splunk, 2024) while investing heavily in it.

Opportunities in AI offer excitement, concern and conflicted views (Boston Consulting Group, Global Customer Sentiment Survey 2023)

So, going back to the start of this opinion piece: the wisdom of crowds, and the many being smarter than the few. I firmly believe in that metaphor, but firstly, we should protect the data that we rely upon and ensure we have complete confidence in it, as well as in its origin. Currently only 40% of companies think they could protect it and only 50% of people have confidence in it; the other 10% probably don’t have an opinion on it yet. Secondly, and most importantly, intelligence should not be solely artificial, especially given that we humans are the final arbiters of the decisions made from the outcomes processed for us. Informed decision making is paramount. Finally, I said that this wasn’t an opinion piece about machine breaking, and although it may seem that way, it isn’t. What it is, is a small jolt or reminder as you read this and are offered another advert for ChatGPT or Grok while you’re scrolling through X, or when you’re speaking to your friendly bot and discussing your financial affairs. Guard your data closely and continue to make informed decisions.

The wisdom of crowds is still very much valid, but make sure you know who is sourcing your crowd for you.

References:

https://www.bcg.com/publications/2024/consumers-know-more-about-ai-than-businesses-think

https://digiday.com/media/ai-briefing-falling-trust-in-ai-poses-a-new-set-of-challenges/

https://www.popsci.com/technology/chatgpt-human-inaccurate/

https://botpress.com/blog/how-accurate-is-chatgpt-in-providing-information-or-answers

https://www.igi-global.com/dictionary/systematic-literature-review/329

https://en.wikipedia.org/wiki/The_Wisdom_of_Crowds

https://www.google.com/search?q=luddite+definition

https://www.capgemini.com/wp-content/uploads/2019/07/AI-in-Cybersecurity_Report_20190711_V06.pdf

https://www.splunk.com/en_us/form/state-of-security.html
