3 examples of 'scary AI'
Public concern continues to grow alongside the development of the technology. Many argue that its capabilities are evolving too quickly, without sufficient checks and balances to protect civil liberties.
Electronic Specifier looks back at three examples of technology testing boundaries and nearly crossing the line.
AI’s obstruction of justice
A recent House of Lords report, entitled ‘Technology rules? The advent of new technology in the justice system’, examined the use of technology in police investigations. In an echo of Orwell’s ‘surveillance society’, CCTV is now commonplace on both private and public property.
Beyond being recorded around the clock, much of this footage is now processed automatically by artificial intelligence, and facial recognition can exacerbate discrimination and embed biases into algorithmic outcomes.
Worries over the proliferation of AI tools in the justice system have surfaced. The House of Lords Justice and Home Affairs Committee has found that many uses of AI lack proper oversight and could seriously infringe human rights and civil liberties.
The committee has called for the more responsible implementation and use of AI, including a duty of candour on the police to ensure full transparency.
Read the full article here.
Voice assistants: hindering, not helping
Programmed to ‘come to life’ at the sound of a wake word, smart speakers and voice assistants need access to the world around them. Whether your device responds to ‘Alexa’, ‘Hey Google’ or ‘Hey Siri’, most of us are well acquainted with this smart technology, which can now be found in many cars and homes.
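To see why the microphone must stay open at all times, here is a minimal sketch of the wake-word pattern in Python. Everything in it (the frame values, the string-matching ‘detector’ and the upload function) is an invented stand-in for illustration; real assistants run small neural keyword spotters on-device and only stream audio to the cloud once the trigger fires.

```python
"""Minimal sketch of the always-listening wake-word loop.

All names below are hypothetical stand-ins for illustration; real
assistants use on-device neural keyword spotters, not string matching.
"""
import collections

RING_FRAMES = 100  # keep a few seconds of rolling audio context

def detect_wake_word(frame):
    # Stand-in: a real device runs a small neural model here, on-device.
    return frame == b"ALEXA"

def stream_to_cloud(frames):
    # Stand-in: after the wake word, audio leaves the device for full
    # cloud speech recognition, which is where questions about who is
    # listening and how the data is used begin.
    print(f"uploading {len(frames)} frames")

def wake_word_loop(mic_frames):
    # The microphone is sampled continuously, even while 'idle'.
    ring = collections.deque(maxlen=RING_FRAMES)
    utterance = []
    for frame in mic_frames:
        ring.append(frame)
        if detect_wake_word(frame):
            # Keep a little context from just before the trigger.
            utterance = list(ring)
        elif utterance:
            utterance.append(frame)
            if frame == b"":  # stand-in for end-of-utterance silence
                stream_to_cloud(utterance)
                utterance = []

# Ambient audio is seen only locally; the wake word and what follows
# it is what gets uploaded.
wake_word_loop([b"noise", b"noise", b"ALEXA", b"turn", b"on", b"lights", b""])
```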
Research from Accenture UK found that 40% of voice assistant users were concerned about who was listening and how their data was used.
More people are expressing concern that a voice assistant could record all of their conversations, and that those recordings might later be used against them. Such a scenario has obvious ethical ramifications, and data privacy and protection laws are continually being updated to reflect changing attitudes.
Read the full article here.
Facial recognition in the war
In recent weeks, Ukraine has been using Clearview AI’s facial recognition technology to identify the dead and reunite refugees with their separated families.
Whilst this may seem to exemplify the way technology can be used for the greater good, Clearview has come under criticism for its failure to comply with UK data protection laws. It has been found to gather data illegally, without people’s knowledge or consent, by scraping publicly available sources such as social media platforms.
Like everything, AI is not perfect. Last year a man was wrongly arrested because of a faulty facial recognition match. Many algorithms are inherently biased, and some software performs markedly worse across differences in age, race and ethnicity.
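The mechanics behind such misidentification are straightforward to sketch. Below is a minimal, hypothetical illustration of threshold-based face matching in Python: faces are reduced to embedding vectors, compared by cosine similarity, and declared a match above a single global cutoff. All of the vectors and the threshold are invented for illustration; the point is that if an embedding model separates faces less reliably for some demographic groups, one global threshold produces more false matches, and thus more wrongful identifications, for those groups.

```python
"""Sketch of threshold-based face matching, with invented numbers.

Face recognition systems reduce each face to an embedding vector and
treat a similarity score above a single global threshold as a match.
If the model separates faces less well for some demographic groups,
that one threshold yields more false matches for those groups.
"""
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.80  # one global cutoff, applied to everyone

# Invented 3-dimensional embeddings (real systems use hundreds of
# dimensions): 'probe' is a suspect photo, the others are watchlist
# entries.
probe            = [0.90, 0.10, 0.30]
same_person      = [0.88, 0.12, 0.28]  # genuinely the same face
different_person = [0.80, 0.30, 0.40]  # a different face

for name, emb in [("same_person", same_person),
                  ("different_person", different_person)]:
    score = cosine_similarity(probe, emb)
    verdict = "MATCH" if score >= THRESHOLD else "no match"
    print(f"{name}: similarity={score:.3f} -> {verdict}")

# Here 'different_person' also clears the threshold: a false match.
# When a model's embeddings for an under-represented group cluster
# more tightly, such false matches become more likely, and in a
# policing context a false match can mean a wrongful arrest.
```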
Read the full article here.