Police are introducing new technologies without understanding their impact on people

It matters because police departments are racing ahead and starting to use drones anyway, for everything from surveillance and intelligence gathering to tracking criminals.

Last week, San Francisco approved the use of robots, including drones, that can kill people in certain emergencies, such as when dealing with a mass shooter. In the UK, most police drones have thermal imaging cameras that can detect how many people are inside houses, says Pósch. This has been used for all sorts of purposes: catching human traffickers or rogue landlords, and even targeting people suspected of holding parties during the Covid-19 lockdown.

Virtual reality will let researchers test the technology in a controlled, safe way with many test subjects, says Pósch.

Even though I knew I was in a VR environment, I found the encounter with the drone unnerving. My opinion of these drones hasn't improved, despite meeting a supposedly polite, human-operated one (the experiment also includes more aggressive modes that I haven't experienced).

Ultimately, whether drones are "polite" or "rude" doesn't make much of a difference, says Christian Enemark, a professor at the University of Southampton who specializes in the ethics of war and drones and was not involved in the research. That's because the use of drones is itself a "reminder that the police are not here, whether they can't be bothered to be here or they're too afraid to be here," he says.

“So maybe every encounter is fundamentally disrespectful.”

Deeper Learning

GPT-4 is coming, but OpenAI is still fixing GPT-3

The internet is buzzing with excitement over the latest iteration of GPT-3, the famous large language model from AI lab OpenAI. The latest demo, ChatGPT, answers people's questions through back-and-forth dialogue. Since its launch last Wednesday, the demo has crossed 1 million users. Read Will Douglas Heaven's story here.

GPT-3 is a confident bullshitter and can easily be prompted to say toxic things. OpenAI says it has fixed many of these problems with ChatGPT, which answers follow-up questions, admits its mistakes, challenges false premises, and declines inappropriate requests. It even refuses to answer some questions, such as how to be evil or how to break into someone's house.
