Checklist 311: Your Cyberpunk Life
On this week’s Checklist:
- AI helps humans commit crimes
- Hackable digital license plates
- Robot security guards have arrived
ChatGPT, bad guys, and you
ChatGPT is a large language model chatbot developed by OpenAI and based on GPT-3.5. It has a remarkable ability to carry on a conversational dialogue, providing responses that can seem surprisingly human.
The chatbot is already creating a stir: human workers are afraid it will replace them; teachers fear that students will use it to cheat on assignments; and some speculate that it could one day challenge Google’s search dominance.
Perhaps unsurprisingly, cybercriminals have seized on ChatGPT as a way to commit crimes more efficiently. SC Magazine reports that cyber threat intelligence firm Check Point Research has found bad guys using the AI to create “infostealers, encryption tools, and phishing lures to use in hacking and fraud campaigns.”
Bad actors are using the tool to build rudimentary malware and encryption/decryption programs—and they’re able to do this even when they have no prior coding experience.
More worryingly for the average computer user, ChatGPT can also write convincing (and grammatically correct) phishing emails. For years, cybersecurity experts have recommended looking for poorly written emails as a reliable way to spot a phishing attack. But ChatGPT can produce phishing emails that are almost indistinguishable from genuine messages.
So what can you do to protect yourself in this brave new world?
The answer, comfortingly, is to keep doing all of the things you’re already supposed to be doing:
- Use strong, unique passwords to protect all of your accounts.
- Protect all accounts with two-factor authentication.
- Don’t reply to unsolicited emails directly—investigate the issue they raise on your own.
The trouble with electronic license plates
California is now offering drivers the option to purchase digital license plates. They sound like fun: You can digitally customize the lower portion of your car’s plate with a personalized message.
A team of security researchers managed to gain “super administrative access” into Reviver (…) That access allowed them to track the physical GPS location of all Reviver customers and change a section of text at the bottom of the license plate designed for personalized messages to whatever they wished…
To its credit, Reviver responded to the issue quickly, patching the vulnerability in less than 24 hours, and the company says it will add additional safeguards to its technology.
But once again, it goes to show that “smart” things aren’t always the smartest choice!
Are we ready for RoboCop?
ZDNet is reporting on a recent attempt to introduce robot security guards at a San Francisco location owned by power company Pacific Gas & Electric (PG&E). The robots are made by Knightscope, a manufacturer of autonomous security technology.
PG&E’s motivation seems to have been financial. According to the ZDNet piece, switching to roboguards would have saved the company around $9 per hour. For a 24x7x365 task like security, that adds up!
Locals, however, were less than impressed, finding the robots overzealous and noisy. PG&E seems to have accepted the negative feedback. After trialing the robot guards at their site, the company decided not to move forward with the program. In a statement to the San Francisco Standard, a PG&E spokesperson said:
After some initial testing of the Knightscope unit and proactive discussions with the city on this matter, PG&E will not be continuing with plans to deploy the unit at our Folsom location.
It seems John and Jane Q. Public just aren’t willing to accept robot security yet—which is probably a hopeful sign!