Smart Pet Feeders, AI Abuse, Ransomware: November's Cybersecurity Roundup
November 2023 saw a mix of developments in the world of information security. Researchers found vulnerabilities in smart pet feeders, while Google took legal action against AI abusers. Meanwhile, a ransomware group exposed its tactics, and concerns about large language models like ChatGPT were raised.
In the realm of bizarre attack methods, researchers uncovered two security flaws in popular smart pet feeders. The first involved hard-coded MQTT broker credentials, which allowed an attacker who compromised a single feeder to pivot to other devices on the network or tamper with feeding schedules. The second stemmed from an insecure firmware update process, enabling unauthorized code execution, modification of device settings, and theft of sensitive data, including live video feeds.
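To see why hard-coded credentials are so dangerous, consider that anyone with a copy of the firmware can simply extract them. The sketch below is a hypothetical illustration (the firmware blob, broker hostname, and credential strings are invented, not from the actual feeder research): it mimics what the `strings` utility does, pulling printable text out of a firmware image and flagging anything that looks like an MQTT connection setting.

```python
import re

def find_hardcoded_mqtt_creds(firmware: bytes) -> list[str]:
    """Scan a firmware image for printable strings that look like
    MQTT connection settings (a common home for hard-coded credentials)."""
    # Extract runs of 6+ printable ASCII bytes, as the `strings` tool would.
    candidates = re.findall(rb"[\x20-\x7e]{6,}", firmware)
    suspicious = []
    for raw in candidates:
        text = raw.decode("ascii")
        # Flag anything resembling an MQTT URI or a credential key=value pair.
        if re.search(r"mqtts?://|mqtt_(user|pass)|password\s*=", text, re.IGNORECASE):
            suspicious.append(text)
    return suspicious

# Simulated firmware blob with baked-in broker credentials (hypothetical values).
blob = (b"\x00\x01ELF...mqtt://broker.example.com:1883\x00"
        b"mqtt_user=feeder\x00mqtt_pass=hunter2\x00...\xff")
for hit in find_hardcoded_mqtt_creds(blob):
    print(hit)
```

Because every unit ships with the same baked-in secrets, extracting them from one device grants access to the vendor's broker for all of them; per-device provisioned credentials would contain the blast radius to a single feeder.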
In November, a team of researchers disclosed an attack method targeting ChatGPT. By prompting the model to repeat a single word indefinitely, they could cause it to diverge and regurgitate memorized text, and they estimated that around a gigabyte of its training data could be extracted this way. However, a research report published by Sophos around the same time suggested that many threat actors remain reluctant to use large language models for attacks, citing concerns about societal risks and fears of being scammed themselves.
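The interesting moment in that attack is the point where the model stops complying with the repetition request and starts emitting something else. The toy helper below is not the researchers' code; it is a sketch of how one might locate that divergent tail in a transcript, with the sample output entirely invented for illustration.

```python
def divergence_suffix(output: str, word: str) -> str:
    """Return the portion of a model transcript after it stops repeating `word`.

    In the published attack, ChatGPT was asked to repeat one word forever;
    after many repetitions the model sometimes 'diverged' and emitted
    memorized text. This toy helper isolates that divergent tail.
    """
    tokens = output.split()
    i = 0
    # Skip leading repetitions of the target word (ignoring case/punctuation).
    while i < len(tokens) and tokens[i].strip(".,!?").lower() == word.lower():
        i += 1
    return " ".join(tokens[i:])

# Simulated transcript (hypothetical -- not real model output).
sample = "poem poem poem poem John Doe, 123 Main St, jdoe@example.com"
print(divergence_suffix(sample, "poem"))
# -> John Doe, 123 Main St, jdoe@example.com
```

In the real study the divergent tails were then checked against known web corpora to confirm they were genuinely memorized training data rather than plausible-sounding invention.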
Google revealed it is taking legal action against two groups of scammers. One exploited interest in Google's AI tools to distribute malware, while the other abused copyright takedown processes to get competitors' websites removed. In a separate incident, the BlackCat/ALPHV group reported its own compromise of MeridianLink to the SEC's 'Tips, Complaints, and Referrals' site, a novel tactic for pressuring victims into paying ransom demands.
These events highlight the ever-evolving landscape of cyber threats, with attackers finding new vulnerabilities and methods, while major players like Google take action against misuse. As technology advances, so too must our understanding and protection against these threats.