Google announced on Wednesday morning that it has taken another step on the journey toward a passwordless future by rolling out support for passkey login to Android and Chrome. Passkeys, which let you use your phone or computer’s built-in authentication systems instead of a traditional password, have support from all the major tech companies, with Apple, Google, and Microsoft pledging to bring the feature to their OSes.
Essentially, a passkey is a credential stored on a device, like your phone or computer, that confirms to a website or application that you are who you say you are (though Google is still working on the passkey API for native Android apps). You verify your identity to the device, and it can then securely log you in to the sites and services you use without relying on a password, which could be stolen, reused across multiple sites, or phished away by a fake customer service agent or a look-alike login site reached through the wrong link.
A passkey can’t be easily stolen in the same way that a password can, and because using one relies on access to a physical device, it combines the security of hardware two-factor authentication with the familiarity of smartphone use.
While the feature is currently still mostly for early adopters, the stable launch coming later this year will let people log in to supported websites using their device’s fingerprint reader or other authentication factors instead of a password.
Google made the passkey announcement in a post on the Android Developers Blog, addressed to both developers and device end users, who’ll be able to take advantage of the new feature in different ways. Now that all the platforms people use are starting to support passkeys, developers have the incentive and opportunity to make sure they actually work before the features are available to everyone.
Web developers can build support for passkey login on sites they operate by using the WebAuthn API and testing in the Chrome Canary browser or the Google Play Services beta program. For early adopters who want to test on Android, the feature has already rolled out.
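To make the flow concrete, here is a minimal server-side sketch using the third-party py_webauthn package (installed from PyPI as webauthn). It only covers generating registration options; the exact keyword arguments vary between library versions, so treat the names below as assumptions rather than a drop-in implementation.

```python
# Minimal sketch of the server side of WebAuthn passkey registration, using
# the third-party py_webauthn package (pip install webauthn). Keyword argument
# names vary between library versions; treat them as assumptions.
from webauthn import generate_registration_options, options_to_json

# Step 1: the server generates a challenge and registration options that the
# browser passes to navigator.credentials.create(), which invokes the platform
# authenticator (for example, Android's screen-lock or fingerprint prompt).
options = generate_registration_options(
    rp_id="example.com",            # the relying party: your site's domain
    rp_name="Example Site",
    user_id=b"user-1234",           # a stable, opaque handle for the account
    user_name="alice@example.com",
)

# Step 2: serialize the options and send them to the client. The credential the
# authenticator returns is later checked with verify_registration_response(),
# and the stored public key is what future passkey logins are verified against.
print(options_to_json(options))
```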
Android passkeys are stored locally on a phone, but they are also backed up to the cloud in case the device is lost. Google has an in-depth explanation of how the system works on its security blog if you want to do a deep dive.
One of the most significant aspects of the passkey system is its cross-platform compatibility. A passkey saved on a phone can be used to authorize a web login on another nearby device, which means that (as Google has been keen to point out) an Android phone owner can sign in to a passkey-supporting website from Safari on a Mac. In terms of the user experience, this will involve scanning a QR code in a pop-up shown by the desktop site and confirming on the phone that the passkey login option should be used.
This compatibility across platforms is possible because passkey technology is built on shared, underlying industry standards known as FIDO2 and Web Authentication Level 3 rather than being a proprietary technology.
Passkey logins aren’t widely implemented yet, though adoption is growing and is scheduled to roll out to the major platforms throughout this year and early next year. iOS 16 and the upcoming macOS Ventura support them, as does the Dashlane password manager. As for what you can log into using passkeys, there are a few apps and websites that support them, such as Dropbox and Best Buy, but based on our tests, you have to go out of your way to actually use the feature; it’s not the default.
Overall, Google is optimistic about bringing forward the timeline of a passwordless future. A forthcoming update will bring changes to Android that allow third-party credential managers (presumably the likes of LastPass, 1Password, and others) to support passkeys for their users.
“Google remains committed to a world where users can choose where their passwords, and now passkeys, are stored,” the blog authors write. “Today is another important milestone, but our work is not done.”
As of October 6th, Twitter’s Birdwatch community moderation program has been expanded to all US users.
It’s a big step for Birdwatch, which officially launched in beta in January 2021, and a notable escalation of Twitter’s efforts to reduce the spread of misinformation on its platform. But as the scheme expands, data reviewed by The Verge suggests that the most common topics being fact-checked are already covered by Twitter’s misinformation policies, raising new questions about the overall impact of the program.
At its core, the promise of Birdwatch is to “decentralize” the process of fact-checking misinformation, putting power into the hands of the community of users rather than a tech company. But fact-checking encompasses a huge range of topics, from trivial and easily debunked rumors to complex claims that may hinge on fundamental uncertainties in the scientific process.
In public statements, Twitter executives involved in the program have focused on the easier decisions. In a call with reporters last month, Keith Coleman, vice president of product at Twitter, suggested that the strength of Birdwatch was in addressing statements that were not covered by Twitter’s misinformation policies or weren’t serious enough to be assigned in-house fact-checking resources. “It can speak to the internet’s random curiosities that pop up,” Gizmodo quotes Coleman as saying. “Like, is there a giant void in space? Or, is this bat actually the size of a human?”
We downloaded Birdwatch data up to September 20th. This dataset contained 37,741 notes in total, of which 32,731 were unique.
We used Python’s Natural Language Toolkit (NLTK) library to parse the notes and extract the most common significant words appearing in them.
To do this, we discarded common stopwords like “and,” “but,” “there,” “which,” and “about” and excluded words that were frequently used in the process of constructing a fact check, such as “tweet,” “source,” “claims,” “evidence,” and “article.” We also ignored words inside URLs — which Twitter includes as part of the note text — and reduced plurals to their singular form (so “cars” would be counted as “car”).
The processed data gives us a good overview of topics that are commonly addressed or have context added to them using the Birdwatch system.
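For readers who want to reproduce this kind of analysis, the sketch below shows the general approach, assuming the notes export is a TSV file whose note text lives in a summary column (the file name and column name here are assumptions; check the header of the file you download).

```python
# Rough sketch of the word-frequency method described above: tokenize the note
# text, drop stopwords and fact-check boilerplate, ignore URLs, reduce plurals
# to singular, and count what remains. File and column names are assumptions.
import re
from collections import Counter

import nltk
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords")
nltk.download("wordnet")

notes = pd.read_csv("notes.tsv", sep="\t")            # Birdwatch notes export
text = " ".join(notes["summary"].dropna().str.lower())
text = re.sub(r"https?://\S+", " ", text)              # drop words inside URLs

ignore = set(stopwords.words("english")) | {
    "tweet", "source", "claims", "evidence", "article",  # fact-check boilerplate
}
lemmatizer = WordNetLemmatizer()                        # "cars" -> "car"

words = [
    lemmatizer.lemmatize(word)
    for word in re.findall(r"[a-z']+", text)
    if word not in ignore and len(word) > 2
]
print(Counter(words).most_common(25))
```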
➡️ To explore the full data yourself, you can browse our interactive database of Birdwatch notes.
But cases from the beta phase of the program show that many Birdwatch users are attempting to tackle more serious misinformation on the platform, overlapping significantly with Twitter’s existing policies. Birdwatch data released by Twitter shows that COVID-related topics are by far the most common subject addressed in Birdwatch notes. What’s more, many of the accounts that posted the annotated tweets have since been suspended, suggesting that Twitter’s internal review process is catching content violations and taking action.
As part of its broader open-source efforts, Twitter maintains a regularly updated dataset of all Birdwatch notes, freely available to download from the project blog. The Verge analyzed this data, looking through a dataset that spanned from January 22nd, 2021, to September 20th, 2022. Using computational tools to collate and summarize the data, we can gain insights into the major topics of Birdwatch notes that would be hard to glean from manual review.
Data shows that Birdwatch users have spent a lot of time reviewing tweets related to COVID, vaccination, and the government’s response to the pandemic. The word frequency list shows us that “COVID” is the most common subject term, with the related term “vaccine” ranking at number three on the list.
Of these notes, the types of claims commonly fact-checked evolve over time as public understanding of the pandemic changes. Tweets from 2021 address false narratives claiming that Dr. Anthony Fauci somehow had a personal role in creating the novel coronavirus or casting doubt on the safety and effectiveness of vaccines as they became available.
Other Birdwatch notes from this time address unproven or dangerous treatments for COVID, like ivermectin and hydroxychloroquine.
While some of the more outlandish COVID myths are easy to fact-check — like the idea that the virus was a hoax, is mostly harmless, or gets spread by 5G towers — other claims about transmission, severity, and mortality can be harder to definitively correct.
For example, as vaccines rolled out in January 2021, one Birdwatch user tried to add context to an argument over one vaccine brand’s effectiveness at preventing hospitalization vs. preventing any infection whatsoever. New Jersey Governor Phil Murphy tweeted that trial data for the Johnson & Johnson vaccine showed “COMPLETE protection against hospitalization and death” and provoked an angry response from a statistician who linked to trial data showing only “66% efficacy” from the vaccine.
“The [tweet] author is confusing the reported efficacy of preventing hospitalization and death, with the reported overall efficacy of preventing infection,” a Birdwatch note added helpfully, referencing Bloomberg coverage that clearly distinguished between the metrics.
More questionably, another Birdwatch user attempted to fact-check a claim widely reported by mainstream news outlets, using a blog post on a prepping website as a citation. Where news outlets followed the CDC’s lead in reporting that the omicron variant made up 73 percent of new infections as of December 2021, a blog post on ThePrepared.com argued that the claim may have stemmed from an error in the CDC’s statistical modeling. The blog post was tightly argued, but without confirmation from a more reliable and vetted source, it’s hard to know whether the annotation helped the situation or simply muddied the waters.
Birdwatch users rated tweets like these as some of the most problematic to deal with. (By filling in a survey when creating a note, users can rate tweets on four binary values qualifying how misleading, believable, harmful, and difficult to fact-check the claims are.) It’s clear that accurate, accessible communication of scientific findings is a difficult task, but public health outcomes depend on surfacing accurate health advice and preventing bad advice from proliferating. Experts agree that platforms need strong, clear, and coordinated standards for addressing misinformation about the pandemic, and it seems unlikely that community-driven moderation meets this bar.
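Those self-evaluation answers are stored alongside each note in the public dataset, so claims like this can be checked directly. The sketch below assumes the column names (classification, believable, harmful, validationDifficulty) match Twitter’s published Birdwatch schema; verify them against the file you download before relying on the output.

```python
# Sketch: tally the self-evaluation survey answers attached to each note.
# Column names are assumptions based on Twitter's published Birdwatch schema.
import pandas as pd

notes = pd.read_csv("notes.tsv", sep="\t")

survey_columns = ["classification", "believable", "harmful", "validationDifficulty"]
for column in survey_columns:
    if column in notes.columns:
        print(f"\n{column}:")
        print(notes[column].value_counts(dropna=True))
    else:
        print(f"\n{column}: not present in this export")
```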
Though COVID is a main topic of Birdwatch notes, it’s far from the only one.
In the word frequency list, “earthquake” and “prediction” rank highly due to a large number of identically worded notes that were attached to tweets from accounts that falsely claim to be able to predict earthquakes around the world.
There’s no evidence that earthquakes can reliably be predicted, but inaccurate earthquake predictions keep going viral online. With 48K followers at the time of writing, the @Quakeprediction Twitter account is one of the worst offenders, posting a steady stream of predictions of elevated earthquake risk in California. One Birdwatch user seems to have taken it upon themselves to attach a warning note to more than 1,300 tweets from this and other earthquake prediction accounts, each time linking to a debunk from the US Geological Survey explaining that scientists have never predicted an earthquake.
It’s unclear why the user focused on earthquakes, but the end result is a human reviewer ironically behaving more like automated fact-checking software: looking for a pattern in tweets and responding with an identical action each time.
The data also clearly shows ongoing efforts to contest the results of the 2020 election — a phenomenon that has plagued many other online platforms.
Further down the list of most common words are the terms “Trump,” “election,” and “Biden.” Many notes that contain these terms address claims that Donald Trump won the 2020 election or, conversely, that Joe Biden lost. Though pervasive, claims like these are easy to fact-check due to the overwhelming amount of evidence against widespread electoral fraud.
“Joe Biden won the election. This is the big lie continued,” reads a note attached to a tweet by white nationalist-linked Arizona state Senator Wendy Rogers, which falsely claims that fraud occurred in highly populated areas.
“Mail-in voting fraud is almost impossible to commit, and there is absolutely no evidence the election results of 2020 are the result of fraud,” read another note attached to a false tweet by Irene Armendariz-Jackson, a Republican candidate running for Beto O’Rourke’s former congressional seat in El Paso, Texas.
Another user wrote simply, “The election was not rigged. Trump lost.” For this note, as with many other cases, the original tweets simply can’t be reviewed: looking up the tweet ID results in a blank page and a message that the account has been suspended.
While Birdwatch users have annotated many tweets contesting the results of the 2020 election, the self-evaluation surveys rate these tweets as being less challenging to address, given the overwhelming amount of evidence supporting the Biden victory.
Given the large number of suspended accounts, it seems clear that either Twitter’s algorithms or its human moderation team are also finding it easy to flag and remove the same content.
So far, data from the Birdwatch program shows a strong community of volunteer fact-checkers who are attempting to take on difficult problems. But the evidence also suggests a large degree of overlap in the type of tweets these volunteers are addressing and content that is already covered under Twitter’s existing misinformation policies, raising questions as to whether fact-checking notes will have a significant impact. (Twitter maintains that Birdwatch should be additive on top of existing fact-checking initiatives rather than any kind of replacement for misinformation controls.)
Twitter says that preliminary results of the program look good: the company claims that people who see fact-check notes attached to tweets are 20–40 percent less likely to agree with the substance of a potentially misleading tweet than someone who sees only the tweet. It’s a promising finding, but by implication, many viewers of the tweet are still being taken in by falsehoods.
In response to The Verge’s reporting, Twitter spokesperson Tatiana Britt said that there was a distinction between tweets that might have a Birdwatch note added for context, and those that were defined as misinformation.
“Not all Tweets on topics such as COVID-19 or elections fall within our misinfo policies,” Britt said. “As you’ll see in the ‘Rated Helpful’ tab on the Birdwatch site, most Birdwatch notes that have been identified as Helpful and become visible across Twitter don’t overlap with content covered under Twitter’s misinfo policies.”
The full set of Birdwatch notes is not shown to all Twitter users, Britt said, as notes that are displayed below tweets are only those that have been rated helpful by people from different points of view.
Click here to browse our interactive database of Birdwatch notes.
Update October 10th, 3:50PM ET: Article updated with quote from Twitter.
It’s been a rough year for cryptocurrency, and things are not looking any better after tokens worth more than half a billion dollars were stolen from crypto giant Binance on Thursday night.
The exploit hit the Binance Bridge, a cross-chain bridge that allows for the transfer of tokens between two related blockchains that are operated by the Binance cryptocurrency exchange and collectively known as BNB Chain. According to well-known smart contract analyst samczsun, the attacker was able to forge transactions that allowed them to withdraw two million BNB tokens from the bridge, worth roughly $570 million.
Funds estimated at around $87 million were removed from the BNB ecosystem altogether, but the remaining funds could not be transferred immediately because BNB Chain took the drastic step of halting the blockchain entirely, meaning no transactions whatsoever could be processed.
“An exploit on a cross-chain bridge, BSC Token Hub, resulted in extra BNB,” Binance CEO Changpeng Zhao said in a tweet posted soon after the attack. “We have asked all validators to temporarily suspend BSC [Binance Smart Chain].”
A tweet from the BNB Chain account said that the blockchain was running again as of the early hours of Friday morning. In an “ecosystem update,” the BNB Chain team apologized for the exploit, and said that the project would hold a series of on-chain governance votes to determine whether to freeze the hacked funds, and if a bounty should be offered for catching the hackers responsible.
“Looking at the broader picture, we have seen a series of attacks on targeting vulnerabilities in cross-chain bridges,” the blog post read. “We will openly share the details of the postmortem and all lessons on how to implement more advanced security measures to shore-up these vulnerabilities.”
In recent years, cross-chain bridges have become the most common site of ultra-high-value hacks, partly because they store very large sums of cryptocurrency tokens at any given time. While the earlier era of the cryptocurrency industry was characterized by frequent attacks on exchanges, exchange security has greatly improved, and a hacker would now need to breach numerous layers of protection to withdraw funds. With cryptocurrency bridges, the ability to forge one valid transaction is in some cases enough to make off with a nine-figure sum.
A former Amazon employee based in Seattle has been sentenced for her role in a huge data breach that saw Capital One bank pay out more than $250 million to affected customers.
Paige Thompson, known online by her handle “erratic,” was convicted in June for the 2019 hack, in which more than 100 million people in the US and Canada had their personal information stolen. She was found guilty of seven counts of computer and wire fraud, charges punishable by up to 20 years in prison, but on Tuesday, a US District Court in Seattle sentenced the software engineer to time served plus five years of probation, including computer monitoring.
According to a press release from the Department of Justice (DOJ), US District Judge Robert S. Lasnik said that time in prison would be particularly difficult for Thompson, as she is transgender and suffers from mental health issues. The DOJ is, apparently, unhappy with the outcome: in a statement on the case, US attorney Nick Brown said that the department understood the mitigating factors but was “very disappointed with the court’s sentencing decision.” Brown added, “This is not what justice looks like.”
Yet from the outset, the Capital One breach presented a complicated set of facts that is atypical of most large hacking and data theft incidents. Thompson did access and download a huge amount of data without authorization after using a custom software tool she built to scan for misconfigured Amazon Web Services accounts. (Thompson was reportedly employed by Amazon Web Services from 2015 to 2016.)
After gaining access, she leveraged the compromised accounts to download data from a number of organizations, including Capital One, and obtained vast troves of sensitive user information including Social Security numbers and bank account information. Thompson also reportedly planted cryptocurrency mining software onto some of the remote servers that she had gained access to and routed the proceeds into her own crypto accounts.
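The breach turned on misconfiguration rather than a novel software flaw. As a defender-side illustration only (this is not the custom tool described above), the sketch below uses boto3 to check one common class of AWS misconfiguration: S3 buckets in your own account that have no public access block configured.

```python
# Defensive sketch: audit your own AWS account for S3 buckets that lack a
# public access block, one common class of misconfiguration. Requires
# credentials for your own account; this is not the tool described above.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        print(f"{name}: public access block present (all settings on: {all(config.values())})")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured -- review this bucket")
        else:
            raise
```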
But unlike many other data breach cases, it seems that there is no evidence Thompson sought to enrich herself from the large volumes of personal information she stole. There are no allegations that she offered any of this data for sale or fraudulently used banking information to make purchases for herself. In fact, it seems that she uploaded some details of the exploit to a publicly viewable GitHub account: as CNBC reports, it was a tip about the GitHub data that led to her eventual arrest.
At trial, attorneys for the defense argued that Thompson never attempted to profit from the hack and did not release the data in a way that caused anyone’s identity information to be misused.
The Seattle Times reports that a friend of Thompson’s wrote a letter of support in the trial, arguing that the financial institutions bore responsibility for poor handling of sensitive data and that Thompson’s exploits had exposed the flaws in the system.
“Paige saw a situation where the information on which the financial system depends for its security was left utterly unguarded by its custodians,” part of the letter said.
Researchers at Kaspersky have found malware hidden in a modified version of the anonymity-preserving Tor Browser, distributed in a way that specifically targets users in China.
According to details published in a blog post on Tuesday, the malware campaign reaches unsuspecting users through a Chinese-language YouTube video about staying anonymous online. During the research period, the video was the top result for the YouTube query “Tor浏览器,” which translates to “Tor browser” in Chinese. Beneath the video, one URL links to the official Tor website (which is blocked in China); another provides a link to a cloud-sharing service that hosts an installer for Tor, modified to include malicious code.
Once the file is executed, it installs a working version of Tor Browser on the user’s machine. But the browser has been modified to save browsing history and any form data entered by the user, details that the genuine version of Tor Browser discards by default.
Even more concerning, the malicious version of the browser also attempts to download a further malware payload from a remote server, which the researchers say is only installed on machines with an IP address located in China. When the second-stage malware is installed on a target machine, it retrieves details like the computer’s GUID — a unique identifying number — along with system name, current user name, and MAC address (which identifies the machine on a network).
All of this information is sent to a remote server, and according to Kaspersky’s analysis, this server can also request data on the system’s installed applications, browser history — including the fake Tor Browser — and the IDs of any WeChat and QQ messaging accounts present on the computer.
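None of those identifiers requires special privileges to read, which is part of what makes this kind of profiling straightforward. As a benign illustration, the values described (system name, current user name, MAC address) can be gathered with Python’s standard library alone; the Windows machine GUID lives in the registry and is left out here.

```python
# Benign illustration: the host identifiers described above are readable with
# the standard library alone, no elevated permissions required.
import getpass
import platform
import uuid

mac_int = uuid.getnode()  # 48-bit MAC address of a local network interface
mac = ":".join(f"{(mac_int >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))

print("system name :", platform.node())     # hostname
print("current user:", getpass.getuser())   # logged-in user name
print("MAC address :", mac)
```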
Notably, the malware seems designed to identify the user rather than steal data that could be sold for profit. “Unlike common stealers, OnionPoison implants do not automatically collect user passwords, cookies or wallets,” Kaspersky researchers note. “Instead, they gather data that can be used to identify the victims, such as browsing histories, social networking account IDs and Wi-Fi networks.”
The result is a powerful and comprehensive surveillance program targeted specifically at Chinese internet users. Together, the data obtained would be enough to build a comprehensive profile of a user’s identity and internet usage habits, even as they browsed with software that they believed would keep them anonymous.
The best protection against this kind of attack is to download software only from a trusted source — in this case, the official Tor Project portal — but China’s extensive internet censorship makes this difficult for many users in the country. By default, the Chinese government blocks access to a huge range of websites that might distribute information critical of the ruling Communist Party, including basic applications like Twitter, Instagram, and Gmail.
In censored areas, users can download Tor through the GetTor Telegram bot, or by emailing gettor@torproject.org.
Sensitive information about students from the Los Angeles Unified School District (LAUSD) began to appear online on Saturday after a cybercriminal gang posted data obtained in a ransomware attack.
The publication of the data was confirmed by LAUSD superintendent Alberto M. Carvalho in a statement released by tweet on Sunday.
“Unfortunately, as expected, data was recently released by a criminal organization,” the statement said. “In partnership with law enforcement, our experts are analyzing the full extent of this data release.”
The ransomware attack that targeted LAUSD — the second-largest school district in the US — occurred four weeks ago over Labor Day weekend. Although it was not immediately attributed by official sources, many signs pointed to a ransomware gang known as Vice Society, which has specifically targeted K-12 education institutions; the hacked data has now been published on the Vice Society dark web site.
The gang issued an extortion demand to the school district on September 22nd, just over two weeks after the attack took place. At the time, Carvalho told local reporters that the information stolen by Vice Society was thought to contain student names and attendance records but “most likely lacks personally identifiable information or very sensitive health information.”
Unfortunately, this assessment may have been overly optimistic. While no details of the contents of the data leak have been officially confirmed, reporting from NBC Los Angeles cited a law enforcement source as stating that the published data included legal records, business documents, and some confidential psychological assessments of students. Bleeping Computer also reports that some of the folder names in the leaked data suggest the contents include Social Security numbers, passport information, and “Secret and Confidential” documents.
Following the advice of law enforcement, Carvalho said from the outset that the school district would not cooperate by paying a ransom. On Friday, a day before the data was released, the superintendent reiterated to the Los Angeles Times that the district would not negotiate with hackers. This statement appears to have prompted the publication of some of the data, which was released two days before the payment deadline initially given by the hackers.
If confirmed, the release of sensitive student information would be a damaging but perhaps also inevitable escalation of the ransomware attack. For any parents, staff, or students affected by the incident, LAUSD has set up a hotline to field questions or handle requests for support. The line will be accessible Monday through Friday between 6AM and 3:30PM PT at 855-926-1129.
The NSA, as a rule, wants to employ people who are good at spying. But according to the FBI, one former employee tried to turn the tables on the agency and was caught in the act.
Per details released by the Department of Justice this week, a Colorado resident was arrested Wednesday and charged with attempting to transmit classified information to a representative of a foreign government.
The press release says that Jareh Sebastian Dalke, 30, was employed by the NSA from June 6th to July 1st, 2022. Between August and September 2022, feds say that he used an encrypted email account to transmit portions of three classified documents to an individual who he believed to be working for a foreign government.
In fact, that individual was an FBI agent who lured Dalke into a sting operation that eventually led to his capture.
An affidavit from the FBI in support of the case gives further details. After contacting the agent that he believed to be working for a foreign government, Dalke was said to have offered extracts from three different classified documents: the cover page of a threat assessment of a foreign government’s military capabilities; an unnamed federal agency’s plans to update a cryptographic program; and a threat assessment of sensitive US defense capabilities.
Information in the first of these documents was classified as “secret,” with the latter two classified as “top secret,” a review by the FBI concluded. Dalke apparently told the undercover agent in an email that the three excerpts were just a “small sample to what is possible.”
In exchange for transmitting the full documents, the FBI says that Dalke requested a payment of $85,000 in cryptocurrency. The undercover agent sent Dalke a preliminary payment of around $4,800 in cryptocurrency, which he then transferred to an account at the Kraken exchange and converted into US dollars to withdraw.
Though the affidavit doesn’t specify the cryptocurrency used, the quantity and price listed — roughly 30 units equalling $4,800 in late August — means it was likely to be the privacy-focused coin Monero, which has emerged as a top choice for fraudulent transactions online.
After the initial transaction was made, the FBI says that Dalke requested that his contact set up a secure connection at a public location from which he could transmit the documents in their entirety. The undercover agent told him that he could find this at Union Station in Denver, but on arrival, Dalke was arrested by FBI agents and taken into custody.
He has now been charged with three violations of the Espionage Act, which makes it a crime to transmit sensitive information to a foreign power. A conviction carries a possible penalty of death or life imprisonment.
Meta says it has disrupted a sophisticated Russian influence operation that operated across its own social platforms, Facebook and Instagram, as well as Twitter, YouTube, Telegram, and even LiveJournal.
In a new report on removing coordinated inauthentic behavior, Meta says that the influence campaign originated in Russia and involved a “sprawling” network of more than 60 fake websites. In a bid for borrowed credibility, some of those sites impersonated mainstream European news outlets like Der Spiegel, The Guardian, and Bild.
Social media accounts within the network shared spoofed articles from these news outlets, mostly criticizing Ukraine and Ukrainian refugees or arguing against sanctions placed on Russia. Content in the spoofed articles was produced in English, French, German, Italian, Spanish, Russian, and Ukrainian, among other languages.
“This is the largest and most complex Russian-origin operation that we’ve disrupted since the beginning of the war in Ukraine,” Meta’s global threat intelligence lead Ben Nimmo and security engineer Mike Torrey write in the report. “It presented an unusual combination of sophistication and brute force. The spoofed websites and the use of many languages demanded both technical and linguistic investment. The amplification on social media, on the other hand, relied primarily on crude ads and fake accounts.”
The report’s authors write that networks of these fake accounts “built mini-brands” across the internet by using the same names across multiple platforms, and collectively, pages within the fake account network spent around $105,000 promoting articles and memes through Facebook and Instagram ads. On some occasions, the Facebook pages of Russian embassies in Europe and Asia even amplified content from the influence campaign.
Meta says that the campaign also used original memes created to promote pro-Russian and anti-Ukraine narratives and even included petitions launched on Change.org and Avaaz. (In one example, a Change.org petition demanded that the German government end “unacceptable generosity” toward Ukrainian refugees.)
While some aspects of the campaign were technically sophisticated, Meta says that the repetitive construction and posting patterns of the fake accounts meant that many were removed by automated systems before an in-depth investigation had even begun.
Meta said it has shared more precise details of the campaign with misinformation researchers to help build a better understanding of such operations.
Though Meta does not attribute the campaign directly to the Russian government, the Kremlin is adept at using digital influence operations as a way to project global power. Even before the Russian invasion of Ukraine in February 2022, Ukrainian officials were sounding the alarm over Russian disinformation campaigns being conducted in the country via social media.
Russia has also used similar tactics to influence discussion on other global topics of significance: as coronavirus vaccines began to roll out in early 2021, online publications linked to Russian intelligence services were caught spreading false or misleading information about vaccines.
WhatsApp has published details of a “critical” vulnerability that has been patched in a newer version of the app but could still affect older installations that have not been updated.
Details were disclosed in a September update to WhatsApp’s security advisories page and came to light on September 23rd.
The critical bug would allow an attacker to exploit a code error known as an integer overflow, letting them execute their own code on a victim’s smartphone after sending a specially crafted video call. Remote code execution vulnerabilities are a key step in installing malware, spyware, or other malicious applications on a target system, as they give attackers a foot in the door that can be used to further compromise the machine using techniques like privilege escalation attacks.
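As a general illustration of the bug class (not WhatsApp’s actual code, which has not been published), an integer overflow lets an attacker wrap a size calculation around so that a buffer is allocated far too small for the data later copied into it:

```python
# Illustration of the bug class, not WhatsApp's code: in fixed-width C
# arithmetic, a size calculation can silently wrap around, so the program
# allocates a tiny buffer and then copies far more data into it.

def buffer_size_32bit(width: int, height: int, bytes_per_pixel: int = 4) -> int:
    # Emulate unsigned 32-bit multiplication, which wraps modulo 2**32.
    return (width * height * bytes_per_pixel) & 0xFFFFFFFF

true_size = 0x10000 * 0x10001 * 4               # 17,180,131,328 bytes actually needed
allocated = buffer_size_32bit(0x10000, 0x10001)
print(true_size, allocated)                     # 17180131328 vs. 262144
# The allocation is only 262,144 bytes, so writing the full payload into it
# overruns the buffer -- the memory corruption that can lead to code execution.
```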
The recently disclosed vulnerability has been assigned the identification number CVE-2022-36934 in the national vulnerability database and given a severity score of 9.8 out of 10 on the CVE scale. This equates to the highest possible threat level: “critical.”
In the same security advisory update, WhatsApp also shared details of another vulnerability — CVE-2022-27492 — that would let attackers execute code after sending a malicious video file. This vulnerability was scored 7.8 out of 10, or a severity level of “high.”
Both of these vulnerabilities are patched in recently updated versions of WhatsApp and should already be fixed in any installation of the app that is set to automatically update (the default setting on most phones). The security advisory lists the specific app versions affected on Android and iOS.
Besides protecting against possible hacking exploits, there are more reasons to keep your WhatsApp installation updated. On Monday, the company announced that it was rolling out a new feature that will let users share a one-click link to join a group call and also testing the implementation of 32-person encrypted video chats.
Following one of the biggest data breaches in Australian history, the government of Australia is planning to tighten requirements for the disclosure of cyberattacks.
On Monday, Prime Minister Anthony Albanese told Australian radio station 4BC that the government intended to overhaul privacy legislation so that any company suffering a data breach would be required to share details about potentially affected customers with banks, in an effort to minimize fraud. Under current Australian privacy legislation, companies are prevented from sharing such details about their customers with third parties.
The policy announcement was made in the wake of a huge data breach last week, which affected Australia’s second-largest telecom company, Optus. Hackers managed to access a vast amount of potentially sensitive information on up to 9.8 million Optus customers — close to 40 percent of the Australian population. Leaked data included name, date of birth, address, contact information, and in some cases, driver’s license or passport ID numbers.
Reporting from ABC News Australia suggested the breach may have resulted from an improperly secured API that Optus developed to comply with regulations around providing users multifactor authentication options.
A person claiming to be the Optus hacker seems to have corroborated this account of the data breach in conversations with security journalist Jeremy Kirk. Per details given to Kirk by the presumed hacker, the data was downloaded by querying the API sequentially for each value of a unique identifier field labeled “contactid” and recording each user’s information one by one until the dataset of millions of records was assembled.
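In security terms, that is a textbook enumeration of an insecure direct object reference. The sketch below is a purely hypothetical illustration of the reported pattern; the endpoint, field name, and ID range are invented, and the point is simply that sequential identifiers plus no authentication let a client walk the entire keyspace.

```python
# Hypothetical illustration of the enumeration pattern described in the
# reporting. The endpoint and ID range are invented placeholders.
import requests

BASE_URL = "https://api.example.com/customers"   # placeholder, not a real endpoint


def enumerate_records(start_id: int, stop_id: int) -> list[dict]:
    records = []
    for contact_id in range(start_id, stop_id):
        # No credentials or tokens are sent -- the core design flaw.
        response = requests.get(f"{BASE_URL}/{contact_id}", timeout=10)
        if response.status_code == 200:
            records.append(response.json())
    return records
```

Requiring authentication on every request and using non-sequential, unguessable identifiers are the standard defenses against this kind of bulk scraping.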
A post from the same person in a popular hacking forum claimed to offer the user data for sale for $150,000 and listed an extortion price of $1 million to keep the data private, to be paid in the Monero cryptocurrency. The hacker also released a number of free “sample files,” which they said contained the full address information of 10,000 Optus users.
As the situation unfolds, many Optus customers have taken to social media to express their frustration with how the hack was being handled, particularly in regard to notifying affected users that their data was at risk.
“Amazing that Optus can email me when I am a day late in paying my bill, but not when they lose all my personal info in a massive cyber hack,” tweeted Patrick Keneally, a news editor for Guardian Australia, after the data breach came to light.