Brief Notes on WhatsApp Link Previews

(All testing was performed using WhatsApp for Android v2.20.201.20 and WhatsApp Web)


Recently, we have been looking into possible security issues around how WhatsApp parses and displays preview information about hyperlinks. Basically, WhatsApp will parse some basic information from a hyperlink and display it within the body of a chat. Based on our sleuthing, it appears to be parsed from various elements in the original HTML. For Google, it looks like this:

Parsing code

This appears to be parsed from various meta tags within the original site as per the code snippet below. If those are not present, it will use the “title” tag instead. Here is some of the parsing code:
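As an illustration only (this is our own sketch in Python, not WhatsApp's decompiled code), the meta-tag parsing with a fallback to the "title" tag could look roughly like this:

```python
from html.parser import HTMLParser

class PreviewParser(HTMLParser):
    """Approximates link-preview extraction: prefer Open Graph meta
    tags, fall back to the <title> tag when they are absent."""
    def __init__(self):
        super().__init__()
        self.meta = {}
        self.title = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("property", "").startswith("og:"):
            self.meta[attrs["property"]] = attrs.get("content", "")
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

    def preview_title(self):
        # Prefer the og:title meta tag; fall back to <title>
        return self.meta.get("og:title") or self.title.strip()

page = ('<html><head><title>Fallback</title>'
        '<meta property="og:title" content="Google"/></head></html>')
p = PreviewParser()
p.feed(page)
print(p.preview_title())  # → Google
```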

Additional Details on HTML Retrieval

From testing and review of logs, it appears that the actual call to retrieve the site happens on the Android client. There are also some additional interesting points:

  • The retrieval is cached on the client
  • If WhatsApp Web is used, the retrieval still happens on the mobile phone with the parsed results transferred to the Web version
  • If a link is forwarded or posted into another chat or group, no additional retrieval happens; instead, the parsed preview is transmitted along with the link

Here is the actual snippet of decompiled code doing the retrieval:

Future Areas for Research

We plan to research the actual parsing and retrieval of the HTML with an eye towards seeing whether any of the parsing code can be manipulated to inject content into the client or the Web version. For things like images and videos, there is potential for exploiting the underlying native code.

Exposure of Motor Vehicle Registration Data via Auto Insurance Quotes

An interesting item that we ran into recently: most US-based auto insurance companies bulk-purchase vehicle registration and driver records from state motor vehicle departments. This information is used for two separate purposes:

  • To adjust rates based on accident history, both for the state as a whole and for individual policy holders
  • For marketing purposes – either to proactively send marketing materials/ads, or to make quotes easier

The second item is interesting since it exposes vehicle registration information via web portals and mobile apps: a malicious attacker can obtain it by going through the online or mobile auto insurance quote process. It is not clear how sensitive/private such data is, or what the legal implications of obtaining it this way are.

YAA: An Obscure MacOS Compressed File Format


MacOS introduced a new compression archive format in High Sierra (v10.13) called “YAA”. Because this format is new, it may not be supported correctly by security tools, thus allowing malware authors a way to bypass existing controls in such tools. It is recommended that vendors add support for this format to their tools. Users should NOT accept or open YAA archives received from unknown sources.


Early this year we ran across a new file format specific to MacOS: YAA. It was apparently originally released in the fall of 2017 as part of MacOS High Sierra (v10.13) (see this article from MacKungFu). As per this forum post, it is used as part of compressing the “Content” section of signed executables for MacOS. According to another post, the tool uses LZFSE compression. This should not be confused with an older tool with the same name called “Yet Another Assembler”.

It is supported via a command line utility called “yaa” as well as the Archive Utility in the GUI – but in the GUI you can only uncompress the archive. For details regarding the CLI, run “yaa” or “man yaa” (also see here).

Our previous research around novel file formats found issues in how Google Chrome (here and here) and various anti-virus vendors for MacOS (here) handle compressed files. Specifically, existing antivirus software, browsers and other utilities may not handle a novel compressed archive format well since they are not aware of the format. On the other hand, malware authors can easily package malware inside such an archive, which a user can decompress simply by double-clicking the file in Finder. This is because the Archive Utility supports decompression of many more file formats than vendors may be aware of.

For YAA, we did some testing back in January against a handful of security tools and found some to be vulnerable. Due to lack of resources, we didn’t pursue a more extensive investigation into additional tools, but we did report the issues we found to the relevant vendors. Instead, we are publishing this post to increase awareness of this issue.

To replicate this issue on a particular tool, do the following:

  1. Download the EICAR test file from here into a folder.
  2. Create a YAA file archive as follows:
    • yaa -d folder -o archive.yaa
  3. Test the resulting “archive.yaa” with the security tool of your choice.
  4. To uncompress the archive, double click on it in Finder.
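The steps above can be scripted roughly as follows (a sketch: the folder and file names are our own choices, and the yaa invocation mirrors step 2, so it only runs on a machine where the macOS yaa binary is actually present):

```python
import shutil
import subprocess
from pathlib import Path

# Standard 68-byte EICAR test string (harmless; detected by scanners).
EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

# Step 1: place the EICAR test file into a folder.
folder = Path("eicar_folder")
folder.mkdir(exist_ok=True)
(folder / "eicar.com").write_text(EICAR)

# Step 2: create the YAA archive. 'yaa' only exists on macOS, so
# skip the archiving step gracefully elsewhere.
if shutil.which("yaa"):
    subprocess.run(["yaa", "-d", str(folder), "-o", "archive.yaa"],
                   check=True)
```

The resulting "archive.yaa" can then be pointed at the security tool of your choice (step 3).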

Vendor Responses

This section contains responses we received for this issue from specific vendors:

  • Chromium: Safe browsing pings not sent back for YAA files (issue # 1039128)
    • “The default behavior has been set to FULL_PING, so unknown types (such as YAA) should now be sending pings to Safe Browsing.” (also see our blog post)
  • Google/Gmail: YAA archives are not scanned for malware by GMail (issue # 147190746)
    • “Not all file formats are supported and this is expected.”

Most A/V vendors responded either that the contents will be scanned once expanded, or that they will add support for this format as a feature in the future.


Written by Y. Shafranovich


The majority of this research and responsible disclosure was done in January 2020. This blog post was published on June 14th, 2020.

Two vulnerabilities in Oracle’s iPlanet Web Server (CVE-2020-9315 and CVE-2020-9314)


Two vulnerabilities were discovered in the web administration console of Oracle’s iPlanet Web Server which allow for sensitive data exposure and limited injection. The first issue allows read-only access to any page within the administration console without authentication, resulting in sensitive data exposure. The second issue allows for injection of external images which can be used for phishing and social engineering.

These vulnerabilities have been reported to the vendor (Oracle) but the vendor will not be issuing security patches because the affected product is no longer supported. Users are encouraged to implement other controls to mitigate these vulnerabilities such as restricting network access to the administration console from the Internet or switching to a supported platform.

Version 7 has been tested and found to be vulnerable; however, it is unknown whether earlier versions are affected. The latest versions of Oracle Glassfish and Eclipse Glassfish application server (v5) share common code with the affected product; they have been tested and do not appear to be vulnerable. MITRE has assigned CVE-2020-9315 to track the sensitive data exposure issue and CVE-2020-9314 to track the injection issue.


A vulnerability exists in the web administration console of Oracle’s iPlanet Web Server which makes it possible to read information from any page within the console without authentication. This can result in sensitive data exposure of configuration information about the server including encryption keys, JVM configuration and other data. We did not perform testing to see whether this vulnerability allows for changes to be made within the console.

This is accomplished by replacing any URL for any page within the administration console as follows:


To replicate, try the following URLs:


The “productNameSrc” parameter in the administration console allows for injection of external images. When used in combination with the “productNameHeight” and “productNameWidth” parameters, this can be used to inject an external image into a site to facilitate phishing. This is due to an incomplete fix for CVE-2012-0516. The earlier fix added validation against XSS issues but didn’t add validation to make sure an external image is not loaded.
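To illustrate the shape of such an injection (a sketch: the host and console path below are placeholders, not the actual vulnerable endpoint; only the three parameter names come from the description above), a crafted URL would combine the parameters like so:

```python
from urllib.parse import urlencode

# Hypothetical admin-console base URL, for illustration only.
base = "https://admin.example.com:8989/admingui/some/page"

# Parameter names as described above; the image URL is an
# attacker-controlled external resource.
params = {
    "productNameSrc": "https://attacker.example.com/fake-logo.png",
    "productNameHeight": "100",
    "productNameWidth": "300",
}
url = base + "?" + urlencode(params)
print(url)
```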

To replicate, try the following URLs:


Both vulnerabilities have been reported to the vendor (Oracle), however the vendor doesn’t plan to issue security patches since the product is no longer supported, as per the following responses:

Oracle iPlanet Web Server 7.0.x is no longer supported. Please see the life time support document.


Thank you for your report regarding Oracle iPlanet Web Server 7.0.x, which is no longer supported by Oracle. Since Oracle no longer supports Oracle iPlanet Web Server 7.0.x, the policy is that there is no coordinated disclosure involving Oracle. Reporters who discover security vulnerabilities in products that Oracle no longer supports are free to disclose vulnerability details without Oracle participation. Oracle does not assign CVEs for products that are no longer supported. That means, if you want a CVE assigned you will need to contact Mitre.

CERT/CC concurred with the vendor’s assessment.

MITRE has assigned CVE-2020-9315 to track the sensitive data exposure issue, and CVE-2020-9314 to track the injection issue.


Version 7 has been tested and found to be vulnerable; however, it is unknown whether earlier versions are affected. The latest versions of Oracle Glassfish and Eclipse Glassfish application server (v5) share common code with the affected product; they have been tested and do not appear to be vulnerable.

Users are encouraged to implement other controls to mitigate these vulnerabilities such as restricting network access to the administration console from the Internet or switching to a supported platform.


CERT/CC ID: VU#343851
CVEs: CVE-2020-9315 and CVE-2020-9314
Oracle lifetime support documentation: see here
Related vulnerability regarding XSS: CVE-2012-0516 and advisory


We would like to thank Synack for assistance with the disclosure process. Text of the advisory was written by Y. Shafranovich.


2020-01-19: Initial discovery
2020-01-24: Initial disclosure sent to vendor; rejected since product is not supported
2020-01-24: Clarification questions sent to the vendor
2020-01-27: Report again rejected by vendor; referred to MITRE for CVE assignment
2020-01-29: CVEs requested from MITRE
2020-02-07: Initial report sent to CERT/CC
2020-02-17: CVE request rejected by MITRE, resubmitted with more data
2020-02-18: Response received from CERT/CC
2020-02-20: CVE assignments received from MITRE
2020-02-20: CVEs and disclosure plans communicated to the vendor
2020-05-10: Public disclosure

Interesting two-factor (2FA) behavior in Facebook

We recently ran across an interesting behavior with two-factor authentication in Facebook. There are two methods supported: SMS to a phone and OTP via an app such as Google Authenticator. What is interesting is that when OTP is added as a 2FA method and SMS remains as a backup, every login to Facebook still sends an SMS code (even though that method is supposed to be a “backup method” used only if the OTP method fails). This is in contrast with other vendors such as Google, where only one 2FA method is used at any given time.

The only way to get around this is to set up OTP as the primary 2FA method and backup codes or a security key as the backup. If you try to set up SMS as the backup method, it reverts to the behavior described above.

This was reported to Facebook on April 27th, 2020 and rejected as a security issue. The original report # is 554696145470552.


Exfiltrating data from remote access services via video and sound

Given the current situation, many of us are now working remotely all the time. Many such arrangements are facilitated via tools like Citrix, RDP, VNC, LogMeIn, etc. We have been researching how data might be exfiltrated via such arrangements. Here are some obvious choices:

  • File connections – if enabled
  • Remote USB connections – if enabled
  • Remote printing connections – if enabled
  • Exfiltrating via email or Internet connections at the remote desktop level

Most of these have obvious controls that an administrator can activate, which would leave attackers with very few channels. The two we were particularly interested in are video and sound, since the user can view their remote screen, and many tools allow hearing sound from the remote desktop.

For exfiltration of data via video, we originally considered encoding data with base64 using an encoding tool such as the Windows certutil CLI command, then doing screen capture on the host and running some sort of OCR against it, such as Tesseract. However, we ran across a much better tool from Pen Test Partners called PTP-RAT, which flashes the pixels on the screen to transfer information (see their blog post and GitHub repo).
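Our original base64/OCR idea can be sketched as follows (our own illustration, unrelated to PTP-RAT: base64-encode the data and wrap it into short fixed-width lines that render cleanly on the remote screen for OCR on the host):

```python
import base64
import textwrap

def to_screen_lines(data: bytes, width: int = 60):
    """Base64-encode data and wrap it into fixed-width lines that can
    be displayed on the remote screen and OCR'd from a host capture."""
    encoded = base64.b64encode(data).decode("ascii")
    return textwrap.wrap(encoded, width)

def from_screen_lines(lines):
    """Reassemble the OCR'd lines and decode back to bytes."""
    return base64.b64decode("".join(lines))

secret = b"exfil-me: account=1234, key=ABCD"
lines = to_screen_lines(secret)
assert from_screen_lines(lines) == secret  # lossless round trip
```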

For exfiltration of data via sound, we originally considered using a tool that modulates the data into audio, the way modems did back in the 1980s/1990s. However, we ran across a much better-suited tool from Roman Zeyde called amodem, which does exactly this. While the tool is designed for exfiltrating data across a physical air gap, it should work the same way on a remote desktop: convert the data into sound via the sound card, capture it on the host, and decode it.
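As a toy illustration of the underlying idea (a simplistic 2-FSK sketch of our own, not amodem's actual protocol), each bit becomes a short tone at one of two frequencies, written out as a WAV file that could be played over the remote desktop's sound channel:

```python
import math
import struct
import wave

RATE = 8000          # samples per second
BIT_DUR = 0.05       # seconds per bit
F0, F1 = 1000, 2000  # arbitrary tone frequencies for bits 0 and 1

def modulate(data: bytes, path: str) -> None:
    """Toy 2-FSK modulator: one tone per bit, 16-bit mono PCM."""
    samples = []
    for byte in data:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            freq = F1 if bit else F0
            for n in range(int(RATE * BIT_DUR)):
                # Half-amplitude sine wave for this bit's tone
                samples.append(int(32767 * 0.5 *
                                   math.sin(2 * math.pi * freq * n / RATE)))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

modulate(b"hi", "exfil.wav")
```

A matching demodulator on the host would run an FFT over each bit-length window and pick the stronger of the two frequencies.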

P.S. For extra brownie points, you can also try enabling the webcam and microphone on the host, and transfer data from the host back to the remote desktop using the same mechanisms.

FLAG_SECURE and Accessibility Services (a11y)

About four years ago, we first blogged about FLAG_SECURE in Android applications and how it can be used to protect sensitive screens from being captured in screenshots. One of the reasons that prompted our original blog post is that FLAG_SECURE was never intended as a screenshot-security feature; its behavior is rather a side effect of DRM.

Since then we have reported the lack of FLAG_SECURE as an issue to multiple Android app vendors. Our most recent blog post was about the fact that Google’s 2FA application (“Google Authenticator”) lacked FLAG_SECURE. We disclosed this specific issue because of recent reports of malware attacking such applications using Android’s accessibility services (a11y). Yanick Fratantonio (who has a lot more experience with these services) went through the trouble of putting together a POC, as well as writing a truly excellent blog post about how FLAG_SECURE does NOT protect against attacks using these services.

After reviewing both our research and disclosures published by others, there seems to have been an assumption going around for a while that FLAG_SECURE should protect against such attacks (a11y), but no actual data to back this up. To be clear, FLAG_SECURE should still protect against malware using the MediaProjection API to record video or take screenshots, but it DOES NOT protect against attacks using Android’s accessibility services (a11y).

We amended our earlier posts to make this distinction clear and would like to thank Yanick Fratantonio for bringing this up.

Google Authenticator for Android Allows Screen Capture

Google offers an application for Android called “Google Authenticator” which is used to setup two-factor authentication (2FA). This application is used to generate standard OTP codes usually used for 2FA.

It appears that Google Authenticator allows screenshots to be taken of OTP codes. The implication is that if a user’s device ends up running a rogue app, that app can capture all generated OTP codes as they are shown by the app, and thus defeat two-factor authentication.

[EDITED: 2020-03-23: This is only true for rogue apps with screenshot permissions (MediaProjection) BUT not those using accessibility (a11y) permissions. This matters since many such rogue apps use Android accessibility to scrape screenshots from running apps, and using FLAG_SECURE may not prevent that behavior via accessibility permissions, although more research is needed to confirm that.]

UPDATE (2020-03-03): Disclosed publicly because of recent media reports

UPDATE #2 (2020-03-04): Multiple people noted that Microsoft Authenticator has the same issue. We blogged about that back in 2018 and the issue remains unfixed.

UPDATE #3 (2020-03-23): FLAG_SECURE may protect against malicious apps using the MediaProjection APIs; however, as per the comment below from Yanick Fratantonio and his blog post, FLAG_SECURE doesn’t protect against attacks using accessibility services. See our follow-up post here.

Steps to Replicate

To replicate, try the following:

  1. Open the application.
  2. Add an account.
  3. Press Power + Volume Down at any sensitive screen and observe a screenshot being taken.

The underlying reason is that the app does not use “FLAG_SECURE” for such screens (more information on FLAG_SECURE can be found in our earlier blog post). By contrast, many Android apps with higher security requirements do use it.

Vendor Response

We filed a bug report with the vendor (Google) and the vendor filed an internal bug. The vendor never informed us whether the bug was fixed. Testing on the most recent version reveals that the bug is still present.



  • GitHub issue filed by someone else – see here
  • Google Play link to the app – see here
  • Google Security Case # 8-2193000017345
  • Our earlier blog post about FLAG_SECURE on Android – see here
  • ZDNet report regarding Cerberus malware attacking this app – see here


  • 2014-10-10: GitHub issue filed by someone else
  • 2017-05-10: Issue filed with the vendor, triaged and bug filed
  • 2017-05-11: Follow-up discussion regarding other vendor apps
  • 2017-05-12: Response regarding bounty received
  • 2020-02-27: Media story regarding malware targeting this app
  • 2020-03-03: Public disclosure
  • 2020-03-04: Added comment regarding Microsoft Authenticator
  • 2020-03-23: Added clarification regarding screenshot permissions and accessibility permissions


Another Tale of Personal Data Harvesting: Alsid, Lusha and Simpler Apps

After reading a recent post by Antoine Neuenschwander, we wanted to share a similar experience from one of our consultants regarding the sale and use of their personal data, featuring many of the same players as Antoine’s post.

Part 1 – The Sales Call from Alsid

One of the many scourges of modern work is that salespeople try to reach you all the time. In this particular case, our consultant was contacted on their work phone, via email, and on LinkedIn by someone from a French cybersecurity company called Alsid. Then, to their surprise, a call from France rang on their personal, US-based cell phone and left a voicemail. That got them curious: how did this company get hold of a personal cell phone number? Since the company is French, it is subject to GDPR, so they asked for a copy of their data held by the company. The company did provide a fairly extensive GDPR response, but the cell phone number wasn’t in it! After follow-up questioning, they eventually dug it out – their salesperson got it from Lusha:


There are several privacy concerns here:

  • Why was the number not provided in the initial GDPR request?
  • Did the caller check to make sure the number wasn’t a wireless phone AND wasn’t on the Do Not Call List (since it is illegal to place telemarketing calls to such numbers in the US)?

Part 2 – Lusha, the B2B Contact Enrichment Tool

Lusha provides a set of plugins that can enrich LinkedIn profiles – so if you are looking at someone’s profile, they can supplement that with the person’s phone number or email from other sources. The website, marketing materials and privacy policy are pretty explicit about this:


And (emphasis added):

Our Services are designed to help Users and vendors (e.g. HR professionals, B2B partners, sales platforms) validate and verify contact information and to find business profiles they seek in order to interact with relevant Contacts (as defined below), through access to business profiles retained in Lusha’s database (“Lusha Database”). 

A data request sent to Lusha resulted in the response below. Note the language around Simpler – specifically the last paragraph. It seems that Simpler provides mobile apps to be used for “verification”; those apps then slurp up the user’s contacts and share them back with Lusha.


Excerpt of the text appears below:

Simpler also offers its users the opportunity to contribute to a collaborative security effort, meant to assist in authenticating the identifying attributes of an individual. This effort can assist in establishing a trusted channel of communication for online and offline interactions.

If a Simpler user consents to contribute to this effort, basic contact information (name and phone number) found within such user’s contacts may be shared with Lusha, which implements the security solution.

If you dig deeper into the materials, a lot of similar language appears there as well. Instead of a discussion of B2B contact data, it suddenly becomes a matter of “security”, “trust” and a “collaborative security effort”. When you look at their data page (emphasis added), note that the language quickly changes from “lead enrichment” or “B2B data” to a “collaborative security effort”:

Lusha’s core purpose is to make online engagement safe and efficient. In today’s fast-paced and multi-layered world, one of the main challenges to online users is trust. A major risk in online interactions is the risk of encountering fraud, whether by phishing attempts or by identity theft. Widespread fraud can lead to the loss of customer trust, extra costs of time and money required to manage fraud incidents, damages to the reputation of individuals and institutions, possible legal costs and many more negative outcomes.

Lusha’s unique solution is based on a collaborative security effort, effectively utilizing information to verify online identities. The Lusha service provides its users with valuable insights and assists in authenticating the identity of individuals in the online sphere.

There are several privacy concerns here:

  • Why is a tool that claims to provide B2B information obfuscated behind being “a security solution”?
  • How can “security” and “trust” justify essentially taking users’ address books from their mobile devices and selling that data for marketing?

Part 3 – The Mobile Address Books from Simpler

The link provided by Lusha actually leads to the Google Play Store and two apps – Simpler Caller ID and Dialer – each with more than 5 million installations. The link on the store listings leads to the company’s website, where a third app is listed – EasyBackup, a contacts backup manager (iOS only). That one is owned by a different company called “Top Floor”, which also makes an app called “Cleaner Pro” (for iOS) that claims to remove duplicate contacts. Mailing addresses for both companies go to co-working spaces: one in Brooklyn, NY and the other in Los Angeles, CA.

Here are the apps:

The Simpler Apps website still lists all of these apps as being theirs:


A request was sent to Simpler for a copy of the data they collected, and the following response was received – no data. A follow-up request was sent and a response is still pending:


At this point we are at a dead end with Simpler, but further research reveals the following nuggets buried in the terms of use and privacy policy:

By using our Services, you acknowledge and agree that we will share contact information with other users our affiliates and business partners for the purpose of ensuring that their current contact information is up to date. You acknowledge that you have the rights and permissions required to allow us to share such contact information.


We do not share your Personal Information with third parties except:

  • The Services are based on contact management and Caller ID (if applicable), therefore, we will use your number and contact for this purpose. This disclosure is also important to individuals that are not our users which may be identified by the caller ID. We enable an easy opt-out in the event you no longer wish to be identified, for more information see the User Right section below.

  • We may also share Personal Information with (i) our subsidiaries and business partners if needed; (ii) subcontractors and other third party service providers (e.g. payment processors, server, cloud management, etc.); and (iii) any potential purchasers or investors of the Company.

There are several privacy concerns here:

  • Why is a tool that claims to help keep contact information up to date selling data for marketing?
  • What is the connection between Simpler and Lusha?

Part 4 – Tying It All Together: Lusha and Simpler

At this point, it is fairly clear what happened – a bunch of mobile apps slurp up contacts from their users’ address books and provide them to Lusha to be used by marketers and recruiters. This is being presented as a “collaborative security solution” while it is essentially just selling personal data, albeit with an opt-out available. What is frustrating is that regular users who are friends with the people being targeted install these apps thinking they are simple utilities, while all of their contacts are actually being sold behind their backs. At the same time, Lusha is claiming to be a security solution when it clearly is not.


But there is more… a set of simple Google searches shows that one of the co-founders of Lusha (Yoni Tserruya) is actually the original app developer for all four of these apps (here, here, here and here). Furthermore, if you download the Android apps provided by Simpler and look at the signing keys via jadx, they are issued to the same person, as seen below:


These apps are now being published by companies other than Lusha – Simpler Apps and Top Floor – but are they subsidiaries of or otherwise related to Lusha? If they are, then the privacy policies seem to allow them to transfer data from these apps back to Lusha. Overall, the arrangement may be legal but is perhaps unethical.

Bottom line: this example highlights yet another way personal data is harvested, sold and re-used for commercial purposes.

Vendor Responses

We reached out for comment to all of the companies mentioned in the article and will update the blog post with feedback or comments.

Mapping the Jan 2020 Java Security Patches Back to the Original Source Code Changes

(UPDATE [01/16/2020]: Added clarification as to the source of the information for the source commits)

Earlier today, Oracle released patches for various flavors of Java. At the same time, the open-source version of Java – OpenJDK – released its patches as well. The problem is that it is virtually impossible to figure out what the actual bugs are. Here are partial screenshots from Oracle’s critical patch update advisory, the text version, and the OpenJDK advisory:


The actual CVE descriptions are also cryptic – here is an example of one (CVE-2020-2583):

Vulnerability in the Java SE, Java SE Embedded product of Oracle Java SE (component: Serialization). Supported versions that are affected are Java SE: 7u241, 8u231, 11.0.5 and 13.0.1; Java SE Embedded: 8u231. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise Java SE, Java SE Embedded. Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of Java SE, Java SE Embedded.

Note: This vulnerability applies to Java deployments, typically in clients running sandboxed Java Web Start applications or sandboxed Java applets (in Java SE 8), that load and run untrusted code (e.g., code that comes from the internet) and rely on the Java sandbox for security. This vulnerability can also be exploited by using APIs in the specified Component, e.g., through a web service which supplies data to the APIs.

CVSS v3.0 Base Score 3.7 (Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L). 

The problem is that not enough information is being provided in order to make an educated decision, especially in cases where there are limited resources for patching. How do we figure out what the vulnerability actually is?

It is possible to derive the source code changes from the Debian package updates and the OpenJDK release notes, which map the CVEs to the actual source code changesets in the OpenJDK repository (HG and GitHub). For example, as seen below, “CVE-2020-2583” links to source code change # 8224909:


Looking at the HG repository for OpenJDK and the GitHub mirror, it is pretty easy to figure out:



And the actual changes can now be examined in detail:


Based on those sources, we can now reconstruct the actual bugs and their source code changes for JDK 11 (Debian message here, based on the OpenJDK release notes here):

CVE ID | Component | CVSSv3 | Commit | Description
CVE-2020-2604 | core-libs/ | 7.1 | 8231422 (HG / GitHub) | Better serial filter handling
CVE-2020-2601 | security-libs/ | 6.8 | 8229951 (HG / GitHub) | Better Ticket Granting Services
CVE-2020-2655 | security-libs/ | 4.8 | 8231780 (HG / GitHub) | Better TLS messaging support
CVE-2020-2593 | core-libs/ | 4.8 | 8228548 (HG / GitHub) | Normalize normalization for all
CVE-2020-2654 | security-libs/ | 3.7 | 8234037 (HG / GitHub) | Improve Object Identifier Processing
CVE-2020-2590 | security-libs/ | 3.7 | 8226352 (HG / GitHub) | Improve Kerberos interop capabilities
CVE-2020-2583 | client-libs/ | 3.7 | 8224909 (HG / GitHub) | Unlink Set of LinkedHashSets
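The CVE-to-commit mapping in the table above can also be captured programmatically; note that the bug-tracker URL pattern below is the public OpenJDK JBS convention and is our assumption, not something stated in the advisories:

```python
# CVE -> OpenJDK bug ID, as listed in the table above.
CVE_TO_BUG = {
    "CVE-2020-2604": "8231422",
    "CVE-2020-2601": "8229951",
    "CVE-2020-2655": "8231780",
    "CVE-2020-2593": "8228548",
    "CVE-2020-2654": "8234037",
    "CVE-2020-2590": "8226352",
    "CVE-2020-2583": "8224909",
}

def bug_url(cve: str) -> str:
    """Build the public OpenJDK bug-tracker URL for a given CVE."""
    return "https://bugs.openjdk.org/browse/JDK-" + CVE_TO_BUG[cve]

print(bug_url("CVE-2020-2583"))  # → https://bugs.openjdk.org/browse/JDK-8224909
```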

Additionally, for CVE-2020-2604, the Oracle release notes include this (the JDK bug # corresponds to the commit # above):



Written by Y. Shafranovich