Local Denial of Service in Nissan Leaf EV (2018) Head Unit Display (CVE-2021-1000008)

Summary

The head unit display in the Nissan Leaf electric vehicle (EV) has a local denial of service vulnerability that can be used to lock up the screen. Once locked, the car remains drivable but the display can no longer be used (even if the car is turned off and on). The only way to unlock the screen is by removing and re-inserting the SD card containing the mapping data.

This was tested on the 2018 SV model of the Nissan Leaf; other Leaf models/trims and other Nissan models with similar SOS functionality may also be affected.

This issue has been reported to the vendor (Nissan), NHTSA and ICS-CERT. Since the vulnerability is low risk, there is minimal impact on end users. The vendor has confirmed the issue, but no patch is currently available.

Details

The Nissan Leaf is an electric car which contains a head unit with a touch screen interface in the middle of the dashboard. This panel is used for entertainment and navigation functions such as playing music/radio, navigation, and interfacing with cell phone operating systems such as Android Auto and Apple CarPlay. This panel (#3) is separate from the meters and gauges screen (#2) used to display information regarding the operation of the vehicle itself (as seen below – from the owner’s manual):

[Image: dashboard diagram from the owner’s manual showing the meters/gauges screen (#2) and the head unit panel (#3)]

Additionally, the Nissan Leaf, like many other Nissan models, includes an SOS button located on the roof of the car above the passenger seat, intended to summon help in case of an emergency. This button works with the Nissan mobile app and can be seen below (screenshots from Nissan’s video and manual):

[Images: the SOS button and the Nissan app pairing screen, from Nissan’s video and manual]

The display has a denial of service vulnerability that can be used to lock up the screen. Once locked, the car remains drivable but the display can no longer be used (even if the car is turned off and on). The only way to unlock the screen is by removing and re-inserting the SD card containing the mapping data. The vulnerability seems to be the result of interaction between the SOS functionality and the rest of the software operating the head unit.

To replicate:

  1. The car being tested needs to be paired with the Nissan mobile app, and have the NissanConnect subscription enabled.
  2. Turn on the car, verify that NissanConnect with SOS functionality is enabled by checking that the little light on the SOS button is lit.
  3. Press the SOS button to trigger an emergency call.
  4. Immediately press and hold the SOS button to cancel the call while turning off the car.
  5. The SOS call will lock the head unit, which will stay locked until the SD card is removed and re-inserted, rebooting the display panel.

This was tested on the 2018 SV model of the Nissan Leaf; other Leaf models/trims and other Nissan models with similar SOS functionality may also be affected. If a NissanConnect subscription is not enabled on a particular vehicle, then it is probably not vulnerable because the SOS functionality is disabled.

Vendor Response and Mitigation

This issue has been reported to the vendor (Nissan), NHTSA and ICS-CERT. Once the report was routed to the correct team, the vendor responded quickly and confirmed the issue. Since the vulnerability is low risk, there is minimal impact on end users. No patch is currently available.

A CVE will not be issued for this vulnerability by MITRE since MITRE doesn’t “assign CVE IDs for Local Denial of Service”. A CVE was issued by the Distributed Weakness Filing (DWF) project instead.

References

CVE (DWF): CVE-2021-1000008

ICS-CERT ticket # ICS-VU-984522
NHTSA case # 11308645
Nissan Information Security (IS) Case # 233758
Nissan Leaf (2018) manual: see here

Credits

The original discoverer of this issue is a minor and their full name cannot be disclosed for privacy reasons.

Timeline

2019-09-24: Initial report to the vendor
2020-01-01: Second report to the vendor, automated reply received
2020-01-27: Follow-up email sent to the vendor, no response
2020-01-28: Initial report to ICS-CERT
2020-02-08: Follow-up communication with ICS-CERT
2020-02-11: Draft advisory sent to both the vendor and ICS-CERT
2020-02-12: Reported to NHTSA
2020-02-12: CVE requested from MITRE
2020-02-16: CVE response received from MITRE
2020-02-16: Response from the vendor received (initial reports were misrouted)
2020-02 through 2021-03: Multiple phone and email communications with the vendor
2021-03-14: Public disclosure
2021-04-08: CVE assigned via DWF

Network Vulnerability in Oracle Database – CVE-2021-2018

Summary

A vulnerability in the Advanced Networking Option component of Oracle Database Server can lead to a man-in-the-middle (MITM) attack. Supported versions that are affected are 18c and 19c (Windows platform only).

Details

This difficult-to-exploit vulnerability allows an unauthenticated attacker with network access via Oracle Net to compromise the Advanced Networking Option. Successful attacks require human interaction from a person other than the attacker, and while the vulnerability is in the Advanced Networking Option, attacks may significantly impact additional products. Successful exploitation can result in takeover of the Advanced Networking Option.

CVSS 3.1 Base Score 8.3 (Confidentiality, Integrity and Availability impacts)

CVSS Vector: (CVSS:3.1/AV:N/AC:H/PR:N/UI:R/S:C/C:H/I:H/A:H)
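As a sanity check, the published base score can be reproduced from the vector above. The following is a minimal sketch of the CVSS 3.1 base score formula with the metric weights for this vector hard-coded (the class name is ours, and the rounding helper is a simplified stand-in for the specification's Roundup function):

public class Cvss31Example {
    public static void main(String[] args) {
        // Metric weights from the CVSS 3.1 specification for this vector:
        // AV:N = 0.85, AC:H = 0.44, PR:N = 0.85, UI:R = 0.62, C/I/A High = 0.56
        double av = 0.85, ac = 0.44, pr = 0.85, ui = 0.62;
        double c = 0.56, i = 0.56, a = 0.56;
        boolean scopeChanged = true; // S:C

        double iss = 1 - (1 - c) * (1 - i) * (1 - a);
        double impact = scopeChanged
                ? 7.52 * (iss - 0.029) - 3.25 * Math.pow(iss - 0.02, 15)
                : 6.42 * iss;
        double exploitability = 8.22 * av * ac * pr * ui;

        double score;
        if (impact <= 0) {
            score = 0;
        } else if (scopeChanged) {
            score = Math.min(1.08 * (impact + exploitability), 10);
        } else {
            score = Math.min(impact + exploitability, 10);
        }
        // Simplified round-up to one decimal place (the spec defines a more
        // precise Roundup to avoid floating point artifacts).
        System.out.println(Math.ceil(score * 10) / 10); // prints 8.3
    }
}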

References

CVE Database: CVE-2021-2018

Vendor advisory: see here

Brief Notes on WhatsApp Link Previews

(All testing was performed using WhatsApp for Android v2.20.201.20 and WhatsApp Web)

Introduction

Recently, we have been looking into possible security issues around how WhatsApp parses and displays preview information about hyperlinks. Basically, WhatsApp will parse some basic information from a hyperlink and display it within the body of a chat. Based on our sleuthing, it appears to be parsed from various elements in the original HTML. For Google, it looks like this:

[Image: the link preview WhatsApp displays for a Google link]

This appears to be parsed from various meta tags within the original site as per the code snippet below. If those are not present, it will use the “title” tag instead. Here is some of the parsing code:
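The parsing code referenced above appears in the original post as a screenshot of decompiled source, so it is not reproduced here. Purely as an illustration of the described fallback behavior (and not WhatsApp's actual implementation), a sketch using the jsoup HTML parser might look like this:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class LinkPreviewSketch {
    /** Returns the preview title: an og:title meta tag if present, otherwise the <title> tag. */
    static String previewTitle(String html) {
        Document doc = Jsoup.parse(html);
        Element og = doc.selectFirst("meta[property=og:title]");
        if (og != null && !og.attr("content").isEmpty()) {
            return og.attr("content");
        }
        return doc.title(); // fall back to the "title" tag
    }

    public static void main(String[] args) {
        String html = "<html><head><title>Fallback</title>"
                + "<meta property=\"og:title\" content=\"Preview title\"></head></html>";
        System.out.println(previewTitle(html)); // prints "Preview title"
    }
}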

Additional Details on HTML Retrieval

From testing and review of logs, it appears that the actual call to retrieve the site happens on the Android client. There are also some additional interesting points:

  • The retrieval is cached on the client
  • If WhatsApp Web is used, the retrieval still happens on the mobile phone with the parsed results transferred to the Web version
  • If a link is forwarded or posted into another chat or group, no additional retrieval happens; instead, the parsed preview is transmitted along with the link

Here is the actual snippet of decompiled code doing the retrieval:
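The decompiled snippet is likewise only available as a screenshot in the original post. As an illustration of the client-side retrieval described above (not the app's actual code), a minimal fetch with a size cap could look like the following; the 64 KB limit is our assumption, not a known WhatsApp value:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PreviewFetchSketch {
    /** Fetches up to maxBytes of the page body so a preview can be parsed from it. */
    static String fetch(String link, int maxBytes) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(link).openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        conn.setRequestProperty("Accept", "text/html");
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[maxBytes];
            int total = 0, read;
            while (total < maxBytes && (read = in.read(buf, total, maxBytes - total)) != -1) {
                total += read;
            }
            return new String(buf, 0, total, StandardCharsets.UTF_8);
        } finally {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws Exception {
        // Example usage: retrieve the first 64 KB of a page for preview parsing.
        System.out.println(fetch("https://www.google.com/", 64 * 1024).length());
    }
}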

Future Areas for Research

We plan to research the actual parsing and retrieval of the HTML with an eye toward seeing whether any of the parsing code can be manipulated to inject content into the client or the Web version. For things like images and videos, there is potential for exploiting the underlying native code.

Exposure of Motor Vehicle Registration Data via Auto Insurance Quotes

An interesting item that we ran into recently: most US-based auto insurance companies bulk-purchase vehicle registration and driver records from state motor vehicle departments. This information is used for two separate purposes:

  • To adjust rates based on accident history, both for the state as a whole and for individual policy holders
  • For marketing purposes – either to proactively send marketing materials/ads, OR to make quotes easier.

The second item is interesting since it exposes vehicle registration information via web portals and mobile apps: a malicious attacker can obtain it by going through the online or mobile auto insurance quote process. It is not clear how sensitive or private such data is, or what the legal implications of obtaining it this way would be.

YAA: An Obscure MacOS Compressed File Format

Summary

MacOS introduced a new compression archive format in High Sierra (v10.13) called “YAA”. Because this format is new, it may not be supported correctly by security tools, thus allowing malware authors a way to bypass existing controls in such tools. It is recommended that vendors add support for this format to their tools. Users should NOT accept or open YAA archives received from unknown sources.

Details

Early this year we ran across a new file format specific to MacOS: YAA. It was apparently originally released in the fall of 2017 as part of MacOS High Sierra (v10.13) (see this article from MacKungFu). As per this forum post, it is used as part of compressing the “Content” section of signed executables for MacOS. According to another post, this is an LZFSE tool. This should not be confused with an older tool with the same name called “Yet Another Assembler”.

It is supported via a command line utility called “yaa” as well as the Archive Utility in the GUI – but in the GUI you can only uncompress the archive. For details regarding the CLI, run “yaa” or “man yaa” (also see here).

Our previous research around novel file formats found issues in how Google Chrome (here and here) and various anti-virus vendors for MacOS handle compressed files (here). Specifically, handling a novel compressed archive format is something that existing antivirus software, browsers and other utilities may not be doing well since they are not aware of the format. On the other hand, malware authors can easily package malware inside a compressed archive, which will be decompressed by a user via double-clicking the file in Finder. This results from the fact that the Archive Utility supports decompression of a lot more file formats than vendors may be aware of.

For YAA, we did some testing back in January against a handful of security tools and found some to be vulnerable. Due to lack of resources, we didn’t pursue a more extensive investigation into additional tools, but did report whichever issues were found back to the relevant vendors. For the same reason, we are publishing this post to increase awareness of this issue.

To replicate this issue on a particular tool, do the following (a scripted version of these steps is sketched after the list):

  1. Download the EICAR test file from here into a folder.
  2. Create a YAA file archive as follows:
    • yaa -d folder -o archive.yaa
  3. Test the resulting “archive.yaa” with the security tool of your choice.
  4. To uncompress the archive, double click on it in Finder.
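The scripted version mentioned above is sketched below. It assumes the EICAR test file has already been downloaded into a folder named "eicar-test", that the "yaa" CLI is present (macOS 10.13 and later), and that the invocation from step 2 is correct for your macOS version (check "man yaa"):

import java.io.File;

public class YaaArchiveTest {
    public static void main(String[] args) throws Exception {
        // Step 2: create a YAA archive of the folder containing the EICAR test file,
        // using the same invocation as in the steps above.
        Process p = new ProcessBuilder("yaa", "-d", "eicar-test", "-o", "archive.yaa")
                .inheritIO()
                .start();
        int exitCode = p.waitFor();
        if (exitCode != 0) {
            System.err.println("yaa exited with code " + exitCode);
            return;
        }
        // Step 3: hand the resulting archive to the security tool being tested.
        File archive = new File("archive.yaa");
        System.out.println("Created " + archive.getAbsolutePath()
                + " (" + archive.length() + " bytes); scan this file with the tool under test.");
    }
}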

Vendor Responses

This section contains responses we received for this issue from specific vendors:

  • Chromium: Safe browsing pings not sent back for YAA files (issue # 1039128)
    • “The default behavior has been set to FULL_PING, so unknown types (such as YAA) should now be sending pings to Safe Browsing.” (also see our blog post)
  • Google/Gmail: YAA archives are not scanned for malware by GMail (issue # 147190746)
    • “Not all file formats are supported and this is expected.”

Most A/V vendors responded either with the fact that once the archive is expanded its contents will be scanned, or that they will add this as a feature in the future.

Credits

Written by Y. Shafranovich

Timeline

The majority of this research and responsible disclosure was done in January 2020. This blog post was published on June 14th, 2020.

Exfiltrating data from remote access services via video and sound

Given the current situation, many of us are now working remotely all the time. Many such arrangements are facilitated via tools like Citrix, RDP, VNC, LogMeIn, etc. We have been researching possible ways to exfiltrate data via such arrangements. Here are some obvious choices:

  • File connections – if enabled
  • Remote USB connections – if enabled
  • Remote printing connections – if enabled
  • Exfiltrating via email or Internet connections at the remote desktop level

Most of these have obvious controls that can be activated by an administrator, which would leave attackers with very few channels. The two we were particularly interested in are video and sound, since the user can view their remote screen and many tools allow hearing sound from the remote desktop.

For exfiltration of data via video, we originally considered encoding data with base-64 using an encoding tool such as the Windows certutil CLI command, then doing a screen capture on the host and running some sort of OCR against it, such as Tesseract. However, we ran across a much better tool from Pen Test Partners called PTP-RAT which flashes pixel colour values on the screen to transfer information (see their blog post and GitHub repo).
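For reference, a rough sketch of the base-64-plus-OCR approach we originally considered (not PTP-RAT) appears below. It assumes the tess4j Java binding for Tesseract, captures the entire primary screen where the remote desktop window is shown, and would need a restricted alphabet plus error correction in practice, since OCR is lossy:

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.util.Base64;
import net.sourceforge.tess4j.Tesseract;

public class ScreenOcrExfilSketch {
    public static void main(String[] args) throws Exception {
        // Capture the full primary screen (where the remote desktop window is displayed).
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        BufferedImage capture = new Robot().createScreenCapture(screen);

        // OCR the capture; the remote desktop is assumed to be displaying base-64 text
        // (e.g. produced with "certutil -encode" on the remote machine).
        Tesseract tesseract = new Tesseract();
        tesseract.setDatapath("/usr/share/tesseract-ocr/4.00/tessdata"); // adjust to local install
        String text = tesseract.doOCR(capture).replaceAll("\\s+", "");

        // Decode the recovered base-64 back into the exfiltrated bytes.
        // Real-world use would need header stripping and error correction, since OCR is lossy.
        byte[] data = Base64.getDecoder().decode(text);
        System.out.println("Recovered " + data.length + " bytes");
    }
}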

For exfiltration of data via sound, we originally considered using a tool that would modulate the data into sound, the way modems did back in the 1980s/1990s. However, we ran across a much better suited tool from Roman Zayde called amodem which is able to do this. While the tool is designed for exfiltrating data across a physical air gap, it should work the same way on a remote desktop by converting the data into sound via the soundcard, capturing it back on the host, and then decoding it.

P.S. For extra brownie points, you can also try enabling the webcam and microphone on the host, and transfer data from the host back to the remote desktop using the same mechanisms.

FLAG_SECURE and Accessibility Services (a11y)

About four years ago, we first blogged about FLAG_SECURE in Android applications and how it can be used to protect sensitive screens from being captured in screenshots. One of the reasons that prompted our original blog post is the fact that screenshot protection was never the intended purpose of FLAG_SECURE, but rather a side effect of its DRM origins.
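For context, enabling the flag requires a single call on the sensitive activity's window; the layout name below is a placeholder:

import android.app.Activity;
import android.os.Bundle;
import android.view.WindowManager;

public class SensitiveActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Mark the window as secure: it is excluded from screenshots, from capture
        // via the MediaProjection API, and from display on non-secure displays.
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_SECURE,
                WindowManager.LayoutParams.FLAG_SECURE);
        setContentView(R.layout.activity_sensitive); // placeholder layout
    }
}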

Since then, we have reported the lack of FLAG_SECURE as an issue to multiple Android app vendors. Our most recent blog post was about the fact that Google’s 2FA application (“Google Authenticator”) lacked FLAG_SECURE. We disclosed that specific issue because of recent reports of malware attacking such applications using Android’s accessibility services (a11y). Yanick Fratantonio (who has a lot more experience with these services) went through the trouble of putting together a POC, as well as writing a truly excellent blog post about how FLAG_SECURE does NOT protect against attacks using these services.

After reviewing both our research and disclosures published by others, there seems to have been an assumption going around for a while that FLAG_SECURE should protect against such attacks (a11y), but no actual data to back this up. To be clear, FLAG_SECURE should still protect against malware using the MediaProjection API to record video or take screenshots, but it DOES NOT protect against attacks using Android’s accessibility services (a11y).

We amended our earlier posts to make this distinction clear and would like to thank Yanick Fratantonio for bringing this up.

Another Tale of Personal Data Harvesting: Alsid, Lusha.co and Simpler Apps

After reading a recent post by Antoine Neuenschwander, we wanted to share a similar experience from one of our consultants regarding the sale and use of their personal data, featuring many of the same players as Antoine’s post.

Part 1 – The Sales Call from Alsid

One of the many scourges of modern work is the fact that salespeople try to reach you all the time. In this particular case, our consultant was contacted on their work phone, via email, and on LinkedIn by someone from a French cybersecurity company called Alsid. Then, to their surprise, a call from France came in on their personal, US-based cell phone and left a voicemail. That got them curious – how did this company get a hold of a personal cell phone number? Since the company is French, it is subject to GDPR, so they asked for a copy of the data the company held on them. The company did provide a fairly extensive GDPR response, but the cell phone number wasn’t in it! After follow-up questioning, they eventually dug it out – their salesperson got it from Lusha.co:

[Image: Alsid’s follow-up response indicating the cell phone number was sourced from Lusha.co]

There are several privacy concerns here:

  • Why was the number not provided in the initial GDPR request?
  • Did the caller check to make sure the number wasn’t a wireless phone AND wasn’t on the Do Not Call List (since it is illegal to place telemarketing calls to such numbers in the US)?

Part 2 – The B2B Contact Enrichment Tool – Lusha.co

Lusha.co provides a set of plugins that can enrich LinkedIn profiles – so if you are looking at someone’s profile, they can supplement that with the person’s phone number or email from other sources. The website, marketing materials and privacy policy are pretty explicit about this:

[Image: Lusha.co marketing page describing its contact enrichment service]

And (emphasis added):

Our Services are designed to help Users and vendors (e.g. HR professionals, B2B partners, sales platforms) validate and verify contact information and to find business profiles they seek in order to interact with relevant Contacts (as defined below), through access to business profiles retained in Lusha’s database (“Lusha Database”). 

A data request sent to Lusha.co resulted in the response below. Note the language around Simpler – specifically the last paragraph. It seems that Simpler provides mobile apps to be used for “verification”, then those apps slurp up the user’s contacts and share them back with Lusha.co:

[Image: Lusha.co’s data request response, including the language about Simpler]

An excerpt of the text appears below:

Simpler also offers its users the opportunity to contribute to a collaborative security effort, meant to assist in authenticating the identifying attributes of an individual. This effort can assist in establishing a trusted channel of communication for online and offline interactions.

If a Simpler user consents to contribute to this effort, basic contact information (name and phone number) found within such user’s contacts may be shared with Lusha, which implements the security solution.

If you dig deeper into the Lusha.co materials, a lot of similar language appears there as well. Instead of a discussion of B2B contact data, it suddenly becomes a matter of “security”, “trust” and a “collaborative security effort”. When you look at their data page (emphasis added), note that the language quickly changes from “lead enrichment” or “B2B data” to a “collaborative security effort”:

Lusha’s core purpose is to make online engagement safe and efficient. In today’s fast-paced and multi-layered world, one of the main challenges to online users is trust. A major risk in online interactions is the risk of encountering fraud, whether by phishing attempts or by identity theft. Widespread fraud can lead to the loss of customer trust, extra costs of time and money required to manage fraud incidents, damages to the reputation of individuals and institutions, possible legal costs and many more negative outcomes.

Lusha’s unique solution is based on a collaborative security effort, effectively utilizing information to verify online identities. The Lusha service provides its users with valuable insights and assists in authenticating the identity of individuals in the online sphere.

There are several privacy concerns here:

  • Why is a tool that claims to provide B2B contact information presented as “a security solution”?
  • How can “security” and “trust” justify essentially taking users’ address books from their mobile devices and selling that data for marketing?

Part 3 – The Mobile Address Books from Simpler

The link provided by Lusha actually leads to the Google Play Store, which lists two apps – Simpler Caller ID and Dialer – each with more than 5 million installations. The link on the store listings leads to the company’s website where a third app is listed – EasyBackup, a contacts backup manager (iOS only). That one is owned by a different company called “Top Floor”, which also makes an app called “Cleaner Pro” (for iOS) that claims to remove duplicate contacts. Mailing addresses for both companies go to co-working spaces: one in Brooklyn, NY and the other in Los Angeles, CA.

Here are the apps, as they appear in the store listings: [Images: Google Play / App Store listings for the Simpler apps]

The Simpler Apps website still lists all of these apps as being theirs:

[Image: the Simpler Apps website listing the apps]

A request was sent to Simpler for a copy of the data they collected, and the following response was received – no data. A follow-up request was sent and a response is still pending:

[Image: Simpler’s response to the data request stating that no data was found]

At this point we are at a dead end with Simpler, but further research reveals the following nuggets buried in the terms of use and privacy policy:

By using our Services, you acknowledge and agree that we will share contact information with other users our affiliates and business partners for the purpose of ensuring that their current contact information is up to date. You acknowledge that you have the rights and permissions required to allow us to share such contact information.

And:

We do not share your Personal Information with third parties except:

  • The Services are based on contact management and Caller ID (if applicable), therefore, we will use your number and contact for this purpose. This disclosure is also important to individuals that are not our users which may be identified by the caller ID. We enable an easy opt-out in the event you no longer wish to be identified, for more information see the User Right section below.

  • We may also share Personal Information with (i) our subsidiaries and business partners if needed; (ii) subcontractors and other third party service providers (e.g. payment processors, server, cloud management, etc.); and (iii) any potential purchasers or investors of the Company.

There are several privacy concerns here:

  • Why is a tool that claims to help keep contact information up to date selling that data for marketing?
  • What is the connection between Simpler and Lusha.co?

Part 4 – Tying It All Together: Lusha.co and Simpler

At this point, it is fairly clear what happened – a bunch of mobile apps slurp up contacts from their users’ address books and provide them to Lusha.co to be used by marketers and recruiters. This is being presented as a “collaborative security solution” while it is essentially just selling personal data, albeit with an opt-out available. What is frustrating is that regular users who are friends with the people being targeted are installing these apps thinking they are just simple utilities, while all of their contacts are actually being sold behind their backs. At the same time, Lusha.co is claiming to be a security solution when it clearly is not.

[Diagram: flow of contact data from users’ address books via the Simpler apps to Lusha.co and its customers]

But, there is more … a set of simple Google searches shows that one of the co-founders of Lusha.co (“Yoni Tserruya”) is actually the original app developer for all four of these apps (here, here, here and here). Furthermore, if you download the Android apps provided by Simpler and look at the signing keys via jadx, they are issued to the same person as seen below:

[Images: signing certificates of the Simpler Android apps, both issued to the same person]

Now, these apps are being published by companies other than Lusha.co – Simpler Apps and Top Floor – but are they subsidiaries of, or otherwise related to, Lusha.co? If they are, then the privacy policies seem to allow them to transfer data from these apps back to Lusha.co. Overall, the arrangement may be legal, but it is perhaps unethical.

Bottom line: this example highlights yet another way personal data is harvested, sold and re-used for commercial purposes.

Vendor Responses

We reached out for comment to all of the companies mentioned in the article and will update the blog post with feedback or comments.

Mapping the Jan 2020 Java Security Patches Back to the Original Source Code Changes

(UPDATE [01/16/2020]: Added clarification as to the source of the information for the source commits)

Earlier today, Oracle released patches for various flavors of Java. At the same time, the open source version of Java – OpenJDK – released their patches as well. The problem is that it is virtually impossible to figure out what the actual bugs are. Here are partial screenshots from Oracle’s critical patch update advisory, its text version, and the OpenJDK advisory:

[Images: excerpts from Oracle’s critical patch update advisory (graphical and text versions) and from the OpenJDK advisory]

The actual CVE descriptions are also cryptic – here is an example of one (CVE-2020-2583):

Vulnerability in the Java SE, Java SE Embedded product of Oracle Java SE (component: Serialization). Supported versions that are affected are Java SE: 7u241, 8u231, 11.0.5 and 13.0.1; Java SE Embedded: 8u231. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise Java SE, Java SE Embedded. Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of Java SE, Java SE Embedded.

Note: This vulnerability applies to Java deployments, typically in clients running sandboxed Java Web Start applications or sandboxed Java applets (in Java SE 8), that load and run untrusted code (e.g., code that comes from the internet) and rely on the Java sandbox for security. This vulnerability can also be exploited by using APIs in the specified Component, e.g., through a web service which supplies data to the APIs.

CVSS v3.0 Base Score 3.7 (Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L). 

The problem is that not enough information is provided to make an educated decision, especially in cases where there are limited resources for patching. How do we figure out what the vulnerability actually is?

It is possible to derive the source code changes from the Debian package updates and the OpenJDK release notes, which map the CVEs to the actual source code changesets in the OpenJDK repository (HG and GitHub). For example, as seen below, “CVE-2020-2583” links to source code change # 8224909:

[Image: OpenJDK release notes mapping CVE-2020-2583 to changeset # 8224909]

Looking at the HG repository for OpenJDK and the GitHub mirror, it is pretty easy to locate the corresponding changeset:

[Image: changeset # 8224909 in the OpenJDK HG repository]

[Image: the corresponding commit in the GitHub mirror]

And the actual changes can now be examined in detail:

[Images: the source code diff for changeset # 8224909]

Based on those sources, we can now reconstruct the actual bugs and their source code changes for JDK 11 (Debian message here, based on the OpenJDK release notes here):

CVE ID        | Component                       | CVSSv3 | Commit                | Description
CVE-2020-2604 | core-libs/java.io:serialization | 7.1    | 8231422 (HG / GitHub) | Better serial filter handling
CVE-2020-2601 | security-libs/java.security     | 6.8    | 8229951 (HG / GitHub) | Better Ticket Granting Services
CVE-2020-2655 | security-libs/javax.net.ssl     | 4.8    | 8231780 (HG / GitHub) | Better TLS messaging support
CVE-2020-2593 | core-libs/java.net              | 4.8    | 8228548 (HG / GitHub) | Normalize normalization for all
CVE-2020-2654 | security-libs/java.security     | 3.7    | 8234037 (HG / GitHub) | Improve Object Identifier Processing
CVE-2020-2590 | security-libs/org.ietf.jgss     | 3.7    | 8226352 (HG / GitHub) | Improve Kerberos interop capabilities
CVE-2020-2583 | client-libs/java.beans          | 3.7    | 8224909 (HG / GitHub) | Unlink Set of LinkedHashSets

Additionally, for CVE-2020-2604, the Oracle release notes include this (the JDK bug # corresponds to the commit # above):

[Image: excerpt from the Oracle release notes for CVE-2020-2604]
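As an aside, the CVE-to-bug mapping shown in the table above can also be extracted from a locally saved copy of the release notes with a simple heuristic; the sketch below pairs each CVE identifier with the nearest seven-digit JDK bug number that follows it (the file name and the 300-character window are our assumptions):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CveToBugMapper {
    public static void main(String[] args) throws Exception {
        // Read a locally saved copy of the OpenJDK release notes (or the Debian message).
        String text = new String(Files.readAllBytes(
                Paths.get(args.length > 0 ? args[0] : "release-notes.html")));

        // Heuristic: a CVE identifier followed, within a short window, by a
        // seven-digit JDK bug/changeset number starting with 8 (e.g. 8224909).
        Pattern pair = Pattern.compile("(CVE-\\d{4}-\\d{4,7})[\\s\\S]{0,300}?\\b(8\\d{6})\\b");
        Matcher m = pair.matcher(text);
        while (m.find()) {
            System.out.println(m.group(1) + " -> changeset/bug # " + m.group(2));
        }
    }
}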

Credits

Written by Y. Shafranovich

Behavior Change in Chrome’s Download Protection Service Affecting Privacy

The Download Protection service in Chrome protects users from downloading malware/viruses by blocking files that are known to be unsafe according to Google’s Safe Browsing service. The way this works is that Chrome sends some basic information about the file being downloaded, including its checksum, to the Safe Browsing service and receives a verdict.
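The checksum in question shows up as the "digests.sha256" value in the ping captured later in this post; for reference, the same digest can be computed locally as follows (the default file name is just an example):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class Sha256Example {
    public static void main(String[] args) throws Exception {
        // Compute the SHA-256 digest of a downloaded file, e.g. eicar_com.zip,
        // and print it in the same uppercase hex form seen in the ping below.
        byte[] data = Files.readAllBytes(Paths.get(args.length > 0 ? args[0] : "eicar_com.zip"));
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02X", b));
        }
        System.out.println(hex);
    }
}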

Historically, not all file types were checked – instead, a list of extensions to be checked or ignored is defined in the Chrome source code and only those extensions would be checked. Any extensions not on the list were ignored. These lists can be found in the file “download_file_types.asciipb”.

Apparently, this has changed recently. As per information provided in a recent bug report (https://crbug.com/1039128), Chrome now checks ALL extensions except for the ones on the whitelist. That means that when you download almost any file, the checksum and some other information about the file are sent back to Google. It is not clear how this impacts privacy. Since other browsers use this API, it may impact them as well.

You can disable this behavior by disabling the Safe Browsing API entirely but that will also disable checks for malicious URLs. As of now, there doesn’t seem to be an option to disable download protection separately from safe browsing for URLs. The Safe Browsing API does include an option to download a list of dangerous files locally but it doesn’t appear that Chrome uses it yet.

You can see this information by typing chrome://safe-browsing into Chrome’s URL bar and checking the “Download Protection” tab.

Example of an MRC file being downloaded from the Internet Archive:

[Image: the Download Protection entry for an MRC file downloaded from the Internet Archive]

For some files (primarily ZIPs) a hash is sent as well. You can see this when trying to download the EICAR test file (a standard test file for antivirus software):

{
"archive_directory_count": 0,
"archive_file_count": 1,
"archive_valid": true,
"archived_binary": [ {
"digests": {
"sha256": "2546DCFFC5AD854D4DDC64FBF056871CD5A00F2471CB7A5BFD4AC23B6E9EEDAD"
},
"download_type": 6,
"file_basename": "eicar_com.zip",
"length": 184
} ],
"download_type": 5,
"file_basename": "eicarcom2.zip",
"length": 308,
"referrer_chain": [ {
"ip_addresses": [ "89.238.73.97" ],
"is_retargeting": false,
"is_subframe_referrer_url_removed": false,
"is_subframe_url_removed": false,
"main_frame_url": "",
"maybe_launched_by_external_application": false,
"navigation_initiation": "RENDERER_INITIATED_WITH_USER_GESTURE",
"navigation_time_msec": 1.628252679503e+12,
"referrer_main_frame_url": "",
"referrer_url": "https://www.eicar.org/?page_id=3950",
"server_redirect_chain": [ ],
"type": "EVENT_URL",
"url": "https://secure.eicar.org/eicarcom2.zip"
}, {
"ip_addresses": [ "89.238.73.97" ],
"is_retargeting": false,
"is_subframe_referrer_url_removed": false,
"is_subframe_url_removed": false,
"main_frame_url": "",
"maybe_launched_by_external_application": false,
"navigation_initiation": "RENDERER_INITIATED_WITH_USER_GESTURE",
"navigation_time_msec": 1.628252674199e+12,
"referrer_main_frame_url": "",
"referrer_url": "https://www.google.com/search?q=antivirustst+file&oq=antivirustst+file",
"server_redirect_chain": [ ],
"type": "LANDING_PAGE",
"url": "https://www.eicar.org/?page_id=3950"
}, {
"ip_addresses": [ "142.250.65.196" ],
"is_retargeting": false,
"is_subframe_referrer_url_removed": false,
"is_subframe_url_removed": false,
"main_frame_url": "",
"maybe_launched_by_external_application": false,
"navigation_initiation": "BROWSER_INITIATED",
"navigation_time_msec": 1.62825267089e+12,
"referrer_main_frame_url": "",
"referrer_url": "",
"server_redirect_chain": [ ],
"type": "LANDING_REFERRER",
"url": "https://www.google.com/search?q=antivirustst+file&oq=antivirustst+file"
} ],
"request_ap_verdicts": false,
"url": "https://secure.eicar.org/eicarcom2.zip"
}

Credits

Written by Y. Shafranovich.