Behind the Scenes: Yarix's Approach to Mobile Security
TLDR: This article highlights the Yarix Red Team’s daily challenges and internal work done to improve the quality of our outcomes. We will explore the topic by taking the Mobile Security field as a case: we will start with the common reporting problems every red team faces day after day, as well as those arising from the gaps in the industry standards (e.g. OWASP, MITRE, etc.), to finish with what lies behind our Mobile Security assessment outcomes. Although the start and the end may sound totally unrelated, they are interconnected through the new version of the OWASP Mobile Application Security project.
Introduction
Many teams all over the world are engaged in ethical hacking activities, investing a large amount of time in projects such as red teaming, penetration testing, security assessments, bug bounty, and security research. These efforts are constantly followed by detailed reports that provide an in-depth overview of the work done and the vulnerabilities identified. The global community has consistently aimed to enhance reporting, ensuring that the outcome is clear and suitable for its intended audience.
Security teams must consider not only the technical aspects of their outcomes while reporting, but also the descriptive and theoretical elements. From the experience of my friends and colleagues, as well as my own, security teams often build and develop their internal knowledge base over the years to shape the uniqueness and distinctiveness of their reporting. Building this internal knowledge base requires dedication to ensure it remains valuable, relevant over time, and, more crucially, a core asset within the team. Keeping up with the constant updates and breakthroughs in the ethical hacking world and in security standards, like MITRE and OWASP, is essential to achieve these objectives.
Recently, OWASP has made huge advancements in Mobile Security, releasing new updates and standards to improve security practices. The community's contributions have been amazing, and I cannot express my gratitude enough.
The purpose of this article is to highlight how Yarix addresses these aspects, especially in Mobile Security, sharing the behind-the-scenes approaches that help to consistently improve the Yarix Red Team's (YRT) outcomes. But first, we will be looking at the OWASP Mobile Application Security project's latest 2024 update.
OWASP Mobile Application Security Refactoring 2024
Before delving into the topic, it is useful to look at the evolution of the OWASP Mobile Application Security (MAS) project to better understand its strengths and limitations throughout time.
Anyone in mobile security knows the OWASP MAS project is a must-read and a valuable resource. In my opinion, no other project covers all the technical security concerns of a mobile application as well as the OWASP one. It is not only well-documented but also exceptionally organized - at least now.
Over the years, this open-source project has provided comprehensive information on mobile app security, addressing storage, networking, platform usage, code development, resilience, and more. Importantly, the project has diversified into what I would call subprojects, such as the Mobile Application Security Verification Standard (MASVS), created in 2016, and the Mobile Security Testing Guide (MSTG), released in 2019. The progression followed the conventional OWASP approach: starting with more theoretical and abstract documents (MASVS) and moving to more practical and technical ones (MSTG).
Despite ongoing updates and revisions, the industry standards have faced challenges and shortcomings over the years. As a result, security teams often couldn't rely solely on the common standards, frameworks, and tools - not limited to OWASP but also including MITRE, CVSS, CWE, and others in the industry. They had to fill certain gaps using their own knowledge, expertise, or the limited information available online.
If you have ever been part of a security team, you might have encountered situations where calculating the CVSS score was difficult because, for example, the impact was unclear, or where you couldn't exploit a vulnerability but still felt it needed to be reported. PortSwigger has covered failures of the CVSS system highlighted by JFrog. Likewise, you might have struggled to fit a vulnerability neatly into a specific category or CWE. Recently, the well-known web security expert Tib3rius discussed how the evolution of the OWASP Top Ten over the years has created confusion on specific topics.
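To make the scoring mechanics concrete, here is a minimal sketch of the CVSS v3.1 base score calculation for scope-unchanged vectors, using the metric weights from the FIRST specification. The rounding helper is simplified compared to the spec's exact Roundup definition, and the example vector is purely illustrative.

```python
import math

# CVSS v3.1 metric weights (FIRST specification), scope unchanged only.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # weights for unchanged scope
    "UI": {"N": 0.85, "R": 0.62},
    "C":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "I":  {"H": 0.56, "L": 0.22, "N": 0.0},
    "A":  {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    """Simplified CVSS 'round up to one decimal' helper."""
    return math.ceil(x * 10) / 10

def base_score(vector: dict) -> float:
    """Compute the CVSS v3.1 base score for a scope-unchanged vector."""
    w = {metric: WEIGHTS[metric][value] for metric, value in vector.items()}
    iss = 1 - (1 - w["C"]) * (1 - w["I"]) * (1 - w["A"])
    impact = 6.42 * iss
    exploitability = 8.22 * w["AV"] * w["AC"] * w["PR"] * w["UI"]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score({"AV": "N", "AC": "L", "PR": "N", "UI": "N",
                  "C": "H", "I": "H", "A": "H"}))  # 9.8
```

The formula itself is deterministic; the hard part described above - deciding which metric values actually apply when the impact is unclear - is exactly what no amount of code can settle.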
These are common daily challenges, and there's no single right way to handle them. The point here isn't solving them but acknowledging that they're part of the job. We are often asked to fit vulnerabilities into specific categories even when they are not an appropriate match. This isn't only about the constraints of the standards, but also about how we use them, especially in Vulnerability Management. Moreover, dealing with the theoretical and "managerial" aspects of vulnerabilities can be tedious and is therefore sometimes done inadequately, for lack of willingness to do it properly.
Addressing the shortcomings of previous OWASP project versions has led to new developments like the Mobile Application Security Weakness Enumeration (MASWE). This project specifically replaces the poor list of mobile vulnerabilities in the CWE (MITRE) database - a godforsaken place.
Today, the OWASP Mobile Security Application project provides developers and security testers with a wealth of invaluable resources:
- MASWE allows the assignment of one or more CWE-based values to mobile application vulnerabilities that were previously non-existent or poorly represented.
- MASVS provides a framework for classifying vulnerabilities into specific areas of mobile application security, refined over the years to deliver a high standard of quality.
- MSTG offers testing guidance on many mobile application vulnerabilities, a list of available tools, steps for reproducing proofs of concept, demos, testing techniques, and more.
Even though this project has evolved impressively, it is not suitable as the sole foundation of your mobile security assessments. This is the reason for the next chapters.
Yarix Methodology: A Unique Approach
At Yarix, we are always trying to improve our internal knowledge base to address the gaps highlighted above. We aim for better quality, flexibility, and critical thinking. However, like any team, we face challenges along the way.
Creating a knowledge base is a continuous process that requires attention, revision, and dedication over time. It is often challenging to achieve the desired level of quality because of time constraints. Furthermore, short deadlines, low priority, and a lack of interest in specific tasks can complicate the process and make it harder to accomplish our objectives.
Despite these challenges, the work carried out by the Yarix Red Team over the years has allowed us to develop an internal knowledge base that is continuously evolving in terms of quality. To overcome these problems, we have started to develop and adopt a methodology - more precisely, a set of criteria and rules. Merely defining such a methodology is not heavily affected by time constraints (you do it once or twice a year); however, its implementation and application often are. For this reason, we have created methods, strategies, and tools (AI-based whenever possible and efficient) to support this effort.
As Roberto Chiodi highlighted in his article, we place great importance on the quality of our reports and the accuracy of the information we provide. Thanks to the latest OWASP refactoring on Mobile Application Security, we’ve been able to update and improve our internal knowledge base, completely reshaping the way we define vulnerabilities.
Goals and Requirements
The definition of a vulnerability is a crucial aspect that requires particular attention. Over the years, this process has been increasingly refined through the team's expertise and the introduction of new or updated security standards. We identified some key objectives to achieve when defining and implementing an internal vulnerability database:
- Flexibility: the database structure should be flexible enough to adapt over time, as changes can often be numerous and unexpected.
- Simplicity: the content must present the theoretical part of the vulnerability clearly and straightforwardly, so that it is accessible to people unfamiliar with it. We often assume knowledge that a report's audience (stakeholders and developers) may not have. Many times, vulnerability descriptions are too vague, overly general, or even too specific - and, in rare cases, entirely "incorrect" compared to what is reported. At Yarix, we noticed this and developed a strategy to overcome the problem (no spoilers yet).
- In-Depth Accuracy: the specifics should include an in-depth explanation of the steps required to replicate the vulnerability, as well as all its technical prerequisites. This is critical since the proof of concept (PoC) is also intended to be read and replicated by your coworkers in the future (during a recheck, for example), in addition to being read and reproduced by the developer soon after the security team completes their work. It is not always possible “to template” the specifics because they are scenario-related and need to be written while reporting. However, you can create a methodology to structure them.
- Variety: The database content should be able to distinguish between vulnerabilities unambiguously: many vulnerabilities have some sort of variation that could represent atomic instances of the security issue. A perfect example is Cross-Site Scripting (XSS), which has multiple variants such as Self XSS, Reflected, Stored, DOM-based, Universal, etc. Having such a database structure offers an easy selection of the vulnerability’s variant, additionally allowing us to report a general description inherent to the main security issue and a specific one related to its variant. Furthermore, having different records for a vulnerability’s category prevents duplicates, which is very common when the team works on the same database.
- Versioning: The database should be versioned: a vulnerability disclosed several years ago may still have the same description, but other elements - such as category, risk, standard values (e.g. OWASP), or other characteristics - may have changed. Let's take the OWASP Top Ten as an example: a new version is released every four years, which means a vulnerability may be reclassified. This is only one common example, but there are many more: a risk changes because of newer default built-in protections, new weaknesses (CWE) related to a vulnerability are released, and so on.
- Consistency: Each record should be in line with the others, especially in terms of terminology. Applying specific guidelines for naming vulnerabilities and formatting descriptions, remediations, proofs of concept, risks, and other elements significantly improves the outcome, making it more uniform and consistent. The final report benefits from this consistency, but so do the internal and external studies and research the team shares.
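The requirements above can be sketched as a record structure. This is a minimal illustration, not our actual schema: every field name here is hypothetical, and the point is only that variants and versions are first-class, so XSS and its Reflected/Stored/DOM-based instances can live under one parent entry without duplication.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative record for an internal vulnerability database.
# Field names are hypothetical; each one maps to a requirement above.
@dataclass
class VulnerabilityRecord:
    name: str                        # standardized name (Consistency)
    category: str                    # e.g. an OWASP/MASVS area
    variant_of: Optional[str] = None # parent entry, if a variant (Variety)
    description: str = ""            # plain-language theory part (Simplicity)
    default_risk: str = "medium"     # typical risk, adjustable per context
    standards: dict = field(default_factory=dict)  # CWE/MASVS/... (Flexibility)
    version: int = 1                 # bumped when classification changes (Versioning)

xss = VulnerabilityRecord(name="Cross-Site Scripting (XSS)", category="Injection")
stored = VulnerabilityRecord(name="Stored XSS", category="Injection",
                             variant_of=xss.name, version=2)
print(stored.variant_of)  # Cross-Site Scripting (XSS)
```

Keeping the variant pointer and the version counter in the record itself is what lets the database prevent duplicates and track reclassifications over time.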
A case study: the Mobile Security Vulnerability Database
In this chapter, we will try to take you through the creation and reporting process of vulnerabilities in Yarix by looking at several practical examples.
Attribute Assignment
The MASVS refactoring mentioned above (version 2) has raised the quality of the standard by removing some redundancies and gaps of previous versions. Its accuracy reflects a clear conceptual separation of the areas related to Mobile Application Security. Thanks to this clean fragmentation, it is straightforward to create an internal database that can represent and efficiently group all vulnerabilities in the mobile field. The fragmentation into macro-areas (MASVS-AUTH, MASVS-STORAGE, etc.) allows grouping atomic vulnerabilities (or better, weaknesses) into different areas, making it possible to indicate precisely where a vulnerability falls within the security scope of a mobile application.
As mentioned in the previous chapter, we tend to map a single vulnerability to a specific category. This can be a stretch, so it is more correct to use a flexible mapping to achieve better coherence with the standards. A concrete example is the Insecure Deep Links vulnerability (other examples in the figure below), which concerns deep link implementations that receive input from external sources. If these inputs are not properly validated, an attacker can crash the application, steal sensitive information, or even achieve something more impactful.
Trying to associate this vulnerability with a single MASVS macro-area would be forced, because the vulnerability encloses weaknesses included in MASVS-PLATFORM-1 (it concerns deep links, an IPC component), MASVS-STORAGE-2 (a data leak could occur), and MASVS-CODE-4 (the input received is not validated and sanitized).
The same concept might and should be applied to other standard properties, such as CWE, CAPEC, MASWE, and MSTG-TEST-ID. Some pre-made databases (such as the Issues Definitions in Burp Suite) already correlate each vulnerability with several CWEs.
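The flexible mapping described above boils down to a many-to-many structure. In this sketch, the MASVS categories mirror the Insecure Deep Links example from the text, while the CWE cross-reference and the helper function are illustrative additions of mine, not part of any official mapping.

```python
# Many-to-many mapping: one vulnerability can touch several MASVS areas
# and several CWEs at once, instead of being forced into one category.
MAPPINGS = {
    "Insecure Deep Links": {
        "masvs": ["MASVS-PLATFORM-1",   # deep links are an IPC component
                  "MASVS-STORAGE-2",    # a data leak could occur
                  "MASVS-CODE-4"],      # input is not validated/sanitized
        "cwe": ["CWE-20"],              # Improper Input Validation (illustrative)
    },
}

def areas_for(vuln: str) -> list:
    """Return every MASVS area a vulnerability falls under."""
    return MAPPINGS.get(vuln, {}).get("masvs", [])

print(areas_for("Insecure Deep Links"))
```

The same structure extends naturally to CAPEC, MASWE, and MSTG-TEST-ID values: each standard simply becomes another list under the vulnerability's entry.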
Assigning risk to a vulnerability is always a fundamental and critical process. It delineates the follow-up steps to take once the security assessment ends. There are many strategies for risk assignment:
- Common Vulnerability Scoring System (CVSS)
- Common Weakness Scoring System (CWSS)
- Common Weakness Risk Analysis Framework (CWRAF)
- OWASP Risk Rating Methodology
- Microsoft DREAD
- Classic Risk Matrix
The strategy typically varies from one team to another, but it can also be dictated by the customer. At Yarix, except in this last case, we prefer a strategy targeted to the application and client domain, still founded on the basic notions and rules of the standards listed above. Each vulnerability can be assigned a typical risk, but it may vary depending on the context in which it is spotted. To achieve an accurate and consistent risk assignment methodology in line with the target application's context, we maintain a list of the requirements and situations that influence the assignment. Take, for example, the Systemwide General Pasteboard Used for Sensitive Data vulnerability. In the iOS context, this vulnerability is often assigned a low risk due to the challenging attack scenario required, especially since iOS versions >= 14 always show a warning to the user when an application copies data from the Pasteboard. However, the risk assignment could vary when the mobile application supports iOS versions < 14 (no warning is shown), or even iOS < 9, where the copy could also happen in the background!
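The pasteboard example above can be sketched as a context-aware risk rule: the baseline risk is low, but it rises when the app still supports iOS versions without the paste warning. The version thresholds follow the article; the three-level rating scale and the function itself are illustrative, not our actual risk engine.

```python
# Context-aware risk adjustment for the Systemwide General Pasteboard
# example: the minimum supported iOS version changes the attack scenario,
# and therefore the assigned risk. Thresholds follow the article's text.
def pasteboard_risk(min_supported_ios: int) -> str:
    if min_supported_ios < 9:
        return "high"    # the copy could even happen in the background
    if min_supported_ios < 14:
        return "medium"  # no user-facing warning is shown on paste
    return "low"         # iOS >= 14 warns the user on every paste

print(pasteboard_risk(15))  # low
print(pasteboard_risk(12))  # medium
print(pasteboard_risk(8))   # high
```

Encoding these situational rules explicitly, rather than leaving them to each analyst's memory, is what keeps risk assignment consistent across the team.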
Terminology
The OWASP MAS refactoring has led to a revolution in vulnerability names. Vulnerability names commonly vary: how many times has the same security issue been called different things? People make up a cool name, use CWE nomenclature (Insertion of Sensitive Information into Externally-Accessible File or Directory), use overly specific names (OAuth access token not encrypted in Shared Preferences), use or combine the impact with the most common name of the vulnerability (Account Takeover, or Account Takeover via OAuth Access Token Leakage), or use other general names (Improper Token Storage). Although the choice can be subjective, I believe the naming of a vulnerability should follow a standard form when contextualized in an internal database. This way, the outcome is more structured, clean, and coherent. The feeling when reading the report is different (change my mind).
Think of it as if you had to report many vulnerabilities that lead to the exposure of sensitive information. Wouldn't it be better to report them with a consistent terminology?
Obviously, vulnerabilities need a description. Over the years I have read plenty of vulnerability descriptions in blog posts, penetration test reports, websites, scanners, and wherever else they are published. Often you need to "template" the description in order to automate and speed up the reporting. Guess what? There are challenges to deal with to obtain a good result; copying and pasting a generic googled description is not the Yarix way. A description should satisfy predefined criteria to introduce the vulnerability to the report's audience and to the developers who are expected to fix it. Every team can define its own criteria to achieve the uniqueness they want to offer; we have created ours at Yarix. Each vulnerability's description follows a predefined structure and terminology, making the final report more coherent and consistent throughout the document. That requires more effort, because it takes time every time you add a new vulnerability to the internal database, but it's worth the hassle. That said, we have also developed AI-based tools to assist us in creating, improving, and validating the result.
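A minimal way to picture the templating idea is a fixed what/where/impact structure whose wording never changes, while the per-finding details stay scenario-specific. The template text and placeholder names below are hypothetical, meant only to show the mechanism, not our actual description format.

```python
from string import Template

# Hypothetical description template: the fixed sentence structure keeps
# every entry's wording consistent across the report, while the details
# ($name, $where, $impact) are filled in per finding.
DESCRIPTION = Template(
    "The application is affected by $name. The issue resides in $where. "
    "A successful attack could lead to $impact."
)

text = DESCRIPTION.substitute(
    name="Systemwide General Pasteboard Used for Sensitive Data",
    where="the login form, which copies credentials to the general pasteboard",
    impact="exposure of sensitive information to other installed applications",
)
print(text)
```

`Template.substitute` raises a `KeyError` when a placeholder is left unfilled, which doubles as a cheap validation step before a description reaches the final report.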
Let’s take the following examples for the HTTP Request Smuggling vulnerability (see the images below).
Beyond OWASP: Threat-based Testing
The OWASP MAS project provides a wealth of helpful information, covering many aspects of an application's security. Although an application subjected to an OWASP-based security assessment achieves a high level of security, there is always a subsequent, more advanced step that, unfortunately, OWASP cannot cover.
A threat-based approach is the way (I just coined the term to make the chapter clear). At Yarix, during a mobile assessment, we do not limit ourselves to OWASP testing alone; we like to delve deeper into the application's security by reviewing how well it can protect itself from the main malware threats. As mentioned above, OWASP rarely covers these threats, and when it does, only years later in new versions of the standard. Such tests change quickly over time, and a community-oriented standard like OWASP cannot cover and detail them in the short term.
A classic example of today is the Accessibility Services Threats, exploited by Android malware to steal sensitive information or perform actions without the user's knowledge. Taking a close look at the OWASP MAS standard reveals there is no information about it (at least at the time of this writing).
Since iOS is always a bit neglected compared to Android (my feeling 👀), I will also bring an example for the Cupertino operating system: Shortcuts. Their scripting language is quite powerful and promising for malware. Furthermore, although the iOS system is more closed than Android, it will soon be less so due to the introduction of sideloading, the process of installing applications from alternatives to the App Store.
Although this kind of malware exploits native components of the operating system, mobile applications must be able to protect themselves from such threats. A similar concept applies when they verify the environment they run in through Reverse Engineering checks such as Root, Emulator, Jailbreak, and Reverse Engineering Tools detection, and more.
At Yarix, we constantly keep our skills and internal tooling up to date. We often develop proofs of concept to demonstrate whether or not the apps tested are resilient against the main mobile threats. Common examples include Accessibility Services Threats, StrandHogg Attacks (v1 and v2), Overlay Attacks, Man-in-the-App (MitA) Attacks, Screen Scrapers, App Cloning, Clipboard/Notification/Log Monitoring Attacks, and more. In addition, we also create malware PoCs that could target the mobile application by exploiting specific features of the app itself or the weaknesses and vulnerabilities it suffers from.
For the reasons mentioned, you cannot rely exclusively on OWASP-based testing to achieve full security coverage; you also need the right expertise and competence to go beyond it and fully cover the application security of a mobile app.
Conclusion
We have taken you behind the scenes of the mobile security landscape within Yarix, highlighting the daily challenges and the achievements we have made in refining our approach. We have addressed the common challenges faced by security teams and introduced enhancements in vulnerability reporting, relying closely on the latest OWASP releases while innovatively expanding beyond them.
Our efforts are driven by a commitment to overcome the failures of industry standards and by our ability to combine strict adherence to those standards with a critical, original approach that goes beyond mere compliance. This approach addresses the typical lack of context seen in industry security standards, allowing us to deliver detailed and accurate vulnerability assessments that significantly enhance our outcomes and, consequently, the security posture of our clients.
We constantly develop and refine our internal methodologies and tools, not just to maintain but also to elevate the quality and relevance of our work. There is a good chance that the approaches applied in the context of mobile security may find wider use, such as in web security, where similar challenges prevail. That's why we are currently working on it.
Author
Paolo Serra is a Red Team member at Yarix, specializing in application security. He brings a hands-on approach to ethical hacking, often working closely with web and mobile frameworks and technologies.