<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Badarinath Venkatnarayansetty on Medium]]></title>
        <description><![CDATA[Stories by Badarinath Venkatnarayansetty on Medium]]></description>
        <link>https://medium.com/@badrinathvm?source=rss-ddcf3cc4a563------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*YhvRgg-SMgqq848JhE1Qvg.jpeg</url>
            <title>Stories by Badarinath Venkatnarayansetty on Medium</title>
            <link>https://medium.com/@badrinathvm?source=rss-ddcf3cc4a563------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 09 May 2026 12:42:55 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@badrinathvm/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Building On-Device AI Machine Learning]]></title>
            <link>https://badrinathvm.medium.com/building-on-device-ai-machine-learning-1524f6636d3e?source=rss-ddcf3cc4a563------2</link>
            <guid isPermaLink="false">https://medium.com/p/1524f6636d3e</guid>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[swift]]></category>
            <category><![CDATA[ai-on-device]]></category>
            <dc:creator><![CDATA[Badarinath Venkatnarayansetty]]></dc:creator>
            <pubDate>Tue, 26 Aug 2025 15:28:59 GMT</pubDate>
            <atom:updated>2025-08-26T15:28:59.772Z</atom:updated>
            <content:encoded><![CDATA[<p>Swift MLX lets you load pre-trained models from the Hugging Face Hub into your application and run inference to generate results. The benefit of an on-device model is privacy: the data stays on the device, and inference can run offline without an internet connection.</p><p>Here are the libraries required:</p><ul><li><strong>MLX Swift Core Library</strong>: Package providing the Swift bindings for MLX (<a href="https://github.com/ml-explore/mlx-swift">https://github.com/ml-explore/mlx-swift</a>)</li><li>mlx-community on Hugging Face: A collection of models already converted to the MLX format, ready to use (<a href="https://huggingface.co/mlx-community">https://huggingface.co/mlx-community</a>)</li><li>Useful packages like <strong>MLXLLM</strong> and <strong>MLXVLM</strong> make loading and running large language and vision-language models easy.</li></ul><p>A pre-trained model contains several interconnected pieces that work together beyond the core neural network architecture.</p><ul><li><strong>Model Architecture &amp; Weights</strong> — This is the core neural network and the learned parameters (weights) that capture the model’s knowledge. The weights are usually stored in a .safetensors file, a file format designed to store model weights safely and optimized for quick loading.</li><li><strong>Configuration file</strong> (config.json) — This contains the model’s metadata, such as `model_type`, `layer dimensions`, and `vocabulary size`. Think of it as the model’s Info.plist, or any other key-value configuration file.</li><li><strong>Tokenizer — </strong>This component breaks text down into smaller pieces called “tokens” and converts them into numerical IDs or vectors, and vice versa. It’s like a special ‘JSONDecoder’ and ‘JSONEncoder’ rolled into one. It has its own configuration (tokenizer_config.json) and vocabulary files. 
It also defines special tokens that convey structural information or control signals to the model.</li><li>EOS (End of Sequence), BOS (Beginning of Sequence), UNK (Unknown), PAD (Padding), and chat/role tokens, e.g. <strong>&lt;|im_start|&gt;system</strong>, <strong>&lt;|im_start|&gt;user</strong>, <strong>&lt;|im_start|&gt;assistant</strong></li><li><strong>Processor (for Multimodal Models)</strong> — Vision-language models handling images or videos require an additional component, a processor, to prepare the visual input in the format the model expects.</li></ul><h4>Anatomy of the Model</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*4vrYri8VgfnHOWv4.jpeg" /></figure><h4>How They Work Together:</h4><ol><li><strong>Loading Process</strong>: config.json tells the system what type of model to create.</li><li><strong>Text Processing</strong>: Tokenizer files convert human text into numbers or vectors the model understands.</li><li><strong>Inference</strong>: Model weights process the tokenized input to generate predictions.</li><li><strong>Output</strong>: Results are converted back to human-readable text using the tokenizer.</li></ol><p><strong>Quantization: Making Models Device Friendly</strong></p><p>Deploying multi-gigabyte models directly onto mobile devices is often infeasible. Quantization is a key technique to address this, making models smaller and more efficient. 
It reduces the precision of the model weights: instead of using standard 32-bit floating point (FP32), weights are converted to lower-precision formats like 16-bit floats (FP16), 8-bit integers (INT8), or 4-bit integers (INT4).</p><p>Use a pre-defined configuration like the one below, provided by the MLXLLM LLM registry.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/605/0*gETP3F6-GmUWVg0r.png" /></figure><p><strong>Creating a Configuration Directly</strong>:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/742/0*rNu7Pzsc4-xQVJhk.png" /></figure><h4>Loading a Model</h4><ul><li><strong>Model Source Detection</strong>: The ModelFactory starts with a ModelConfiguration as shown above to determine whether the model is remote (Hugging Face Hub ID) or local (directory URL), then uses HubApi to handle downloads if needed.</li><li><strong>Intelligent Caching System</strong>: HubApi maintains a local cache (~/.cache/huggingface/hub) with metadata checking (ETags, commit hashes) to avoid unnecessary downloads, using a blob/snapshot structure to minimize disk usage across model versions.</li><li><strong>Model Instantiation</strong>: The factory reads <strong>config.json</strong> to identify the model type (e.g., “qwen3”) and uses ModelTypeRegistry to call the appropriate Swift initializer, creating a model object without weights initially.</li><li><strong>Weight Loading and Processing</strong>: <strong>Load.loadWeights</strong> finds .safetensors files, loads parameters, and applies model-specific preprocessing via sanitize(weights:). If quantization is specified in config.json, <strong>MLXNN.quantize</strong> is used to apply quantization to the appropriate layers, and the model is populated with the final weights using <strong>model.update(parameters:verify:)</strong></li><li><strong>Complete Context Assembly</strong>: The factory loads the Tokenizer and UserInputProcessor (for VLMs) from their respective configuration files, then packages 
everything (ModelConfiguration, LanguageModel, Tokenizer, UserInputProcessor) into a final ModelContext.</li></ul><p>Below is the snippet to load the model onto the device. MLX caches it in the device support folders, so there is no need to load it again on subsequent launches.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/739/0*EapEKW8v-tDMFhev.png" /></figure><h4>How to create a Custom Model</h4><p><strong>Step 1: Set up your development environment</strong></p><p>Open a new folder in VSCode, Cursor, PyCharm, or any Python environment tool of your choice, and verify your Python installation:</p><pre>python --version</pre><p><strong>Step 2: Create a Python virtual environment</strong></p><p>Create a virtual environment to isolate your project dependencies:</p><pre>python -m venv mlx-env<br>source mlx-env/bin/activate</pre><p><strong>Step 3: Install the required dependencies</strong></p><p>Install the necessary packages for model conversion:</p><pre>pip install mlx-lm<br>pip install -U &quot;huggingface_hub[cli]&quot;</pre><p><strong>Step 4: Login to Hugging Face</strong></p><p>Authenticate with your Hugging Face account:</p><pre>huggingface-cli login</pre><p><strong>Step 5: Execute the MLX convert command</strong></p><p>Convert your model using the MLX conversion tool:</p><pre>mlx_lm.convert --hf-path mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-1.5B-v1.0 -q --upload-repo XXXX/deepseek-r1-redistill-qwen-1.5b-mlx</pre><blockquote><strong><em>Note</em></strong><em>: Replace </em><em>XXXX with your actual Hugging Face username or organization name in the upload repository path.</em></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/532/0*oGWX5rfsqdEl6GX8.png" /></figure><h4>Run Inference</h4><p>Use the modelContainer’s <strong>perform</strong> method to access the modelContext and <strong>MLXLMCommon.generate</strong> to produce the text, which can be streamed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*tiJ34cyzOo_CjLKz.png" /></figure><p>The experience will 
look like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/400/0*aDzgsgOLJv0yqCbd.gif" /></figure><p><strong>Tool Calling Experience</strong></p><p>Tools allow an LLM to interact with external functions you define as part of its generation process. Instead of relying only on static training data, a tool-enabled LLM can perform the following steps.</p><ol><li><strong>Analyze</strong> — Determine if external tools are needed</li><li><strong>Select</strong> — Choose the appropriate tool</li><li><strong>Request</strong> — Format parameters (JSON)</li><li><strong>Execute</strong> — Pause generation, run tool</li><li><strong>Respond</strong> — Use tool output in final answer</li></ol><p>Define a tool protocol that has the <strong>name</strong>, <strong>description</strong>, and display name, and another for the arguments that need to be passed to the tool.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*JKyC0Ytho2PLLy4d.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/838/0*d-xCsHQuQiGB7X33.png" /></figure><p>Define another protocol for handling the tool execution.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ODDCxK8qfq2-3LoR.png" /></figure><p>See the implementation of <a href="https://github.com/badrinathvm/OnDeviceML/blob/main/OnDeviceML/Data/Tools/WeatherToolHandler.swift">WeatherTool Handler</a> conforming to ToolHandlerProtocol.</p><p>The inference implementation for tools is available <a href="https://github.com/badrinathvm/OnDeviceML/blob/0b753cfa30577fa3db667bf7233b5ebb20807f48/OnDeviceML/Data/Service/AskMeService.swift#L77">here</a>. Below is the tool-calling experience for weather with the on-device model.</p><p>I’ve open-sourced the complete implementation on <a href="https://github.com/badrinathvm/OnDeviceML.git">GitHub</a>, built with Clean Architecture principles to keep the code maintainable, reusable, scalable, and extensible. 
Try it out, share your thoughts, and feel free to contribute — let’s build better on-device AI together!</p><p>#OnDeviceML #Swift #iOS #ML</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1524f6636d3e" width="1" height="1" alt="">]]></content:encoded>
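The memory savings that the quantization section describes can be sketched as a back-of-the-envelope calculation. This is an illustration, not code from the article; the 1.5B parameter count echoes the DeepSeek distill converted above, and the helper name is made up:

```python
def weight_bytes(num_params: int, bits_per_weight: int) -> int:
    """Approximate storage for the model weights alone (ignores
    activations, the KV cache, and per-group quantization scales)."""
    return num_params * bits_per_weight // 8

params = 1_500_000_000  # e.g. a 1.5B-parameter model

fp32 = weight_bytes(params, 32)  # 6.0 GB
fp16 = weight_bytes(params, 16)  # 3.0 GB
int4 = weight_bytes(params, 4)   # 0.75 GB

print(fp32 / 1e9, fp16 / 1e9, int4 / 1e9)  # 6.0 3.0 0.75
```

A 4-bit quantized model is roughly one eighth the size of its FP32 original, which is what makes shipping it to a phone practical.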
        </item>
        <item>
            <title><![CDATA[Swift Dev Tools MCP]]></title>
            <link>https://badrinathvm.medium.com/swift-dev-tools-mcp-578e0d4e9073?source=rss-ddcf3cc4a563------2</link>
            <guid isPermaLink="false">https://medium.com/p/578e0d4e9073</guid>
            <category><![CDATA[ai-tools]]></category>
            <category><![CDATA[swift]]></category>
            <category><![CDATA[macos]]></category>
            <category><![CDATA[mcps]]></category>
            <category><![CDATA[ios]]></category>
            <dc:creator><![CDATA[Badarinath Venkatnarayansetty]]></dc:creator>
            <pubDate>Tue, 26 Aug 2025 15:07:57 GMT</pubDate>
            <atom:updated>2025-08-26T15:07:57.665Z</atom:updated>
            <content:encoded><![CDATA[<p>Bridge your AI assistants with your Swift development environment.</p><p><strong>MCP (Model Context Protocol)</strong> is an open protocol that standardizes how applications provide context to LLMs. In simple terms, it’s like a USB-C port for connecting AI models to different data sources and tools.</p><p>It is intended for integrations across a wide range of tools such as AI code editors (Cursor, VSCode, Claude Code, etc.) and other applications. By leveraging natural language, MCP allows seamless interaction with multiple tools and data sources.</p><p>In this article, we will create a server in Swift that can perform various Swift dev tools operations.</p><h4>Creating a server</h4><pre>mkdir swift-dev-tools-mcp<br>cd swift-dev-tools-mcp<br>swift package init --type executable</pre><p>Next, add the official Swift SDK for Model Context Protocol servers and clients. Update the <strong>Package.swift</strong> file to include the SDK as a dependency.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/738/0*YxKQ69i7WbhLqyIY.png" /></figure><p>Open <strong>main.swift</strong> and add the following code.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/440/0*624VuQ7pNQ4SqF3d.png" /></figure><p>Here we are creating a server with a name and version. The capabilities parameter defines the capabilities of the server; in this case we are defining only the tools capability.</p><p>To start the server, we use the <strong>StdioTransport</strong> class, a transport layer that allows the server to communicate with the client via standard input and output.</p><p>Next, we need to implement tools. The server will have multiple tools that return the following development environment information:</p><h4>1. 
Swift Development Environment</h4><ul><li><strong>swift_version</strong> — Returns the current Swift compiler version, build information, and target architecture</li><li><strong>system_architecture</strong> — Returns the system architecture (arm64 for Apple Silicon, x86_64 for Intel)</li></ul><h4>2. Xcode Development Environment</h4><ul><li><strong>xcode_version</strong> — Returns the installed Xcode version and build number</li><li><strong>xcode_sdks</strong> — Returns all available Xcode SDKs for iOS, macOS, watchOS, tvOS, visionOS, and DriverKit</li></ul><h4>3. Device and Simulator Management</h4><ul><li><strong>list_simulator</strong> — Returns all available iPhone and iPad simulators with their current status (Booted/Shutdown)</li><li><strong>connected_devices</strong> — Returns all connected physical devices (iPhone, iPad, Apple Watch, Mac) and their connection status</li></ul><h4>4. System Information</h4><ul><li><strong>macos_version</strong> — Returns the current macOS version, product name, and build version</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/480/0*s5I0elljCug2IkxR.png" /></figure><p>Register all the tools with server.withMethodHandler(ListTools.self).</p><p>Note: Tool names should be in snake_case, and tools may have an input schema for input parameters; it’s optional per the specification.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/449/0*lVpLgKa_afqfie0f.png" /></figure><p>To make the server respond to tool calls, we need to implement the .withMethodHandler(CallTool.self) handler. It will be called when the client requests the tool. 
The method should return a <strong>Result</strong> object with the result of the tool; since a server can contain multiple tools, we can switch on the tool name.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/723/0*hAUPCxXFef7RyvmE.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/873/0*8UBPJ2AkKUKxq1Na.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/834/0*9V3OQkOR6mhHPWNn.png" /></figure><p>At the end of the file, add a call to .waitUntilCompleted() to keep the server running.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/281/0*F9h9t3_FUXo4nHDV.png" /></figure><p>Now it’s time to build the package to generate the executable.</p><pre>swift build</pre><p>You will see terminal output like below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*bzZIr89hFe9GoujW.png" /></figure><h4>Using MCP Servers in Cursor</h4><p>Go to Cursor Settings → Tools &amp; Integrations, where you will see an <strong>mcp.json</strong>:</p><pre>{<br>  &quot;mcpServers&quot;: {<br>    &quot;swift-dev-tools-server&quot;: {<br>      &quot;type&quot;: &quot;stdio&quot;,<br>      &quot;command&quot;: &quot;&lt;Path of the project&gt;/swift-dev-tools-mcp/.build/arm64-apple-macosx/debug/swift-dev-tools-mcp&quot;<br>    }<br>  }<br>}</pre><p>Below are a few natural language questions you can try with the tool.</p><p><strong>“What Swift version am I running?”</strong></p><p><strong>“Show me all available iOS simulators”</strong></p><p><strong>“What Xcode version do I have installed?”</strong></p><p><strong>“List all my connected devices”</strong></p><p><strong>“What SDKs are available for development?”</strong></p><p><strong>“What macOS version am I on?”</strong></p><p><strong>“Am I running on Apple Silicon or Intel?”</strong></p><p><strong>“What’s my complete development environment setup?”</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/718/0*ZKo_5h6Y7lin8WRG.png" 
/></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/651/0*qFtwGlpa9kMFMV42.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/708/0*lJxybsF2u5krI_G0.png" /></figure><p>As a next step, we can extend this capability so the server can be installed anywhere via Homebrew.</p><h3>Remote MCP Server via Homebrew</h3><p>Execute the commands below to configure `swift-dev-tools-mcp` so it can be used from any machine.</p><pre>brew tap badrinathvm/tap</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*zR8R-gIaFrTdI1u-.png" /></figure><pre>brew install swift-dev-tools-mcp</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*taYD7taDsqNtD6xr.png" /></figure><p>Now we can use the JSON below in MCP clients.</p><pre>{<br>  &quot;mcpServers&quot;: {<br>    &quot;swift-version-server&quot;: {<br>      &quot;type&quot;: &quot;stdio&quot;,<br>      &quot;command&quot;: &quot;swift-dev-tools-mcp&quot;<br>    }<br>  }<br>}</pre><blockquote><em>We welcome your contributions — just follow the guidelines when adding new tools. The code is available on 📦 </em><a href="https://github.com/badrinathvm/homebrew-tap"><em>Homebrew Tap</em></a><em> and 🛠️ </em><a href="https://github.com/badrinathvm/swift-dev-tools-mcp"><em>Main Project</em></a></blockquote><p>Happy Coding!!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=578e0d4e9073" width="1" height="1" alt="">]]></content:encoded>
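The note above that tool names should be snake_case can be checked mechanically. The validator below is a hypothetical illustration, not part of the MCP Swift SDK (the regex and function name are assumptions):

```python
import re

# lower snake_case: lowercase words separated by single underscores
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")

def is_valid_tool_name(name: str) -> bool:
    """True when the name is lower snake_case, the convention used
    for tools like swift_version or list_simulator."""
    return bool(SNAKE_CASE.fullmatch(name))

print([is_valid_tool_name(n) for n in
       ["swift_version", "xcode_sdks", "ListSimulator", "macos-version"]])
# [True, True, False, False]
```

Running a check like this when registering tools catches naming mistakes before an MCP client ever sees them.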
        </item>
        <item>
            <title><![CDATA[iOS Share and Action Extensions]]></title>
            <link>https://badrinathvm.medium.com/ios-share-and-action-extensions-0781df151d78?source=rss-ddcf3cc4a563------2</link>
            <guid isPermaLink="false">https://medium.com/p/0781df151d78</guid>
            <category><![CDATA[action-exte]]></category>
            <category><![CDATA[swift]]></category>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[share-extension]]></category>
            <dc:creator><![CDATA[Badarinath Venkatnarayansetty]]></dc:creator>
            <pubDate>Wed, 17 Apr 2024 17:19:09 GMT</pubDate>
            <atom:updated>2024-04-17T17:19:09.524Z</atom:updated>
            <content:encoded><![CDATA[<p>Share and Action extensions enable users to share content from third-party apps to your app. These extensions give users a convenient way to share content with other entities such as social websites and upload services.</p><h3><strong>What does a Share extension look like?</strong></h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/724/1*SZwP4NeVrNP5-_clje5X2A.png" /></figure><p>You can create a <strong>Share extension target</strong> by clicking “+” at the bottom and choosing a template like below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jsRs_cf1ibyJCswuQFDv0Q.png" /></figure><p>Select the share extension from the scheme section. On clicking Run, choose an app to execute; a few examples include <strong>Photos</strong>, <strong>Safari</strong>, <strong>Calendar</strong>, <strong>Contacts</strong>, etc.</p><p>Once the target is created, since I do not work with storyboards, I remove MainInterface.storyboard from the share extension folder and replace the key NSExtensionMainStoryboard in the extension’s Info.plist with the key NSExtensionPrincipalClass inside the NSExtension dictionary, with a value of $(PRODUCT_MODULE_NAME).ShareViewController.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/67ecccaf0e8cd29af55be725a5b7ccad/href">https://medium.com/media/67ecccaf0e8cd29af55be725a5b7ccad/href</a></iframe><p>Next, I will use ShareViewController.swift as shown below.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/6e02ade82b3f70eeb7cd208b9e4f54d3/href">https://medium.com/media/6e02ade82b3f70eeb7cd208b9e4f54d3/href</a></iframe><h3><strong>What does an Action extension look like?</strong></h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/590/1*h65dqumoqN4z8XsC8D8UZA@2x.jpeg" /></figure><p>Below are a few examples of action targets.</p><p><strong>Print with 
Epson</strong> — <em>Opens the Epson app and deep-links the selected file for printing.</em></p><p><strong>Save to Dropbox</strong> — <em>Opens the Dropbox app and deep-links the selected file for saving to a folder of your choice.</em></p><p><strong>Save to Files</strong> — <em>Opens the Files app and deep-links the selected file for saving to a folder of your choice.</em></p><p>You can create an <strong>action extension target</strong> by clicking “+” at the bottom and choosing a template like below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*P3Sqb9mC2_ktQDqUTjeh1A.png" /></figure><p>Just like the Share extension target, make sure to update the Info.plist with NSExtensionPrincipalClass inside the NSExtension dictionary, with a value of $(PRODUCT_MODULE_NAME).ActionViewController.</p><p>Below is the code for <strong>ActionViewController</strong>.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/c23c0c7b92f80486ce123760bad94174/href">https://medium.com/media/c23c0c7b92f80486ce123760bad94174/href</a></iframe><p>Types of UTIs (Uniform Type Identifiers)</p><ol><li><strong>public.text</strong> — Represents plain text data. Use this if your action extension handles text-based content, such as text copied from a webpage or a document.</li><li><strong>public.image</strong> — Represents image data. Use this if your action extension handles images.</li><li><strong>public.url</strong> — Represents URLs. Use this if your action extension handles URLs, such as links shared from a web browser or a messaging app.</li><li><strong>public.audio</strong> — Represents audio data. This is required if your action extension handles audio files or audio recordings.</li><li><strong>public.movie</strong> — Represents video data. You might need this if the action extension handles video files or video recordings.</li><li><strong>public.data</strong> — Represents generic binary data. 
This is required if your action extension handles any other types of data not covered by the more specific UTIs mentioned above.</li></ol><p>Based on your needs, the corresponding UTIs can be configured in the Info.plist under the NSExtensionActivationRule key.</p><h3>Debugging</h3><p>Debugging an app extension is not straightforward, but it is possible. While you cannot debug the app and its share extension at the same time, you can choose the target to be debugged. In Xcode, navigate to <em>Debug</em>, <em>Attach to process</em>, and select the app extension’s process. Now breakpoints should be triggered as you are used to with the containing app.</p><h3>Code Sharing</h3><p>A share extension is a separate target, meaning that it does not share any source code with the corresponding app. If you still want to share source code between the app and the share extension, you have to change the target membership of the source code to be shared. In Xcode, you can easily select the target membership by ticking the corresponding boxes under <em>Target Membership</em> in the inspector on the right-hand side after opening the desired source file.</p><p>We can also make use of App Groups to share data between the main app and the extension targets.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0781df151d78" width="1" height="1" alt="">]]></content:encoded>
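As a sketch of the NSExtensionActivationRule configuration mentioned above, an Info.plist fragment limiting activation to images and web URLs might look like this (the max counts are illustrative, not from the article):

```xml
<key>NSExtension</key>
<dict>
    <key>NSExtensionAttributes</key>
    <dict>
        <key>NSExtensionActivationRule</key>
        <dict>
            <key>NSExtensionActivationSupportsImageWithMaxCount</key>
            <integer>5</integer>
            <key>NSExtensionActivationSupportsWebURLWithMaxCount</key>
            <integer>1</integer>
        </dict>
    </dict>
</dict>
```

With this rule, the extension appears in the share sheet only when the user is sharing up to five images or a single web URL.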
        </item>
        <item>
            <title><![CDATA[Arguments Lists too long]]></title>
            <link>https://badrinathvm.medium.com/arguments-lists-too-long-cdc4c18add?source=rss-ddcf3cc4a563------2</link>
            <guid isPermaLink="false">https://medium.com/p/cdc4c18add</guid>
            <category><![CDATA[xcode-12]]></category>
            <category><![CDATA[arguments]]></category>
            <category><![CDATA[ios]]></category>
            <dc:creator><![CDATA[Badarinath Venkatnarayansetty]]></dc:creator>
            <pubDate>Wed, 18 Nov 2020 21:51:54 GMT</pubDate>
            <atom:updated>2020-11-18T21:51:54.630Z</atom:updated>
            <content:encoded><![CDATA[<p>This issue usually happens in Xcode projects that use CocoaPods with many dependencies, during the [CP] Check Pods Manifest.lock script execution.</p><pre>PhaseScriptExecution [CP]\ Check\ Pods\ Manifest.lock /Users/xxx/Library/Developer/Xcode/DerivedData/xxx/Build/Intermediates.noindex/xxx.build/Debug\ (Staging)-iphonesimulator/OV.build/Script-05BB3CCD2FBD4842F78B0F4F.sh (in target &#39;xxx&#39; from project &#39;xxx&#39;)<br>    cd /Users/xxx/Documents/Projects/&lt;project-name&gt;<br>    /bin/sh -c /Users/xxx/Library/Developer/Xcode/DerivedData/xxx/Build/Intermediates.noindex/xxx.build/Debug\\\ \\\(Staging\\\)-iphonesimulator/xxx.build/Script-05BB3CCD2FBD4842F78B0F4F.sh<br><br>error: unable to spawn process (Argument list too long) (in target &#39;xxx&#39; from project &#39;xxx&#39;)</pre><blockquote>The direct solution to this problem is setting the build system to the Legacy Build System via ‘File -&gt; Workspace Settings’</blockquote><p>The Legacy Build System works up to Xcode 11.x. However, with Xcode 12.x the above approach won&#39;t work, as the Legacy Build System will be deprecated in future versions of Xcode.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/504/1*wreC0yRnvmsfSbi4sYo5RQ.png" /></figure><p>Bummer!!! What’s the solution? 
How can we fix it?</p><p>The solution is to inspect the Build Settings properties below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/463/1*RuW5aFchEq48eZJ2s84WFw.png" /></figure><p>Make sure these custom compiler flags contain only compiler-related items:</p><blockquote><strong>OTHER_CFLAGS=&quot;-fprofile-arcs -ftest-coverage&quot;</strong></blockquote><blockquote><strong>OTHER_CPLUSPLUSFLAGS=-g &#39;-std=c++14&#39; -Wno-objc-designated-initializers -fobjc-arc -g</strong></blockquote><blockquote>However, with pod install these fields get appended with the $(inherited) values of the pods listed in the Podfile, creating a huge number of entries, which causes “Argument list too long (unable to spawn process /bin/sh)”.</blockquote><p>Hope this helps. Feel free to comment for any further clarifications. Happy Coding!!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cdc4c18add" width="1" height="1" alt="">]]></content:encoded>
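To see the failure mode, the growth of repeated $(inherited) expansions and the cleanup implied above can be illustrated with an order-preserving dedupe. This is a hypothetical helper for illustration only; CocoaPods does not ship it:

```python
def dedupe_flags(flags):
    """Collapse repeated compiler flags while preserving order.
    Repeated pod installs can append the same expanded $(inherited)
    values again and again until /bin/sh hits the ARG_MAX limit."""
    seen = set()
    result = []
    for flag in flags:
        if flag not in seen:
            seen.add(flag)
            result.append(flag)
    return result

# Simulate a flag list bloated by repeated inherited expansions.
flags = ["-fprofile-arcs", "-ftest-coverage"] + ["-fobjc-arc"] * 500
print(len(flags), len(dedupe_flags(flags)))  # 502 3
```

The shell command line grows with every duplicated entry, which is why trimming the build settings back to compiler-only items resolves the spawn failure.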
        </item>
        <item>
            <title><![CDATA[Sharing UITests via Cocoapods]]></title>
            <link>https://badrinathvm.medium.com/sharing-uitests-via-cocoapods-583c6b9f3cb2?source=rss-ddcf3cc4a563------2</link>
            <guid isPermaLink="false">https://medium.com/p/583c6b9f3cb2</guid>
            <category><![CDATA[integration-testing]]></category>
            <category><![CDATA[cocoapods]]></category>
            <category><![CDATA[ios]]></category>
            <dc:creator><![CDATA[Badarinath Venkatnarayansetty]]></dc:creator>
            <pubDate>Fri, 31 Jul 2020 20:24:04 GMT</pubDate>
            <atom:updated>2020-07-31T20:24:04.855Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/314/1*RVWhS2zz_ucHPQUgep3gYw@2x.jpeg" /></figure><p>Writing tests for any piece of code is important for maintaining software quality and brings more confidence to product delivery. Today most code is modularized and bundled into a library/pod, making it easier to share and reuse, and to manage deployment and release activities. Writing unit tests for modularized code can be done comfortably alongside feature development by adopting techniques like TDD. However, the main challenge is how to share integration/UI tests for modularized code with hosting apps. Luckily, there’s a way to do it with “test-specs” in the Podfile via CocoaPods.</p><p>CocoaPods can provide UI test specifications, a convenient way of shipping UI tests along with a library’s sources to any hosting app, so that apps consuming the library no longer need to write UI tests for the features it delivers.</p><p>Follow the steps below for setup.</p><ol><li>Add the code below to the <strong>podspec</strong> file</li></ol><pre>Pod::Spec.new do |spec|<br>  spec.name         = &#39;<strong>SampleLibrary</strong>&#39;<br>  spec.version      = &#39;1.0&#39;</pre><pre>  spec.test_spec &#39;<strong>Tests</strong>&#39; <strong>do</strong> |test_spec|<br>     test_spec.requires_app_host = <strong>true<br>     </strong>test_spec.test_type = <strong>:ui</strong><br>     test_spec.source_files = <strong>&#39;SampleApp/SampleApp-UITests/SampleTest.swift&#39;</strong><br>  <strong>end</strong><br>end</pre><p>2. 
Bring the test specification into the <strong>Podfile</strong> and tie it with <strong>:testspecs</strong>; the name should match the one added in the podspec file.</p><pre>target &#39;<strong>SampleApp</strong>&#39; do<br>  use_frameworks!<br>  pod &#39;<strong>SampleLibrary</strong>&#39;, &#39;~&gt; 1.0&#39;, :testspecs =&gt; [&#39;<strong>Tests</strong>&#39;] <br>end</pre><p>3. Perform <strong>pod install</strong> and examine Pods.xcodeproj: a new UI test target will be generated with the name <strong>SampleLibrary-UI-Tests</strong>, and you will notice the test target name is set to <strong>AppHost-SampleLibrary-UI-Tests</strong> in Build Settings, like below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/617/1*TGvDsoUk6oX1REMt_HDb_w.png" /></figure><p>4. Next, we need to update the <strong>TEST_TARGET_NAME</strong> to “SampleApp” to execute the UI tests shared as part of <strong>:testspecs</strong>. To achieve this, add a <strong>post_install</strong> step in the Podfile to update the build setting configuration, then perform “pod install” again, which regenerates Pods.xcodeproj with the proper target setting.</p><pre>post_install <strong>do</strong> |installer|<br>  installer.pods_project.targets.each <strong>do</strong> |target|<br>    target.build_configurations.each <strong>do</strong> |config|<br>       <strong>if</strong> target.name == &quot;<strong>SampleLibrary-UI-Tests</strong>&quot;<br>           config.build_settings[&#39;TEST_TARGET_NAME&#39;] = &quot;<strong>SampleApp</strong>&quot;<br>       <strong>end<br>     end<br>   end<br>end</strong></pre><p>5. 
To execute tests, first enable the scheme by pressing “Command + 6” -&gt; Right Click on “SampleLibrary-UI-Tests” -&gt; <strong>Enable “SampleLibrary-UI-Tests”, once it is enabled click on play button.</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/669/1*dvlB8M-8BqHN8pBLvsSkAA.gif" /></figure><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=583c6b9f3cb2" width="1" height="1" alt="">]]></content:encoded>
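A file shipped through the test spec is a plain XCTest UI test. Here is a minimal sketch of what a SampleTest.swift might contain; the accessibility identifier and the queried element are hypothetical, not from the original article.

```swift
import XCTest

// A minimal UI test shipped with the library via the test spec.
// The identifier "sample_library_title" below is hypothetical; the test
// runs against the app named in TEST_TARGET_NAME (SampleApp after step 4).
final class SampleTest: XCTestCase {

    override func setUp() {
        super.setUp()
        continueAfterFailure = false
    }

    func testFeatureScreenAppears() {
        let app = XCUIApplication()
        app.launch()

        // Query an element the library's feature is expected to render.
        let title = app.staticTexts["sample_library_title"]
        XCTAssertTrue(title.waitForExistence(timeout: 5))
    }
}
```

Because the test target's TEST_TARGET_NAME points at the hosting app, the same test file runs unchanged in every app that consumes the pod.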
        </item>
        <item>
            <title><![CDATA[Step Indicator in SwiftUI]]></title>
            <link>https://badrinathvm.medium.com/step-indicator-in-swiftui-104e0486d133?source=rss-ddcf3cc4a563------2</link>
            <guid isPermaLink="false">https://medium.com/p/104e0486d133</guid>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[stepperview]]></category>
            <category><![CDATA[swiftui]]></category>
            <dc:creator><![CDATA[Badarinath Venkatnarayansetty]]></dc:creator>
            <pubDate>Mon, 13 Apr 2020 03:23:47 GMT</pubDate>
            <atom:updated>2020-05-02T00:22:59.132Z</atom:updated>
            <content:encoded><![CDATA[<p>This SwiftUI iOS CocoaPods library is used for indicating the steps involved in any multi-step task. For example: illustrating the steps for withdrawing cash from an ATM, or the steps involved in a loan application.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/909/1*fQiKj62ei7KlJ5dVFhqDpA.png" /><figcaption>center, top, bottom alignments.</figcaption></figure><h4><a href="https://badrinathvm.github.io/StepperView/"><strong>Visual conception of the library</strong></a></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gCpL8lRrtsdfxiy4bauwOQ.png" /></figure><p>Here’s the library, which does the work in just a couple of lines and accepts the view modifiers below:</p><pre>.addSteps(_ steps: [View]) : array of views to be rendered closer to indicator<br><br>.alignments(_ alignments: [StepperAlignment]) : optional defaults to .center, available with custom options either .top, .center, .bottom sections<br><br>.indicatorTypes(_ indicators:[StepperIndicationType]): enum provides the options to use .circle(color, width) , .image(Image, width) , .custom(AnyView)<br><br>.lineOptions(_ options: StepperLineOptions): color, thickness line customization.<br><br>.spacing(_ value: CGFloat): spacing between each of the step views.<br><br>.stepIndicatorMode(_ mode: StepperMode): vertical, horizontal display modes.</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/358/1*d4Fu3Rx6mT4FvTpoM1Hlpw.png" /><figcaption>Horizontal Step Indicator</figcaption></figure><pre>var body: some View {<br>    StepperView()<br>        .addSteps([Text(&quot;Account&quot;), Text(&quot;MemberShip&quot;)])<br>        .alignments([.center, .center])<br>        .stepIndicatorMode(StepperMode.horizontal)<br>        .spacing(50)<br>        .lineOptions(StepperLineOptions.custom(1, Colors.blue(.teal).rawValue))<br>}</pre><p>This library handles top, center and bottom alignments. Above are some variation screenshots.</p><p><strong>References</strong>:</p><p>More details: <a href="https://badrinathvm.github.io/StepperView/">https://badrinathvm.github.io/StepperView/</a></p><p><strong>Cocoapods</strong>: <a href="https://cocoapods.org/pods/StepperView">StepperView</a></p><p><strong>Github</strong>: <a href="https://github.com/badrinathvm/StepperView">https://github.com/badrinathvm/StepperView</a></p><p>Feel free to request new features or create a pull request.</p><p>Happy Coding..!</p>]]></content:encoded>
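As a rough sketch of the vertical mode, assuming the modifier signatures listed above are accurate (the step names, colors, and indicator widths here are illustrative; check the StepperView README for the exact API):

```swift
import SwiftUI
import StepperView

// A vertical stepper sketch for a hypothetical loan-application flow,
// built only from the modifiers documented above.
struct LoanApplicationSteps: View {
    var body: some View {
        StepperView()
            .addSteps([Text("Apply"), Text("Verify"), Text("Approve")])
            .alignments([.top, .center, .bottom])
            .indicatorTypes([.circle(.teal, 12),
                             .circle(.teal, 12),
                             .circle(.teal, 12)])
            .lineOptions(StepperLineOptions.custom(1, Colors.blue(.teal).rawValue))
            .spacing(30)
            .stepIndicatorMode(StepperMode.vertical)
    }
}
```

The only change from the horizontal example is `StepperMode.vertical` plus per-step alignments, which is the appeal of the modifier-driven design.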
        </item>
        <item>
            <title><![CDATA[Progress Animation in SwiftUI]]></title>
            <link>https://badrinathvm.medium.com/progress-animation-in-swiftui-15b067cc3993?source=rss-ddcf3cc4a563------2</link>
            <guid isPermaLink="false">https://medium.com/p/15b067cc3993</guid>
            <category><![CDATA[swiftui]]></category>
            <category><![CDATA[progress-bar]]></category>
            <category><![CDATA[combine]]></category>
            <dc:creator><![CDATA[Badarinath Venkatnarayansetty]]></dc:creator>
            <pubDate>Thu, 02 Apr 2020 05:31:03 GMT</pubDate>
            <atom:updated>2020-04-02T05:44:55.694Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/341/1*fI4r_uvLa8VHAVsLumJj7Q.gif" /></figure><blockquote>Implement a struct with a single Rectangle and an index parameter for each view. To start and stop the animation programmatically, a publisher (a PassthroughSubject) is passed as a parameter; it sends a stream of values from their origin to its subscribers. The PassthroughSubject’s subscribers can use this information to update the state of the UI or to handle the event that occurred. In .onReceive we have access to the value sent by the publisher and can act on it to update the UI state.</blockquote><p><a href="https://medium.com/media/93a9bcc51ab856806bb13d1fb9616cac/href">https://medium.com/media/93a9bcc51ab856806bb13d1fb9616cac/href</a></p><p>The @State variable animate is set to true or false based on the value received from the publisher, controlling the opacity of the RectangleView.</p><p>Next, create a ProgressView (see the references link below) with @State currentIndex and publisher variables. In its .onAppear, pass the value via the publisher to its subscriber, the RectangleView in this case.</p><p><a href="https://medium.com/media/33a899b13dc939be0b850128e7d49eac/href">https://medium.com/media/33a899b13dc939be0b850128e7d49eac/href</a></p><p>The PassthroughSubject takes an AnimationStatus enum to start, to stop at a particular index, or to stop everything completely.</p><p><a href="https://medium.com/media/6cee0df493a063949710f257c6dd5c26/href">https://medium.com/media/6cee0df493a063949710f257c6dd5c26/href</a></p><p><strong>References</strong>: <a href="https://gist.github.com/badrinathvm/13b79f1d6dfd30ea54a106dc1d805953">https://gist.github.com/badrinathvm/13b79f1d6dfd30ea54a106dc1d805953</a></p><p>Happy Coding !!</p>]]></content:encoded>
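Since the embedded gists do not render in the feed, here is a sketch of the pattern described: a RectangleView that subscribes to a shared PassthroughSubject and toggles its opacity in .onReceive. The AnimationStatus cases and layout values are illustrative, not the author's exact gist (see the references link for that).

```swift
import SwiftUI
import Combine

// Status values sent through the publisher; case names are illustrative.
enum AnimationStatus {
    case start(index: Int)
    case stop(index: Int)
    case stopAll
}

// One bar of the progress indicator, listening to the shared publisher.
struct RectangleView: View {
    let index: Int
    let publisher: PassthroughSubject<AnimationStatus, Never>
    @State private var animate = false

    var body: some View {
        Rectangle()
            .fill(Color.blue)
            .frame(width: 40, height: 8)
            // animate drives the opacity, as described above.
            .opacity(animate ? 1.0 : 0.3)
            .animation(.easeInOut(duration: 0.5), value: animate)
            .onReceive(publisher) { status in
                // React only to events addressed to this view's index.
                switch status {
                case .start(let i) where i == index: animate = true
                case .stop(let i) where i == index:  animate = false
                case .stopAll:                       animate = false
                default:                             break
                }
            }
    }
}
```

The parent view owns the PassthroughSubject and calls `publisher.send(.start(index: currentIndex))` from .onAppear (or a timer) to drive the animation.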
        </item>
        <item>
            <title><![CDATA[Packing UIKit’s ViewController in SwiftUI]]></title>
            <link>https://badrinathvm.medium.com/packing-uikits-viewcontroller-in-swiftui-327e3180ad7f?source=rss-ddcf3cc4a563------2</link>
            <guid isPermaLink="false">https://medium.com/p/327e3180ad7f</guid>
            <category><![CDATA[uikit]]></category>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[swiftui]]></category>
            <dc:creator><![CDATA[Badarinath Venkatnarayansetty]]></dc:creator>
            <pubDate>Tue, 31 Mar 2020 17:19:07 GMT</pubDate>
            <atom:updated>2020-03-31T17:26:34.949Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/686/1*1VgmV4uzJhmOvWhaSeHcRQ.png" /></figure><p>Let’s say we have a UIViewController to be used in a SwiftUI-based app; we have to follow the steps below:</p><blockquote>Extend the protocol UIViewControllerRepresentable</blockquote><blockquote>Implement <strong>makeUIViewController</strong> and <strong>updateUIViewController</strong></blockquote><p>However, writing a wrapper provides an easier way to use any view controller inline.</p><p>Here are two closures, one for each requirement of UIViewControllerRepresentable.</p><p><strong>@autoclosure</strong> enables us to keep the conventions of UIViewControllerRepresentable and create our views lazily, without requiring any additional syntax at the call site.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2P-zDeHziowBSHrIGHvI6w.png" /></figure><p>When accessing a view controller there might be cases where access to the Context is required, so let’s add some convenience initializers: one with just the ViewController and one without the Context.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hZVAWfk2AGguUZwPVot8nQ.png" /></figure><p>Here’s the usage; let’s say you have an EntryViewController.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/612/1*er0SHWbIfBwjNhCvnhdLyg.png" /></figure><p>References: <a href="https://gist.github.com/badrinathvm/aa88ec0b66b99d4acb85f7069ae40625">https://gist.github.com/badrinathvm/aa88ec0b66b99d4acb85f7069ae40625</a></p><p>Happy Coding..!!</p>]]></content:encoded>
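Since the wrapper itself appears only as screenshots, here is a sketch of the approach described: a generic UIViewControllerRepresentable wrapper with a closure-based initializer plus an @autoclosure convenience initializer. The type and property names are illustrative, not the author's exact gist (linked in the references).

```swift
import SwiftUI
import UIKit

// Generic wrapper packing any UIViewController into a SwiftUI view.
struct ViewControllerWrapper<VC: UIViewController>: UIViewControllerRepresentable {
    private let make: (Context) -> VC
    private let update: (VC, Context) -> Void

    // Full initializer: one closure per UIViewControllerRepresentable requirement.
    init(make: @escaping (Context) -> VC,
         update: @escaping (VC, Context) -> Void = { _, _ in }) {
        self.make = make
        self.update = update
    }

    // Convenience initializer: @autoclosure lets callers pass the controller
    // inline; it is created lazily, with no extra syntax at the call site.
    init(_ viewController: @autoclosure @escaping () -> VC) {
        self.init(make: { _ in viewController() })
    }

    func makeUIViewController(context: Context) -> VC {
        make(context)
    }

    func updateUIViewController(_ uiViewController: VC, context: Context) {
        update(uiViewController, context)
    }
}

// Hypothetical UIKit controller standing in for the article's EntryViewController.
final class EntryViewController: UIViewController {}

// Usage: embed the controller directly in a SwiftUI hierarchy.
struct EntryScreen: View {
    var body: some View {
        ViewControllerWrapper(EntryViewController())
    }
}
```

The closure-based initializer stays available for the cases where the Context is needed, e.g. to reach the coordinator or environment.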
        </item>
        <item>
            <title><![CDATA[Integrating SwiftUI in UIKit and Build Settings to Xcode.]]></title>
            <link>https://badrinathvm.medium.com/integrating-swiftui-in-uikit-and-build-settings-to-xcode-f9457bd67e5c?source=rss-ddcf3cc4a563------2</link>
            <guid isPermaLink="false">https://medium.com/p/f9457bd67e5c</guid>
            <category><![CDATA[swiftui]]></category>
            <category><![CDATA[uikit]]></category>
            <category><![CDATA[cocoapods]]></category>
            <dc:creator><![CDATA[Badarinath Venkatnarayansetty]]></dc:creator>
            <pubDate>Sat, 18 Jan 2020 04:03:45 GMT</pubDate>
            <atom:updated>2020-03-17T06:08:20.687Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/563/1*WPEYngXmU0YeMNgX0_zAow@2x.jpeg" /></figure><p>Follow the steps below:</p><ol><li>Get the SwiftUI view ready.</li></ol><p><a href="https://medium.com/media/fe3591c55e0dfdfca630d033a6b3d5bf/href">https://medium.com/media/fe3591c55e0dfdfca630d033a6b3d5bf/href</a></p><p>2. Include it in UIKit code like below.</p><p><a href="https://medium.com/media/4229ae798b1d59163032718d23eac5f5/href">https://medium.com/media/4229ae798b1d59163032718d23eac5f5/href</a></p><p>3. If the project supports iOS 12.0 or earlier, update the Build Settings in Xcode like below.</p><p>Expand <strong>Link Binary With Libraries</strong>, click on +, add SwiftUI.framework, and mark it Optional.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kAE0cOWgWYaPVc6LtiWInw.png" /></figure><p>4. If you have added the SwiftUI view as part of a development pod, make sure to update the podspec file with the config below.</p><p><strong>spec.xcconfig = { ‘OTHER_LDFLAGS’ =&gt; ‘-weak_framework “SwiftUI”’ }</strong></p><p>Then perform <em>pod install</em> and validate that the Other Linker Flags look like below.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PPew2epy-opTw92-puqVQA.png" /></figure>]]></content:encoded>
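The embedded gists for steps 1 and 2 do not render in the feed; a minimal sketch of both steps follows. The view and controller names are hypothetical. Since the article targets projects that still support iOS 12, the SwiftUI usage is gated with availability checks to match the weak-linking setup.

```swift
import SwiftUI
import UIKit

// Step 1: a SwiftUI view, ready to embed (contents are illustrative).
@available(iOS 13.0, *)
struct GreetingView: View {
    var body: some View {
        Text("Hello from SwiftUI")
    }
}

// Step 2: include it in UIKit code via UIHostingController.
final class HostViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // The runtime check pairs with marking SwiftUI.framework Optional:
        // on iOS 12 this branch is simply skipped.
        if #available(iOS 13.0, *) {
            let hosting = UIHostingController(rootView: GreetingView())
            addChild(hosting)
            hosting.view.frame = view.bounds
            view.addSubview(hosting.view)
            hosting.didMove(toParent: self)
        }
    }
}
```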
        </item>
        <item>
            <title><![CDATA[Expandable Button in Swift]]></title>
            <link>https://badrinathvm.medium.com/expandable-button-in-swift-a4701e7dcc61?source=rss-ddcf3cc4a563------2</link>
            <guid isPermaLink="false">https://medium.com/p/a4701e7dcc61</guid>
            <category><![CDATA[ios]]></category>
            <category><![CDATA[button]]></category>
            <category><![CDATA[swift]]></category>
            <dc:creator><![CDATA[Badarinath Venkatnarayansetty]]></dc:creator>
            <pubDate>Mon, 02 Sep 2019 19:58:51 GMT</pubDate>
            <atom:updated>2019-09-02T20:07:59.034Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/301/1*szbYKEvRJQDURghCy6sfng.gif" /></figure><p>The idea is to have a stack view of buttons aligned horizontally or vertically per the specification.</p><p>Follow the steps below:</p><ol><li>Construct a mainStackView for the UIImageButton.</li><li>A rootStackView contains the series of buttons ( $, $$, $$$, etc. ).</li><li>Pin both mainStackView &amp; rootStackView inside a UIView to establish the corner radius functionality.</li><li>Keep key variables like widthConstraint, buttonsAreHidden, and updateButtonImage. Their values are altered while expanding/collapsing the buttons via didSet.</li></ol><blockquote>performAnimation is called when the button is toggled; this method performs the functionality below:</blockquote><blockquote>Hides the rootStackView</blockquote><blockquote>Updates the button image to dotted_filled</blockquote><blockquote>Hides the sub-layers’ left borders while the animation is happening</blockquote><blockquote>Updates the width constraint</blockquote><blockquote>Removes constraints, if any</blockquote><p><a href="https://medium.com/media/7b0716aea492f874c553765066403af8/href">https://medium.com/media/7b0716aea492f874c553765066403af8/href</a></p><p>The complete source code is available on <a href="https://github.com/badrinathvm/PriceButton">Github</a>.</p><p>Any feedback is really appreciated. Happy Coding. !!</p>]]></content:encoded>
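The gist embed does not render in the feed, so here is a sketch of the toggling logic described above. Names follow the article's key variables (rootStackView, widthConstraint, buttonsAreHidden), but the image names, sizes, and layout details are illustrative; see the linked PriceButton repo for the real implementation.

```swift
import UIKit

// A sketch of the expandable price button: a stack view of buttons that
// collapses behind a single toggle button.
final class ExpandableButtonView: UIView {
    private let rootStackView = UIStackView()     // holds the $, $$, $$$ buttons
    private let toggleButton = UIButton(type: .custom)
    private var widthConstraint: NSLayoutConstraint?

    // Toggling flips this flag; didSet drives the animation, as described.
    private var buttonsAreHidden = true {
        didSet { performAnimation() }
    }

    @objc private func toggleTapped() {
        buttonsAreHidden.toggle()
    }

    private func performAnimation() {
        // Update the toggle image and the width constraint, then animate
        // hiding/showing the row of buttons.
        let imageName = buttonsAreHidden ? "dotted_filled" : "dotted"
        toggleButton.setImage(UIImage(named: imageName), for: .normal)
        widthConstraint?.constant = buttonsAreHidden ? 44 : 180

        UIView.animate(withDuration: 0.3) {
            self.rootStackView.isHidden = self.buttonsAreHidden
            self.layoutIfNeeded()
        }
    }
}
```

Driving everything from a single `didSet` keeps the expand/collapse state and the animation in one place, so the button cannot get out of sync with its layout.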
        </item>
    </channel>
</rss>