<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[neusta mobile solutions - Medium]]></title>
        <description><![CDATA[neusta mobile solutions — apps. handcrafted. - Medium]]></description>
        <link>https://medium.com/neusta-mobile-solutions?source=rss----4eaa9b078a01---4</link>
        
        <generator>Medium</generator>
        <lastBuildDate>Sun, 17 May 2026 17:43:20 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/neusta-mobile-solutions" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Master Real-Time Frequency Extraction in Flutter to Elevate Your App Experience]]></title>
            <link>https://medium.com/neusta-mobile-solutions/master-real-time-frequency-extraction-in-flutter-to-elevate-your-app-experience-f5fef9017f09?source=rss----4eaa9b078a01---4</link>
            <guid isPermaLink="false">https://medium.com/p/f5fef9017f09</guid>
            <category><![CDATA[audio-processing]]></category>
            <category><![CDATA[flutter]]></category>
            <category><![CDATA[dart]]></category>
            <category><![CDATA[apple-intelligence]]></category>
            <category><![CDATA[apps]]></category>
            <dc:creator><![CDATA[Florian Vögtle]]></dc:creator>
            <pubDate>Wed, 13 Nov 2024 09:57:34 GMT</pubDate>
            <atom:updated>2024-11-13T14:08:54.672Z</atom:updated>
            <cc:license>https://creativecommons.org/licenses/by-nd/4.0/</cc:license>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hu6BZx8M1PXEzWwr_LlLzA.png" /></figure><p>If the title caught your eye, you’re probably wondering what this is all about. I’ve been fascinated by the latest trend from Apple and Samsung — integrating shaders directly into apps to create a fluid, dynamic user experience (See Video 1). These shaders often sync with real-time audio and, when combined with AI, give the interface a natural, organic feel. The result? AI feels like a seamless part of the entire system rather than an add-on. I wanted to bring that same experience to my own project — so here’s how I made it happen.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FSLpBl0BdccU%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fshorts%2FSLpBl0BdccU&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FSLpBl0BdccU%2Fhq2.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="640" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/aa6ea7a3b2524066dfee576a60f96b20/href">https://medium.com/media/aa6ea7a3b2524066dfee576a60f96b20/href</a></iframe><p>I set out to build a real-time, sound-reactive shader in Flutter, aiming to have different frequency ranges in a voice independently alter specific parts of the shader. For example, the bass frequencies would influence one part of the shader, while the mid and high frequencies would affect other parts. The final result looks like this:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*16cfLFtqsOs2GZxCYdtESA.gif" /></figure><p>Let’s examine the code that enables frequency extraction from a real-time audio signal:</p><pre>class VoiceApi {<br>  void getFrequencies(void Function(List&lt;({<br>    FrequencySpectrum spectrum, <br>    double value,<br>    })&gt; data,<br>  ) onData,<br>  ) {<br>    final _flutterAudioCapture = FlutterAudioCapture();    <br>    _flutterAudioCapture.start(<br>      (data) {<br>        final buffer = data;<br>        final fft = FFT(buffer.length);<br><br>        final freq = fft.realFft(buffer);<br>        final freqList = freq.discardConjugates().magnitudes().toList();<br>        final frequencies = [<br>          FrequencySpectrum(0, 20),<br>          FrequencySpectrum(20, 25),<br>          FrequencySpectrum(25, 31),<br>          FrequencySpectrum(31, 40),<br>          FrequencySpectrum(40, 50),<br>          FrequencySpectrum(50, 63),<br>          FrequencySpectrum(63, 80),<br>          FrequencySpectrum(80, 100),<br>          FrequencySpectrum(100, 125),<br>          FrequencySpectrum(125, 160),<br>          FrequencySpectrum(160, 200),<br>          FrequencySpectrum(200, 250),<br>          FrequencySpectrum(250, 315),<br>          FrequencySpectrum(315, 400),<br>          FrequencySpectrum(400, 500),<br>          FrequencySpectrum(500, 630),<br>          FrequencySpectrum(630, 800),<br>          FrequencySpectrum(800, 1000),<br>          FrequencySpectrum(1000, 1250),<br>          FrequencySpectrum(1250, 1600),<br>          FrequencySpectrum(1600, 2000),<br>          FrequencySpectrum(2000, 2500),<br>          FrequencySpectrum(2500, 3150),<br>          FrequencySpectrum(3150, 4000),<br>          FrequencySpectrum(4000, 5000),<br>          FrequencySpectrum(5000, 6300),<br>          FrequencySpectrum(6300, 8000),<br>          FrequencySpectrum(8000, 10000),<br>          FrequencySpectrum(10000, 12500),<br>          FrequencySpectrum(12500, 16000),<br>          FrequencySpectrum(16000, 22000),<br>        ];<br>        List&lt;({FrequencySpectrum spectrum, double value})&gt; frequencyValues =<br>            frequencies.map((e) {<br>          final min = fft.indexOfFrequency(e.min.toDouble(), 44000);<br>          final max = fft.indexOfFrequency(e.max.toDouble(), 44000);<br><br>          return (<br>            spectrum: e,<br>            value: freqList<br>                .sublist(min.floor(), max.ceil())<br>                .reduce((a, b) =&gt; a + b),<br>          );<br>        }).toList();<br>        onData(frequencyValues);<br>      },<br>      (e) {<br>        debugPrint(e);<br>      },<br>      sampleRate: 44000,<br>      bufferSize: 256,<br>    );<br>  }<br>}<br><br>// A frequency spectrum<br>class FrequencySpectrum {<br>  FrequencySpectrum(this.min, this.max);<br><br>  final int min;<br>  final int max;<br>}</pre><p>This might not look like a lot of code, but it brings a high level of complexity with it. To really understand how this works, we need to dig (somewhat) deep into audio processing. This is what I will do in this article.</p><h3>Implementation Details and Methodology</h3><p>Before delving into the code, it’s essential to first understand what an audio frequency is, in order to effectively extract it from audio signals.</p><h4><strong>Audio Frequencies</strong></h4><p>What exactly are audio frequencies? A frequency is a periodic vibration that is audible to the human ear¹. The unit of a frequency is called Hertz (Hz) and refers to how often the wave of the audio signal repeats complete cycles within a second, i.e. 5 repetitions equal 5 Hz (See Figure 2).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/630/0*btNfkVC5GWjWinb8.png" /><figcaption>Figure 2: frequency (f), amplitude (A), and period (T). The orange portion of the wave signifies a complete cycle (from 0–0.2 seconds).²</figcaption></figure><p>We, as humans, link a small Hz number with a low tone, e.g. bass or the lowest “a” key on the piano, while a larger Hz number is linked to a high tone, such as whistling or the highest “a” key on the piano. Generally, the human ear can recognize frequencies between 20–20000Hz (20kHz). Therefore, every audible sound that a human can recognize, is a combination of the different frequencies with different levels in this frequency range (See Figure 4).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*_YzaQIrxebjzUAdCDc5N7A.gif" /><figcaption>Figure 4: An audio signal is reproduced using FFT.⁹</figcaption></figure><p>Now that we understand what frequencies are, how are these extracted from an audio signal?</p><h4><em>Microphone Signal</em></h4><p>Before we perform any extraction, the captured audio signal from the microphone is required first. There are packages on <a href="http://pub.dev">pub.dev</a> that take care of that. In this case, the <a href="https://pub.dev/packages/flutter_audio_capture"><em>flutter_audio_capture</em></a> is used, but feel free to change to any other package.</p><p>To begin using this package, we must first initialize the <em>flutter_audio_capture</em> before listening to the audio signal.</p><pre>final  _flutterAudioCapture = FlutterAudioCapture().initialize();</pre><p>Once initialized, we can begin listening to the audio signal directly from the microphone:</p><pre>_flutterAudioCapture.start(<br>  (data) { ... },<br>  (e) { ... },<br>  sampleRate: &lt;sample_rate&gt;,<br>  bufferSize: &lt;buffer_size&gt;,<br>);</pre><p>Before applying any logic on the microphone data, let’s first examine the following two parameters sampleRate and bufferSize :</p><h4>Sample Rate</h4><p>The sample rate refers to the number of sample that are captured per second (See Figure 4). To capture high frequencies from an audio signal, a higher sampling rate is required. For instance, a typical phone call has a sampling rate of only 8000 Hz (8 kHz), which limits the capture of higher frequencies, resulting in a muffled sound quality and a loss in information.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*HMtU0cxQjlPKve-m.png" /><figcaption>Figure 4: <a href="https://www.google.com/url?sa=i&amp;url=https%3A%2F%2Fwww.hollyland.com%2Fblog%2Ftips%2Fwhat-is-sample-rate-in-audio&amp;psig=AOvVaw3Qj48sB6B7JenwYVo6BVHD&amp;ust=1729604057313000&amp;source=images&amp;cd=vfe&amp;opi=89978449&amp;ved=0CBcQjhxqFwoTCNihmK7Ln4kDFQAAAAAdAAAAABAE">T</a>he sample rate determines which frequencies can be captured¹⁰</figcaption></figure><p>Therefore, it would be logical to use a sampling rate of approximately 20 kHz; however, the actual sampling rate must be at least 40 kHz. This requirement is based on the Nyquist–Shannon sampling theorem.</p><h4>Nyquist–Shannon sampling theorem</h4><p>Audio consists of pressure waves, which, when visualized, exhibit an oscillating pattern, i.e. they go up and down. To accurately capture the full range of an audio wave, it is essential to record the points where the wave reaches its lowest and highest levels, ensuring that the wave has completed a full cycle. Otherwise, this can lead to incorrect signal capture, as illustrated in Figure 5 A and C. This phenomenon is called aliasing.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/460/0*I-6u4QYn4t23HnkF" /><figcaption>Figure 5: Only having a sampling rate equal to the frequency (e.g. both 40 Hz), will result in loss of data. Instead, the sampling rate should be double the actual frequency (e.g. 80 Hz for 40 Hz frequency)⁷.</figcaption></figure><p>The wagon wheel problem describes a similar problem, where the wheel of a car spinning 24 times a second appears stationary when filmed with a 24 frames per second (FPS) camera. With more than 24 rotations, the wheel can even look like it is going reverse.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*zUFLhW4MINwcQSj2newpKw.gif" /><figcaption>Figure 6: Cameras can have aliasing with fast moving objects⁶</figcaption></figure><p>That is the reason why, according to the Nyquist–Shannon sampling theorem, the Hz of the sample rate must be twice the Hz of the actual frequency that should be measured/reconstructed. This requirement ensures that the original waveform can be accurately and precisely reconstructed.</p><p>So for 20000 Hz, the upper frequency bound the human ear can hear, this equals to 40000 Hz. So this means that audio should be recorded at a sampling rate of 40000 Hz to capture the full spectrum from 20 to 20000 Hz.</p><p>However, the most common sampling rates are 44.1 kHz and 48 kHz and not 40kHz. The reason for that is, that we do not want to record any audio above the 40kHz, but it is not directly possible to cut off any signals above a certain frequency. Instead the so called anti-aliasing filters, which are used for cutting off frequencies above or below a certain threshold, feature a transition band, i.e. it takes from frequency 2<em>0000Hz </em>to frequency <em>22000Hz</em> to have all frequencies above the threshold of <em>20000Hz</em> removed (See Figure 7). Within this range (e.g. <em>20000Hz</em>–<em>22000Hz</em>), the frequencies gradually decrease but are still present until they are completely absent.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*q8juBrSh_1mh37GxUAJanQ.png" /><figcaption>Figure 7: Anti-Aliasing filter require a transition band to remove all frequencies higher than a given threshold⁵</figcaption></figure><p>This is the reason, why the sample rate should not be 40 kHz, but should have a buffer to allow for the transition band. Therefore, going back to our code example, we can choose a sample rate of 48000.</p><h4>Buffer Size</h4><p>The buffer size defines the number of samples that should be buffered before processing. The higher the number of data points we buffer, the more delay will be in the signal. The lower the number is, the less time we have to apply any calculations to prevent any stuttering, e.g. the buffer size has 53ms of data (buffer size/sample rate) and our calculation takes 59ms, there will be a stuttering/no sound of 6ms when playing back the data in real-time.</p><p>We will go with a buffer size of 256, i.e. 256 data points are buffered, but feel free to change to a lower or higher number if necessary.</p><p>With both the sample rate and buffer size in place, we can finally take a look at the code for extracting the frequencies from our audio signal:</p><pre>_flutterAudioCapture.start(<br>(data) {<br>  final buffer = data;  <br>  &lt;...&gt;<br>},</pre><p>So what is the data that we get from the package?</p><h4>PCM Data</h4><p>The data the package returns is called PCM data. Basically, this is the <strong>raw</strong> audio data representing sound wave amplitudes over time. In more detail, PCM stands for Pulse Code Modulation and is the method for transforming analog signals to digital ones. Only with the raw/uncompressed audio data, it is possible to extract the frequencies as desired.</p><p>How can the frequencies be extracted? Are they directly accessible in the data? No, the data must first be transformed. This transformation is performed in the following code lines using the <a href="https://pub.dev/packages/fftea">fftea</a> package:</p><pre>final fft = FFT(buffer.length);<br>final freq = fft.realFft(buffer);<br>final freqList = freq.discardConjugates().magnitudes().toList();</pre><p>What is happening here? To fully understand this, we must first grasp the underlying concept.</p><h4>Time vs Frequency Domain</h4><p>Why can we not extract the frequency information directly? Because the PCM Data is in the time domain, i.e. the data represents audio over time and not per frequency. But luckily, having the data in time domain also means that we have the data in the frequency domain but just can not access it directly (See Figure 1). Instead, it’s required to apply an Fast-Fourier-Transform (FFT). The FFT is used to transform the data from time domain to frequency domain.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/850/1*eNPru7MPbIN3iqpYagPREA.png" /><figcaption>Figure 1: Audio data in the time domain can be transformed into frequency domain and the other way around.⁸</figcaption></figure><h4>Fast-Fourier-Transform: Transforming Time to Frequency Domain</h4><p>How does the Fast Fourier Transform (FFT) work? In the ‘Audio Frequencies’ section, it was explained that every sound is composed of various frequencies at different levels. Therefore, it should be possible to reconstruct any sound by combining and adding these different frequencies. That’s correct, and the fascinating part is, that when reconstructing a sound using different frequencies, the level of each frequency needed to reproduce the waveform is determined (See Figure 4). This is exactly what is required: The level of each frequency aka. the frequency domain.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*_YzaQIrxebjzUAdCDc5N7A.gif" /><figcaption>Figure 4: An audio signal is reproduced using FFT.⁹</figcaption></figure><p>That is what the FFT is all about: It reengineers the audio signal to extract the frequency domain by determining the level of each frequency required. This is the process that occurs in the FFT formula:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/406/0*kFbaD5avqfQ1Wrmc.jpg" /></figure><p>Basically, this is what is happening behind the scenes when executing the following code:</p><pre>final freq = fft.realFft(buffer);</pre><h4>Symmetrical Result</h4><p>The output of the FFT consists of pairs of complex numbers, which can be used to calculate values such as the magnitude. We then apply two functions to this output. The first is fftOutput.discardConjugates(), which removes half of the values from the FFT. This is possible because the FFT produces a symmetrical, mirrored result for real signals (See Figure 9), with one half representing negative frequencies and the other representing positive frequencies.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*G7kDibIigKvigmJB3NAImQ.png" /><figcaption>Figure 9: The result of a FFT visualized in a graph³</figcaption></figure><p>Both the positive and negative frequency components are required to fully reconstruct a signal. However, since the use case is not about full reconstruction but about extracting frequencies, the negative frequencies are not needed and can be discarded, effectively reducing the output size by half.</p><h4>Magnitude</h4><p>The second function we call is fftOutput.magnitudes() .</p><p>The FFT outputs us pairs of complex numbers. These are not in a useable format. What we want, is the maximum absolute number for a frequency, known as the <em>magnitude</em>. To calculate the <em>magnitude,</em> the Pythagorean theorem (a² + b² = c²) is used, where <em>a</em> and <em>b </em>equal the complex numbers from the FFT output and <em>c</em> equals the <em>magnitude</em>.</p><p>This calculation does not come out of nowhere. Instead the values for <em>a</em> and <em>b</em> represent sine and cosine values within a circle and together with the third value, i.e. the <em>magnitude,</em> form a triangle within a circle (See Figure 8).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*f28X8dRijCoTy-WYs1L4Qg.png" /><figcaption>Figure 8: It is possible to apply the Pythagorean theorem on the result of a FFT to get the magnitude, because the three values form a triangle within a circle when visualized⁴</figcaption></figure><p>Therefore, we can calculate the <em>magnitude</em> by calculating the missing side of a triangle using the Pythagorean theorem.</p><h4>Frequency Spectrums</h4><p>Finally, we will group various frequencies into broader frequency spectrums to reduce the number of data points for visualization. This step is optional.</p><p>For this we will use a FrequencySpectrum model, which includes a minimum and maximum frequency value:</p><pre>class FrequencySpectrum {<br>  FrequencySpectrum(this.min, this.max);<br><br>  final int min;<br>  final int max;<br>}</pre><p>The frequencies will be grouped by octaves. Within an octave the frequency doubles, e.g. 40–80 Hz is one octave and the following octave would go from 80–160 Hz. Interestingly, humans perceive these two areas as qualitatively identical, with the latter merely being at a higher pitch, still the distance between the start and end frequency being twice as far. All octaves, starting with 0 — 20 Hz and ending with 16kHz-20kHz, will be put into a list:</p><pre>final frequencies = [<br>  FrequencySpectrum(0, 20),<br>  FrequencySpectrum(20, 25),<br>  FrequencySpectrum(25, 31),<br>  FrequencySpectrum(31, 40),<br>  FrequencySpectrum(40, 50),<br>  FrequencySpectrum(50, 63),<br>  FrequencySpectrum(63, 80),<br>  FrequencySpectrum(80, 100),<br>  FrequencySpectrum(100, 125),<br>  FrequencySpectrum(125, 160),<br>  FrequencySpectrum(160, 200),<br>  FrequencySpectrum(200, 250),<br>  FrequencySpectrum(250, 315),<br>  FrequencySpectrum(315, 400),<br>  FrequencySpectrum(400, 500),<br>  FrequencySpectrum(500, 630),<br>  FrequencySpectrum(630, 800),<br>  FrequencySpectrum(800, 1000),<br>  FrequencySpectrum(1000, 1250),<br>  FrequencySpectrum(1250, 1600),<br>  FrequencySpectrum(1600, 2000),<br>  FrequencySpectrum(2000, 2500),<br>  FrequencySpectrum(2500, 3150),<br>  FrequencySpectrum(3150, 4000),<br>  FrequencySpectrum(4000, 5000),<br>  FrequencySpectrum(5000, 6300),<br>  FrequencySpectrum(6300, 8000),<br>  FrequencySpectrum(8000, 10000),<br>  FrequencySpectrum(10000, 12500),<br>  FrequencySpectrum(12500, 16000),<br>  FrequencySpectrum(16000, 20000),<br>];<br>       </pre><p>For each frequency spectrum, the combined magnitude is calculated afterwards:</p><pre><br>List&lt;({FrequencySpectrum spectrum, double value})&gt; frequencyValues =<br>  frequencies.map((e) {<br>    final min = fft.indexOfFrequency(e.min.toDouble(), 44000);<br>    final max = fft.indexOfFrequency(e.max.toDouble(), 44000);<br>    <br>    return (<br>      spectrum: e,<br>      value: freqList<br>          .sublist(min.floor(), max.ceil())<br>          .reduce((a, b) =&gt; a + b),<br>    );<br>}</pre><p>Finally, these frequencyValues are passed to theonData function, which can then be used for visualizing the data in the UI.</p><p>We can now extract frequencies from a real-time audio signal, allowing us to visualize, animate, and implement various creative applications, e.g. integrating different frequencies into an apple intelligence clone like I did here:</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/florian_voegtle/status/1843191750627885448&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/7263136691e3e3b7fda26c2e371219bd/href">https://medium.com/media/7263136691e3e3b7fda26c2e371219bd/href</a></iframe><h4>Summary</h4><p>In this article, we explored how frequency data can be extracted from real-time audio signals using Dart/Flutter to create a responsive visualization. To achieve this, raw PCM data is captured first, then a Fast Fourier Transform (FFT) is applied to convert the data from time domain to frequency domain. By analyzing this transformed data, we can group various frequencies to drive different parts of a shader, allowing specific audio frequency spectrums to influence separate parts of an animation.</p><p>We also covered essential concepts such as <strong>sample rate</strong> and <strong>buffer size</strong>, explaining how they affect real-time audio capture and processing. The sample rate should be high enough to prevent aliasing as suggested by the Nyquist–Shannon sampling theorem, which requires a rate at least double the highest frequency in the signal. Buffer size, meanwhile, controls the latency and processing delay, balancing responsiveness with smooth performance.</p><p>By following these steps, you can successfully integrate audio-responsive visualizations in Flutter, aligning with the latest trends set by companies like Apple and Google.</p><p>The full source code, <strong>including the visualization with the shader and Equalizer</strong>, is available under:</p><p><a href="https://github.com/vgtle/shader_studio/tree/intelligence">GitHub - vgtle/shader_studio at intelligence</a></p><p>If you enjoyed this guide and would like to see a Part 2, let me know! Clap for this article, leave a comment, or share it with others who might find it useful. Your feedback will help me create even more in-depth guides on different Flutter topics!</p><p>You can connect with me on <a href="https://x.com/florian_voegtle">X (formerly Twitter)</a> or <a href="https://www.linkedin.com/in/florian-v%C3%B6gtle/">LinkedIn</a> if you’re interested in Flutter and want to explore more in-depth topics.</p><h3>Sources</h3><p>[1] Pilhofer, Michael (2007). <a href="https://books.google.com/books?id=CxcviUw4KX8C"><em>Music Theory for Dummies</em></a>. For Dummies. p. 97. <a href="https://en.wikipedia.org/wiki/ISBN_(identifier)">ISBN</a> <a href="https://en.wikipedia.org/wiki/Special:BookSources/9780470167946">9780470167946</a>.</p><p>[2] <a href="https://vru.vibrationresearch.com/lesson/introduction-sine/">https://vru.vibrationresearch.com/lesson/introduction-sine/</a> (15.10.24)</p><p>[3] Chowdhury, Mehdi &amp; Cheung, Ray C.C.. (2019). Reconfigurable Architecture for Multi-lead ECG Signal Compression with High-frequency Noise Reduction. Scientific Reports. 9. 10.1038/s41598–019–53460–3.</p><p>[4] <a href="https://www.youtube.com/watch?v=rUtz-471LkQ">https://www.youtube.com/watch?v=rUtz-471LkQ</a> (06.11.24)</p><p>[5] Kapić, Aladin &amp; Sarić, Rijad &amp; Lubura, Slobodan &amp; Jokic, Dejan. (2021). FPGA-based Implementation of IIR Filter for Real-Time Noise Reduction in Signal. Journal of Engineering and Natural Sciences. 3. 10.14706/JONSAE2021316.</p><p>[6] <a href="https://www.youtube.com/watch?v=VNftf5qLpiA">https://www.youtube.com/watch?v=VNftf5qLpiA</a> (06.11.2024)</p><p>[7] <a href="https://www.ni.com/de/shop/data-acquisition/measurement-fundamentals/analog-fundamentals/acquiring-an-analog-signal--bandwidth--nyquist-sampling-theorem-.html">https://www.ni.com/de/shop/data-acquisition/measurement-fundamentals/analog-fundamentals/acquiring-an-analog-signal--bandwidth--nyquist-sampling-theorem-.html</a> (06.11.2024)</p><p>[8] Mastriani, Mario. (2018). Quantum-Classical Algorithm for an Instantaneous Spectral Analysis of Signals: A Complement to Fourier Theory. Journal of Quantum Information Science. 08. 52–77. 10.4236/jqis.2018.82005.</p><p>[9] <a href="https://www.jezzamon.com/fourier/">https://www.jezzamon.com/fourier/</a> (06.11.2024)</p><p>[10] <a href="https://www.hollyland.com/blog/tips/what-is-sample-rate-in-audio">https://www.hollyland.com/blog/tips/what-is-sample-rate-in-audio</a> (12.11.2024)</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f5fef9017f09" width="1" height="1" alt=""><hr><p><a href="https://medium.com/neusta-mobile-solutions/master-real-time-frequency-extraction-in-flutter-to-elevate-your-app-experience-f5fef9017f09">Master Real-Time Frequency Extraction in Flutter to Elevate Your App Experience</a> was originally published in <a href="https://medium.com/neusta-mobile-solutions">neusta mobile solutions</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Magical RGB Split Distortion Effect in Flutter]]></title>
            <link>https://medium.com/neusta-mobile-solutions/the-magical-rgb-split-distortion-effect-in-flutter-e080c106494a?source=rss----4eaa9b078a01---4</link>
            <guid isPermaLink="false">https://medium.com/p/e080c106494a</guid>
            <category><![CDATA[app-development]]></category>
            <category><![CDATA[flutter]]></category>
            <category><![CDATA[shaders]]></category>
            <category><![CDATA[ui]]></category>
            <category><![CDATA[flutter-app-development]]></category>
            <dc:creator><![CDATA[Florian Vögtle]]></dc:creator>
            <pubDate>Fri, 26 Jul 2024 18:47:26 GMT</pubDate>
            <atom:updated>2024-07-26T18:47:26.489Z</atom:updated>
            <content:encoded><![CDATA[<p>Daniel Kuntz (<a href="https://x.com/dankuntz">https://x.com/dankuntz</a>) recently posted about a beautiful effect he implemented in Swift, and I was inquisitive about how the effect works and whether Flutter is also capable of implementing this behavior.</p><p>The short answer to this is: Yes, Flutter is absolutely capable of doing this:</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/i/status/1813917551069438061&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/051cface9268c87505db6833ff430b0f/href">https://medium.com/media/051cface9268c87505db6833ff430b0f/href</a></iframe><h3>So how does it work?</h3><p>Well the answer to this is not that easy. For the purpose of this post, this is split up into 4 Chapters:</p><ul><li>The shader</li><li>Integration of the shader into Flutter</li><li>Animation Behavior</li><li>Ticker</li></ul><p>So let’s get started!</p><h3>The Shader</h3><h4>Introduction</h4><p>Shaders are an incredible tool to manipulate the pixels drawn on the screen. The benefit? Each pixel is manipulated independently, which allows parallel processing on the GPU for great performance. Sadly, this can’t be done within Flutter for now, but requires the shader to be written in the OpenGL Shading Language. BUT, it is not that complicated. Trust me on that.</p><h4>The RGB Distortion Effect</h4><p>As I said, the language itself is not difficult and the code is not a lot (See below), but it is all about the details and a little math. So let’s break it down!</p><pre>#include &lt;flutter/runtime_effect.glsl&gt;<br><br>uniform vec2 u_size;<br>uniform vec2 u_location;<br>uniform vec2 u_velocity;<br>uniform sampler2D u_texture;<br><br>out vec4 frag_color;<br><br>void main() {<br>  vec2 l = u_location;<br>  vec2 v = u_velocity;<br>  vec2 p = FlutterFragCoord().xy;<br><br><br>  vec2 m = -v * pow(clamp(1.0 - length(l - p) / 190, 0.0, 1.0), 2) * 1.5 ;<br>  <br>  vec3 c = vec3(0.0);<br><br>  for (int i = 0; i &lt; 10; i++) {<br>    float s = 0.175 + 0.005 * i;<br>      c += vec3(<br>          texture(u_texture, (p + s * m) / u_size).r, <br>         texture(u_texture, (p + (s + 0.035) * m) / u_size).g, <br>          texture(u_texture, (p + (s + 0.06) * m) / u_size).b<br>      );<br>    }<br>    <br>    frag_color = vec4(c / 10.0, To apply the RGB Split, we first need to receive the pixel according pixel of the rendered image from Flutter:);<br>}</pre><p>We start very simple. Since we are adjusting single independent pixels, the script needs to output a pixel, more precisely a pixel in the RGBA format, i.e it consists of the red, green, blue and alpha channel values. We can declare that the following way:</p><pre>out vec4 frag_color;</pre><p>Not that hard, right? So, let us continue! Next, input variables are required, allowing control of the behavior of the shader from outside. These are called <strong>uniforms</strong> and work as the bridge between Flutter and the shader for passing information. They will allow us to animate the shader later on. Again these uniforms are not hard to declare:</p><pre>uniform vec2 u_size;<br>uniform vec2 u_location;<br>uniform vec2 u_velocity;<br>uniform sampler2D u_texture;</pre><ul><li><em>u_size</em>: The screen size (width x height) that we need to keep the correct aspect ratio.</li><li><em>u_location</em>: The position on the screen that is touched by the user, used to trigger the effect.</li><li><em>u_velocity</em>: The speed of the pan gesture of the user on the device, used to control the strength of the effect.</li><li><em>u_texture</em>: This is the rendered image from Flutter, allowing for manipulation of the pixels rendered by Flutter.</li></ul><p>Now, there is shader input, i.e. multiple <em>uniform</em> variables, and a shader output, i.e. an <em>out</em> variable. So, only the logic for transforming the image is missing.</p><p>This is done in the <em>main()</em> function:</p><pre>void main() {<br>  vec2 l = u_location;<br>  vec2 v = u_velocity;<br>  vec2 p = FlutterFragCoord().xy;<br><br>  vec2 m = -v * pow(clamp(1.0 - length(l - p) / 190, 0.0, 1.0), 2) * 1.5 ;<br>  <br>  vec3 c = vec3(0.0);<br><br>  for (int i = 0; i &lt; 10; i++) {<br>    float s = 0.175 + 0.005 * i;<br>      c += vec3(<br>          texture(u_texture, (p + s * m) / u_size).r, <br>         texture(u_texture, (p + (s + 0.035) * m) / u_size).g, <br>          texture(u_texture, (p + (s + 0.06) * m) / u_size).b<br>      );<br>    }<br>    <br>    frag_color = vec4(c / 10.0, 1.0);<br>}</pre><p>Still, these are only 18 lines of code: A lot is happening. So, let’s go through this step by step:</p><ol><li><strong>Assigning Variables</strong></li></ol><pre>vec2 l = u_location;<br>vec2 v = u_velocity;<br>vec2 p = FlutterFragCoord().xy;</pre><p>For shorter code, <em>u_location</em> and <em>u_velocity</em> are reassigned. In addition, <em>FlutterFragCoord().xy, </em>i.e. the actual current pixel that should be manipulated, is also reassigned.</p><p>2. <strong>Calculating the amplitude of the effect</strong></p><p>The next step is to calculate how strongly the effect should be applied to the individual pixels. This is based on the following conceptual idea: the further away a pixel (<em>p</em>) is from the touched pixel on the screen (<em>l</em>), the smaller the effect should be.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/198/1*m3zhamM84sOyXtFbKG_IIQ.png" /><figcaption>Visualization of the area the effect should be applied to</figcaption></figure><p>The actual implementation also takes into account the velocity (<em>v</em>), i.e. the dragging speed and the direction of the user’s gesture, and applies some modifiers to make the function non-linear:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/284/1*922nub3Ci1NHaTOa2ri0-g.png" /></figure><p>This results in the following line of code:</p><pre>vec2 m = -v * pow(clamp(1.0 - length(l - p) / 190, 0.0, 1.0), 2) * 1.5 ;</pre><p>Playing around with different parameters for this function will lead to interestingly different results. So feel free to play around with it.</p><p>3. <strong>The RGB Split Effect</strong></p><p>Finally, something more interesting: The RGB Split Effect!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/172/1*JUwnzo7KNwDryJMwKYLw3Q.png" /><figcaption>The RGB Split Effect on the whole screen</figcaption></figure><p>To calculate the RGB split, we must first obtain the corresponding pixel of the rendered image from Flutter, i.e. the pixel that belongs to the position in the variable <em>p</em>. This can be done with the following code:</p><pre>texture(u_texture, p)</pre><p>The idea behind the RGB split is that each color channel, i.e. the red, green and blue channels, is translated in different directions:</p><pre>vec3 pixel_red = texture(u_texture, p - 0.02);<br>vec3 pixel_green = texture(u_texture, p + 0.035);<br>vec3 pixel_blue = texture(u_texture, p + 0.06);<br>vec4 new_pixel = vec4(pixel_red, pixel_green, pixel_blue, 1)</pre><p>Playing around with these translation numbers will again turn into different results:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8Xa4l7INBeo4k3I-y0Rxnw.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hAYfC9GvwVHflQpu2_6rAQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YbZ-Vk0gI0E9mQ-iW0k0DA.png" /></figure><p>Obviously, this is not the final result we expect to get. There is no real distortion effect visible. So let’s add this now.</p><p>4. <strong>The Distortion Effect</strong></p><p>The distortion effect contains of two different effects:</p><ul><li>Translation of Pixels</li><li>Blur Effect</li></ul><p>The translation of pixels is done by applying the same translation value to each pixel:</p><pre>float distortion = 10;<br>vec3 pixel_red = texture(u_texture, p - 0.02 + distortion);</pre><p>This results in the following:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8Xa4l7INBeo4k3I-y0Rxnw.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*okIfIoT8EsGRXlCkPDB3Pw.png" /><figcaption>Left: The non translated pixels, Right: The translated pixels</figcaption></figure><p>This again does not look even close to what we want to achieve. To ensure the correct behavior, the previously calculated amplitude (<em>m</em>) is integrated into the current logic by multiplying the RGB split value and the distortion ratio by the amplitude:</p><pre>vec3 pixel_red = texture(u_texture, p - (0.02 + distortion) * m);</pre><p>This results in the following, which looks way closer to the expected behavior:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/246/1*xVfjnKa3DQpAaD9dDxvOiQ.png" /></figure><p>Still missing is the blur effect. This effect is again generated by iteratively translating the image in different directions and layering these on top of each other. The following shows the blur effect with just 2 iterations:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/278/1*amVOJmu2Z306kSgUg2yw1g.png" /></figure><p>For the actual effect, an iteration of 10 times is required to gain the expected result:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/258/1*JpmEsV_ri9-nwj7pEznV0w.png" /></figure><p>This results in the following code:</p><pre>vec4 new_pixel = vec4(0);<br><br> for (int i = 0; i &lt; 10; i++) {<br>   float distortion = 0.175 + 0.005 * i;<br>   vec3 pixel_red = texture(u_texture, p - (0.02 + distortion) * m);<br>   vec3 pixel_green = texture(u_texture, p + (0.035 + distortion) * m);<br>   vec3 pixel_blue = texture(u_texture, p + (0.06 + distortion) * m);<br><br>   new_pixel += vec4(pixel_red, pixel_green, pixel_blue, 1)<br>}<br>new_pixel = new_pixel / 10;</pre><p>and the final result of the effect:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/306/1*37CQWSrP-Z5Uj5Y1g3s49Q.png" /></figure><p>With this result, it is finally time to jump into Flutter to integrate this effect.</p><h3>Integration of the shader into Flutter</h3><p>The integration of shaders is pretty straight forward and generally explained in many posts (For more information take a look at <a href="https://docs.flutter.dev/ui/design/graphics/fragment-shaders">https://docs.flutter.dev/ui/design/graphics/fragment-shaders</a>).</p><p>The first thing to add is the path to the shader in your <em>pubspec.yaml:</em></p><pre>flutter:<br>  shaders:<br>    - &lt;path&gt;/&lt;name&gt;.frag</pre><p>To use the shader in the application, just add this single line of code in your <em>main()</em> function to load that specific shader:</p><pre>FragmentProgram.fromAsset(&#39;&lt;path&gt;/&lt;name&gt;.frag&#39;); </pre><p>Now, the shader can be used within every <em>CustomPainter </em>by assigning the shader to the <em>Paint</em> object<em>:</em></p><pre>class ShaderPainter extends CustomPainter {<br>  @override<br>  void paint(Canvas canvas, Size size) {<br>    var shader = _program.fragmentShader();<br>    final paint = Paint()..shader = shader;<br>    canvas.drawRect(<br>      Rect.fromLTWH(0, 0, size.width, size.height),<br>      paint,<br>    );<br>  }<br><br>  @override<br>  bool shouldRepaint(covariant CustomPainter oldDelegate) {<br>    return false;<br>  }<br>}</pre><p>So now we can render any shader, but how do we adjust the variables we defined for the shader in the last section? Well, it’s a bit odd, but once understood, it just works. The idea behind this is to use indices to fill the values of our variables, e.g. 0, 1 and 2. But, there is one more clue, that is essential to understand. When working with vectors, each value within the vector has its own index. For example:</p><pre>uniform vec4 pixel<br>uniform vec2 position</pre><p>will result in the following:</p><pre>shader.setFloat(0, pixel.r);<br>shader.setFloat(1, pixel.g);<br>shader.setFloat(2, pixel.b);<br>shader.setFloat(3, pixel.a);<br>shader.setFloat(4, position.x);<br>shader.setFloat(5, position.y);</pre><p>The last parameter that must be passed to the shader is the image of the rendered widget by calling <em>shader.setImageSampler</em>. For this purpose, another package is used called <em>flutter_shaders</em> (<a href="https://github.com/jonahwilliams/flutter_shaders">https://github.com/jonahwilliams/flutter_shaders</a>). While it is possible to receive the image without any additional packages, the package brings the <em>AnimatedSampler. </em>It<em> </em>allows for quick access to the image of the rendered Flutter Element the shader should be applied to and for easier animation support.</p><p>So finally, we can put together our shader integration:</p><pre>AnimatedSampler(<br>              (image, size, canvas) {<br>                final shader = _program.fragmentShader();<br>                shader.setFloat(0, size.width);<br>                shader.setFloat(1, size.height);<br>                shader.setFloat(2, position.dx);<br>                shader.setFloat(3, position.dy);<br>                shader.setFloat(4, velocity.dx);<br>                shader.setFloat(5, velocity.dy);<br>                shader.setImageSampler(0, image);<br>                canvas.drawRect(<br>                  Rect.fromLTWH(0, 0, size.width, size.height),<br>                  Paint()..shader = shader,<br>                );<br>              },<br>child: &lt;your Flutter Widget that the shader should be applied to&gt;<br> ),</pre><p>Maybe you are wondering where the position and velocity value are coming from? This will be answered in the next section, where the animation is added.</p><h3>Animation Behavior</h3><p>The position and velocity are the two values still missing. The position can easily be obtained by the <em>GestureDetector</em> provided by Flutter:</p><pre>GestureDetector(<br>            onPanStart: (details) {<br>              setState(() {<br>                position = details.globalPosition;<br>              });<br>            },<br>            onPanUpdate: (event) {<br>              setState(() {<br>                position = details.globalPosition;<br>              });<br>            },<br>            onPanEnd: (details) =&gt; setState(() {<br>position = null; }),<br>            onPanCancel: () =&gt; setState(() {<br><br>position = null; <br>}),</pre><p>This results in the following effect with a static velocity:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*B7ZGPsJOAW9irBdls9o2tw.gif" /></figure><p>The velocity is the most tricky part of the animation. For this. the following behavior needs to be implemented:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*RE6OeIHmiFwFhu-y8wr78g.gif" /></figure><p>With this behavior, the green point is the actual position of the user’s finger on the screen. The red point follows the green point but in a lazy way by slowly getting closer to the green point. With the help of these two points the velocity is calculated. The idea is that the distance between both points describes the velocity. This results in the following math function to calculate the velocity <em>v</em>, with <em>p</em> being the position of the green point, <em>l</em> being the position of the red point and <em>dmax</em> being the distance between <em>p </em>and<em> l </em>where the pull effect should be the strongest:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/947/1*eIAwEx0L0X4acuCGLa-pVA.png" /></figure><p>This looks very complicated, but lets break it down and take a look at the code. It is not as difficult to understand as it seems. First, the behavior can be simplified to show the general concept, which drastically reduces the complexity of the function while retaining the core concept:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/218/1*CEG8YaM0aGtSQwrYanbbjg.png" /></figure><p>The idea here is to take the vector between <em>p </em>and<em> l </em>and reduce the length to 20%. This is our velocity <em>v. </em>And obviously, the corresponding code is not that complicated too:</p><pre>final distance = desiredPosition - position;<br>      velocity = distance * 0.2;</pre><p>This results in the following behavior:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*VJAzIoxpLi5htYC6ZXzIEQ.gif" /></figure><p>This is close to the wanted behavior, but the actual animation should have a more non linear behavior, i.e. the further the red point is away from the green point, the faster the red point should move.</p><p>For those who know about animations, you probably know about the Curves class and the different transformer functions it provides for transforming linear animations from <em>0–1</em> to a more curved animation:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/464/1*uF68haJmcpEqga62wKKQOA.gif" /><figcaption><a href="https://api.flutter.dev/flutter/animation/Curves-class.html">https://api.flutter.dev/flutter/animation/Curves-class.html</a></figcaption></figure><p>This is basically just an implementation of the following math function:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/275/1*Z8_sEUNphjn06VRSCbM0Lg.png" /></figure><p>This is exactly what we need to make the animation a little more dynamic. So let’s put that in some code:</p><pre>final distance = desiredPosition - position;<br>final amplitude = 1 - max(0, 1000 - distance.distance) / 1000;<br>velocity = distance * (0.02 + 0.2 * Curves.easeOutQuart.transform(amplitude));</pre><p>In the code, there is a new variable <em>amplitude,</em> which maps the distance of the red point (<em>position</em>) and the green point (<em>desiredPosition</em>) into a value between <em>0</em> and <em>1</em>. The variable is then used to calculate the new <em>velocity</em>, with a minimum speed of 0.02, i.e. 2% of the distance, which goes up to 0.22, i.e. 22% of the distance between both points. To get the dynamic animation behavior the ease out curve transformation is applied. Finally, the velocity can be added to the position of the red point:</p><pre>position += velocity;</pre><p>Now, the velocity is calculated and the whole logic just needs to be added somewhere in the Flutter code to actually trigger the animation. So how do we do that?</p><h3>Ticker</h3><p>The answer is the <em>Ticker</em> class. While there is the <em>AnimationController, </em>which is ususally<em> </em>used for any kind of animation, we do not want any kind of duration or animated value. Instead, we want to have a continuous loop that is called over and over again, allowing for modification of values frame by frame. This is exactly what the <em>Ticker</em> class provides us with.</p><p>Creating a <em>Ticker</em> is easy. First, we need to add the <em>SingleTickerProviderMixin</em> to our state class. The <em>Mixin</em> now offers the <em>createTicker</em> function.</p><pre>ticker = createTicker(onUpdate)..start();</pre><p>Now all you have to do is call the <em>createTicker</em> function with a callback that will be called for each frame and start the ticker.</p><p>The callback should contain our velocity update logic:</p><pre>void onUpdate(Duration elapsed) {<br>    final delta = ((elapsed.inMicroseconds - lastTime.inMicroseconds) / Duration.microsecondsPerSecond) * 60;<br>    lastTime = elapsed;<br>      final distance = desiredPosition - position;<br>      final amplitude = 1 - max(0, 1000 - distance.distance) / 1000;<br>    setState(() {<br>      velocity = distance * (0.02 + 0.2 * Curves.easeOutQuart.transform(amplitude));<br>      position += velocity * delta;<br>    });<br>  }</pre><p>One thing that is added here is the <em>delta</em> value, i.e. the time difference between the frames, that takes care that our animation always runs with the same speed regardless of the FPS of the running device, e.g. 30 FPS, 60 FPS or even 120 FPS.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*yy9I2xQMzEd9y4AitRHfiQ.gif" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*3_vFHC44U1ldQbPX21znTQ.gif" /></figure><p>Left: With the <em>delta </em>value<em> </em>, Right: Without the <em>delta </em>value</p><p>With this final step, the effect is finally implemented:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*tBD0mzQLYAUuTC9IQdwK0w.gif" /></figure><h3>Contact</h3><p>Whether you have any questions left or you are just interested in more Flutter related topics, you can find me on X:</p><p><a href="https://x.com/florian_voegtle">x.com</a></p><h3>Source Code</h3><p>The source code of this project can be found here:</p><p><a href="https://github.com/vgtle/rgb_split_distortion_shader_example">GitHub - vgtle/rgb_split_distortion_shader_example: This is the source code for the rgb split distortion effect</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e080c106494a" width="1" height="1" alt=""><hr><p><a href="https://medium.com/neusta-mobile-solutions/the-magical-rgb-split-distortion-effect-in-flutter-e080c106494a">The Magical RGB Split Distortion Effect in Flutter</a> was originally published in <a href="https://medium.com/neusta-mobile-solutions">neusta mobile solutions</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>