<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Martin W. Hansen on Medium]]></title>
        <description><![CDATA[Stories by Martin W. Hansen on Medium]]></description>
        <link>https://medium.com/@martin_wh?source=rss-b6e8d2ae06b1------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*jCZvn76SmbhlXSidC65Xkg.png</url>
            <title>Stories by Martin W. Hansen on Medium</title>
            <link>https://medium.com/@martin_wh?source=rss-b6e8d2ae06b1------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 17 May 2026 19:21:34 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@martin_wh/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Kalman Filtering and Bayesian updating in Global Leadership]]></title>
            <link>https://medium.com/@martin_wh/kalman-filtering-and-bayesian-updating-in-global-leadership-a0791831a398?source=rss-b6e8d2ae06b1------2</link>
            <guid isPermaLink="false">https://medium.com/p/a0791831a398</guid>
            <dc:creator><![CDATA[Martin W. Hansen]]></dc:creator>
            <pubDate>Tue, 23 Sep 2025 13:58:57 GMT</pubDate>
            <atom:updated>2026-02-01T17:21:44.802Z</atom:updated>
            <content:encoded><![CDATA[<p>We all know that cross-border projects are becoming more and more common. Take for instance Oslo as a geographical starting point.</p><p>Being a manager in Oslo and managing teams in, say, India will require cultural knowledge, respect and inquiry. By respect, I mean that we must always consider how work-life, career development and academia will differ greatly between countries.<br>By inquiry, I mean informing ourselves and seeking out more knowledge about the ways our own work culture will overlap with, but most importantly contrast with, other work cultures around the world.</p><p>And here, much like our brains naturally process sensory information, effective cross-cultural leadership relies on sophisticated filtering mechanisms that separate signal from noise. This is a process beautifully captured by combining <strong>Bayesian brain inference theory</strong> with <strong>Kalman filtering</strong>.</p><h3>BayesLEAD</h3><p>After having studied and developed my concept of triangulization, and looking to cutting-edge neurology from the perspective of global leadership, I devised a concept which I named BayesLEAD, the result of a four-year R&amp;D process. Bayesian inference is at the heart of this concept.</p><p><strong>L</strong> stands for <em>likelihood</em>: what is the chance that what you think is the opposite of the truth? <br><strong>E</strong> stands for <em>empathy</em>: being considerate and setting yourself into your colleagues&#39; position and values. <br><strong>A</strong> stands for <em>assess</em>. <br><strong>D</strong> stands for <em>Deliver / Deploy</em>, which is to integrate all these into a process. 
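</p><p><em>A minimal numeric sketch of the Bayesian updating at the heart of this (all figures hypothetical): suppose you hold a prior belief of 0.7 that a colleague&#39;s silence in meetings signals disagreement, and then observe behaviour that is three times as likely under an alternative explanation, a cultural norm of deference.</em></p>

```python
# Hypothetical Bayesian update of a cultural assumption (illustrative numbers only).
prior = 0.7           # P(H): prior belief that silence signals disagreement
likelihood_h = 0.2    # P(E|H): chance of the observed behaviour if H is true
likelihood_alt = 0.6  # P(E|not H): chance of it under a deference norm instead

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
evidence = likelihood_h * prior + likelihood_alt * (1 - prior)
posterior = likelihood_h * prior / evidence

print(round(posterior, 4))  # 0.4375 -- the belief weakens once the evidence is weighed
```

<p>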
In short, it is to step back a little and to challenge one&#39;s own immediate assessments.</p><p>This management/neurological framework of mine will also be a crucial foundation of the forthcoming <a href="http://www.withavenworks.com">Cross-Cultural Intelligence Solutions at Withaven Works</a>.</p><h3>The Bayesian Leader&#39;s Mindwork</h3><p>Our brains operate as sophisticated <strong>Bayesian inference engines</strong>, constantly updating beliefs based on new evidence[1]. In leadership contexts, this means we maintain <strong>prior beliefs</strong> about different cultures — some familiar (those we’ve worked with extensively), others less so (new cultural contexts we’re encountering). The mathematical foundation follows Bayes’ theorem:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*nK4yOQKYUcGbhJw2jYSNJA.png" /></figure><p>When leading teams from familiar cultures, our priors are strong and well-calibrated. However, encountering unfamiliar cultural contexts introduces <strong>uncertainty</strong>. That is precisely where Bayesian updating becomes crucial[3]. Our brains naturally adjust cultural models based on observed team behaviours, communication patterns, and feedback loops.</p><h3>Kalman Filtering: Separating Signal from “Cultural Noise”</h3><p>Cross-cultural communication inevitably introduces noise. 
Misinterpretations, assumptions and biased perceptions cloud our judgement and make it difficult to discern which signals are authentically cultural[4].</p><p>The Kalman filter provides an elegant mathematical framework for optimal state estimation amid uncertainty:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*KQl79TSjktzpjZxGg17QxQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*O2tbkdqhPWAEC7NzGGUqoQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Z_geG0fe9zxy2kD0k7qX1A.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rgw0YidkoOXkQbbX5Xu3NA.jpeg" /></figure><h4>But what does all of this mean? Does it add any value?</h4><p>In leadership terms, this translates to <strong>continuously refining our understanding</strong> of team dynamics whilst filtering out cultural misconceptions and stereotypes[5]. The Kalman gain determines how much weight we give to new cultural observations versus our existing mental models.</p><h3><strong>Practical Applications</strong></h3><p><strong>Precision-Weighted Leadership</strong>: Just as the brain assigns different precision weights to sensory inputs[7], effective leaders calibrate their responses based on cultural familiarity. 
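</p><p><em>A one-dimensional sketch of that weighting, using the scalar form of the Kalman update (all values hypothetical; the &quot;state&quot; here is an abstract cultural-model score):</em></p>

```python
# Scalar Kalman update: balancing a prior estimate against a new observation.
# Hypothetical numbers; x is an abstract "cultural model" score in [0, 1].
x_hat = 0.5  # prior estimate of the state
p = 0.4      # estimate variance: how unsure we are about x_hat
z = 0.9      # new observation (e.g. fresh feedback from the team)
r = 0.1      # measurement noise variance: how unreliable observations are

k = p / (p + r)                  # Kalman gain: near 1 -> trust the observation
x_new = x_hat + k * (z - x_hat)  # corrected estimate moves towards z
p_new = (1 - k) * p              # uncertainty shrinks after the update

print(k, x_new, p_new)
```

<p>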
High-precision interactions with familiar cultures require minimal adjustment, whilst unfamiliar contexts demand greater <strong>active inference</strong>: deliberately seeking additional information to reduce uncertainty[2].</p><p><strong>Noise Reduction Strategies</strong>: Leaders can implement practical Kalman-inspired approaches by establishing <strong>feedback loops</strong> with team members from different cultures, regularly updating assumptions, and maintaining healthy scepticism about initial cultural interpretations[8].</p><p><strong>Dynamic Belief Updating</strong>: Rather than holding rigid cultural stereotypes, Bayesian leaders maintain <strong>probabilistic models</strong> that evolve with new evidence — recognising that individual team members may not conform to broader cultural patterns[9].</p><h3>The Neuroscience of Cultural Adaptation</h3><p>Research demonstrates that effective cross-cultural leaders exhibit enhanced <strong>neuroplasticity</strong>: their brains actively rewire neural pathways when encountering new cultural contexts[8]. This mirrors the brain’s natural tendency to minimise <strong>free energy</strong> (surprise) by continuously updating predictive models of the world[10].</p><p>The integration of emotional and rational processing becomes particularly crucial when cultural uncertainty is high. 
Leaders must therefore balance intuitive cultural reactions (<strong>System 1</strong>) with deliberate cultural analysis (<strong>System 2</strong>). Only then can we use precision-weighted attention to focus on the most informative cultural signals whilst filtering out noise[11].</p><p><strong>Key Takeaway</strong>: Like the Bayesian brain processing sensory information, exceptional cross-cultural leaders continuously update their cultural models through evidence-based learning whilst maintaining robust filtering mechanisms to separate authentic cultural insights from misleading noise. This mathematical approach to leadership transforms cultural complexity from a barrier into a competitive advantage.</p><p><em>What cultural “noise” might you be overlooking in your current leadership context? Consider implementing your own Bayesian-Kalman approach to team dynamics.</em></p><p>Sources:</p><p>[1] The history of the future of the Bayesian brain — PMC <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC3480649/">https://pmc.ncbi.nlm.nih.gov/articles/PMC3480649/</a></p><p>[2] The Bayesian Brain <a href="https://www.fil.ion.ucl.ac.uk/bayesian-brain/">https://www.fil.ion.ucl.ac.uk/bayesian-brain/</a></p><p>[3] Modeling other minds: Bayesian inference explains human … <a href="https://www.science.org/doi/10.1126/sciadv.aax8783">https://www.science.org/doi/10.1126/sciadv.aax8783</a></p><p>[4] Barriers in Cross-Cultural Communication <a href="https://www.mbaknol.com/business-communication/barriers-in-cross-cultural-communication/">https://www.mbaknol.com/business-communication/barriers-in-cross-cultural-communication/</a></p><p>[5] Implementation of Kalman Filter Approach for Active Noise … <a href="https://arxiv.org/abs/2402.06896">https://arxiv.org/abs/2402.06896</a></p><p>[6] Kalman Filter Equations — Summary <a href="https://www.kalmanfilter.net/multiSummary.html">https://www.kalmanfilter.net/multiSummary.html</a></p><p>[7] Bayesian brain theory: Computational neuroscience of belief <a 
href="https://www.sciencedirect.com/science/article/abs/pii/S0306452224007048">https://www.sciencedirect.com/science/article/abs/pii/S0306452224007048</a></p><p>[8] The Neuroscience of Great Leadership <a href="https://hortoninternational.com/neuroscience-of-great-leadership/">https://hortoninternational.com/neuroscience-of-great-leadership/</a></p><p>[9] Fear-Free Cross-Cultural Communication: Toward a More … <a href="https://www.frontiersin.org/journals/communication/articles/10.3389/fcomm.2020.00014/full">https://www.frontiersin.org/journals/communication/articles/10.3389/fcomm.2020.00014/full</a></p><p>[10] “Surprise” and the Bayesian Brain: Implications for Psychotherapy Theory and Practice <a href="https://www.frontiersin.org/article/10.3389/fpsyg.2019.00592/full">https://www.frontiersin.org/article/10.3389/fpsyg.2019.00592/full</a></p><p>[11] Commentary: A Computational Theory of Mindfulness Based Cognitive Therapy from the “Bayesian Brain” Perspective <a href="https://www.frontiersin.org/articles/10.3389/fpsyt.2021.575150/full">https://www.frontiersin.org/articles/10.3389/fpsyt.2021.575150/full</a></p><p>Compendium:<br><strong>B — Control Matrix</strong></p><p>Think of <strong>B</strong> as your “steering wheel matrix”. If you’re driving a car and press the accelerator (control input <strong>u</strong>), matrix <strong>B</strong> tells you how that action affects your position and velocity. It transforms your deliberate actions into changes in the system state.</p><p><em>Intuitive meaning</em>: “How much does my deliberate input change the system?”</p><p><strong>F — State Transition Matrix</strong></p><p><strong>F</strong> is your “time machine” — it predicts where your system will be in the next time step based on where it is now. 
If you know a car’s current position and velocity, <strong>F</strong> uses physics (like Newton’s laws) to predict where it’ll be one second later.</p><p><em>Intuitive meaning</em>: “If I do nothing, how does my system naturally evolve over time?”</p><p><strong>H — Observation Matrix</strong></p><p><strong>H</strong> is your “measurement translator”. Your sensors might only measure position, but your state includes both position and velocity. <strong>H</strong> maps from your full state (what you want to know) to what your sensors actually measure.</p><p><em>Intuitive meaning</em>: “Given my full system state, what should my sensors read?”</p><p><strong>K — Kalman Gain</strong></p><p><strong>K</strong> is the “trust arbitrator” — the most crucial element. It decides whether to trust your prediction more or your new measurement more. High <strong>K</strong> means “trust the sensor”, low <strong>K</strong> means “trust the prediction”.</p><p><em>Intuitive meaning</em>: “How much should I adjust my prediction based on this new measurement?”</p><p><strong>x — State Vector</strong></p><p><strong>x</strong> is “what you actually care about” — the true system state you’re trying to estimate[1]. For a moving object, this might be [position, velocity, acceleration]. The hat notation <strong>x̂</strong> means “our best estimate of <strong>x</strong>”.</p><p><em>Intuitive meaning</em>: “The real values I’m trying to figure out”</p><p><strong>z — Measurement Vector</strong></p><p><strong>z</strong> represents “what your sensors tell you”. These are the actual readings from GPS, accelerometers, cameras, etc. They’re noisy and incomplete but contain valuable information about the true state.</p><p><em>Intuitive meaning</em>: “The noisy data my sensors give me”</p><p><strong>P — Estimate Covariance</strong></p><p><strong>P</strong> represents your “confidence level”. 
It’s a matrix describing how uncertain you are about each state variable and how they’re correlated. Large <strong>P</strong> means “I’m quite uncertain”, small <strong>P</strong> means “I’m confident”.</p><p><em>Intuitive meaning</em>: “How unsure am I about my current estimate?”</p><p><strong>Q — Process Noise Covariance</strong></p><p><strong>Q</strong> captures “life’s unpredictability”. Even with perfect models, random disturbances affect your system — wind gusts, road bumps, or measurement errors in control inputs.</p><p><em>Intuitive meaning</em>: “How much random stuff happens to my system that I can’t predict?”</p><p><strong>R — Measurement Noise Covariance</strong></p><p><strong>R</strong> quantifies “sensor reliability”. Every sensor has noise — GPS might be accurate to ±3 metres, accelerometers drift, cameras get blurry. <strong>R</strong> tells the filter how much to trust each sensor.</p><p><em>Intuitive meaning</em>: “How noisy and unreliable are my sensors?”</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a0791831a398" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Thermodynamics and Engineering Concepts in Organisational Strategy]]></title>
            <link>https://medium.com/@martin_wh/from-gears-to-gains-engineering-concepts-in-corporate-strategy-d42eb291cc4d?source=rss-b6e8d2ae06b1------2</link>
            <guid isPermaLink="false">https://medium.com/p/d42eb291cc4d</guid>
            <category><![CDATA[thermodynamics]]></category>
            <category><![CDATA[business-strategy]]></category>
            <category><![CDATA[differentiation]]></category>
            <category><![CDATA[organizational-change]]></category>
            <category><![CDATA[interdisciplinary]]></category>
            <dc:creator><![CDATA[Martin W. Hansen]]></dc:creator>
            <pubDate>Tue, 14 Nov 2023 22:57:11 GMT</pubDate>
            <atom:updated>2023-12-05T12:57:51.594Z</atom:updated>
            <content:encoded><![CDATA[<p>Many would not expect similarities between engineering and business or organisational management, but inspiration for efficiency can often be found in the most unexpected places. While studying New Institutionalism and its insights into how organisations adapt and evolve within their environments, I saw clear parallels to thermodynamic systems, especially the principles underlying a Carnot engine. These principles belong to thermodynamics, the branch of physics concerned with heat, work, and forms of energy. By drawing parallels between thermodynamic principles and organisational strategies, I propose that we can rethink conventional insights, borrowing mechanical concepts from other domains to create new transformative narratives and management tools. This is of particular importance when examining the challenges faced by our current economic landscape.</p><h3>1. Thermal Equilibrium and Organisational Homogeneity: The Imperative for Differentiation.</h3><p><strong><em>Q = m · c · ∆T</em></strong></p><p>In thermodynamics, thermal equilibrium occurs when a system has no net heat flow due to the absence of a temperature gradient (<em>∆T </em>=<em> </em>0). This concept mirrors organisational homogeneity, where a lack of internal diversity in skills, innovation, and processes leads to competitive disadvantage. For businesses and organisations alike, differentiating themselves from their environment is paramount for strategic alignment and market positioning. Although trends like organisational <em>isomorphism</em> are prevalent in organisational research, where organisations often assimilate one another’s practices — evident in private-sector practices that have spread into the public sector — this approach doesn’t always guarantee success. 
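</p><p><em>To make the analogy concrete, a quick numeric reading of the physical equation itself (figures purely illustrative):</em></p>

```python
# Q = m * c * dT with illustrative physical numbers: heating 2 kg of water by 30 K.
m = 2.0     # mass in kg (resources, in the organisational analogy)
c = 4186.0  # specific heat of water in J/(kg*K) (culture and processes)
dT = 30.0   # temperature difference in K (internal vs. external differential)

Q = m * c * dT
print(Q)  # 251160.0 joules; with dT = 0 there is no heat flow at all
```

<p>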
Following trends and aligning with one&#39;s surroundings can be an instinctively safe play, but it will only get you so far.</p><p>The effectiveness of an organisation, akin to the heat transfer equation <br><em>Q = m · c · ∆T, </em>therefore hinges on the ‘temperature difference’ between its internal capabilities and the external market (where <em>Q</em> is the heat transfer, <em>m </em>is the mass, <em>c </em>is the specific heat, and <em>∆T </em>is the temperature difference). Without this ‘heat flow’ of differentiation, organisations risk stagnation in a rapidly evolving global economy.</p><p>In our exploration of organisational dynamics through the lens of thermodynamics, this equation offers some key insights. Here, <em>Q </em>represents the impact or change an organisation seeks to achieve, <br><em>c</em> symbolises the specific heat, akin to organisational culture and processes, and <em>∆T </em>the temperature difference, reflecting the differential between the organisation’s internal capabilities and the external market.</p><p>Central to this equation is ‘<em>m</em>’, or mass, representing an organisation’s resources. This is a crucial element that must be carefully balanced. A well-balanced ‘organisational mass’ ensures optimal resource allocation, striking a harmony between having sufficient resources to drive change and maintaining agility and responsiveness. This balance is key to sustainable growth and competitive advantage in a rapidly evolving global economy.</p><p>Consider <em>Q = m · c · ∆T </em>in an organisational context:</p><p><strong>Optimal Resource Allocation (<em>m</em>)</strong>: Just like mass in a thermodynamic system, an organisation must optimally allocate its resources (financial, human, technological) to effectively influence change. 
Too much mass can lead to inefficiency, while too little can impede progress.</p><p><strong>Cost Efficiency (<em>m</em>)</strong>: Balancing the ‘organisational mass’ ensures cost efficiency, preventing overinvestment in less impactful areas and focusing on those that drive substantial change.</p><p><strong>Agility and Responsiveness</strong>: A well-balanced ‘organisational mass’ ensures that the organisation remains agile and responsive. It allows for quicker adaptation to external changes (represented by <em>∆T</em>, the temperature difference) and more effective implementation of strategies.</p><p>By thinking this through and applying the thermodynamic analogy, organisations can more effectively sidestep homogeneity and navigate both external factors and internal dynamics.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*KcMBLD-rZRBte_d4oyEHAg.jpeg" /></figure><h3>2. Capitalising on the ‘Temperature Difference’.</h3><p>The efficiency of a thermodynamic heat engine is tied to the temperature difference between its hot and cold reservoirs. This principle can be applied to organisations, where the ‘hot source’ represents their core competencies, and the ‘cold sink’ embodies the external environment and business arena. The greater this difference, the more efficiently an organisation operates. For instance, leveraging a robust engineering and manufacturing base (hot source) while adapting to challenging areas like digital transformation (cold sink) can reignite an organisation’s economic engine for enhanced efficiency and global competitiveness. 
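</p><p><em>The thermodynamic ceiling behind this intuition is the Carnot efficiency, η = 1 − T<sub>cold</sub> / T<sub>hot</sub> (temperatures in kelvin), a standard formula implied but not spelled out above; a quick sketch with hypothetical reservoirs:</em></p>

```python
# Carnot efficiency: the wider the temperature gap, the higher the ceiling.
t_hot = 600.0   # K: the "hot source" (core competencies)
t_cold = 300.0  # K: the "cold sink" (external environment)

eta = 1 - t_cold / t_hot
print(eta)  # 0.5 -> at most half the input heat can be converted to work
```

<p>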
Capitalising on this ‘temperature difference’ for organisational efficiency and applying business transformation practices can thus be a means of creating disruptive momentum.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2AqTsvx2G6--a7a93yvLJQ.png" /><figcaption>How can a business organisation ensure &#39;temperature&#39; difference from its organisational milieu and macroenvironment? Illustration kindly provided by DALL-E.</figcaption></figure><h3>3. Entropy and Organisational Complexity: Preserving Structure Amidst Growth.</h3><p><strong><em>∆ S = ∆ Q / T</em></strong></p><p>In thermodynamics, entropy (<em>S</em>) is a measure of disorder, tending to increase in isolated systems. As a company grows, it naturally veers towards complexity and disorder unless efforts are made to maintain structure and clarity. The equation <em>∆ S = ∆ Q / T </em>(change in entropy equals the heat transfer divided by the temperature) suggests that changes in &#39;organisational entropy&#39; can be managed by carefully controlling &#39;heat&#39; <em>Q</em> (processes, innovations) and maintaining a steady &#39;temperature&#39; <em>T</em> (a generative corporate culture and strategy). In a real engine, and in a real organisation for that matter, unlike an ideal Carnot engine, the unavoidable generation of extra entropy through friction and heat loss reduces overall efficiency.</p><p>In practice, addressing challenges like bureaucracy and digital hesitancy therefore involves reducing ‘organisational entropy’: implementing change-management strategies and digital innovation initiatives, streamlining processes, supporting digital adoption, and inspiring cultures of ownership. 
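</p><p><em>The entropy relation can likewise be read numerically (figures illustrative): the same &#39;heat&#39; injected at a higher, steadier &#39;temperature&#39; adds less disorder.</em></p>

```python
# dS = dQ / T: identical 'heat' (change) at different 'temperatures' (culture/strategy).
dQ = 500.0                     # J: processes and innovations introduced
t_low, t_high = 250.0, 500.0   # K: a weak vs. a strong, steady culture

print(dQ / t_low, dQ / t_high)  # 2.0 1.0 -> entropy rises twice as fast at t_low
```

<p>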
Much as friction and waste ‘heat’ erode an engine’s useful output, an organisation’s total executive and productive capacity can likewise be lost to internal friction.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*SaPcJgqj-Enue0pYi7j-4Q.jpeg" /><figcaption>A colossal Carnot engine. Unsplash, Simone Hutsch.</figcaption></figure><h3>Applying an Engineering Mindset to Organisations and Business</h3><p>By opening our horizons to domains and professions outside business and organisational studies, I contend that the principles of thermodynamics provide a profound toolkit for creating change and progress in strategy and development. Embracing the ethos of differentiation, maximising the ‘temperature difference’ for efficiency, and managing organisational complexity are not just theoretical concepts; they are essential strategies for any business or organisation aspiring to make stronger decisions and build stronger processes. By integrating sound practices from successful implementations such as Public-Private Partnerships, while also appreciating the importance of <a href="https://global.oup.com/academic/product/a-translation-theory-of-knowledge-transfer-9780198832362?cc=gb&amp;lang=en">organisational translation</a>, we can employ creative, innovative mindsets to propel organisations forward.</p><p>In light of this, I suggest it is beneficial to reconsider and re-engage with organisational strategies and practices. If we adopt these novel perspectives and tackle challenges in unconventional ways, we can transform corporate surroundings into advantages rather than barriers. 
Far from being mere academic theory and musings, these concepts now look like crucial tools for any business or organisation in pursuit of momentum in today’s dynamic landscape.</p><p><a href="https://www.forbes.com/sites/peterhinssen/2015/04/20/survive-disruption-by-harnessing-the-thermodynamics-of-organizations/">https://www.forbes.com/sites/peterhinssen/2015/04/20/survive-disruption-by-harnessing-the-thermodynamics-of-organizations/</a></p><p><a href="https://www.ceeol.com/search/article-detail?id=943861">https://www.ceeol.com/search/article-detail?id=943861</a></p><p>Jaja, S. A., Gabriel, J. M. O., &amp; Wobodo, C. C. (2019). Organizational isomorphism: The quest for survival. <em>Noble International Journal of Business and Management Research</em>, <em>3</em>(5), 86–94.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d42eb291cc4d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A bright side of AI: Ultrasound for Plant Health — A Green Revolution in Progress?]]></title>
            <link>https://medium.com/@martin_wh/a-bright-side-of-ai-ultrasound-for-plant-health-a-green-revolution-in-progress-5eba9d3d6068?source=rss-b6e8d2ae06b1------2</link>
            <guid isPermaLink="false">https://medium.com/p/5eba9d3d6068</guid>
            <category><![CDATA[sustainable-development]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[sustainability]]></category>
            <category><![CDATA[cnn-model]]></category>
            <category><![CDATA[ultrasonic]]></category>
            <dc:creator><![CDATA[Martin W. Hansen]]></dc:creator>
            <pubDate>Thu, 15 Jun 2023 19:52:27 GMT</pubDate>
            <atom:updated>2025-09-11T20:16:47.551Z</atom:updated>
            <content:encoded><![CDATA[<h3>The bright sides of AI: Ultrasonic Detection for Plant Health. Could it be a big win for sustainability?</h3><p>Have you ever wished your plants could tell you exactly what they needed to stay healthy? Have you had a favourite plant that, despite your best efforts, wilted and died? It’s a situation many of us have faced and wished we could have handled better. Fascinatingly, it turns out that our plants have been trying to communicate their needs all along.</p><p>In the quest for innovative solutions towards a more sustainable future, researchers have uncovered a novel method that marries technology and nature in a most intriguing way. Using artificial intelligence and ultrasound, they can now “listen” to tomato plants, a breakthrough that carries profound implications for the world of agriculture and beyond.</p><p>Published in the esteemed journal <a href="https://www.sciencedirect.com/science/article/pii/S0092867423002623">Cell</a>, the study utilises convolutional neural networks (CNNs), a form of deep-learning AI, to detect and interpret the ultrasonic emissions of plants. The unique element here is that plants, though silent to the human ear, have a symphony of their own that becomes audible with the right technology.</p><p><em>Do make sure to check out the sound sample of tomato plants in the </em><a href="https://www.sciencedirect.com/science/article/pii/S0092867423002623"><em>link</em></a><em> to this paper.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/996/1*KB4arn1fcHGjVDn3lMFWSw.jpeg" /><figcaption>Khait, I., Lewin-Epstein, O., Sharon, R., Saban, K., Goldstein, R., Anikster, Y., … &amp; Hadany, L. (2023).</figcaption></figure><p>When plants experience stress, whether due to lack of water, extreme temperatures or disease, their ultrasonic emissions alter. 
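</p><p><em>Not the study&#39;s actual pipeline, but a toy illustration of the kind of one-dimensional convolution a CNN applies when scanning an audio signal for such changes (all numbers invented):</em></p>

```python
# Toy 1D convolution: a tiny "edge" kernel responding to a click-like jump.
# Invented signal; a real CNN learns many such kernels from labelled recordings.
signal = [0.0, 0.1, 0.0, 1.0, -1.0, 0.0, 0.1, 0.0]  # a sharp click around index 3
kernel = [1.0, -1.0]                                 # crude click/edge detector

def conv1d(x, k):
    """Valid-mode 1D convolution (really cross-correlation, as in CNNs)."""
    n = len(x) - len(k) + 1
    return [sum(x[i + j] * k[j] for j in range(len(k))) for i in range(n)]

response = conv1d(signal, kernel)
peak = max(range(len(response)), key=lambda i: response[i])
print(peak, response[peak])  # the filter fires hardest right at the click
```

<p>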
These stress signals can be picked up by sensitive microphones and interpreted by CNN models which have been trained to differentiate between the sounds of healthy and stressed plants. The potential applications in agriculture are far-reaching: if this technology really can enable real-time monitoring of crop health and targeted responses, as this promising paper proposes, then I would say it is a revolutionary step forward.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*do98wQBfxhpnVhz0d0EquA.jpeg" /><figcaption>Khait, I., Lewin-Epstein, O., Sharon, R., Saban, K., Goldstein, R., Anikster, Y., … &amp; Hadany, L. (2023).</figcaption></figure><h3>Real-time feedback</h3><p>The implications of this research are significant. It gives us the potential to monitor plant health in real time, allowing us to respond swiftly to stresses, thereby improving crop yields and ultimately contributing to global food security. Not only could this reduce waste, but it could also decrease the reliance on harmful pesticides and fertilisers, as we’d be able to pinpoint exactly what the plant needs and when. A huge difference from relying on visual clues to plant health, which often appear only when it is too late and the damage is already done.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/910/1*n14SS44cWdKs6i3lKDgrxA.jpeg" /><figcaption>Pxfuel ©</figcaption></figure><h3>It’s now or nature</h3><p>Ultimately, this study is an example of the bright side of technology. It showcases the potential that lies at the intersection of AI and sustainability, giving us a glimpse into a future where technology aids crucial parts of plant life and agriculture. As we move forward, it’s clear that such innovative applications of AI, if properly applied, will be instrumental in driving our efforts towards a more sustainable future.</p><p><strong>Despite the abundance of talk and buzzwords about ‘sustainability’, actionable solutions often seem few and far between. 
This innovative technology, however, brings a practical, deployable approach within our grasp, and I am genuinely looking forward to learning more about this method.</strong></p><p><a href="https://www.sciencedirect.com/science/article/pii/S0092867423002623">Sounds emitted by plants under stress are airborne and informative</a></p><p><a href="https://www.sciencenews.org/article/plant-stress-ultrasonic-click-noise-sound">https://www.sciencenews.org/article/plant-stress-ultrasonic-click-noise-sound</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5eba9d3d6068" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Bayesian Structural Time Series and ARIMA; why not both?]]></title>
            <link>https://medium.com/@martin_wh/bayesian-structural-time-series-and-arima-why-not-both-de7fcd163d37?source=rss-b6e8d2ae06b1------2</link>
            <guid isPermaLink="false">https://medium.com/p/de7fcd163d37</guid>
            <category><![CDATA[bayesian-statistics]]></category>
            <category><![CDATA[education]]></category>
            <category><![CDATA[business-intelligence]]></category>
            <category><![CDATA[business-models]]></category>
            <category><![CDATA[arima]]></category>
            <dc:creator><![CDATA[Martin W. Hansen]]></dc:creator>
            <pubDate>Sun, 14 May 2023 21:29:34 GMT</pubDate>
            <atom:updated>2025-10-03T21:23:59.633Z</atom:updated>
            <content:encoded><![CDATA[<p>I recently completed a small online course on ARIMA at <a href="https://www.videocation.no/">videocation</a> and found it quite fascinating. I had previously seen many posts written about ARIMA, many of them related to computer programming. I must confess that programming languages are not among my strengths, though I am nonetheless quite captivated by the mathematics involved in computer science.</p><h3>Trendy forecasting</h3><p>In any context, be it political, business, entrepreneurial or even medical, we all rely on accurate forecasting, as it is critical for making informed decisions and keeping ahead. I therefore consider an understanding and statistical comprehension of tools such as Bayesian Structural Time Series (BSTS) and ARIMA to be both future-proof and solid.</p><p><strong>Bayesian Structural Time Series</strong> is a tool that analyses time-series data and predicts future trends by combining Bayesian statistics with time-series analysis. 
It provides a flexible and user-friendly framework for studying a broad range of time-series data, including financial, economic, and social data, as well as medical research data such as viral infection counts.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FP_RnURpkgdE%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DP_RnURpkgdE&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FP_RnURpkgdE%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/3f6aa45df4a1410d9e1a11b36d1e895e/href">https://medium.com/media/3f6aa45df4a1410d9e1a11b36d1e895e/href</a></iframe><p>Fittingly, I recently came across a highly informative and intuitive video on this topic by Aric LaBarr, an Associate Professor of Analytics at North Carolina State University. In his video, he demonstrated classical time-series forecasting with <strong>autoregressive modelling</strong> and <strong>moving averages</strong>, taking into consideration both long-term and short-term data and integrating them into a more precise estimation, hence the abbreviation ARIMA.</p><p>But being a somewhat ardent Bayesian myself, I had to ask: <br>What if we treated these data as a <strong>prior</strong>, computed the <strong>likelihood</strong> of the observed time points, and estimated a <strong>posterior distribution</strong> of what the future data will look like?</p><p>In his video aptly named “<em>Bayesians are coming to Time Series</em>”, Professor LaBarr addresses this combination of BSTS and autoregressive modelling, making a quite useful and amusing analogy for the intrinsic differences between <strong>frequentist</strong> and <strong>Bayesian</strong> methods: How would a frequentist and a Bayesian choose different fishing
strategies when fishing in a pond?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZxppJSlVsYDQWlhFJnhx4A.jpeg" /><figcaption>Cottonbro Studios, Pexels.</figcaption></figure><p>As a frequentist, do you believe that the fish is stationary, casting several different nets or lures and hoping that they land in front of the fish? Or do you, in contrast, as a Bayesian, already know the probability of where the fish is moving and its movement patterns, allowing you to strategically cast one net or lure based on the probability that the fish will be in that area?</p><p>I also enjoyed the way he explained MCMC, where his explanations made it much easier to see the similarities in the statistical approaches of MCMC (Markov Chain Monte Carlo) and BSTS.</p><h3>Historic data repeats itself</h3><p>The most exciting development in the field of time-series analysis is the combination of BSTS with ARIMA (AutoRegressive Integrated Moving Average) models. ARIMA models are another popular method for studying time-series data, and by combining them with BSTS, even more precise and accurate predictions can be made. As his brilliant video showed, the Mean Absolute Percentage Error (MAPE) turned out lowest when combining both autoregressive (AR) and Bayesian autoregressive methods, as demonstrated by the forecast of income vs. consumption.</p><p><strong>By combining both frequentist and Bayesian methods, you do not need to choose between BSTS and ARIMA: as Professor LaBarr shows, the best is to combine both.</strong></p><p>Note, September 2025: <br>I have recently started to look more into aligning seasonality with cultural calendars. I am testing SARIMA setups that embed weekly/annual cycles reflecting country-specific holidays and observances.
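<p>As a toy illustration of the autoregressive “AR” building block that both ARIMA and these SARIMA variants rest on, here is a minimal pure-Python sketch (synthetic data and numbers of my own, not from the course or the video): simulate an AR(1) series and recover its coefficient by least squares.</p>

```python
import random

# AR(1): y_t = phi * y_{t-1} + noise. Simulate a series with a known phi,
# then estimate phi back from the data by ordinary least squares.
random.seed(42)
phi_true = 0.7
y = [0.0]
for _ in range(5000):
    y.append(phi_true * y[-1] + random.gauss(0, 1))

# OLS estimate: phi_hat = sum(y_t * y_{t-1}) / sum(y_{t-1}^2)
num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
phi_hat = num / den

print(round(phi_hat, 2))  # close to the true 0.7
```

<p>A full (S)ARIMA fit adds differencing, moving-average terms and seasonal lags on top of this same regression idea; in practice a library such as statsmodels does the estimation.</p>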
I am doing this as part of my <a href="http://www.withavenworks.com">Cross-Cultural consulting business</a>, where I explore hybrid frameworks for more precise intelligence.</p><p><a href="http://www.linkedin.com/in/martin-withaven">www.linkedin.com/in/martin-withaven</a></p><p><a href="https://towardsdatascience.com/bayesian-structural-time-series-interruption-method-5018761db92b">https://towardsdatascience.com/bayesian-structural-time-series-interruption-method-5018761db92b</a></p><p><a href="https://medium.com/@abhinaya08/forecasting-think-bayesian-9defa5f34502">https://medium.com/@abhinaya08/forecasting-think-bayesian-9defa5f34502</a></p><p><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8580163/">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8580163/</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=de7fcd163d37" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Fight Bias with Bayes: A Connection Between Machine Learning and the Biological Brain.]]></title>
            <link>https://medium.com/@martin_wh/bayesian-neurology-exploring-the-link-between-machine-learning-and-brain-functions-3eae8a86b501?source=rss-b6e8d2ae06b1------2</link>
            <guid isPermaLink="false">https://medium.com/p/3eae8a86b501</guid>
            <category><![CDATA[evolutionary-algorithms]]></category>
            <category><![CDATA[decision-making]]></category>
            <category><![CDATA[neuroscience]]></category>
            <category><![CDATA[machine-learning]]></category>
            <dc:creator><![CDATA[Martin W. Hansen]]></dc:creator>
            <pubDate>Thu, 16 Feb 2023 19:07:38 GMT</pubDate>
            <atom:updated>2025-09-11T19:50:36.877Z</atom:updated>
            <content:encoded><![CDATA[<h3>Fight Bias with Bayes: The Experimental Connection Between Machine Learning and the Biological Brain</h3><p><em>This blog post is mainly based on the astonishingly fascinating paper by </em><a href="https://arxiv.org/abs/2006.13158"><em>Hideaki Shimazaki</em></a><em>, where I will explain and discuss his article in the context of other research papers</em>.</p><blockquote>TL;DR.</blockquote><blockquote>The Bayesian Brain Theory illustrates the interplay between biological and artificial intelligence, showing how machine learning mirrors the workings of biological brains. Both gather information through Bayesian inference, with information represented as probability distributions.</blockquote><blockquote>As animals grow and age, the brain activity associated with spontaneous and stimulus-evoked reactions becomes increasingly similar. This process is seen consistently throughout the lifespan.</blockquote><blockquote>The thermodynamic model of the brain rests on the first law, conservation of energy, and the second law, increasing entropy. Conservation acts as a constraint, enabling neural activity to be more efficient; the second law treats increasing entropy as a mechanism for learning and interpreting stimuli from the external world.</blockquote><blockquote>To that aim, the Bayesian framework can be employed in many other contexts, giving insight into machine learning algorithms and human cognition and biases alike.</blockquote><h4><strong>What is Bayesian neurology, and how does it relate to Machine Learning?</strong></h4><p>On the surface, the mechanisms of how our minds work may seem elementary and straightforward.</p><p>But firstly, we have to go back to the very crude beginnings of organisms in order to understand these evolutionary objectives: What is the point in having a brain?
And how can information be so important?</p><p>“<em>Information as originally defined by </em><a href="https://medium.com/udacity/shannon-entropy-information-gain-and-picking-balls-from-buckets-5810d35d54b4"><em>Shannon</em></a><em>, as it is a reduction of uncertainty. Selection means the elimination of a number of possible variants or options, and therefore a reduction in uncertainty. Natural selection therefore by definition creates information: the selected system now “knows” which of its variants are fit, and which are unfit</em>”. <a href="https://www.taylorfrancis.com/chapters/edit/10.4324/9780203937297-25/accelerating-socio-technological-evolution-ephemeralization-stigmergy-global-brain-francis-heylighen">Gershenson &amp; Heylighen 2003</a>.</p><p>In other words, there is an evolutionary reward for creating and managing information, and in particular for reducing uncertainty. In one of my favourite research papers, H. Shimazaki demonstrates how even something as simple as the retina&#39;s reaction to different light intensities requires a non-linear response. Somehow, the primary visual cortex already “knows” and responds in a way that requires more complex inference than what a linear response can provide. That the retina and primary visual cortex know this can be understood as an evolutionary adaptation. Furthermore, we cannot consciously decide to adjust our retinae to such and such thresholds according to different light intensities.
Instead, we now interpret this response mechanism as an <strong>unconscious inference </strong>(more on that later).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/850/1*PsaC5scGABCPgN6R8RspxQ.png" /><figcaption>Pupillary light reflex as a non-linear, not a linear, response function.</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/1*gEPBtuulBJRs_KQ4Th8c1w.jpeg" /><figcaption>The adjustment of retinae as a reflex to brightness has been shown to behave on a non-linear scale using a near-Bayesian inference ratio. Photo: Furkanvari, Unsplash.</figcaption></figure><h3>How Machine Learning can help us in understanding the brain better</h3><p>The frameworks of machine learning can be applied to investigate biological brains in a variety of ways. Such models can be employed in several ways as means of identifying patterns in brain activity. This is where the Bayesian inference aspect comes in, as a simple and elegant approach to explaining this intrinsic complexity.</p><p>Bayesian inference in relation to brain models can be illustrated as follows with an example of two events, <strong>A</strong> and <strong>B</strong>. Between them are the crucial probabilities that have to be considered:</p><p><strong>Posterior</strong>: The probability of event <strong>A </strong>happening, given that <strong>B </strong>has already happened:<strong> </strong>this is the probability you do not know and that you intend to find out. <strong>Prior</strong>: what you know beforehand about the probability of event <strong>A</strong>. <strong>Likelihood</strong>: the probability of event <strong>B</strong> given event <strong>A</strong>, a known probability which you have already experienced before.
It is the relationships between these different probabilities that are the core essence of Bayesian inference.</p><p>To further employ this Bayesian inference framework, we have to divide its mechanisms into parameters from Shimazaki&#39;s concept.</p><p><strong>x = neural population</strong></p><p><strong>w = brain structure</strong></p><p><strong>y = external stimuli</strong></p><p>Put into simpler terms: <strong><em>x</em></strong> is how the neurons behave, indicating the property of the neural population’s activity. <strong><em>W</em></strong><em> </em>is the very structure of how the biological brain is built up, and how it holds previous information (in the form of probability distributions). <strong><em>Y</em></strong> is the symbol for our external world, here manifested as stimuli. The external world provides us with samples that we can interpret as external stimuli. The letter <strong><em>p</em></strong> indicates the probability of a parameter being true.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*7ii_rRCAuWOWKYYx7ebwmg.jpeg" /><figcaption>Hideaki Shimazaki, Kyoto University</figcaption></figure><p>And it must be noted that the brain can only interpret a <strong>single sample </strong>through<strong> </strong>stimuli from the “true” external world. A sample of the external world in the form of stimuli is not the same as the completeness of the external world itself!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/778/1*KoWxINbbozY_xAKfylCTwA.png" /></figure><p>What the dynamics of the brain tell us is that the external world and the brain&#39;s model <em>p </em>(y|w) will converge and become closer to each other. In other words, the inner workings of the brain will gradually start to more closely resemble the external world: hence the very process, the phenomenon, of <em>learning</em>.
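<p>The relationship between posterior, prior and likelihood can be sketched numerically; the numbers below are my own illustration, not from the article:</p>

```python
# Bayes' theorem for two events: P(A|B) = P(B|A) * P(A) / P(B).
prior = 0.05        # P(A): what we believe about A beforehand
likelihood = 0.90   # P(B|A): how probable the evidence B is if A holds
evidence = 0.20     # P(B): overall probability of observing B

posterior = likelihood * prior / evidence  # P(A|B)
print(round(posterior, 3))  # 0.225
```

<p>Even a weak prior of 5% is pushed to 22.5% by one strongly informative observation, which is exactly the kind of updating the brain is thought to perform.</p>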
This model also treats <em>x</em> as a vector, where the <em>i</em>-th element represents the activity of the <em>i</em>-th neuron in the population.</p><blockquote>The brain is effectively building up data in the form of countless old and proven probability distributions, which are then cross-checked and finally selected as the most likely comparison for what we are experiencing at the present moment of stimuli.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*z96nzjQvZ2E3bksPl2s2gw.png" /><figcaption>Distribution functions: We start with the prior and the likelihood and conclude with the posterior. <em>Mathias Harrer et al.</em></figcaption></figure><p>In order to compare the distributions of the probabilities of what you are experiencing now (stimulus <strong>Y </strong>and likelihood) with what your brain has experienced before (brain structure and prior <strong>W</strong>), we apply the calculation of <a href="https://medium.com/@monadsblog/the-kullback-leibler-divergence-5071c707a4a6"><strong>Kullback-Leibler Divergence</strong></a><strong> </strong>as a means of measuring the statistical distance between these two distributions (here noted as “Y” and “W” instead of the traditional “P” and “Q”).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/504/1*74z0rxzJgE9Ai1295EPdWg.png" /><figcaption>Statistical distance between log distributions of <strong>W</strong> and <strong>Y</strong>.</figcaption></figure><p>By that, the parameter of the brain <strong><em>W </em></strong>will seek to optimise its marginal likelihood. The log function is used because it is numerically more stable, making computations with very small decimal numbers easier.
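<p>The Kullback-Leibler divergence itself is a short sum; a minimal sketch with toy distributions standing in for <strong>Y</strong> and <strong>W</strong> (the values are purely illustrative):</p>

```python
from math import log

def kl_divergence(p, q):
    # D_KL(p || q) = sum_i p_i * log(p_i / q_i), measured in nats.
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy stand-ins for the stimulus-evoked (Y) and prior (W) distributions.
p_y = [0.1, 0.4, 0.5]
p_w = [0.8, 0.15, 0.05]

print(kl_divergence(p_y, p_w) > 0)  # True: the distributions differ
print(kl_divergence(p_y, p_y))      # 0.0: no distance to itself
```

<p>As learning proceeds, this divergence shrinking towards zero is the formal counterpart of the brain's internal model coming to resemble the world.</p>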
The likelihood function is <em>marginal</em> because the unobserved neural activity has been summed out of it.</p><p>By the use of an <a href="https://machinelearningmastery.com/argmax-in-machine-learning/">ArgMax</a> function, as in machine learning, <strong><em>W</em></strong><em> </em>will be optimised by choosing the distribution with the highest likelihood from a matrix of candidate likelihoods, thus noted <strong><em>W*</em></strong>. This ArgMax function can be viewed as a mechanism, from mathematics and computer science, that simply returns the argument (position) of the highest value in an array of numbers.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/869/1*l67a3PgiGMHDzZeLJ5kjMg.png" /><figcaption>The Argmax function finds the position of the highest value in a matrix</figcaption></figure><p>The model for learning: <strong>W* </strong>= arg max (<strong><em>w</em></strong>) log <em>p</em> (<strong>Y</strong>1:<em>n</em>|<strong> w</strong>)</p><p>In this way, Shimazaki demonstrates a generative model (preceding the current prevalence of Generative AI) in the form of a joint probability function which describes the connection between neural activity and stimuli.</p><p><em>p</em> (y|x,w) is noted as the <em>observation model</em>.</p><p><em>p </em>(x|w) is referred to as <em>spontaneous activity</em>, because no stimulus (<em>y</em>) is present.</p><p><em>p </em>(y|w) is how the neurons behave during stimuli and is here noted as the <em>sensory stimulus model</em>.</p><p><strong><em>p</em> (y,x|w) = <em>p</em> (y|x,w) <em>p</em> (x|w)</strong></p><p>To make it more comprehensible, we can consider this calculation in a more tangible way:</p><p>The probability of how the stimuli and the neural activity act in combination, <strong>given </strong>the brain structure with previous information.
This will result in:</p><p><em>The probability of the stimuli, given the neural activity and the brain structure, multiplied by the probability of the neural activity, given the brain structure</em>. Quite a mouthful! Shimazaki illustrates this further as the concept of the <em>generative model</em>. It generates models of the outside world (Y). <strong>Generative model</strong> = activity in observation model ×<strong> </strong>spontaneous activity. It makes a comparison of how the brain behaves during stimuli, and how it behaves <em>without</em> stimuli.</p><p>The neural activity initiated by a stimulus is considered a sample from the posterior distribution. This means that we will have a conditional probability density of neural activity <strong>x</strong> given observation <strong>Y</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/725/1*aSNagKon5BsmLyTbPDbvCg.png" /></figure><p>We can demonstrate this by putting in some easy numbers.</p><p><strong>y = 0.35</strong>. Here we have a large degree of uncertainty about the stimulus from the outside world. Just like asking whether what we are now experiencing is true.</p><p><strong>x = 0.60</strong>. We have some degree of certainty about how the neurons behave.</p><p><strong>w = 0.80</strong>. We have a high degree of certainty about our prior knowledge, or should we call it prior experience, which resides in the brain.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/811/1*g7Mx9kk33GmJQO0mtvy7Rw.jpeg" /></figure><p>As demonstrated above, we had greater uncertainty about the outside world (<em>Y</em>) stimuli, but ended up with greater certainty on the posterior distribution. By combining the numbers from the respective probabilities, we actually reduced the amount of uncertainty.
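<p>A hedged sketch of how combining an uncertain observation with a confident prior can reduce uncertainty (my own normalised two-hypothesis toy example, not the exact calculation pictured above):</p>

```python
# Two competing interpretations of a stimulus, H and not-H.
prior_h = 0.60            # moderately confident prior for H
likelihood_h = 0.35       # uncertain evidence for H
likelihood_not_h = 0.05   # but the evidence fits not-H even worse

# Normalised Bayes update over the two hypotheses.
joint_h = likelihood_h * prior_h
joint_not_h = likelihood_not_h * (1 - prior_h)
posterior_h = joint_h / (joint_h + joint_not_h)

print(round(posterior_h, 2))  # 0.91: more certain than either input alone
```

<p>The posterior ends up sharper than both the prior (0.60) and the raw evidence (0.35), which is the sense in which the combination reduces uncertainty.</p>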
However, it must be added that the calculation of probability will be more precise if we use a distribution function rather than a single number representing the percentage of probability. A distribution function provides more data through its bell shape: a mean (location parameter) and a standard deviation (scale parameter) describing its width or dispersion (<a href="https://www.elsevier.com/books/doing-bayesian-data-analysis/kruschke/978-0-12-405888-0">Kruschke, John</a>).</p><p>In addition, Shimazaki points out that a perfect inference would probably be very unlikely, so the posterior distribution for the stimulus-evoked neural activity is expressed as an approximation with <em>q</em> (x|y) ≈ <em>p </em>(x|Y,w). This approximation is called the <em>recognition </em>model — the process of recognition happens when the stimulus (Y) is <em>combined </em>with the structure of the brain (w) which holds <em>prior </em>knowledge.</p><p>All this happens inside our faculties, and most of these processes are actually <strong>unconscious inference. </strong>Our <strong>likelihoods</strong> and <strong>priors</strong> are both conscious and unconscious, and we live and learn by these two parameters throughout life.</p><h3><strong>The thermodynamic properties of neural entropy and its constraints in the Bayesian brain</strong></h3><p>The researcher also presents another fascinating concept: the thermodynamic mechanism of the brain. How does the process of learning behave in itself? How do the changes of state inside the brain manifest themselves through neural spike dynamics?</p><p>The dynamics of neural activity are shaped by the thermodynamic law of conservation, through the state of spontaneous activity, and by the second law, which states that entropy increases, here manifested as the process of learning.
These dynamics are then employed by <strong>modulating the gain</strong> of the interplay between feedforward from more primitive parts of the brain, and feedback from higher cortical areas. As this back-and-forth communication involves a time delay, it can be measured and detected. For the sake of this blog post, we could loosely compare this to how computers work, with similar feedback and feedforward streams. In broad terms, we could liken short-term storage in our hippocampus to Random Access Memory (RAM), and long-term memory to solid state drives or hard disk drives.</p><p>“<em>Similarly to the gain control in engineering systems, neural systems can realize the gain control by either feed-forward or feedback connections</em>…<em>We show that the delayed gain control of the stimulus response via recurrent feedback connections is modelled as a dynamic process of the Bayesian inference that combines the observation and top-down prior with time-delay” </em>H. Shimazaki.</p><h3>The thermodynamic costs of storing and creating information</h3><p>In order to retain and generate knowledge through new stimuli, there must be an energy requirement involved!</p><p>This is because the recognition model <em>q</em> (x|Y) would require energy: It must firstly be activated, and that means the brain must change from its initial lower state of spontaneous activity <em>p</em> (x|w) where prior information is stored. The same holds true for the observation model <em>p</em> (Y|x,w). We must then acknowledge that this process will imply <strong>a change of state, and therefore</strong> <strong>requires energy to initiate the observation process</strong>. On the other hand, the First Law of thermodynamics formulates the conservation of energy, where the total energy in a closed system can neither be created nor destroyed. By that, there will be several limiting factors.
These include the refractory period, which limits the number of action potentials that a given nerve cell can produce per unit time, the limits of possible neural configurations, and the metabolic “costs” of the firing rates themselves. All of these put a limiting constraint on the neural activity as a whole.</p><p>Shimazaki states the entropy of the stimulus-evoked neural activity as follows:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/461/1*a0fraTI0Pbog5ApNcfOMkw.png" /><figcaption>The entropy of the neural population of x given stimuli Y will be the logarithmic function of itself, which entails a monotonic increase.</figcaption></figure><p>The constraints on this entropy are considered as weighted biases of the neuron firing activity rate<strong> </strong>α and the gain control β between feedback and feedforward. The recognition model <em>q</em> (x|Y) will then be constrained by minus the log of the prior and minus the log of the observation model. The author solves this by using the Lagrange multiplier method in order to find the minimum of free energy.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/560/1*3rz_7QRBBmJUjseUqKe2rA.png" /><figcaption>Entropy with constraints <strong>α </strong>and <strong>β.</strong></figcaption></figure><p>This will then be calculated as a probability density function where the free energy is as close to zero as possible. The probability density function of the approximated recognition model <em>q </em>(x|Y) will, under the constraints, be:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/670/1*YlwjrUZdKVMaIFHNEj_c8g.png" /></figure><p>While the second law of thermodynamics states that entropy in closed systems always increases, it can be seen here in how neural activity, under the constraints of controlled activity trajectories and firing rates, will assemble itself into a neural state of creating and storing information.
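<p>The entropy in question is the standard Shannon entropy; a minimal sketch (toy distributions of my own) of how concentrated activity carries less entropy than spread-out activity:</p>

```python
from math import log

def entropy(p):
    # Shannon entropy H(p) = -sum_i p_i * log(p_i), in nats.
    return -sum(pi * log(pi) for pi in p if pi > 0)

uniform = [0.25, 0.25, 0.25, 0.25]  # activity spread evenly over 4 patterns
peaked = [0.85, 0.05, 0.05, 0.05]   # activity concentrated on one pattern

print(entropy(uniform) > entropy(peaked))  # True: uniform is maximal
```

<p>Constraints like the α- and β-weighted terms above rule out the unconstrained maximum-entropy (uniform) solution and push the optimum towards a more structured distribution instead.</p>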
It has been demonstrated that the development of information (hence <em>priors</em>) is evident in neural activity in animals. The internal spontaneous activity will optimise itself by becoming more similar to external stimuli-evoked activity in a converging trajectory as the <a href="https://www.science.org/doi/abs/10.1126/science.1195870"><em>animal grows</em></a><em>.</em></p><h3>Bayesian framework applied to decisions</h3><p>Some other similar papers demonstrate how <a href="https://wires.onlinelibrary.wiley.com/doi/abs/10.1002/wcs.1540">cue combination</a> can create new likelihoods and priors. When different stimuli happen simultaneously, such as smell, vision, sensations, and coinciding events, they build up into new probability distributions. Hypothetically, say we have previously experienced that events <strong>A</strong>, <strong>B</strong>, <strong>C</strong> and <strong>D</strong> happened at the same time. When we then encounter a similar event where only <strong>A</strong>, <strong>B</strong> and <strong>C </strong>occur, we will automatically make associations and therefore expect that event <strong>D</strong> will also happen.</p><h4>Cue combination</h4><p>Our senses can act together as cues to create a grouping of simultaneous stimuli. They will therefore represent <strong>joint probabilities</strong>. For example, a certain smell can act as a reminder of a haptic experience or a sense of place. The smell of freshly baked food may remind someone of their childhood. Similarly, the sight of a particular mountain may recall memories of a past vacation. Through this combination of several senses into one experience (hence, <em>cue</em>), memories and feelings become more vivid and powerful. For instance, the sound of waves crashing on the beach combined with the smell of salty air and the warmth of the sun can create a powerful nostalgic feeling.
Cue combination as a phenomenon can be particularly persuasive and, dare I say, misleading, something that politicians and marketers alike are fully aware of, as we will get back to later.</p><h4>Bayesian decision-making</h4><p>Let&#39;s apply this mode of inference to the context of <a href="https://www.frontiersin.org/articles/10.3389/fnins.2018.00734/full">decision theory</a>. We can easily demonstrate this in the contexts of both biological and machine intelligence.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/657/1*iYeSoCJvY1He-o9WtW_JeQ.png" /><figcaption><strong>Matsumori et al.</strong></figcaption></figure><p>In this way, we simply apply the same Bayesian inference to the context of decision-making: we replace the neural activity probabilities with a <strong>hypothesis</strong> as the <strong>prior</strong> and <strong>data</strong> as the <strong>evidence</strong>. Biases are calculated as weights α and β, changing the decision outcomes. This works through the equation below:</p><p>log P(H|D) = β log P(D|H) + α log P(H) + const.</p><p>We can illustrate this by asking more relatable questions:</p><blockquote>Are we forgetful? Then the weighted bias <strong>α</strong> of the prior is weaker.</blockquote><blockquote>Are we stereotyping? Then the weighted bias <strong>α</strong> of the prior is stronger.</blockquote><blockquote>Are we being too flexible? Then the weighted bias <strong>β</strong> of the likelihood is stronger.</blockquote><blockquote>Are we being too rigid? Then the weighted bias <strong>β</strong> of the likelihood is weaker.</blockquote><p>If we were calibrating computer algorithms, the same types of questions concerning inference could be asked.</p><h4><strong>How Artificial Intelligence can exploit the Achilles&#39; heel of our biological brain</strong></h4><p>We can also apply this theoretical decision model to now relevant and pressing matters. Let’s say the topics of choice are politics and elections.
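<p>The weighted equation above can be made concrete with a short sketch (all numbers hypothetical): over-weighting the prior α flips a decision from evidence-driven to stereotype-driven.</p>

```python
from math import log

def score(prior, likelihood, alpha=1.0, beta=1.0):
    # log P(H|D) = beta * log P(D|H) + alpha * log P(H) + const.
    # (the constant drops out when comparing hypotheses)
    return beta * log(likelihood) + alpha * log(prior)

h1 = {"prior": 0.7, "likelihood": 0.2}  # stereotype-consistent, weak evidence
h2 = {"prior": 0.3, "likelihood": 0.8}  # counter-stereotypical, strong evidence

# Unbiased weighting: the strong evidence wins and H2 is preferred.
print(score(**h2) > score(**h1))  # True
# Over-weighted prior (alpha = 3, "stereotyping"): H1 wins again.
print(score(**h1, alpha=3.0) > score(**h2, alpha=3.0))  # True
```

<p>The same comparison run with an inflated β would instead model the “too flexible” case, where fresh evidence steamrolls everything previously learned.</p>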
As an example we could ask: <em>What is the likelihood that this politician’s statement is true, given the specific circumstances</em>? What previous evidence, or <em>priors</em>, do we have? And as we all know by now, if the aim is to create confusion and bewilderment, digital algorithms can be used with great effect to manipulate the likelihoods and priors in our minds. There is of course nothing new about newspapers and media pandering deliberately to our prejudices. It&#39;s just that nowadays it can be done more efficiently by means of digital technology, such as flooding opponents or targeted readers with malevolently constructed or skewed news content. And to that end, it has become a common way of bypassing our sense of logic and tampering with the very same mental concepts: the forgetfulness, rigidity and stereotyping among readers or voters, just as illustrated in the diagram and equation above<strong>.</strong> Just like how the retina reacts to light, our brains simply cannot turn off the old reflexes awakened by these unconscious inferences from past experiences, be they sensory impressions or digital content. Applying this concept to political science gives it even more credibility, along with fascinating new approaches. Will a political party change its political cause and orientation X if X is refuted? Or will it disregard new evidence and keep X as a valid political cause and orientation? (<em>update 21/08/2023, further relevant findings, </em><a href="https://psycnet.apa.org/record/2023-92406-003"><em>https://psycnet.apa.org/record/2023-92406-003</em></a>).
Given that this concept encapsulates the core of cognitive perception, or can portray how <a href="https://psycnet.apa.org/record/2023-92406-003">amplification of existing beliefs</a> can work, it is difficult to find a context where this Bayesian inference framework does <em>not</em> hold relevance or applicability.</p><h4>Bayes Beats Bias</h4><p>On the flip side, this idea can also be used to our advantage. We could therefore use it as a mental tool for everyday relatable events, in order to make more informed decisions.</p><p>Simply put, we can remember to consciously reverse the question for any given context: “What is the probability that B will happen, given A (likelihood)?” <strong>This means that you can use <em>likelihood</em> as a simple and quick tool against bias</strong>. Even if it doesn&#39;t take into account the other elements of the equation (as shown below), it will still improve your estimate.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*D7IJw2alu7TEyDML02tGjw.jpeg" /></figure><h3>Summary: looking into the essence of Shimazaki’s probabilities</h3><blockquote>Putting this into more relatable words, we are essentially just analysing three elements: What you are thinking without stimuli, what you are thinking during stimuli, and what your brain recalls from previous stimuli/experiences. <br>If we consider these three elements, they can be arranged in different mathematical relations to each other. And one of the biggest questions is how the brain activity changes from experiencing stimuli to not experiencing stimuli. And then the gain modulation will mediate between these, through “crosschecking” with higher brain regions that are more intelligent than the simple brain regions that just take in stimuli, hence a top-down mechanism.
For instance, a delayed response (in milliseconds) can imply feedback from the higher cortices.</blockquote><p>If we go back and look at it again in simple mathematical terms, we can entertain and distinguish the ideas of mental processes such as:</p><p><strong><em>p</em> (x|w)</strong> = What we think <em>given </em>what we have learned</p><p><strong><em>p</em> (y, x |w) </strong>= What we think <em>given</em> what we’ve learned, <em>times</em> what we experience <em>given</em> what we think and what we’ve learned</p><p><strong><em>p</em> (y|w)</strong> = What we experience <em>given</em> what we’ve learned.</p><p><strong><em>p</em> (y |x, w) </strong>= What we experience <em>given</em> what we think <em>and</em> what we’ve learned.</p><p>The natural environment is not stationary but continuously changing. Living organisms need to adapt and learn. Hence, organisms need to infer and try to understand which properties and phenomena are most stationary, and build up the complexity from there.</p><p>On the artificial side of things, machine learning has been gaining a lot of speed recently. We are beginning to see a sort of race between humans and AI in being the most effective and efficient at reducing uncertainty. Machines are leveraging their speed, accuracy, and data-crunching abilities, while what we should definitely make use of are our creativity, intuition and, for now, superior abductive problem-solving skills.</p><blockquote>“If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.” — Emerson M. Pugh –</blockquote><h4>I did it Bayes&#39; way</h4><p>Understanding the inner workings of biological and artificial intelligence is a powerful and future-proof tool to possess, as well as an incredibly interesting subject in itself to learn about. And there is no doubt that this knowledge can be proactively employed in almost any circumstance that demands sound decisions.
Once you have seen this mind-boggling concept, you cannot unsee it.</p><p><a href="https://www.linkedin.com/in/martin-withaven-hansen/">https://www.linkedin.com/in/martin-withaven-hansen/</a></p><p><em>My greatest respect and admiration for the brilliant and inspiring scientific work done by Hideaki Shimazaki.</em></p><p><a href="https://arxiv.org/abs/2006.13158">The principles of adaptation in organisms and machines II: Thermodynamics of the Bayesian brain</a></p><p><a href="https://medium.com/udacity/shannon-entropy-information-gain-and-picking-balls-from-buckets-5810d35d54b4">https://medium.com/udacity/shannon-entropy-information-gain-and-picking-balls-from-buckets-5810d35d54b4</a></p><p><a href="https://www.taylorfrancis.com/chapters/edit/10.4324/9780203937297-25/accelerating-socio-technological-evolution-ephemeralization-stigmergy-global-brain-francis-heylighen">https://www.taylorfrancis.com/chapters/edit/10.4324/9780203937297-25/accelerating-socio-technological-evolution-ephemeralization-stigmergy-global-brain-francis-heylighen</a></p><p><a href="https://medium.com/@monadsblog/the-kullback-leibler-divergence-5071c707a4a6">https://medium.com/@monadsblog/the-kullback-leibler-divergence-5071c707a4a6</a></p><p><a href="https://machinelearningmastery.com/argmax-in-machine-learning/">https://machinelearningmastery.com/argmax-in-machine-learning/</a></p><p><a href="https://www.elsevier.com/books/doing-bayesian-data-analysis/kruschke/978-0-12-405888-0">https://www.elsevier.com/books/doing-bayesian-data-analysis/kruschke/978-0-12-405888-0</a></p><p><a href="https://www.science.org/doi/abs/10.1126/science.1195870">https://www.science.org/doi/abs/10.1126/science.1195870</a></p><p><a href="https://wires.onlinelibrary.wiley.com/doi/abs/10.1002/wcs.1540">https://wires.onlinelibrary.wiley.com/doi/abs/10.1002/wcs.1540</a></p><p><a href="https://psycnet.apa.org/record/2023-92406-003">https://psycnet.apa.org/record/2023-92406-003</a></p><p><a href="https://www.frontiersin.org/articles/10.3389/fnins.2018.00734/full">https://www.frontiersin.org/articles/10.3389/fnins.2018.00734/full</a></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A bright side of AI: The benefits of Bayesian Inference with Machine Learning for Medical Scans]]></title>
            <link>https://medium.com/@martin_wh/the-benefits-of-bayesian-inference-with-deep-learning-for-medical-scans-e463d64dde99?source=rss-b6e8d2ae06b1------2</link>
            <guid isPermaLink="false">https://medium.com/p/e463d64dde99</guid>
            <category><![CDATA[bayesian-machine-learning]]></category>
            <category><![CDATA[medicine]]></category>
            <dc:creator><![CDATA[Martin W. Hansen]]></dc:creator>
            <pubDate>Tue, 06 Sep 2022 17:01:03 GMT</pubDate>
            <atom:updated>2025-07-03T11:50:50.131Z</atom:updated>
            <content:encoded><![CDATA[<p>From my time working at a large hospital, I was always on the lookout for ways of speeding up processes. One of my main concerns kept coming up: how can patient queues be shortened, and therefore, how can the processes around medical scans be sped up?</p><p>Medical scans have to be subject to medical interpretation, and this is where Bayesian inference comes in. Bayesian inference allows us to take prior information and probabilities into account when we interpret data, which can be very helpful in making decisions about treatment. In the context of medical scanning, Bayesian inference can help us weigh up the risks and benefits of various courses of action, and make decisions based on large amounts of historical data rather than guesswork.</p><p><strong>When it comes to diagnosing medical conditions, time is of the essence.</strong> The faster a medical expert can identify what is wrong with a patient, the sooner they can begin treatment and improve the patient’s prognosis. We are now able to use novel methods of diagnostic imaging that have shown huge promise in both speed and accuracy, such as <a href="https://cordis.europa.eu/project/id/702666">Bayesian inference through Deep Learning</a>. In short, engineers and medical professionals can train algorithms by feeding in data, refining their ability to identify biological and structural patterns in medical images, which can then be used to make diagnoses and clinical decisions. Here’s a closer look at how this technology works and how it can benefit patients.</p><h3>Inferring the Bayesian way</h3><p>Bayesian inference is a method of statistical reasoning based on Bayes&#39; theorem. It allows medical professionals to use data from previous cases to predict the probability that a certain condition is present in a new case. 
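</p><p>As a minimal sketch of that idea, here is Bayes&#39; theorem applied to a positive scan. The prevalence, sensitivity and false-positive rate below are illustrative assumptions, not clinical data:</p>

```python
# Bayes' theorem for a diagnostic test; all numbers are illustrative.

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive scan) via Bayes' theorem."""
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

# Assumed prevalence 1%, sensitivity 90%, false-positive rate 5%
p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.05)
print(round(p, 3))  # prints 0.154
```

<p>Even with a 90% sensitive test, a positive result for a rare condition yields a fairly modest posterior, which is exactly the kind of prior-aware reasoning that guards against over-diagnosis.</p><p>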
This information can then be used to improve diagnostic accuracy and speed up decision-making.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/1*j254ldb63eIl12iqYku6Ag.png" /></figure><p>Deep learning algorithms automatically extract features from raw data (such as images or signals), which can then be used to make predictions or decisions. In this way, large amounts of scanned images from healthy patients are used as a basis to calibrate the diagnostic properties of the algorithm. The objective is that if an unknown structure within a patient&#39;s scan deviates significantly from previous &#39;healthy&#39; image data, the trained algorithms can detect that something does not look normal and classify it as a scan that may need further medical inspection.</p><p>We can do a makeshift illustration of this on a basic level by integrating a <a href="https://medium.com/@anuj_shah/through-the-eyes-of-gabor-filter-17d1fdb3ac97"><strong>Gabor filter</strong></a> into a Bayesian equation. The Gabor filter is a way for a computer to &#39;look around&#39; and extract contours, edges and patterns. We take the premise of whether an anomaly in a medical scan could be a tumour or not.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/503/1*G4r36vFiu8gJti0BcunS3g.png" /><figcaption>Image of Gabor filters with different orientations and scales. 
<a href="https://www.researchgate.net/profile/Hadi-Seyedarabi">Hadi Seyedarabi</a></figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Xx-whGtH4IPiQF411WAhAA.png" /></figure><p><em>For demonstration purposes, let&#39;s put in some simple numbers:</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LUoGpIHq3jZw-MYAzpXQ4Q.png" /></figure><p>“<em>These findings suggest that a hybrid deep learning and Bayesian inference clinical decision support system has the potential to augment diagnostic accuracy of non-specialists to approach the level of subspecialists for a large array of diseases on brain MRI</em>.” <a href="https://pubmed.ncbi.nlm.nih.gov/34131794/">Rudie et al., 2021.</a></p><h4>Significant improvement of clinical decision-making</h4><p>Viewed holistically at an organisational management level, the overall capacity for medical diagnoses and decision-making can grow substantially. By reimagining medical team functions and optimising the skills of subspecialists, we can redefine team roles. This could strengthen overall executive capability, potentially increasing the output of medical examinations tenfold.</p><p><strong>Using Bayesian inference as one of many tools allows non-specialists to alleviate extensive workloads for medical professionals, which represents a definite positive aspect of AI development and alignment. 
This also brings huge benefits: more patients examined by fewer medical staff, shorter waiting times, and above all, stronger chances of positive medical outcomes.</strong></p><p><a href="https://www.linkedin.com/in/martin-withaven-hansen/">https://www.linkedin.com/in/martin-withaven-hansen/</a></p><p><a href="https://cordis.europa.eu/project/id/702666">https://cordis.europa.eu/project/id/702666</a></p><p><a href="https://medium.com/@anuj_shah/through-the-eyes-of-gabor-filter-17d1fdb3ac97">https://medium.com/@anuj_shah/through-the-eyes-of-gabor-filter-17d1fdb3ac97</a></p><p><a href="https://pubmed.ncbi.nlm.nih.gov/34131794/">https://pubmed.ncbi.nlm.nih.gov/34131794/</a></p>]]></content:encoded>
        </item>
    </channel>
</rss>