<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[SRC Innovations - Medium]]></title>
        <description><![CDATA[IT Consultancy based in Melbourne. Explore the latest trends and best practices in search, SEO, cloud computing, AI and software testing with SRC Innovations’ informative blog. Stay up-to-date with expert analysis and insights from leaders in the industry. - Medium]]></description>
        <link>https://medium.com/src-innovations?source=rss----509983d0e19f---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>SRC Innovations - Medium</title>
            <link>https://medium.com/src-innovations?source=rss----509983d0e19f---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 17:23:27 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/src-innovations" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[The Economy demands Productivity, not Efficiency]]></title>
            <link>https://medium.com/src-innovations/the-economy-demands-productivity-not-efficiency-ea69ce7f21c4?source=rss----509983d0e19f---4</link>
            <guid isPermaLink="false">https://medium.com/p/ea69ce7f21c4</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[return-on-investment]]></category>
            <category><![CDATA[business-strategy]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[project-management]]></category>
            <dc:creator><![CDATA[Jonathan Tan]]></dc:creator>
            <pubDate>Wed, 18 Mar 2026 11:43:30 GMT</pubDate>
            <atom:updated>2026-03-18T11:45:21.056Z</atom:updated>
<content:encoded><![CDATA[<h4>(also, we overdelivered by 67% on the same budget &amp; timeline)</h4><p>First off, let’s be clear. Efficiency is about using fewer inputs to gain the same outputs.</p><p>Productivity is about achieving more outputs with the same inputs.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*6R8IJ0_whOtWAVxF" /></figure><p>If you look at it with that understanding, you’ll immediately see why chasing efficiency is… well, unproductive. Why save a few dollars, when you could leverage that to gain outsized value?</p><p>If you’re focused on reducing your inputs while maintaining the same output, then you’re not growing. You’re maintaining your current position. You’re in the rat race, where you’re running faster and faster just to keep up. You’re stagnating. And that’s not a pleasant place to be.</p><h3>LLM-based AIs do not automatically result in improved efficiency.</h3><p>The assumption that “having AI” increases efficiency shows a fundamental misunderstanding of how LLM-based AIs work. <strong>Investing in AI to “save time” is a trap.</strong></p><p>Just because AI can automate things, and thus reduce the time needed to do them, that doesn’t mean it’s increasing efficiency. It’s just another form of automation that improves efficiency on some things, but inevitably leads to a new asset that needs to be maintained.</p><p>LLM-based AI by itself cannot find efficiencies. It is not creative, it can’t think, it can’t understand the big picture. Years of research have shown that creativity is needed to find new efficiencies in problem spaces — LLM-based AI does not have that creativity.</p><h3>So what CAN it do?</h3><p><strong>LLM-based AIs are an enabler to increase productivity.</strong></p><p>They enable you to get <strong>drudgery</strong> done faster so that you can move on to the actual value-generating work. 
This is especially obvious in the software development world, where big impacts are being seen everywhere.</p><p>LLM-based AIs are extremely good at extracting context out of something that exists and generating <strong>more</strong> things within that context. Which is fundamentally what <em>writing code</em> is about (cue the angry gnashing of teeth).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*SNxKLAJCNhqAiKer" /></figure><h3>Developers &amp; the Innovation Tax</h3><p>When developers are told to go build something that will generate value, they can’t just go and do it. They’ve suddenly got to deal with a wave of overhead.</p><p>In the pursuit of efficiencies, they need to:</p><ul><li>ensure the new system &amp; features comply with policies</li><li>find the company-approved tools, generators, and patterns, and then figure out how to use them</li><li>utilise scaffolds and leverage frameworks that support good development practices</li><li>use modern deployment methods that result in additional code for testing and infrastructure</li><li>document everything to support ongoing operations &amp; knowledge transfer</li></ul><p>So time passes as the developer aligns with generators, guidelines and guardrails before they can really progress on the new features.</p><p>And those generators that might have helped the developers get started faster? Those are another asset that the company has to maintain.</p><h3>Engineers! Not Developers</h3><p>Here’s the thing. Software Engineers want valuable outcomes. Software Developers just write code. For the first time, it’s actually easy for engineers to focus on engineering and deliver outcomes. 
For too long, they’ve spent more time discussing the appropriate way to do things and writing code, rather than just delivering on those valuable outcomes.</p><p>These LLM-based IDEs make that possible by getting the repetitive, statistically similar tasks out of the way.</p><p>That enables your engineers to actually shine and to be PRODUCTIVE. They won’t need to seek ways to be efficient so they can get to the things that deliver value. They can step straight to engineering solutions that deliver value.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*_PfPE3brVhY93S1W" /></figure><h3>Speaking of (over) delivering on value…</h3><p>Which brings me back to my subtitle. SRC Innovations — using its <strong>Augmented Intelligence Delivery Model</strong> — had a 9-week project to build a client an MVP application that required many typical e-commerce functionalities, as well as integration with on-device hardware.</p><p>We completed their MVP in about 5 weeks, and kept giving them MORE features, so that at the end of those 9 weeks, they’d gotten ~67% more revenue-generating features than they’d planned for in their MVP.</p><p>It meant some of their musings about “if we had this, we could do real world marketing campaigns” became “Hey! We now have this feature that we can use for this style of marketing AND to make money!”</p><p>SRC delivered improved productivity, which led to faster revenue generation.</p><p>What more is there to say?</p><h3>Talk to us</h3><p>(Actually, I’ve been told that there is one more thing to say.) If you want to increase your <strong>PRODUCTIVITY</strong>. If you want a significantly faster path to value. If you want to use your current resources to get more value. 
<a href="https://www.srcinnovations.com.au/contact">Talk to us.</a></p><p><strong>Our methodology is proven and has over-delivered in the real world.</strong></p><p><em>Originally published at </em><a href="https://blog.srcinnovations.com.au/2026/03/18/the-economy-demands-productivity-not-efficiency/"><em>https://blog.srcinnovations.com.au</em></a><em> on March 18, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ea69ce7f21c4" width="1" height="1" alt=""><hr><p><a href="https://medium.com/src-innovations/the-economy-demands-productivity-not-efficiency-ea69ce7f21c4">The Economy demands Productivity, not Efficiency</a> was originally published in <a href="https://medium.com/src-innovations">SRC Innovations</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A Normalisation on AI Initiatives]]></title>
            <link>https://medium.com/src-innovations/a-normalisation-on-ai-initiatives-d06761c1cd23?source=rss----509983d0e19f---4</link>
            <guid isPermaLink="false">https://medium.com/p/d06761c1cd23</guid>
            <category><![CDATA[playbook]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[organizational-change]]></category>
            <dc:creator><![CDATA[Jonathan Tan]]></dc:creator>
            <pubDate>Wed, 10 Sep 2025 14:44:22 GMT</pubDate>
            <atom:updated>2025-09-10T14:44:22.605Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*5896aZkK12p6g3iB" /></figure><p>By now, a lot of people would have heard how an MIT report says that 95% of AI initiatives have failed. Looking purely at that headline, it looks like AI delivers bad ROI, and people should stop trying to get it going in their organisations.</p><p>This is the furthest thing from the truth: <strong>Now is not the time to disengage from AI tech.</strong></p><p>However, it IS time to properly understand what AI offers, when to leverage it, where to deploy it, and how to secure it.</p><p>It’s a huge topic, so let’s cover some of that in the rest of this blog!</p><h3>“What Normalisation Are You Referring to?”</h3><p>The “AI bubble” is not like the Dot Com bubble, nor any other bubble.</p><p>It’s not running on debt, hype-coins, or subprime mortgages. The major players — Microsoft, Google, Meta, Amazon — are all profitable through multiple revenue streams. Even the pure AI operators like OpenAI &amp; Anthropic are being funded by deep-pocketed investors, not by banks handing out risky loans.</p><p>It’s not a bubble at all, and therefore not going to pop. It’s more like an overfilled helium balloon: it will instead calm down, deflate a little, and continue to exist in the background, hovering in the corner of a giant room after a child’s birthday party…</p><p>The normalisation will happen when people realise that their prior excitement about LLMs as the replacement for all things and the greatest revenue driver of all time wasn’t warranted. 
They’ll flip the lens around to examine AI, LLMs &amp; GenAI as productivity enhancers instead, and then the new initiatives are going to be heaps more successful, with AI treated as just another tool that effective workers use to drive productivity.</p><h3>About GenAI, LLMs, and Productivity Growth</h3><p>Here’s another thing about GenAI, and LLMs in particular: they are statistical machines designed to use historical training data to predict what’s going to happen next. It’s a giant well of un-lived experience and a “knowledge” base that has been carefully curated based on previous learnings.</p><p>One day, I really need to do a post about the difference between “information” and “knowledge”, and how the concept of a website called a “knowledge base” is just fundamentally wrong…</p><p>If it helps, think of librarians in a closed stack library… Closed stack libraries don’t let you walk in and pick up any item you want. Instead, they have librarians that know enough about the books in the stack to give you an overview of any related topic, and maybe even loan you the actual book. These librarians make your search faster, but it is still you who needs to do the reading, the deep thinking, and the application of this knowledge. All that execution is still <strong>your</strong> job.</p><p>LLMs (and all AI) are similar. They should be leveraged as a new tool to speed up information gain, knowledge creation &amp; execution. It’s about productivity growth, eventually leading to revenue growth.</p><h3>“Great, Show Me How”</h3><p>Sure! Here are 3 steps to normalising AI.</p><h4>1. Flip the Narrative</h4><p>Stop doomsaying and nay-saying on AI. It isn’t here to replace people, and LLMs being statistical machines isn’t a bad thing. 
AI is here to offload execution so humans can focus on what we do best: Assessment, Critical Thinking, and Creativity.</p><p>Nobody shames a plumber for using a pipe-bending wrench; they just appreciate that the job was done faster and cheaper.</p><h4>2. Identify where it helps</h4><p>A recent (Feb 2025) <a href="https://arxiv.org/abs/2503.04761">analysis of Claude conversations</a> found that</p><blockquote><em>57% of usage suggests augmentation of human capabilities while 43% suggests automation</em></blockquote><p>Find those areas of automation &amp; augmentation in your organisation through curiosity-driven exploration. Don’t guess or dictate. Actually map out where your people spend time and effort. <strong>Then</strong> you can pick the right tool to support it.</p><p>LLMs are good at turning messy human processes &amp; information into structured, codified units of execution. So focus on use cases where those prerequisites exist.</p><p>Other times, your AI needs aren’t ChatGPT or Claude. Sometimes your AI needs are classical models, or ML solutions like classifiers.</p><p><strong>The real skill isn’t in using AI everywhere — it’s in knowing where, how, and when to use it.</strong></p><h4>3. Start small, prove value</h4><p>Don’t force workflow overhauls. If people are forced to change the way they work before they see value, they’ll resist the change.</p><p>Instead, meet your people where they hang out and show quick wins. Slip AI into existing tools and workflows, and deliver value that makes their current habits easier.</p><p>Small iterative improvements compound over time, just like interest in a savings account. 
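</p><p>To make the compounding analogy concrete, here is a quick back-of-the-envelope sketch (the 1% weekly figure is purely hypothetical, chosen only to illustrate the arithmetic):</p>

```python
# Hypothetical illustration: small iterative improvements compound
# like interest in a savings account. The 1% weekly gain is made up.
weekly_gain = 0.01                      # 1% improvement per week
weeks = 52                              # one year of small wins
total = (1 + weekly_gain) ** weeks - 1  # compounded overall gain
print(f"{total:.0%}")                   # roughly 68% over the year
```

<p>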
Those small wins always add up to longer-term gains.</p><h3>In Closing</h3><p>I have intentionally not discussed why others have stumbled — that wouldn’t be flipping the narrative.</p><p>Instead, I’ve shared a proven 3-step playbook that will help <em>normalise</em> the use of AI in your organisation: shift the conversation, find where it will actually help, and finally prove value with small, compounding wins.</p><p>If you’re on this journey — or if you’d like help spotting where AI can deliver real impact — <a href="https://www.srcinnovations.com.au/contact">reach out</a>. Whether it is brainstorming quick wins, or sharing what we’ve already deployed, SRC Innovations has a library of proven &amp; in-production AI solutions that fit into real workflows.</p><p>Let’s not look to disrupt for the sake of disruption. Instead, let’s look at how we can leverage new tools to deliver value &amp; productivity — quickly and consistently.</p><p><em>Originally published at </em><a href="https://blog.srcinnovations.com.au/2025/09/11/a-normalisation-on-ai-initiatives/"><em>https://blog.srcinnovations.com.au</em></a><em> on September 10, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d06761c1cd23" width="1" height="1" alt=""><hr><p><a href="https://medium.com/src-innovations/a-normalisation-on-ai-initiatives-d06761c1cd23">A Normalisation on AI Initiatives</a> was originally published in <a href="https://medium.com/src-innovations">SRC Innovations</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A Tale of GKE, GCP, and Product Catalogue Search]]></title>
            <link>https://medium.com/src-innovations/a-tale-of-gke-gcp-and-product-catalogue-search-e363499b49c3?source=rss----509983d0e19f---4</link>
            <guid isPermaLink="false">https://medium.com/p/e363499b49c3</guid>
            <category><![CDATA[google-cloud-platform]]></category>
            <category><![CDATA[srchy]]></category>
            <category><![CDATA[google-kubernetes-engine]]></category>
            <category><![CDATA[google-gemini-ai]]></category>
            <dc:creator><![CDATA[Jonathan Tan]]></dc:creator>
            <pubDate>Thu, 28 Aug 2025 14:59:28 GMT</pubDate>
            <atom:updated>2025-08-28T14:59:28.315Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*C8chrM0kQwp5CqK5" /></figure><p>SRC Innovations has successfully run a Product Catalogue Search Engine as a SaaS solution for a number of years now. It serves a range of Australian clients, handling search traffic with a mix of modern ML and classical AI methods.</p><p>I’d attribute part of our success to our use of the Google Cloud Platform (GCP) and its Google Kubernetes Engine (GKE), simply because it is very nicely integrated, whilst at the same time giving us the flexibility so often desired in cloud deployments.</p><p>In this blog, I’m going to give a quick overview of our stack so you can see how we’re benefiting from Google’s cloud platform.</p><h3>The Srchy Stack</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*2qTbfOnB6Q0E0j9q" /></figure><p>The bulk of Srchy’s services — both stateless microservices &amp; stateful applications — sit within Google Kubernetes Engine. 
Over the years, within those GKE clusters, we have run:</p><ul><li>various Java-based stateful search engines — backed by GCE’s “Balanced drives” for a mix of cost effectiveness and “good enough” disk IO</li><li>MongoDB servers</li><li>at least 2 RNN-based neural networks (each mapped to a node with an attached GPU — of course)</li><li>about 45 (currently) different microservices, mostly using NodeJS</li><li>several — more recent — LLM-powered services</li><li>a lot of batch jobs</li><li>and one of my favourites… GCP’s Config Connector</li></ul><p>All of these sit on a stack of GCE nodes underneath, efficiently sharing CPU + memory resources during quiet times, then scaling out additional nodes when things get busy and more compute is required.</p><h3>About Config Connector</h3><p><a href="https://cloud.google.com/config-connector/docs/overview">Config Connector</a> is Google’s answer to “can we get a Kubernetes-based declarative mechanism to build GCP resources?”</p><p>In short, it lets you define GCP resources using YAML from within a GKE cluster. 
If you’re already using Kubernetes, then you’ll already be well versed in the approach, and you’ll appreciate how it works.</p><h4>Config Connector vs Terraform vs gcloud CLI</h4><p><strong>Approach</strong></p><ul><li>Config Connector: Declarative, eventually consistent</li><li>Terraform: Declarative, applied on demand via plan/apply</li><li>`gcloud` CLI: Purely imperative</li></ul><p><strong>Runs</strong></p><ul><li>Config Connector: Only in Kubernetes, only for GCP</li><li>Terraform: Multi-cloud, dev laptop or pipeline, needs a hosted solution for sharing state</li><li>`gcloud` CLI: GCP only, can run from dev laptop or pipeline</li></ul><p><strong>Resource Drift</strong></p><ul><li>Config Connector: Constant reconciliation with YAML</li><li>Terraform: Checked on command</li><li>`gcloud` CLI: No drift detection</li></ul><p><strong>Example Config Connector manifests</strong></p><p>Below are two sample Config Connector manifests. These are actually part of our Helm charts, ensuring that microservices that rely on things like Storage Buckets and their associated storage notification events can be co-deployed together with their dependencies, as per the good practices espoused by full-stack Infrastructure as Code principles.</p><pre>apiVersion: storage.cnrm.cloud.google.com/v1beta1<br>kind: StorageBucket<br>metadata:<br>  annotations:<br>    cnrm.cloud.google.com/force-destroy: &quot;false&quot;<br>  labels:<br>    helm.sh/chart: catalog-import-extract-monitor-0.1.17<br>    meta.srchy.ai/app: catalog-import-extract-monitor<br>    meta.srchy.ai/client: srchy<br>    meta.srchy.ai/client-env: prod<br>    meta.srchy.ai/client-brand: srchy<br>    app.kubernetes.io/name: catalog-import-extract-monitor<br>    app.kubernetes.io/instance: demo<br>    app.kubernetes.io/version: &quot;0.0.17&quot;<br>    app.kubernetes.io/managed-by: Helm<br>  name: srchy-monitored-bucket<br>  namespace: config-connector<br>spec:<br>  location: australia-southeast1<br>  storageClass: STANDARD<br>  lifecycleRule:<br>    - action:<br>        type: SetStorageClass<br>        storageClass: COLDLINE<br>      condition:<br>        age: 30<br>        withState: ANY</pre><p>As you can see above, the storage bucket has — as part of its manifest — declared a lifecycle rule that moves objects to the COLDLINE storage class after 30 days, and the Storage Notification — displayed below — even specifies the objectNamePrefix.</p><pre>apiVersion: storage.cnrm.cloud.google.com/v1beta1<br>kind: StorageNotification<br>metadata:<br>  name: srchy-monitored-bucket-0<br>  namespace: config-connector<br>  labels:<br>    helm.sh/chart: catalog-import-extract-monitor-0.1.17<br>    meta.srchy.ai/app: catalog-import-extract-monitor<br>    meta.srchy.ai/client: srchy<br>    meta.srchy.ai/client-env: prod<br>    meta.srchy.ai/client-brand: srchy<br>    app.kubernetes.io/name: catalog-import-extract-monitor<br>    app.kubernetes.io/instance: demo<br>    app.kubernetes.io/version: &quot;0.0.17&quot;<br>    app.kubernetes.io/managed-by: Helm<br>spec:<br>  bucketRef: <br>    external: srchy-monitored-bucket<br>  payloadFormat: JSON_API_V1<br>  topicRef:<br>    name: srchy-monitored-bucket-topic<br>  objectNamePrefix: upload/<br>  eventTypes:<br>    - &quot;OBJECT_FINALIZE&quot;</pre><p>We have found this to be a significantly easier and more repeatable manner of deploying GCP resources.</p><h3>About Google’s AI Services — Gemini &amp; AI Studio</h3><p>Google’s Gemini is acknowledged as a capable LLM and drives many systems — including <a href="https://blogs.opentext.com/secure-trusted-and-ethical-ai/">OpenText’s own Aviator platform</a> — and its API integrations with the rest of the Google ecosystem are pretty sweet. 
Its AI Studio is also a very nice piece of developer kit, making it really easy to explore the quality of LLM prompts across different models at the same time.</p><p>This isn’t the right place for a full run-down of Google’s AI services compared to other providers, but I do have some quick comments based on our experiences. The way that Google has integrated access to AI systems of all types — both modern neural network styles as well as classical models — and made the necessary hardware available via standard VMs as well as via GKE is pretty impressive, and should make it a good pick for anybody interested in deploying AI at scale.</p><h3>Other GCP Resources</h3><p>Before you get the impression that we only use GKE, I’ll now touch on the other GCP resources we rely on.</p><h4>Cloud Storage Object Buckets</h4><p>Because, let’s face it, are there <em>any</em> cloud-based implementations out there that haven’t benefited from cheap and plentiful object storage?</p><h4>PubSub</h4><p>GCP has a comprehensive and fully-managed PubSub offering in a single service that provides the message queueing, streaming, and pub-sub functionality typically desired for eventing use cases.</p><p>This is — especially from an architecture and developer perspective — neater than some other cloud platforms’ combinations of 2 services to achieve <em>almost</em> the same thing.</p><p>We use it to — as mentioned above — identify when clients have provided their latest product catalogues, so that we can ingest and run our ML workloads across them for search-related product enrichment before shipping them to our search engines.</p><h4>Platform Security &amp; Access Control — i.e. Artefact Registry, Secrets Management, and Service Accounts!</h4><p>As you may have inferred, we run fairly swish DevOps pipelines, which include a bunch of deployment pipelines. 
This means we need stable deployment packages, no secrets committed to git, and service accounts with really tightly controlled IAM policies.</p><p>Service Accounts &amp; their associated IAM policies are all quite feasible via Config Connector resources as well. We’ve had to write a nice little script separately to bridge the “We have a secret, how do we get it into Secrets Manager AND into Kubernetes?” gap, but otherwise, Config Connector continues to be a good way to declaratively define GCP resources.</p><h4>Observability — i.e. Logs Explorer, Log-Based Metrics, and Alerts</h4><p>You may have noticed that we use Istio as a service mesh in our clusters. This has made it very easy for us to set up our microservices to output access logs, which GKE then easily ingests into its Logs Explorer suite. We can then set up log-based metrics to monitor things like HTTP traffic to our services, as well as other metrics. On top of that, we can turn GCP’s log-based metrics into alerts based on thresholds, and have them integrated with Slack and our other alert-routing systems!</p><p>GCP also has a very nice query language for Logs Explorer that is straightforward for anybody to pick up and quickly understand. This is a far cry from other logging solutions, whose DQL tends to require a bit of effort to fully comprehend.</p><h4>Observability — GKE Dashboards</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*2agq5232ADRVBhRo" /></figure><p>I really want to give a special callout to GCP on how brilliant their out-of-the-box GKE dashboards are. We initially used a lot of Prometheus + Grafana, but we dropped most of it when GCP released their OOTB GKE dashboards, as they give you everything you could ever need. 
Pod and node resource usage, filtered by namespace, and, even better, interactive playbooks on how to investigate and resolve cluster problems!</p><h3>The Wrap Up</h3><p>In a complex and varied landscape of multiple cloud providers, Google’s GCP has proven its functional strength and the reliability of its infrastructure, combining to make our lives easy.</p><p>Their AI offerings, especially from within their cloud, are superbly developer-friendly and effective, streamlining our innovation without compromising depth.</p><p>For us as a consulting and product company, these efficiencies mean we can focus on delivering business value and engaging with the technical details that matter, rather than being slowed down by unnecessary hurdles.</p><p>Put simply: we like GCP, we’ll continue to trust it as part of our multi-cloud mix, and we’ll keep turning to it when it makes sense!</p><p>If you have questions about GCP, GKE, or how Google’s AI offerings can help your business drive real value quickly, please reach out at <a href="https://www.srcinnovations.com.au/contact">https://www.srcinnovations.com.au/contact</a>.</p><p><em>Originally published at </em><a href="https://blog.srcinnovations.com.au/2025/08/26/a-tale-of-gke-gcp-and-product-catalogue-search/"><em>https://blog.srcinnovations.com.au</em></a><em> on August 26, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e363499b49c3" width="1" height="1" alt=""><hr><p><a href="https://medium.com/src-innovations/a-tale-of-gke-gcp-and-product-catalogue-search-e363499b49c3">A Tale of GKE, GCP, and Product Catalogue Search</a> was originally published in <a href="https://medium.com/src-innovations">SRC Innovations</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Ethical Guardrails for AI Revolution]]></title>
            <link>https://medium.com/src-innovations/ethical-guardrails-for-ai-revolution-dc567cf4f63f?source=rss----509983d0e19f---4</link>
            <guid isPermaLink="false">https://medium.com/p/dc567cf4f63f</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[security]]></category>
            <category><![CDATA[data-privacy]]></category>
            <category><![CDATA[resolution-strategy]]></category>
            <dc:creator><![CDATA[SRC Innovations]]></dc:creator>
            <pubDate>Tue, 23 Jul 2024 00:21:23 GMT</pubDate>
            <atom:updated>2024-07-23T00:20:37.009Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/512/0*bMdqgBqX7SQTeR3k" /></figure><p>Artificial intelligence is rapidly transforming our world. From facial recognition technology to self-driving cars, AI is automating tasks, streamlining processes, and fundamentally changing the way we interact with technology. However, alongside the undeniable benefits come ethical considerations that developers and users alike must carefully navigate. From bias and discrimination to privacy violations and lack of accountability, the ethical pitfalls of AI are numerous if it is not implemented responsibly.</p><h3>Data Privacy</h3><p>The incredible power of AI is fuelled by access to massive datasets, including sensitive personal information about individuals. There are valid fears about privacy violations and lack of consent if this data is not properly safeguarded for appropriate use. According to surveys, data breaches and exposures are rampant across industries due to insufficient data governance practices.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*HTZ-QY6RLsl9sOws" /></figure><h4>Resolution Strategies</h4><ul><li>Enforce principles like data minimisation to only collect what’s truly needed.</li><li>Implement robust data protection measures, including encryption, access controls, and data masking techniques.</li><li>Require opt-in consent and give customers control over their data use, thereby earning their trust.</li><li>Conduct regular audits and assessments of AI systems and data practices to identify and mitigate security vulnerabilities and ethical risks.</li></ul><h3>Algorithmic Bias and Fairness</h3><p>AI algorithms are only as good as the data they’re trained on. Unfortunately, biased data can lead to biased algorithms, perpetuating social inequalities in areas like loan approvals, hiring decisions, and even criminal justice. 
If the training data used to build an AI model reflects historical biases around race, gender, age or other protected characteristics, the model can simply perpetuate those human biases at machine scale.</p><p>Imagine a scenario where an AI system consistently denies loan applications from a certain demographic group, or an AI recommendation engine hides certain opportunities from a specific class of people in our society. There are many examples like this that can lead to discriminatory outcomes in decision-making, resource allocation, or customer interactions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*umhYwX65yZFlo1zp" /></figure><h4>Resolution Strategies</h4><ul><li>Use tools to scan training data for representational biases and skews.</li><li>“Test, Test, Test!” Test your models from different perspectives and dimensions, such as technical, ethical, legal, and social. You could also measure the model’s false positive rate, equal opportunity, individual fairness, etc., for different groups and individuals.</li><li>Test models against benchmark datasets to check for discriminatory outputs.</li><li>Set acceptable thresholds and guardrails for allowable bias levels.</li></ul><h3>Transparency and Explainability</h3><p>Many AI systems operate as complex black boxes, making it difficult to understand how they arrive at certain decisions. This lack of transparency can erode trust and raise concerns about accountability.</p><p>By understanding the reasoning of AI models, we can gain insights that were not apparent before, which can lead to improved decision-making and outcomes. 
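</p><p>One of the checks suggested above, comparing false positive rates across groups, can be sketched in a few lines. This is a hypothetical illustration with made-up decision data, not code from any production system:</p>

```python
# Hypothetical fairness check: compare false positive rates across groups.
# All data below is invented purely to demonstrate the metric.
def false_positive_rate(y_true, y_pred):
    """FPR = false positives / actual negatives."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives if negatives else 0.0

# 1 = "deny the application"; y_true is the justified outcome
group_a = ([0, 0, 0, 0, 1], [0, 0, 0, 1, 1])  # 1 of 4 negatives flagged
group_b = ([0, 0, 0, 0, 1], [1, 1, 0, 1, 1])  # 3 of 4 negatives flagged

gap = abs(false_positive_rate(*group_a) - false_positive_rate(*group_b))
# A large gap between groups is a signal of disparate impact that
# should be checked against an agreed, acceptable threshold.
print(gap)  # 0.5
```

<p>The same pattern extends to other group metrics (e.g. true positive rates for equal opportunity), and libraries such as Fairlearn provide these measures out of the box.</p><p>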
This is also crucial in debugging problems and fixing incorrect predictions.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/0*K1s7JGPTiN132vyg" /></figure><h4>Resolution Strategies</h4><ul><li>Use Explainable AI (XAI) techniques such as feature importance, decision trees and counterfactual explanations.</li><li>Allow user inputs to explore “what-if” scenarios to see how outputs change.</li><li>Be clear about the AI’s abilities and scope. Provide clear notice and get consent from users when AI is involved, and use simple, non-technical language to describe the functionality, for example, explaining how a specific product was recommended to the user while placing an order.</li><li>Provide users a way to override AI-generated outcomes when possible.</li></ul><h3>Human Oversight and Accountability</h3><p>While AI automates tasks and streamlines processes, the human element remains crucial. There need to be clear processes for monitoring AI actions and enforcing guidelines around fairness, transparency and human values. Without mechanisms to audit decisioning processes and enact course corrections, AI could make critical mistakes or unethical choices while lacking true accountability.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/512/0*HteUTBRIr3eZjrD7" /></figure><h4>Resolution Strategies</h4><ul><li>Implement a Human-in-the-Loop (HITL) approach. For example, design workflows where humans can validate or approve high-stakes decisions.</li><li>Establish an AI ethics review committee. Establish a clear governance framework and review the policies/guidelines regularly.</li><li>Provide comprehensive employee training. Educate decision makers on AI capability, limitations and potential risks.</li><li>Develop contingency plans. Create protocols to quickly disable AI systems, and have a human-based backup ready in case of AI failure.</li></ul><p>As AI transforms business operations, it’s important to prioritise data privacy and security. 
By implementing strong protection measures, being transparent, addressing algorithmic biases, and promoting ethical AI use, businesses can navigate the ethical challenges of AI with integrity. Embracing ethical AI principles goes beyond meeting regulations; it’s a commitment to safeguarding data privacy, establishing trust, and fostering a secure and ethical business environment for all stakeholders.</p><p>Whether you’re aiming to enhance your AI systems or seeking guidance on ethical AI implementation, our expert team is ready to assist. Contact us at <a href="mailto:hello@srcinnovations.com.au">hello@srcinnovations.com.au</a> to embark on your journey towards AI excellence and ensure a future defined by trust and innovation.</p><p><em>Originally published at </em><a href="https://blog.srcinnovations.com.au/2024/07/23/staying-mindful-ethical-guardrails-for-ai-revolution/"><em>https://blog.srcinnovations.com.au</em></a><em> on July 23, 2024.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=dc567cf4f63f" width="1" height="1" alt=""><hr><p><a href="https://medium.com/src-innovations/ethical-guardrails-for-ai-revolution-dc567cf4f63f">Ethical Guardrails for AI Revolution</a> was originally published in <a href="https://medium.com/src-innovations">SRC Innovations</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Save money on hosting, enhance security and improve SEO with Static Sites | SRC Innovations]]></title>
            <link>https://medium.com/src-innovations/save-money-on-hosting-enhance-security-and-improve-seo-with-static-sites-src-innovations-f0c998db7efc?source=rss----509983d0e19f---4</link>
            <guid isPermaLink="false">https://medium.com/p/f0c998db7efc</guid>
            <category><![CDATA[static-site-generator]]></category>
            <category><![CDATA[seo-improvement]]></category>
            <category><![CDATA[static-site]]></category>
            <category><![CDATA[modern-web-development]]></category>
            <category><![CDATA[performance-optimization]]></category>
            <dc:creator><![CDATA[SRC Innovations]]></dc:creator>
            <pubDate>Sun, 30 Jun 2024 23:31:00 GMT</pubDate>
            <atom:updated>2024-06-30T23:29:31.057Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*CaxPhcaCP9h4O-72" /></figure><p>Static sites aren’t anything new and go back to the very beginning of the world wide web, so in this article we explore why many companies are giving static sites a second go, as well as the most popular tools for creating modern static sites.</p><h3>A brief history of static sites</h3><p>Traditional static sites were the only websites on the early internet; even the <a href="https://info.cern.ch/hypertext/WWW/TheProject.html">Original Homepage of the World Wide Web</a> was a static website. Essentially, the web itself was static. But as technology became more sophisticated and business and consumer needs grew, the limitations of the static web were exceeded. Around the same time SQL-powered relational databases were hitting the mainstream, so businesses moved towards using databases to store and manage their content. This in turn brought about the age of the web where new CMS (content management system) products such as WordPress, Drupal and Joomla surged to prominence and the web as we know it today took shape. These new dynamic websites were able to decouple a website’s design from its content, making it possible to simplify websites as a collection of templates or partial templates, i.e. headers and footers, stitched together with the content upon the user’s request. This led to the rise of user-generated content, or Web 2.0. 
The increasing complexity of the modern web hasn’t come without some drawbacks, namely:</p><ul><li><strong>Higher hosting costs</strong> — Dynamic sites are complex and have many components, all of which require configuring and hosting.</li><li><strong>Lower performance</strong> — Each request requires another code execution on the server.</li><li><strong>Lower security</strong> — Mass adoption of prominent CMS products lures cyber criminals to find and share common vulnerabilities.</li></ul><p>With the advent of Static Site Generators (SSGs), modern static sites overcome the limitations of their primitive predecessors as well as addressing the downsides of dynamic websites.</p><h3>What is a Static Site Generator?</h3><p>A Static Site Generator (SSG) is a library or framework which takes code files as input and compiles them into static HTML documents that can be hosted directly on the Internet. Modern SSGs are an excellent choice for marketing websites, blogs, portfolios and even e-commerce sites. 
Modern SSGs have the following advantages:</p><ul><li><strong>Templating</strong> — reusable UI elements and layout decoupled from content.</li><li><strong>Common content syntax</strong> — The advent of markdown has led to a content authoring revolution.</li><li><strong>Lower hosting cost</strong> — Only requires the hosting of static HTML files.</li><li><strong>Lightning-fast performance</strong> — very low latency from server requests thanks to lightweight output from SSGs.</li><li><strong>Better SEO</strong> — Improved performance of static sites means they rank higher than their heavier competitors.</li><li><strong>Improved security</strong> — Static sites have no backend for would-be cyber criminals to exploit.</li><li><strong>Improved developer experience</strong> through modern tooling and logical architecture.</li></ul><h3>Top Static Site Generators</h3><p>Now that we know what static site generators are, let’s talk about some of the most popular Static Site Generator frameworks.</p><h3>Next.JS</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*Z3vteP-qgOI5IAIk" /></figure><p><a href="https://nextjs.org/">Next.JS</a> is arguably the most popular on this list, but it is not limited to being a static site generator: it can also be used to create complex web applications, offering Server-Side Rendering (SSR) and Client-Side Rendering (CSR), all powered by <a href="https://react.dev/">React</a>. 
Developers are able to control the level of SSR / CSR on a per-page level, which makes Next.JS extremely powerful for a wide range of projects.</p><p><strong>Features:</strong></p><ul><li>Powered by <a href="https://react.dev/">React</a>.</li><li>Server-Side (SSR) &amp; Client-Side Rendering (CSR).</li><li>Incremental Static Rendering.</li><li>Dynamic HTML Streaming.</li></ul><h3>Gatsby</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*Nz1BeOzyE7v-IDhH" /></figure><p><a href="https://www.gatsbyjs.com/">Gatsby</a> is another React-powered static site generator; it also supports deferred static generation (DSG) and server-side rendering (SSR), and boasts an excellent developer experience. As of version 4, Gatsby gives developers the power to do static site generation at a per-page level, allowing for more granular control of your project.</p><p><strong>Features:</strong></p><ul><li>Powered by <a href="https://react.dev/">React</a>.</li><li>Server-Side (SSR) &amp; Client-Side Rendering (CSR).</li><li>Deferred static generation (DSG).</li><li>Static site generation (SSG) on a per-page level.</li></ul><h3>Hugo</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*UeP4riTEe_dD12wN" /></figure><p><a href="https://gohugo.io/">Hugo</a> is a popular static site generator which is powered by <a href="https://go.dev/">Go</a>. Hugo boasts lightning-fast performance, claiming its build process takes less than 1 millisecond per page. 
Hugo also claims to be a “content strategist’s dream” with unlimited content types, taxonomies, menus and dynamic API-driven content, and it extends traditional markdown with Hugo shortcodes to add even more flexibility for writing and managing content.</p><p><strong>Features:</strong></p><ul><li>Powered by <a href="https://go.dev/">Go</a>.</li><li>Lightweight and lightning-fast performance.</li><li>Markdown support.</li><li>Pagination.</li><li>Taxonomies.</li><li>Internationalisation (i18n).</li><li>Integrated Google Analytics support.</li><li>Over 300 themes.</li></ul><h3>SvelteKit</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*FsQWaMV-SdUHbm8z" /></figure><p>Powered by Svelte, <a href="https://kit.svelte.dev/">SvelteKit</a> is similar to Next.JS in that its usefulness as a framework isn’t limited to static site generation: it can also be used for complex web applications, featuring server-side rendering, client-side routing and a number of adapters for different deployment targets (e.g., Node.js, Vercel and more) to make deployments simple.</p><p><strong>Features:</strong></p><ul><li>Powered by <a href="https://svelte.dev/">Svelte</a>.</li><li>Dynamic Server-Side Rendering.</li><li>CSRF protection.</li><li>Client-side routing.</li></ul><h3>Astro</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*xEQnTc-JDzqImeQk" /></figure><p><a href="https://astro.build/">Astro</a> is a UI-framework-agnostic, content-driven web framework, meaning it can be combined with a JavaScript UI framework of your choice, or you can leverage Astro components: template components that render plain HTML with no client-side JavaScript, improving performance for the end user.</p><p><strong>Features:</strong></p><ul><li>Super lightweight.</li><li>Excellent documentation and developer quality of life.</li><li>Beginner friendly.</li><li>Server-Side Rendering as default.</li><li>No JavaScript by default — improving performance and sparing the client’s device.</li><li>Lightning-fast performance.</li><li>JavaScript library / framework agnostic — write components your way.</li><li>Improved performance with the utilisation of <a href="https://docs.astro.build/en/concepts/islands/">component islands</a>.</li></ul><h3>Eleventy</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*SuN3hvhTEojUVNlZ" /></figure><p><a href="https://www.11ty.dev/">Eleventy</a> is a relative newcomer to the SSG space but is gaining popularity due to its excellent developer experience and impressive performance. Eleventy is built using vanilla JavaScript and Node.js, so it is considered fairly easy for beginners and keeps the project lightweight compared to its competitors.</p><p><strong>Features:</strong></p><ul><li>Lightweight.</li><li>Beginner friendly.</li><li>High performance.</li><li>Independent template languages.</li><li>JavaScript library / framework agnostic.</li></ul><h3>Choosing a Static Site Generator</h3><p>When it comes to selecting which Static Site Generator is right for your particular project, there are a few considerations to keep in mind.</p><ul><li><strong>Programming Language:</strong> Arguably the most important aspect to consider is the programming language the Static Site Generator is built on. Ideally you should choose an SSG that aligns with your existing skill-set.</li><li><strong>Required Features:</strong> The features required for your selected project could heavily influence your decision. 
For example, for a <strong>simple portfolio or blog site, Hugo or Eleventy would be more suitable</strong>, but if your project requires something more complex you might be better off selecting Astro, Gatsby or Next.JS.</li><li><strong>Developer Experience and Community Support:</strong> This depends on your level of experience, but <strong>if you’re somewhat of a beginner you might be better off selecting Astro or Eleventy</strong>, as both of these projects have excellent documentation, community support and developer quality of life. It’s also worth considering that projects lacking community support may have a shorter lifespan than their more popular counterparts.</li></ul><h3>Putting it all together</h3><p>In this article we’ve learned a brief history of the web. We learned about Static Site Generators, their use, and their advantages. We learned about the top frameworks for creating modern static websites, and lastly we highlighted the considerations to be made when selecting which Static Site Generator you should use.</p><p>Modern Static Site Generators offer unique advantages when applied to the right project. If you’re planning on making your project static and you’re still unsure of what to choose, or you’d rather leave the selection process to the experts, you’re in the right place. 
<a href="https://srcinnovations.com.au/contact-us"><strong>Reach out to us at SRC Innovations</strong></a>; we are more than happy to help.</p><p><em>Originally published at </em><a href="https://blog.srcinnovations.com.au/2024/07/01/save-money-on-hosting-enhance-security-and-improve-seo-with-static-sites/"><em>https://blog.srcinnovations.com.au</em></a><em> on June 30, 2024.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f0c998db7efc" width="1" height="1" alt=""><hr><p><a href="https://medium.com/src-innovations/save-money-on-hosting-enhance-security-and-improve-seo-with-static-sites-src-innovations-f0c998db7efc">Save money on hosting, enhance security and improve SEO with Static Sites | SRC Innovations</a> was originally published in <a href="https://medium.com/src-innovations">SRC Innovations</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Transforming Travel Safety with Flightcare Global — A Case Study]]></title>
            <link>https://medium.com/src-innovations/transforming-travel-safety-with-flightcare-global-a-case-study-8af8448244ef?source=rss----509983d0e19f---4</link>
            <guid isPermaLink="false">https://medium.com/p/8af8448244ef</guid>
            <category><![CDATA[agile-development]]></category>
            <category><![CDATA[case-study]]></category>
            <category><![CDATA[technology-innovation]]></category>
            <category><![CDATA[business-solutions]]></category>
            <category><![CDATA[scalable-solutions]]></category>
            <dc:creator><![CDATA[SRC Innovations]]></dc:creator>
            <pubDate>Mon, 06 May 2024 05:55:16 GMT</pubDate>
            <atom:updated>2024-05-06T05:55:32.807Z</atom:updated>
            <content:encoded><![CDATA[<h3>Transforming Travel Safety with Flightcare Global — A Case Study</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*WIAo3drBG8adqS_w" /></figure><p>As technology continues to revolutionise and disrupt many industries, aviation is in the early stages of some exciting and profound changes in areas such as efficiency, safety, and passenger experience. In-flight medical events can be challenging for airlines to handle efficiently, as traditional methods often lack real-time support and coordination between airline crews and medical professionals.</p><p>SRC provided technical expertise to support Flightcare Global’s development of a revolutionary technology platform offering aviation medical solutions — <em>Pre-Flight</em>, <em>In-Flight</em> &amp; <em>Crewcare. </em>The Flightcare Global technology platform provides a unique level of care and service to passengers, crew and airline operations.</p><p><strong>This case study illustrates how this was all made possible with the agile and collaborative approach of SRC experts working closely with Flightcare Global.</strong></p><h3>Overview</h3><ul><li><strong>Client</strong>: <a href="https://flightcareglobal.com/">Flightcare Global</a></li><li><strong>Industry</strong>: Aviation</li><li><strong>Key business functions</strong>:</li><li>Passenger medical support</li><li>Airline staff assistance</li><li>Medical case management</li><li><strong>Location</strong>: Global</li><li><strong>Goal or Objective</strong>: Addressing the gap in the market for real-time medical support during flights, opening doors to additional components that enhance medical assistance for both passengers and crew members at every stage of their travel.</li><li><strong>Services</strong>: An advanced platform developed by SRC and Flightcare Global that facilitates instant healthcare communication and management.</li></ul><h3>How We Made It Happen</h3><p>Throughout the engagement, we adopted a 
3-phase approach to minimise costs and increase agility and value.</p><ol><li><strong>Prototype Development</strong>: An initial phase focused on crafting a compelling prototype to secure investments and ensure project sustainability.</li><li><strong>Capability Expansion and Client Acquisition</strong>: With the prototype in place, phase 2 involved expanding system capabilities and acquiring clients, marking a significant shift towards market engagement.</li><li><strong>Ongoing Development Model</strong>: As we onboarded new airlines, we began an ongoing development phase which maintains a responsive model that adapts to evolving client and industry demands, thus ensuring the system’s relevance and efficacy.</li></ol><p>Our commitment to open and ongoing communication fostered an environment where ideas flowed freely and any concerns, changes in priority and new requirements were promptly addressed. <strong>Our flexible team was capable of meeting the client’s changes in vision, delivering on-time and under-budget solutions at each phase of the project, ensuring a seamless and efficient project progression.</strong></p><blockquote><em>I was already aware of SRC and the work they do, and after an initial conversation I felt comfortable they were the right partner. SRC felt like a safe pair of hands. 
Their breadth of experience, technical expertise and ability to work in an agile way drove our product’s success from the prototyping phase to the full-scale development.</em></blockquote><h4>- Micheal Monaghan, CTO at Flightcare Global</h4><p>Depending on the evolving phases and tasks, we adapted our methodology to meet the client’s needs, providing an agile team responsible for the delivery of the user experience, technical design, web and app development, leveraging modern technologies on the cloud.</p><blockquote><em>We developed a robust and scalable solution that can be adapted and enhanced across iOS, Android and web platforms to support their clients’ needs and evolving industry regulations. </em><strong><em>This is what makes Flightcare’s solution truly innovative</em></strong><em>.</em></blockquote><h4>- Vith Visagathilagar, General Manager at SRC</h4><p>At SRC, we believed so strongly in the Flightcare product that we invested equity in the business, reflecting our dedication to its success and growth.</p><h3>Key Success Criteria</h3><ul><li>Working with a startup business like Flightcare Global, it was crucial to adopt an agile and collaborative approach. This methodology facilitated adjustments and allowed us to accommodate the evolving needs of the client.</li><li>Through every phase of the project, challenges were identified and effectively addressed, leveraging SRC’s flexibility as a key asset. 
We responded to changing priorities and budget constraints by strategically adjusting our team size, broadening our skill set across technologies and ensuring that we continued to meet project deadlines effectively.</li></ul><h3>Conclusions</h3><p><strong>Flightcare Solution’s transformative impact on the aviation industry showcases the power of technology-driven solutions in enhancing safety and operational efficiency.</strong> By connecting airline crews with medical experts, passenger and personnel safety and well-being have been elevated at every stage of their journey.</p><h4>Michael Monaghan added that:</h4><blockquote><em>The system performed well and achieved the scalability goals we had for the platform. Our customers have been delighted with the products and the intuitive interfaces across our service channels. SRC have been invaluable in realising our vision for the Pre-Flight, In-Flight &amp; Crewcare products.</em></blockquote><p>SRC’s responsiveness and commitment enabled the delivery of a high-value product that filled a critical market gap and can be leveraged with numerous clients.</p><blockquote><em>I knew SRC was able to deliver, and I had faith they would come through at every stage</em><strong><em>. 
We’d like to continue the partnership</em></strong><em> to keep adding features and deliver to client’s expectations.</em></blockquote><blockquote><strong><em>I would definitely recommend working with them </em></strong><em>for their technical skills, flexibility, ability to deal with the demands of a startup and deliver a solution that scales up with enterprise customers.</em></blockquote><h4>- Micheal Monaghan, CTO at Flightcare Global</h4><h3>Contact us</h3><p><a href="https://srcinnovations.com.au/contact-us">Reach out</a> to SRC Innovations for bespoke technology solutions that can transform your business.</p><p>Our team of experts is dedicated to working closely with you to develop and integrate innovative solutions tailored to your unique needs to help you gain the competitive edge required to stand out in your industry. Browse <a href="https://srcinnovations.com.au/">our site</a> to read more about us or reach out on <a href="mailto:hello@srcinnovations.com.au">hello@srcinnovations.com.au</a> to discover how we can assist you in achieving your strategic goals.</p><p>SRC Innovations: Flexible. Focused. Effective.</p><p><em>Originally published at </em><a href="https://blog.srcinnovations.com.au/2024/05/06/transforming-travel-safety-with-flightcare-global-a-case-study/"><em>https://blog.srcinnovations.com.au</em></a><em> on May 6, 2024.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8af8448244ef" width="1" height="1" alt=""><hr><p><a href="https://medium.com/src-innovations/transforming-travel-safety-with-flightcare-global-a-case-study-8af8448244ef">Transforming Travel Safety with Flightcare Global — A Case Study</a> was originally published in <a href="https://medium.com/src-innovations">SRC Innovations</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Single Sign On (SSO) with AWS Cognito’s Hosted UI]]></title>
            <link>https://medium.com/src-innovations/single-sign-on-sso-with-aws-cognitos-hosted-ui-2fec46f30ea5?source=rss----509983d0e19f---4</link>
            <guid isPermaLink="false">https://medium.com/p/2fec46f30ea5</guid>
            <category><![CDATA[aws-cognito]]></category>
            <category><![CDATA[ux]]></category>
            <category><![CDATA[sso]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <category><![CDATA[security]]></category>
            <dc:creator><![CDATA[SRC Innovations]]></dc:creator>
            <pubDate>Wed, 17 Apr 2024 00:45:57 GMT</pubDate>
            <atom:updated>2024-06-30T23:26:03.631Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/581/1*VonjXecTQ2Ze1_bgTG_SjA.png" /></figure><p>Single sign-on (SSO) is a centralised authentication process that allows a user to access multiple applications or services with one set of login credentials (such as username and password).</p><p>AWS Cognito <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html">User Pools</a> is one such service that facilitates user authentication, authorisation and user management for web and/or mobile applications, and can integrate with Federated Identities to support Single Sign-On capabilities.</p><p>If you are trying to get started with AWS Cognito, then keep on reading!</p><p>This article explains how to enable SSO in the AWS Console with AWS Cognito’s Hosted UI and a SAML Identity Provider.</p><p>Wondering what the default AWS Cognito Hosted UI looks like?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/740/0*ZSKKMYPPGxt4iV-R" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/740/0*YDzudg3YYWzSv8Po" /></figure><p>Sign-in flow:</p><p>The diagram below shows a standard login flow using the AWS Cognito Hosted UI which has been configured with a SAML Identity Provider.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*3wAoJv6HiLp5ykfw" /></figure><p><strong>The Too Long; Didn’t Read (TLDR) version:</strong></p><ol><li>Have an Identity Provider (IdP) SAML2 file for SSO. 
Examples of IdPs are Azure, Google, Facebook and Apple.</li><li>Through the AWS Console, navigate to your AWS Cognito Userpool and add an Identity Provider via the Sign-in experience tab under the heading <strong>Federated identity provider sign-in</strong>.</li><li>In the App Integration tab, under the <strong>Domain</strong> Panel, you either create a Cognito domain or a custom domain.</li><li>Still in the App Integration tab, under the <strong>App clients and analytics</strong> Panel, select your App client. Here you will update the Hosted UI configuration and, optionally, the Hosted UI customisation.</li><li>Once these are configured for your AWS Cognito Userpool, configure your Identity Provider and provide it with an assertion consumer endpoint.</li></ol><ul><li>e.g.: https://&lt;Your user pool domain&gt;/saml2/idpresponse</li><li>With an Amazon Cognito domain: https://&lt;YourDomainPrefix&gt;.auth.&lt;region&gt;.amazoncognito.com/saml2/idpresponse</li><li>With a custom domain: https://&lt;Your custom domain&gt;/saml2/idpresponse</li><li>For some SAML identity providers, you also need to provide the service provider (SP) urn / Audience URI / SP Entity ID, e.g., urn:amazon:cognito:sp:&lt;yourUserPoolID&gt;</li></ul><p>6. Once the above are done, you can access the hosted UI via the URL: https://&lt;cognito-or-custom-domain&gt;/login?response_type=&lt;response-type&gt;&amp;client_id=&lt;userpool-app-client-id&gt;&amp;redirect_uri=&lt;redirect_uri&gt;</p><ul><li>response_type can be either code or token</li></ul><p>7. 
Your application will have to be modified to handle the trigger logic of the Hosted UI URL and the redirect URL behaviour, in order to use the tokens provided by the AWS Cognito Hosted UI flow.</p><p><strong>The “Ok, I need a bit more info &amp; pictures” version:</strong></p><p>Firstly, have an Identity Provider (IdP) SAML2 file for SSO.</p><ul><li>Examples of IdPs are Azure, Google and Facebook.</li><li>This could be a metadata document (typically an XML file) or a metadata document endpoint URL.</li></ul><p>Then, through the AWS Console, navigate to your AWS Cognito Userpool and add an Identity Provider via the Sign-in experience tab under the heading <strong>Federated identity provider sign-in</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*xjj-zKuj4B1NtOT_" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/440/0*3bAhiXeZR4MZKBX4" /></figure><p>When creating an identity provider, we have to consider what we want to display as the Identity Provider Name and how the hosted UI will dynamically select the appropriate identity provider (if there is more than one identity provider).</p><p>To allow the Hosted UI logic to dynamically select the appropriate identity provider when inputting a corporate email, we need to provide a Provider name as the corporate domain, e.g., corporation.com. Identifiers also need to be configured with the corporate domain, as this will help Cognito determine which identity provider to use when the Hosted UI asks for a corporate email.<br>For more information about this, have a further read <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-managing-saml-idp-naming.html">HERE</a>.</p><p>A sign-out flow should also be configured.</p><p>One last thing that needs to be configured properly before creating the provider is the mapping of attributes between your SAML provider and your user pool. 
You will need this mapping to provide AWS Cognito with the appropriate user pool attributes to successfully authenticate.</p><p>Once the identity provider has been created, in the App Integration tab, under the <strong>Domain</strong> Panel, you either create a Cognito domain or a custom domain.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*Ju_O8KgyeuryqWQK" /></figure><p>Things to consider when deciding between a <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-assign-domain-prefix.html">Cognito domain</a> and a <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-add-custom-domain.html">custom domain</a>:</p><ol><li><strong>Branding and User Experience</strong>: Custom domains allow you to maintain brand consistency throughout the authentication process. Instead of displaying AWS Cognito URLs to users, you can use your own domain, which enhances trust and provides a seamless user experience.</li><li><strong>Security and Trust</strong>: Utilising a custom domain can increase trust among users. When users see a familiar domain during the authentication process, they are more likely to trust the application.</li><li><strong>Ease of Use</strong>: Setting up a custom domain in AWS Cognito is straightforward and well-documented.</li></ol><p>Still in the App Integration tab, under the <strong>App clients and analytics</strong> Panel, select your App client. 
Here you will update the following Hosted UI configuration and Hosted UI customisation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*2uBACoOCFqaETj03" /></figure><p>In the Hosted UI configuration shown above, you will have to update Allowed callback URLs and Allowed sign-out URLs, and add the newly created identity provider in the Identity Providers section.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*9RNN_k-HURSJ7JXh" /></figure><p>Hosted UI customisation allows you to modify the CSS of the hosted UI. More information <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-app-ui-customization.html">HERE</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*H7OtyGia-qZESIIK" /></figure><p>Once these are configured for your AWS Cognito Userpool, configure your SAML Identity Provider and provide it with an assertion consumer endpoint. More information can be found <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-configuring-federation-with-saml-2-0-idp.html">HERE</a>. 
The possible endpoints to configure at your SAML Identity Provider are shown below.</p><pre>https://&lt;Your user pool domain&gt;/saml2/idpresponse<br><br>With an Amazon Cognito domain:<br>https://&lt;YourDomainPrefix&gt;.auth.&lt;region&gt;.amazoncognito.com/saml2/idpresponse<br><br>With a custom domain:<br>https://&lt;Your custom domain&gt;/saml2/idpresponse<br><br>For some SAML identity providers, you also need to provide the service provider (SP) urn / Audience URI / SP Entity ID.<br>e.g., urn:amazon:cognito:sp:&lt;yourUserPoolID&gt;</pre><p>Once the above are done, you can access the hosted UI via the URL:</p><p>https://&lt;cognito-or-custom-domain&gt;/login?response_type=&lt;response-type&gt;&amp;client_id=&lt;userpool-app-client-id&gt;&amp;redirect_uri=&lt;redirect_uri&gt;</p><p>response_type can be either code or token.</p><p>The response_type value will depend on how your application integrates AWS Cognito&#39;s Hosted UI.</p><p>When code is used as the response_type, your application must perform an extra step: response_type code returns a code as a parameter that&#39;s appended to your redirect URL, and your application then needs to exchange this code for the identity/access tokens from Cognito (see the <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/authorization-endpoint.html">authorisation endpoint</a> documentation).</p><p>With token, the identity and access tokens are appended directly as parameters to the redirect URL.</p><p>With this, your AWS Cognito user pool is configured to support SSO.</p><p><strong>Sample Implementation</strong></p><p>As mentioned above, how you integrate the Hosted UI into your application determines whether you use a response_type value of code or token.</p><p>Below is an example of how you can use AWS Amplify to integrate the AWS Cognito Hosted UI into your application. These examples use JavaScript as the programming language with React as the framework of a web application. 
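<p>If your application uses the code response type, the extra exchange step is a POST to the user pool&#39;s /oauth2/token endpoint. The sketch below shows roughly what that request looks like; buildTokenRequest is a hypothetical helper, and the domain, client ID, code and redirect URI are placeholder values, not real ones:</p>

```javascript
// Sketch: build the request that exchanges a Hosted UI "code" for tokens.
// All values below are placeholders -- substitute your own user pool values.
const buildTokenRequest = ({ domain, clientId, code, redirectUri }) => ({
  url: `https://${domain}/oauth2/token`,
  options: {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      client_id: clientId,
      code,
      redirect_uri: redirectUri,
    }).toString(),
  },
});

const { url, options } = buildTokenRequest({
  domain: "myapp.auth.ap-southeast-2.amazoncognito.com",
  clientId: "example-client-id",
  code: "example-auth-code",
  redirectUri: "https://localhost:3000/callback",
});
```

<p>POSTing url with options via fetch should, on success, return a JSON body containing the id_token, access_token and (for the code flow) a refresh_token.</p>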
This example will use a response_type value of token.</p><p>Install aws-amplify:</p><pre>npm i aws-amplify</pre><p>Once installed, we have to configure aws-amplify.</p><p>The file below sets up the configuration required by aws-amplify.</p><pre>// authConfig.js file which helps configure AWS Amplify<br>const awsConfig = {<br>  Auth: {<br>    // REQUIRED - Amazon Cognito Region<br>    region: &lt;region&gt;,<br>    // OPTIONAL - Amazon Cognito Federated Identity Pool Region<br>    // Required only if it&#39;s different from Amazon Cognito Region<br>    identityPoolRegion: &lt;AWS_REGION&gt;,<br>    // OPTIONAL - Amazon Cognito User Pool ID<br>    userPoolId: &lt;USER_POOL_ID&gt;,<br>    // OPTIONAL - Amazon Cognito Web Client ID (26-char alphanumeric string)<br>    userPoolWebClientId: &lt;CLIENT_ID&gt;,<br>    // OPTIONAL - Hosted UI configuration<br>    oauth: {<br>      domain: &lt;USER_POOL_DOMAIN&gt;,<br>      scope: [<br>        &quot;phone&quot;,<br>        &quot;email&quot;,<br>        &quot;profile&quot;,<br>        &quot;openid&quot;,<br>        &quot;aws.cognito.signin.user.admin&quot;,<br>      ],<br>      redirectSignIn: &lt;REDIRECT_SIGNIN_URL&gt;,<br>      redirectSignOut: &lt;REDIRECT_SIGNOUT_URL&gt;,<br>      responseType: &quot;token&quot;, // note that a REFRESH token will only be generated when the responseType is code<br>    },<br>  },<br>};<br>export default awsConfig;</pre><p>Now that you have the configuration for aws-amplify, the code below shows you how to use it. 
On your main app file, import aws-amplify and configure it.</p><pre>// App.jsx<br>import { Amplify } from &quot;aws-amplify&quot;;<br>import awsConfig from &quot;./utils/authConfig&quot;;<br>Amplify.configure(awsConfig);</pre><p>The following function, signInWithSSO, will trigger the Hosted UI flow when called.</p><pre>// src/api/ssoCognito.js<br>import { Auth } from &quot;aws-amplify&quot;;<br><br>// Auth.federatedSignIn() already returns a promise,<br>// so there is no need to wrap it in a new Promise.<br>export const signInWithSSO = () =&gt; Auth.federatedSignIn();</pre><p>The following code snippet shows a sample usage of the signInWithSSO function in React.</p><pre>// src/pages/login.jsx<br>import { useState } from &quot;react&quot;;<br>import { signInWithSSO } from &quot;~/api/ssoCognito&quot;;<br>import { Button } from &quot;~/components&quot;;<br><br>export const Login = () =&gt; {<br>  const [error, setError] = useState(&quot;&quot;);<br><br>  const signInSSO = async () =&gt; {<br>    try {<br>      setError(&quot;&quot;);<br>      await signInWithSSO();<br>    } catch (err) {<br>      setError(err.message);<br>    }<br>  };<br><br>  return (<br>    &lt;div&gt;<br>      &lt;p&gt;Sample Text&lt;/p&gt;<br>      &lt;Button onClick={signInSSO}&gt;Sign in&lt;/Button&gt;<br>      &lt;p&gt;{error}&amp;nbsp;&lt;/p&gt;<br>    &lt;/div&gt;<br>  );<br>};</pre><p>When the Hosted UI flow is complete and redirects the user, you will need a way to handle the Cognito session. The following code snippet shows an example of how to handle Cognito sessions via useEffect.</p><pre>// App.jsx<br>import { Amplify, Auth } from &quot;aws-amplify&quot;;<br>import { useEffect } from &quot;react&quot;;<br>import { useNavigate 
} from &quot;react-router-dom&quot;;<br>import awsConfig from &quot;./utils/authConfig&quot;;<br><br>Amplify.configure(awsConfig);<br><br>const App = () =&gt; {<br>  const navigate = useNavigate();<br><br>  useEffect(() =&gt; {<br>    async function handleSession() {<br>      // Handle the Cognito session<br>      try {<br>        await Auth.currentSession();<br>        // You can also access user information like this:<br>        await Auth.currentAuthenticatedUser();<br>        // Redirect to the appropriate page if the session is valid.<br>        navigate(&quot;/sample&quot;);<br>      } catch (error) {<br>        console.warn(&quot;No valid session&quot;);<br>      }<br>    }<br><br>    handleSession();<br>  }, []);<br><br>  return &lt;div&gt;Sample Code&lt;/div&gt;;<br>};<br><br>export default App;</pre><p>When the Cognito session is valid, it will redirect the user to the appropriate page. From here, the application should continue as normal.</p><p><strong>Advantages/Disadvantages</strong></p><p><strong>Advantages:</strong></p><ul><li>Easy Integration: a straightforward way to integrate user authentication without having to build a custom authentication system from scratch.</li><li>Support for Social Identity Providers: AWS Cognito supports integration with social identity providers.</li><li>Scalability: as a managed service, it scales with your user base.</li><li>Customisation: allows customisation of the look and feel of the UI to some extent, e.g. colours, logos and some aspects of the user interface.</li></ul><p><strong>Disadvantages:</strong></p><ul><li>Limited customisation: customisation may not meet the specific design requirements of all applications.</li><li>Vendor Lock-in: you are tying your application to the AWS ecosystem. 
Migrating away from AWS Cognito to another authentication solution may require significant effort and resources.</li><li>Learning Curve: integrating and configuring AWS Cognito, including the Hosted UI, may require a learning curve, especially for developers who are new to AWS services or identity management concepts.</li><li>Reliance on AWS Services: your application’s authentication functionality relies on the reliability and availability of AWS infrastructure.</li></ul><p><strong>Last Words</strong></p><p>I hope this post has helped you figure out how to configure AWS Cognito to use the Hosted UI to enable Single Sign On (SSO) capabilities, and has given you an idea of how you can integrate it into your application.</p><p>This is a basic way to do it. If you need help, or want to discuss the right solution for you, contact us <a href="https://srcinnovations.com.au/contact-us">HERE</a>.</p><p>References:</p><ul><li>Cognito App Client Configuration: <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-configuring-app-integration.html">https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-configuring-app-integration.html</a></li><li>Viewing Cognito’s Hosted UI: <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-app-integration.html#cognito-user-pools-app-integration-view-hosted-ui">https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-app-integration.html#cognito-user-pools-app-integration-view-hosted-ui</a></li><li>List of Identity Providers: <a href="https://docs.aws.amazon.com/cognito/latest/developerguide/external-identity-providers.html">https://docs.aws.amazon.com/cognito/latest/developerguide/external-identity-providers.html</a></li><li>Cognito with SAML: <a 
href="https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-saml-idp.html">https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-saml-idp.html</a></li></ul><p><em>Originally published at </em><a href="https://blog.srcinnovations.com.au/2024/04/03/single-sign-on-sso-with-aws-cognitos-hosted-ui/"><em>https://blog.srcinnovations.com.au</em></a><em> on April 3, 2024.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2fec46f30ea5" width="1" height="1" alt=""><hr><p><a href="https://medium.com/src-innovations/single-sign-on-sso-with-aws-cognitos-hosted-ui-2fec46f30ea5">Single Sign On (SSO) with AWS Cognito’s Hosted UI</a> was originally published in <a href="https://medium.com/src-innovations">SRC Innovations</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Reactivity in Vue 3]]></title>
            <link>https://medium.com/src-innovations/reactivity-in-vue-3-9df57af36407?source=rss----509983d0e19f---4</link>
            <guid isPermaLink="false">https://medium.com/p/9df57af36407</guid>
            <category><![CDATA[option-api]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[composition-api]]></category>
            <category><![CDATA[reactivity]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[SRC Innovations]]></dc:creator>
            <pubDate>Mon, 26 Feb 2024 01:24:39 GMT</pubDate>
            <atom:updated>2024-04-03T09:47:59.782Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*96YyhS8Euhr064mKyM9Rxw.png" /></figure><p>Reactivity is a programming technique that means reacting to changes in a declarative manner, and it has transformed the way modern web and mobile applications are built.</p><p>As with its competitors React and Angular, the reactivity system is one of the most significant features of Vue.js. As the name suggests, at a very high level, it is about reacting to changes in a component’s data/state and updating views dynamically. When any part of the component’s state changes, the view is re-rendered with the latest changes.</p><p>This blog aims to give you a high-level overview of the reactivity system in Vue 3 using the Options and Composition APIs.</p><h4>Glance at Options and Composition API</h4><p>There has been a lot of discussion about the Options API vs the Composition API since the Composition API was introduced as part of Vue 3. So before we start with the reactivity system, let’s understand the Options and Composition APIs.</p><p><strong>Options API</strong></p><p>The Options API is the traditional way of declaring components in Vue.js. It consists of building blocks such as data, methods, computed properties, props etc. The Options API is simple and easy to understand and is a good choice for beginners. However, it can lead to spaghetti code and fragmentation as the codebase and its complexity grow. It also does not promote reusability, unlike the Composition API.</p><p>Given below is an example of the structure of the Options API.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*v3qijaGSz2e1IWaw" /></figure><p><strong>Composition API</strong></p><p>The Composition API is a newer way of declaring Vue components, introduced in Vue 3 to address the shortcomings of the Options API. 
In the Composition API, everything from component data to methods to computed properties is declared in the setup hook. The main advantage of the Composition API is that you can abstract functionality out of components and reuse it across different components using a concept called composables. A composable is a function that leverages the Composition API and encapsulates stateful logic, which can then be re-used in different Vue components. Here is the official documentation on composables: <a href="https://vuejs.org/guide/reusability/composables">https://vuejs.org/guide/reusability/composables</a>.</p><p>Composables are mainly used for renderless logic (logic without a template), ranging from functional utilities to API handlers that fetch data, process it, and return the result and/or any errors.</p><p>Given below is an example of the structure of the Composition API.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*DN6wPAs1la7edK3G" /></figure><p><strong>Here is a quick comparison of the two APIs</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MJVnqFcw_Hqbx2SdkAZOMQ.png" /></figure><h4>Reactivity using Options API</h4><p>In the Options API, all the properties declared in the data function are reactive by default. Vue calls the data function while creating a component instance and makes all the properties reactive. However, unlike Vue 2, the Vue 3 reactivity system uses JavaScript proxies under the hood to create the reactive state of the component.</p><p>When declaring reactive properties on a component, make sure to declare them in the data function, as properties are made reactive only when the instance is first created. 
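<p>As a rough illustration of the proxy mechanism mentioned above, reactive state can be sketched in plain JavaScript with a Proxy that re-runs a render callback on every write. This is a simplified sketch for intuition only, not Vue’s actual implementation:</p>

```javascript
// Simplified sketch of proxy-based reactivity (NOT Vue's real code):
// wrap component data in a Proxy and re-render whenever a property is set.
const makeReactive = (data, render) =>
  new Proxy(data, {
    set(target, key, value) {
      target[key] = value; // apply the mutation
      render(target);      // then re-render with the new state
      return true;
    },
  });

let rendered = "";
const state = makeReactive(
  { title: "Default title" },
  (s) => { rendered = `<h1>${s.title}</h1>`; }
);

state.title = "Title A"; // the write goes through the set trap
// rendered is now "<h1>Title A</h1>"
```

<p>Vue’s real system additionally tracks which effects read which properties and re-runs only those, but the write-triggers-update idea is the same.</p>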
You can still add instance properties to the component outside of the data function, but those properties won’t be able to trigger reactive updates.</p><p>For example, let’s say we have a component <strong>ComponentA</strong> that uses the Options API, with three instance properties declared, and we are rendering the title.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*HmV_cc3my7V6JDcr" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*RPPn05Zzlfuftr6P" /></figure><p>Initially, this will display <strong>Default title</strong>. When a user clicks the button, it updates the instance property, as you can see in the above screenshot, and the view re-renders dynamically. The component now renders <strong>Title A</strong> instead.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*8Y7ylzMXXSSkd2Xm" /></figure><p>Vue also supports deep reactivity, which means that mutating nested objects or arrays also triggers reactive updates.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*zwnXY6Z-MMZGpwjH" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*TmUJXFY10Ecnc93m" /></figure><p>Now, when a user clicks the <strong>Update the suburb</strong> button, it should update the DOM with <strong>Suburb B</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*kNROC82Bf2b9rupT" /></figure><p>This is a high-level overview of reactivity in Vue 3 using the Options API. To explore more, please refer to this documentation: <a href="https://vuejs.org/guide/essentials/reactivity-fundamentals.html">https://vuejs.org/guide/essentials/reactivity-fundamentals.html</a> (and make sure to select <strong>Options API</strong> from the toggle).</p><h4>Reactivity using Composition API</h4><p>You can declare reactive state in two ways: the ref() and reactive() functions. 
The following section explains how to declare reactive values in both ways, along with their limitations.</p><p><strong>Ref()</strong> <strong>function</strong></p><p>You can declare reactive variables using the ref() function, which takes a value as an argument and returns a Ref object that wraps the value in a .value property. As you can see in the example given below, the variable count is a reactive variable, and you can return it from setup so that the template can use it for rendering.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*KP0KjOlkYou4kEeG" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*Zz26TLzT_XSf98M4" /></figure><p>However, as you may have noticed, there is no need to access the variable count via .value in template syntax, as Vue unwraps the value while rendering.</p><p>You can also update the reactive variable: to mutate it, declare a function as part of the setup function and return it, just like a reactive variable.</p><p>As you can see in the example below, the increment function is defined in the top-level setup function and is returned so that the template can access it for event handling etc.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*5vbXjVFbNn8W7Iuj" /></figure><p>Under the hood, a dependency-tracking reactivity system makes this work: it tracks every reactive variable during the first render, and when any of them mutates, it triggers a re-render of the part of the component that tracks it.</p><p>The ref() function also supports deep reactivity: if any nested property of a complex structure, such as an array of objects, changes, it triggers re-rendering of the DOM. However, you can opt out of deep reactivity using <strong>Shallow Ref</strong>, which only tracks .value access. 
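<p>The .value wrapping and change-notification behaviour described above can be mimicked in a few lines of plain JavaScript. Again, this is an illustrative sketch: the subscribe method is an invention for the demo, not part of Vue’s ref() API:</p>

```javascript
// Illustrative sketch of a ref(): a getter/setter pair around .value
// that notifies subscribers (e.g. a render effect) on every change.
const ref = (initial) => {
  let value = initial;
  const subscribers = new Set();
  return {
    get value() { return value; },
    set value(next) {
      value = next;
      subscribers.forEach((fn) => fn(next)); // re-run tracking effects
    },
    subscribe(fn) { subscribers.add(fn); },
  };
};

const count = ref(0);
let view = "";
count.subscribe((n) => { view = `Count is ${n}`; });

count.value++; // the mutation goes through the setter
// view is now "Count is 1"
```

<p>This is why .value is needed in script code: the getter/setter pair is what gives Vue the hook to track reads and react to writes.</p>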
More on shallow reactivity can be found here: <a href="https://vuejs.org/api/reactivity-advanced#shallowref">https://vuejs.org/api/reactivity-advanced#shallowref</a>.</p><p><strong>Reactive()</strong> <strong>function</strong></p><p>You can also define reactive variables using the reactive() function, which accepts an object as an argument; unlike ref(), which returns a Ref object, it returns a proxy of the object that is itself reactive. Hence, when accessing the variable in functions or templates, you do not need to use .value.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*jlSVrOhJ85NouAWO" /></figure><p>Because reactive() uses JavaScript proxies, it returns the same proxy when you call reactive() on the same object, or on an existing proxy.</p><p>However, there are a few limitations to using the reactive() function:</p><ol><li>You can use reactive() only with object types such as objects, arrays, maps and sets, but not with primitive types such as numbers and strings.</li><li>Replacing an existing reactive object does not work, as the replacement destroys the reference to the previous reactive object.</li><li>It is not destructure-friendly: destructuring primitive values of the object into variables, or passing them to a function, eliminates the reactive linkage.</li></ol><p>This is a high-level overview of reactivity in Vue 3 using the Composition API. To explore more, please refer to this documentation: <a href="https://vuejs.org/guide/essentials/reactivity-fundamentals.html">https://vuejs.org/guide/essentials/reactivity-fundamentals.html</a> (and make sure to select Composition API from the toggle).</p><h4>Seamlessly Transition with SRC Innovations</h4><p>Whether you’re looking to migrate from the Options API to the Composition API or from Vue 2 to Vue 3, remember that SRC Innovations is here to assist you at any time. 
We have skilled and experienced developers who are more than willing to assist you with these tasks.</p><p>Please don’t hesitate to contact us <a href="https://srcinnovations.com.au/contact-us">here</a>.</p><h4>References</h4><ul><li>Reactivity Fundamentals: <a href="https://vuejs.org/guide/essentials/reactivity-fundamentals.html">https://vuejs.org/guide/essentials/reactivity-fundamentals.html</a></li><li>Reactivity in Depth: <a href="https://vuejs.org/guide/extras/reactivity-in-depth.html">https://vuejs.org/guide/extras/reactivity-in-depth.html</a></li><li>Composition API FAQ: <a href="https://vuejs.org/guide/extras/composition-api-faq.html">https://vuejs.org/guide/extras/composition-api-faq.html</a></li><li>BBEdit: <a href="https://apps.apple.com/au/app/bbedit/id404009241?mt=12">https://apps.apple.com/au/app/bbedit/id404009241?mt=12</a></li></ul><p><em>Originally published at </em><a href="https://blog.srcinnovations.com.au/2024/02/26/reactivity-in-vue-3/"><em>https://blog.srcinnovations.com.au</em></a><em> on February 26, 2024.</em></p><p>At SRC, we are at the forefront of empowering businesses of all sizes to gain a competitive edge in their industries. Visit <a href="https://srcinnovations.com.au/">our site</a> to learn more about us and <a href="https://srcinnovations.com.au/contact-us">get in touch</a> to discover how to transform your business through innovative and tailored technology solutions.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9df57af36407" width="1" height="1" alt=""><hr><p><a href="https://medium.com/src-innovations/reactivity-in-vue-3-9df57af36407">Reactivity in Vue 3</a> was originally published in <a href="https://medium.com/src-innovations">SRC Innovations</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Entering 2024 with thoughts regarding Large Language Models & their limitations | SRC Innovations]]></title>
            <link>https://medium.com/src-innovations/entering-2024-with-thoughts-regarding-large-language-models-their-limitations-src-innovations-1d342c920839?source=rss----509983d0e19f---4</link>
            <guid isPermaLink="false">https://medium.com/p/1d342c920839</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[rnn]]></category>
            <category><![CDATA[business-process]]></category>
            <category><![CDATA[development]]></category>
            <dc:creator><![CDATA[SRC Innovations]]></dc:creator>
            <pubDate>Sun, 21 Jan 2024 23:47:21 GMT</pubDate>
            <atom:updated>2024-01-17T22:19:18.827Z</atom:updated>
<content:encoded><![CDATA[<h3>Entering 2024 with thoughts regarding Large Language Models &amp; their limitations</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*vi_c_ZdYVmF7evEV" /></figure><p>We’ve had people suggest several things about Large Language Models (LLMs) that we felt were worthwhile touching on as we begin a new year, especially comments like these:</p><p><strong>“LLMs are the way forward!”</strong></p><p>and</p><p><strong>“Who cares anymore about the other text based ML architectures?!”</strong></p><h3>LLMs are actually really complex</h3><p>You might have heard similar statements too, implying that LLMs are all that businesses should now care about.</p><p>Proponents of those styles of statements tend to give examples like:</p><ul><li>they can write code better than some of my junior developers!</li><li>they can summarise big complex documents SO easily!</li><li>they have replaced my call centre online chat team and none of my customers have realised!</li></ul><p>Whilst LLMs are without a doubt pretty damned cool, and some of the above reasons ARE valid — and at least one feels mildly unethical — there are also multiple reasons why they aren’t the be-all and the end-all in ML systems.</p><p>LLMs are big &amp; complex. Whilst it is hard to find verified data about how much has gone into OpenAI’s ChatGPT models, the general estimates and sourced data tend to be:</p><ul><li>10–45TB of data</li><li>140+ billion parameters</li><li>Enough parallel processing for training to be equivalent to 355 years of compute time</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*ps9OXwW3iksDp7IZ" /></figure><p>This has a whole bunch of implications. It immediately becomes something that only the bigger companies with deep, deep pockets can easily train for their own custom purposes. 
You COULD take one of the existing LLMs and — with its owner’s permission — additionally train it on your domain specific knowledge, but that’s still not going to be <strong><em>your</em></strong> LLM.</p><p>Google’s parent company has also gone on record to say that using an LLM to replace their current searches could increase costs by 10 times. Given they earn $60b a year and are worried about the increased expenditure, you can see why it’s something only the biggest enterprises can afford.</p><p>Those 140 billion parameters also require a lot more memory. That training data also requires a lot more storage and incurs more network costs.</p><p>As the complexity of ML models increases, so do the costs. Quite simply, not everybody can afford to build their own, and “renting” one isn’t always appropriate, nor realistic.</p><h3>LLMs aren’t infallible</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*rZhZv2ZDiJ4gdfdL" /></figure><p>The whole hallucination thing has had a lot of coverage already so I’m not going to go into detail about it, but the fundamental thing is that you can’t trust ALL of the information you get from an LLM; therefore, it’s still not a 100% viable replacement for some things. And if you’re spending that much money on building your own… you’d probably want something a lot more infallible. Or if you’re using another company’s LLM, you probably already had some reservations &amp; carefully examined (or plan to examine) its limitations — just like you’d do for all SaaS. Right?</p><h3>Those copyright issues…</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*xHTRsGTSNTNdXayO" /></figure><p>As mentioned above, OpenAI’s GPT models were trained on an estimated 45TB of data that OpenAI got from various places. 
Several major content generation companies, like the BBC and the New York Times, have alleged that part of the data was from their copyrighted content, and are pushing for an overhaul of the legal landscape related to the use of the copyrighted content that companies like OpenAI are using to generate revenue.</p><p>This makes the use of OpenAI’s stuff in a real production environment a minor item of concern… If the LLM that you are relying on suddenly gets shuttered/suspended/crippled due to legal issues, what is the impact to you?</p><p>There <strong><em>are </em></strong>several companies attempting to create LLMs that have a clearer chain of ownership with regard to training data, but they have yet to succeed to the level that OpenAI’s GPT models have. It’s also arguable whether they can reach the same level of capabilities when they have less data to train on.</p><p>That said, OpenAI and other LLM owners are already working on ways to better separate copyrighted content from their training data, so this is definitely a space to watch.</p><h3>The Alternatives</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*AIopWDupQdpy4drC" /></figure><p>There are alternatives to LLMs that avoid some of the above problems. One of the alternatives we’re using — for various reasons — within SRC Innovations is Recurrent Neural Networks (RNNs) utilising Long Short-Term Memory (LSTM) models. These are well known to generate short strings of text based on data, and have also been used in recent years to build chatbots — albeit with a much more limited data set, and not anywhere near as chatty. But they become well versed in their domain.</p><p>We like these because we can train them much faster and more effectively, with a set of data whose provenance we can be very clear about, and sometimes even on data that we can generate. 
Much more feasible for a medium-sized company like us!</p><p>I’ve provided below a table that shows some of the more common text-related deep learning models, along with order-of-magnitude estimates of the amount of data required for all the listed text models.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PhjQ_2HXDX5ZrN7LE2THzQ.png" /></figure><p>And don’t forget… Every increase in data leads to a corresponding increase in training time!</p><h3>Or even better: Combine them!</h3><p>There is also no reason why LLMs couldn’t be combined with other ML systems. Or why you couldn’t even mix LLMs with algorithmic systems.</p><p>For example, you could have a text-based CNN that has been trained on your internal document corpus, categorises documents by information architecture &amp; classifies them based on your security groups. You could then let an LLM present the results to a user, and accept further input from the user to finalise the discussion!</p><p>Or use an RNN to process a series of time-based metrics regarding visitor traffic to your systems, and then an LLM to provide external contextual information regarding events that may be causing the trends seen in the visitor traffic.</p><h3>Regarding how AI might replace human jobs fully…</h3><p>I’m going to go on the record now and say that this isn’t going to happen in 2024. <em>[Editor’s note: There’s some dissent within SRC Innovations about whether this COULD occur in 2024, but this is JT’s blog post, so I </em><strong><em>guess</em></strong><em> I could let him make this statement… 😉]</em> It might happen many, many years in the future, but that won’t be in 2024.</p><p>What <strong><em>will</em></strong> happen is that people might get replaced by other people that are better at using AI to help them perform their job.</p><p>Think of AI as another tool in your toolkit. If an experienced plumber is looking to hire another plumber, they’re gonna pick the one who is more handy with a wrench. 
Same story. In 2024 — and the years to come — businesses &amp; people are gonna accomplish more if they properly utilise the tool that AI is, and competitors that don’t are gonna fall by the wayside.</p><h4><strong>Sidebar: An example of how LLMs are not yet ready to replace people</strong></h4><p>Here’s an article where products on Amazon are turning up with names that have CLEARLY been generated by ChatGPT. This is an example of AI being used <strong>poorly</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MGEtd4Z_5Ix__3SKQf6N1w.png" /></figure><p><a href="https://arstechnica.com/ai/2024/01/lazy-use-of-ai-leads-to-amazon-products-called-i-cannot-fulfill-that-request/">https://arstechnica.com/ai/2024/01/lazy-use-of-ai-leads-to-amazon-products-called-i-cannot-fulfill-that-request/</a></p><p>As I’d mentioned… People can’t be replaced by AI in 2024, but if these same people had used their AI properly, they’d be able to get more done, faster.</p><h3>In Conclusion</h3><p>Learn to use AI as a tool, and pick the one that is appropriate for what you’re trying to accomplish.</p><p><strong>If you would like SRC Innovations to discuss what’s appropriate for your AI needs, please reach out via our website: </strong><a href="https://www.srcinnovations.com.au"><strong>https://www.srcinnovations.com.au</strong></a><strong>, we’re very ready to help. 😀</strong></p><p>We’re also contemplating a flowchart about how an organisation should approach the use of AI; please let us know if you’d be interested in that as a follow-up post. 😀</p><h4><strong>A note about the Images</strong></h4><p>Yes, those images were created by DALL-E, because I have 0 talent for art, and I didn’t want to bug my actual artistic crew (who are busy on something else) for a whole bunch of images. I did also put each of the generated images through Google Images to see if they looked like anything else that already existed. 
There are definite similarities in style, but nothing that looked exactly the same.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/750/0*8OIllEwj82kinxC_" /></figure><p>I hope you enjoyed my use of an AI tool to add some colour and whimsy to my post!</p><p>More importantly, I hope you found the overall post enlightening and helpful for your AI plans for 2024. Feel free to reach out if you’d like an opinion, guidance, or even just to share comments!</p><p><strong>Happy 2024!</strong></p><p><em>Originally published at </em><a href="https://blog.srcinnovations.com.au/2024/01/15/entering-2024-with-thoughts-regarding-large-language-models-their-limitations/"><em>https://blog.srcinnovations.com.au</em></a><em> on January 15, 2024.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1d342c920839" width="1" height="1" alt=""><hr><p><a href="https://medium.com/src-innovations/entering-2024-with-thoughts-regarding-large-language-models-their-limitations-src-innovations-1d342c920839">Entering 2024 with thoughts regarding Large Language Models &amp; their limitations | SRC Innovations</a> was originally published in <a href="https://medium.com/src-innovations">SRC Innovations</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Fair Use — A New Beginning | SRC Innovations]]></title>
            <link>https://medium.com/src-innovations/fair-use-a-new-beginning-src-innovations-511d9ec2d078?source=rss----509983d0e19f---4</link>
            <guid isPermaLink="false">https://medium.com/p/511d9ec2d078</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[ethics]]></category>
            <category><![CDATA[ai-risk]]></category>
            <category><![CDATA[law]]></category>
            <category><![CDATA[ai-regulation]]></category>
            <dc:creator><![CDATA[SRC Innovations]]></dc:creator>
            <pubDate>Sun, 21 Jan 2024 23:46:34 GMT</pubDate>
            <atom:updated>2024-01-18T00:50:24.585Z</atom:updated>
            <content:encoded><![CDATA[<h3>Fair Use — A New Beginning</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/788/0*5IUPrx91RJOxHEu6" /></figure><p>We stand at the threshold of a new era — a world powered by AI systems. It’s a world that promises increased productivity, efficiency, safety, transformation, and personalisation. We’ve all experienced the remarkable capabilities of technologies like ChatGPT and Midjourney, but amidst the excitement lies a darker side that may prove as detrimental as the technology is beneficial.</p><h3>Current Risks</h3><p>Concerns such as deepfakes, mass surveillance, discrimination, privacy breaches, accountability, and job security pose tangible risks in this environment. To navigate these dangers, it is crucial to establish safeguards against misuse and address inherent software flaws.</p><p>These are the biggest concerns around AI systems (as identified by <a href="https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/?sh=56495df72706">Forbes</a>):</p><ol><li><strong>Lack of transparency</strong><br>The degree of openness and clarity in understanding how the AI system functions, makes decisions, and processes data. See <a href="https://medium.com/mind-ai/lack-of-transparency-could-be-ais-fatal-flaw-7c33b855928c">Lack of transparency could be AI’s fatal flaw</a>.</li><li><strong>Bias and discrimination</strong><br>The potential for AI algorithms and models to exhibit unfair or prejudiced behaviour, leading to unequal treatment or negative impacts on certain individuals or groups based on their characteristics, such as race, gender, ethnicity, age, or other protected attributes. See <a href="https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist">Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day</a>.</li><li><strong>Privacy</strong><br>The protection and control of individuals’ personal information and data. 
See <a href="https://www.weforum.org/agenda/2022/03/designing-artificial-intelligence-for-privacy/">Why artificial intelligence design must prioritise data privacy</a>.</li><li><strong>Ethical dilemmas</strong><br>This covers issues such as AI systems’ autonomous nature, the potential impact of their decisions on individuals or society, and their involvement with sensitive data. Watch the video on <a href="https://www.ted.com/talks/patrick_lin_the_ethical_dilemma_of_self_driving_cars">the self-driving car dilemma</a>.</li><li><strong>Security risks</strong><br>Like any other software or technology, AI systems are susceptible to security breaches, attacks, and exploits if not adequately protected. See <a href="https://www.washingtonpost.com/technology/2023/07/13/ftc-openai-chatgpt-sam-altman-lina-khan/">FTC investigates OpenAI over data leak and ChatGPT’s inaccuracy</a>.</li><li><strong>Concentration of power</strong><br>The accumulation and centralisation of decision-making authority, control, and influence within a small number of entities or organisations that possess advanced AI capabilities. See <a href="https://www.technologyreview.com/2022/04/19/1049378/ai-inequality-problem/">How to solve AI’s inequality problem</a>.</li><li><strong>Job displacement</strong><br>The risk that automation and AI technologies will replace or eliminate certain job roles traditionally performed by humans. 
See <a href="https://www.axios.com/2023/03/29/robots-jobs-chatgpt-generative-ai">AI and robots fuel new job displacement fears</a>.</li><li><strong>Dependence on AI</strong><br>The level of reliance of individuals, organisations, or society as a whole on artificial intelligence technologies for various tasks, decision-making processes, and functions.</li><li><strong>Economic inequality</strong><br>The disparity in wealth, income, and economic opportunities that can be exacerbated or perpetuated by the adoption and deployment of artificial intelligence technologies.</li><li><strong>Legal and regulatory challenges</strong><br>The complex legal and regulatory issues that arise due to the rapid advancement and widespread adoption of artificial intelligence technologies.</li><li><strong>AI arms race</strong><br>The competition among countries, organisations, or entities to develop and deploy advanced artificial intelligence technologies for strategic, economic, or military advantage.</li><li><strong>Loss of human connection</strong><br>The potential decrease or erosion of genuine emotional or social interactions between individuals due to increased reliance on artificial intelligence and technology for communication and engagement. See <a href="https://www.nytimes.com/2023/05/03/technology/personaltech/ai-chatbot-pi-emotional-support.html">My Weekend With an Emotional Support A.I. Companion</a>.</li><li><strong>Misinformation and manipulation</strong><br>The potential for AI, particularly in the context of social media and information dissemination, to spread false or misleading information and to be exploited for nefarious purposes. 
See <a href="https://www.axios.com/2023/07/10/ai-misinformation-response-measures">How AI will turbocharge misinformation — and what we can do about it</a>.</li><li><strong>Unintended consequences</strong><br>The unforeseen outcomes or effects that arise from the deployment and use of artificial intelligence technologies.</li><li><strong>Existential risks</strong><br>The potential threats that advanced artificial intelligence technologies could pose to the continued existence of humanity or to the preservation of civilisation as we know it. This issue is captured in sci-fi stories like <a href="https://www.npr.org/2023/07/31/1191017889/ai-artificial-intelligence-movies">2001: A Space Odyssey</a>, <a href="https://www.hbo.com/westworld">Westworld</a> and <a href="https://www.ladbible.com/news/technology/ai-i-robot-plot-could-happen-093791-20230718">I, Robot</a>.</li></ol><p>Governments worldwide are racing to create legal frameworks that regulate AI to mitigate potential risks. The European Union (EU) stands at the vanguard of this effort. In April 2023, it finalised a proposed framework that will undergo refinement and become law by year-end, setting the stage for responsible AI regulation.</p><p>By proactively addressing these challenges and implementing effective protections, we can ensure that the benefits of AI are harnessed responsibly, safeguarding our society and shaping a future where technology thrives hand in hand with human well-being.</p><p>—</p><h3>The EU Proposal</h3><p>The goals of the EU framework are to ensure AI systems are safe, transparent, traceable, non-discriminatory and environmentally friendly. Additionally, the EU wants AI systems to have human oversight to prevent harmful outcomes.</p><p>The EU’s regulatory framework adopts a risk-based approach, categorising AI systems based on the level of risk they present to users: unacceptable risk, high risk, limited risk, and minimal or no risk. 
Each category is subject to corresponding regulations, with higher-risk systems facing more extensive oversight and control.</p><h4>Unacceptable Risk</h4><p>AI systems deemed to pose an unacceptable risk are those considered dangerous to individuals, and they will be prohibited. Such systems include:</p><ol><li>Cognitive behavioural manipulation of individuals or vulnerable groups, like voice-activated toys that encourage unsafe conduct in children.</li><li>Social scoring, involving categorising people based on behaviour, socio-economic status, or personal traits. For example, this is intended to prevent state control similar to <a href="https://www.technologyreview.com/2022/11/22/1063605/china-announced-a-new-social-credit-law-what-does-it-mean/">the Chinese city that rated aspects of residents’ behaviour</a>.</li><li>Real-time and remote biometric identification systems, such as facial recognition.</li></ol><p>Certain exceptions may be permitted. For instance, “post” remote biometric identification systems, wherein identification occurs after a considerable delay, may be allowed for prosecuting serious crimes, but only with court approval.</p><h4>High Risk</h4><p>AI systems categorised as high risk are those that have adverse impacts on safety or fundamental rights. These high-risk systems will be further divided into two distinct categories:</p><ol><li>AI systems within the scope of the <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A02001L0095-20100101">EU’s product safety legislation</a>. 
This includes products such as toys, aviation equipment, automobiles, medical devices, and elevators.</li><li>AI systems falling into eight specific areas that will have to be registered in an EU database:</li></ol><ul><li>Biometric identification and categorisation of natural persons</li><li>Management and operation of critical infrastructure</li><li>Education and vocational training</li><li>Employment, worker management and access to self-employment</li><li>Access to and enjoyment of essential private services and public services and benefits</li><li>Law enforcement</li><li>Migration, asylum and border control management</li><li>Assistance in legal interpretation and application of the law.</li></ul><h4>Limited Risk</h4><p>This largely refers to generative AI, like ChatGPT and Midjourney. These systems would have to comply with transparency requirements:</p><ul><li>Disclosing that the content was generated by AI</li><li>Designing the model to prevent it from generating illegal content</li><li>Publishing summaries of copyrighted data used for training.</li></ul><h4>Minimal or No Risk</h4><p>Limited risk AI systems must adhere to minimal transparency requirements, enabling users to make informed decisions. After interacting with the applications, users can choose whether to continue using them. Users must be informed when they are engaging with AI, including systems that generate or alter image, audio, or video content, such as deepfakes.</p><p>—</p><h4>Alternative Approach</h4><p>The UK government is pursuing a <a href="https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper">pro-innovation approach</a> to legislation and regulation. Given the fast evolution of AI systems, the government wants an agile and iterative approach that can react to the rapidly changing advancements in the field. 
Industry has praised this pragmatic and proportionate approach.</p><p>The approach is defined by five principles to promote responsible development and usage of AI systems:</p><ol><li><strong>Safety, security and robustness</strong><br>AI systems in the UK need to have been trained and built on robust data.</li><li><strong>Appropriate transparency and explainability</strong><br>Users should be able to understand how the system operates.</li><li><strong>Fairness</strong><br>AI must not compromise the legal rights of individuals.</li><li><strong>Accountability and governance</strong><br>There should be adequate oversight and clear lines of accountability for the usage of AI systems.</li><li><strong>Contestability and redress</strong><br>It is essential to have mechanisms for seeking redress in case an AI system causes harm.</li></ol><p>This approach is designed to allow AI system development to flourish while putting in place safety guardrails for the public.</p><p>The EU and UK have both defined their objectives clearly (“what”), but they have yet to determine the specific methods (“how”), which presents a more significant challenge for the lawmakers.</p><p>—</p><h3>The Challenges</h3><p>Governments face numerous challenges in creating an effective legal and regulatory framework that strikes a balance between promoting innovation and protecting their citizens.</p><p>The most pressing challenges are:</p><ul><li>Clear definitions and terminology</li><li>Adaptable regulations that can keep pace with technical advancements</li><li>Auditing and certifications</li><li>Effective enforcement</li><li>Defining ethical guidelines and impact assessments</li><li>Ensuring transparency through full disclosure of training materials and methodologies</li><li>Creating accountability mechanisms for handling complaints or appeals</li><li>Interoperability and consistency across international borders</li></ul><p>—</p><h3>Australia</h3><p>In 2018, Australia unveiled a <a 
href="https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles">voluntary ethics framework</a>. Since then, advancements in AI systems have snowballed, and Australia now faces the real challenge of putting proper legislation and regulation in place.</p><p>On 1 June 2023, the Australian government published a <a href="https://storage.googleapis.com/converlens-au-industry/industry/p/prj2452c8e24d7a400c72429/public_assets/Safe-and-responsible-AI-in-Australia.pdf">consultation paper</a> as a first step on this path.</p><p><a href="https://www.minister.industry.gov.au/ministers/husic">Ed Husic</a>, the industry and science minister, said “People want to think about whether or not that technology and the risks that might be presented have been thought through and responded to in a way that gives people assurance and comfort about what is going on around them. Ultimately, what we want is modern laws for modern technology, and that is what we have been working on.”</p><p>At the time of writing, the government has invited public <a href="https://consult.industry.gov.au/supporting-responsible-ai">feedback</a> on how to mitigate any potential risks of AI and support safe and responsible AI practices (closes 26 July 2023).</p><p>No official time frame has been put on enacting legislation and regulation.</p><h3>AI in the Lawmakers’ Cross-hairs</h3><p>Already in 2023 there have been several major incidents that have drawn the attention of global lawmakers. Here are two of the biggest stories.</p><h4>ChatGPT and Privacy</h4><p>In April 2023, the Italian government briefly banned ChatGPT over privacy concerns. Italy threatened to investigate whether ChatGPT complies with the General Data Protection Regulation (GDPR). 
The GDPR governs the way in which personal data can be used, processed and stored.</p><p>The tipping point for the Italians (and organisations like Samsung) was an <a href="https://news.trendmicro.com/2023/05/13/openai-chatgpt-data-breach/">incident on 20 March</a> in which the app experienced a data breach involving user conversations and payment information.</p><p>Other nations, Ireland and Germany among them, are watching events in Italy closely to determine if they should also ban ChatGPT on similar grounds.</p><p>The Italian Data Protection watchdog disapproved of the “mass collection and storage of personal data to train algorithms” in the platform, citing no legal basis, and expressed concerns about exposing minors to inappropriate content due to a lack of age verification.</p><p>The ban has since been lifted after OpenAI (ChatGPT’s owner) introduced several privacy-related changes, including making it clearer to European users how they can delete their personal data from the chatbot program.</p><p>Read more:<br><a href="https://techcrunch.com/2023/03/31/chatgpt-blocked-italy/">Italy orders ChatGPT blocked citing data protection concerns</a><br><a href="https://techcrunch.com/2023/05/02/samsung-bans-use-of-generative-ai-tools-like-chatgpt-after-april-internal-data-leak/">Samsung bans use of generative AI tools like ChatGPT after April internal data leak</a></p><h4>Stable Diffusion v Getty Images</h4><p>In February 2023 Getty Images brought a lawsuit against Stability AI, accusing the company of using 12 million images without authorisation or compensation to train its AI model.</p><p>Not only do the generated images bear a strong likeness to Getty’s originals, they even apply the Getty Images watermark, a brazen trademark infringement.</p><p>Read more on this story: <a href="https://www.reuters.com/legal/getty-images-lawsuit-says-stability-ai-misused-photos-train-ai-2023-02-06/">Getty Images lawsuit says Stability AI misused photos to train AI.</a></p><figure><img alt="" 
src="https://cdn-images-1.medium.com/max/750/0*53gyKG2Yz5RhMfwB" /></figure><p>—</p><h3>AI and SRC</h3><p>At SRC, our <a href="https://srchy.ai/">Srchy</a> product, an eCommerce product catalog search service, uses machine learning to personalise search results based on a customer’s behaviour. In our usage of AI, our data collection is anonymised: collected information is stripped of any identifiable characteristics, making it impossible to link the data to specific individuals. This practice is common where organisations seek to analyse trends and patterns without compromising individuals’ privacy or identities. Customers receive the full benefit of AI without any risk to their privacy.</p><p>—</p><h3>Forewarned is Forearmed</h3><p>Serious concerns have been raised over the risks AI poses to society and humanity.</p><p>One is a letter <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">issued by the non-profit Future of Life Institute</a> and signed by more than 1000 people, including Elon Musk and Apple co-founder Steve Wozniak. In the letter they call for a pause on <a href="https://www.smh.com.au/link/follow-20170101-p57zy4">advanced AI development</a> until shared safety protocols for such designs are developed, implemented and audited by independent experts.</p><p>Another statement, issued by the <a href="https://www.safe.ai/statement-on-ai-risk">Centre for AI Safety</a> and signed by the likes of the heads of <a href="https://openai.com/">OpenAI</a> and <a href="https://www.deepmind.com/">Google DeepMind</a>, warns of the existential threat to humanity.</p><p>Could this be exaggeration driven by insincere motives? Is this just another Y2K-like hysteria? Maybe. But given the standing of the people in the technology field endorsing these letters, it’s probably to our advantage to pay heed and tone down any scepticism, because what if they’re right? 
Can the risk be ignored?</p><p>Effective legislation and regulation is a countermeasure to these risks. At the rate AI technology is evolving, governments must act soon, with an effective and maintainable approach, lest we push all innovation offshore.</p><p><em>Originally published at </em><a href="https://blog.srcinnovations.com.au/2023/08/04/fair-use-a-new-beginning/"><em>https://blog.srcinnovations.com.au</em></a><em> on August 4, 2023.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=511d9ec2d078" width="1" height="1" alt=""><hr><p><a href="https://medium.com/src-innovations/fair-use-a-new-beginning-src-innovations-511d9ec2d078">Fair Use — A New Beginning | SRC Innovations</a> was originally published in <a href="https://medium.com/src-innovations">SRC Innovations</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>