<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Alexa Steinbrück on Medium]]></title>
        <description><![CDATA[Stories by Alexa Steinbrück on Medium]]></description>
        <link>https://medium.com/@alexasteinbruck?source=rss-8e980e537c2b------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*BFvO_7rZyQTIHxlpIn2ydA.jpeg</url>
            <title>Stories by Alexa Steinbrück on Medium</title>
            <link>https://medium.com/@alexasteinbruck?source=rss-8e980e537c2b------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Tue, 12 May 2026 02:45:23 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@alexasteinbruck/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[How to host an open source LLM]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-image"><a href="https://alexasteinbruck.medium.com/how-to-host-an-open-source-llm-9c79c0e6e378?source=rss-8e980e537c2b------2"><img src="https://cdn-images-1.medium.com/max/1024/1*2cwwpzuglP-OcjljZdBEtA.jpeg" width="1024"></a></p><p class="medium-feed-snippet">This article is the result of research I conducted between April and June 2025. The focus is on solutions from the EU (and Germany in&#x2026;</p><p class="medium-feed-link"><a href="https://alexasteinbruck.medium.com/how-to-host-an-open-source-llm-9c79c0e6e378?source=rss-8e980e537c2b------2">Continue reading on Medium »</a></p></div>]]></description>
            <link>https://alexasteinbruck.medium.com/how-to-host-an-open-source-llm-9c79c0e6e378?source=rss-8e980e537c2b------2</link>
            <guid isPermaLink="false">https://medium.com/p/9c79c0e6e378</guid>
            <category><![CDATA[women-in-tech]]></category>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[privacy]]></category>
            <category><![CDATA[gdpr]]></category>
            <dc:creator><![CDATA[Alexa Steinbrück]]></dc:creator>
            <pubDate>Fri, 24 Oct 2025 11:22:00 GMT</pubDate>
            <atom:updated>2025-10-24T11:22:00.724Z</atom:updated>
        </item>
        <item>
            <title><![CDATA[“PermaPrompting Feral AI Agents” Summerschool in Prague (July 2025)]]></title>
            <link>https://alexasteinbruck.medium.com/permaprompting-feral-ai-agents-summerschool-in-prague-july-2025-14ac61aa193e?source=rss-8e980e537c2b------2</link>
            <guid isPermaLink="false">https://medium.com/p/14ac61aa193e</guid>
            <category><![CDATA[design]]></category>
            <category><![CDATA[hackathons]]></category>
            <category><![CDATA[prague]]></category>
            <category><![CDATA[hugging-face]]></category>
            <category><![CDATA[ai-agent]]></category>
            <dc:creator><![CDATA[Alexa Steinbrück]]></dc:creator>
            <pubDate>Mon, 04 Aug 2025 09:54:54 GMT</pubDate>
            <atom:updated>2025-08-04T09:59:29.305Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="A vivid scene with a group of people communicating inside a large rather empty room, sitting casually on couches or standing and chatting" src="https://cdn-images-1.medium.com/max/1024/1*ZVxKKgZ_9yxqUEffIIxNvg.jpeg" /><figcaption>Thinking, designing, (vibe) coding and interdisciplinary chatting :-)</figcaption></figure><p>The city of Prague has become my second home, and it’s also where I connected with a group of artists/designers and researchers of the <a href="https://collective.uroboros.design/">Uroboros collective</a>. The events they organise offer quite unusual perspectives on technology and society, guided by a <strong>more-than-human</strong> worldview. One of their recurring themes, especially researched by <a href="https://medium.com/@lenkahamosova">Lenka Hamosova</a>, is “embodiment”, an aspect so often neglected in common tech discourses, and one I instinctively connect to and know will become crucial in the future.</p><p>So when this community <a href="https://www.instagram.com/uroborosfestival/p/DKfJdADtzy0/">announced</a> that they are organising a summer school with the curious name of “PermaPrompting Feral AI Agents”, I had to be a part of it!</p><figure><img alt="The instagram post of the event showing a chatgpt input in the background and a mysterious cloudy sky" src="https://cdn-images-1.medium.com/max/1024/1*lE5rlhPMc8CKHgOBa-FDYA.jpeg" /><figcaption>Visual of the event</figcaption></figure><h3>An intriguing crossover</h3><p>The title of this summer school alone was packed with fascinatingly complex concepts:</p><ul><li><strong>AI Agents</strong> – the emergent (and hyped) paradigm where generative AI systems get enhanced with more decision-making capabilities and access to tools to interact with other systems. It’s also criticised by people such as <a href="https://techcrunch.com/2025/03/07/signal-president-meredith-whittaker-calls-out-agentic-ai-as-having-profound-security-and-privacy-issues/">Signal founder Meredith Whittaker</a>.</li><li><strong>Permaculture</strong> – which I knew very vaguely but was eager to learn more about! And what was PermaPrompting about?!</li><li><strong>Feral</strong> – a word that I had encountered before at Uroboros events, especially through <a href="https://materie.me/">Markéta Dolejšová’s</a> research work on multi-species research and more-than-human ecologies.</li></ul><p>As someone with degrees in both AI and Fine Arts who worked at the intersection for more than a decade, these kinds of interdisciplinary conferences and surprising cross-overs are exactly where I live. I didn’t know how permaculture would fit into that yet, though :-)</p><p>And then there was that provocative line in the description of the summer school: “<em>Machine learning use, transform or refuse</em>”? This hit right at the heart of the question I’ve been struggling with since the hype of generative AI: Whether there can be emancipatory and responsible uses of a technology which might be fundamentally unethical at its core. 
Over the last few months I have also observed a more mainstream movement emerging to “refuse AI” altogether.</p><figure><img alt="A group of people working on their laptops while listening to a presentation given by Denisa Kera" src="https://cdn-images-1.medium.com/max/964/1*MKoFm7n-u5HzBdMfO73Tuw.jpeg" /><figcaption>Workshop by Denisa Kera</figcaption></figure><h3>The Programme</h3><p>From July 21–24, we gathered in a building of the Academy of Fine Arts (AVU) in Prague’s Letná district. The participants were impressively international: they came from Turkey, India, Bosnia, the Netherlands, Poland, Italy, Mexico, the USA, Ireland, Germany (me) and of course the Czech Republic. Their backgrounds ranged from philosophy and microbiology to architecture, automotive design and photography.</p><p>The goal was to build our own “agents” — however we chose to interpret that term (more on that later). We were supposed to bring our own datasets to work with.</p><p>Along the way, we were treated to inspiring talks, technical workshops and artistic lecture performances.</p><h4>Day 1: Foundations and Philosophy</h4><p>The first day opened with welcomes from organisers <a href="https://enriquencinas.com/">Enrique Encinas</a> and <a href="https://materie.me/">Markéta Dolejšová</a>. After participant introductions, philosopher <a href="https://ramonalvarado.net/">Ramón Alvarado</a> from the University of Oregon gave a talk on “What is an Agent” — exploring the concept of agency across different philosophical traditions.</p><h4>Day 2: Python and the Command Line</h4><p>On the second day, we got hands-on experience with <a href="https://github.com/huggingface/smolagents">Smol Agents</a>, a Python library by Hugging Face for building AI agents with just a few lines of code. After this, philosopher and designer <a href="https://scholar.google.com/citations?user=y6mQSlAAAAAJ&amp;hl=en">Denisa Kera</a> presented her research on conversing with datasets (such as satellite data) with the help of LLMs. She also talked briefly about her research on <a href="https://en.wikipedia.org/wiki/Ergative%E2%80%93absolutive_alignment">ergative languages</a> and LLMs. The rest of the day was dedicated to independent work on our projects.</p><h4>Day 3: Feralities and Intuition</h4><p>The third day started with a presentation by <a href="https://materie.me/">Markéta Dolejšová</a> on “Practising Feralities” and a workshop by <a href="https://www.youtube.com/@CreativeAIDuchess">Lenka Hamosova</a> exploring the role of intuition and the body when interacting with generative AI technology.</p><h4>Day 4: Presentations</h4><p>The final day focused on wrapping up projects and presenting our work to the group. What projects were developed during the summer school? A web-based game mimicking a pollination garden for bees. A synthetic movie speculating about invasive plants in Prague. A “popstar generator” based on a dataset of natural volcanic behaviour. And more!</p><p>I was lucky to collaborate with Dutch designer <a href="https://www.instagram.com/giliamantonie/?hl=en">Giliam Ganzevles</a> on an AI/web project during the summer school. It all started with the data that Giliam brought along: “Circular soil chromatographs”.
It was the first time I had seen these kinds of images, and I was immediately hooked by their beauty and expressiveness (they are analogue data visualisations!).</p><p>Together we built a web application that is half speculative art/AI project and half scientific image analysis – but that deserves its own blog post!</p><figure><img alt="A crowd of people in front of a presentation in a dark room. The presentation shows a human iris and a pattern of circular brown images in the background. There are a couple of potted plants on the stage." src="https://cdn-images-1.medium.com/max/1024/1*i7DMF0A6gLbnKsEx-_931w.jpeg" /><figcaption>Our final presentation of the iris/soil chromatography web application we built (which was not an agent!)</figcaption></figure><h3>The highlights</h3><p>These were 4 extremely intense days: intellectually, creatively and socially! Long days that started in the morning and ended in bars late at night. The interdisciplinary mix of people was so nice and something I would love to have more often in my life. The kinds of conversations that emerged there are something I wouldn’t find in purely technical spaces (like an “AI agent hackathon”) nor in purely artistic contexts.</p><h3>The tensions</h3><p>That said, my dual background — technical AI knowledge and art practice — created some interesting tensions throughout the event. This is a common theme in interdisciplinary work, and I think these kinds of frictions are what make such gatherings valuable. Three specific points kept me thinking:</p><h4>💥 Agent (Terminology)</h4><p>Throughout the event, I watched the word “agent” get applied to everything with remarkable generosity. Simple chatbots? Agents. Basic LLMs with system prompts? Agents. A Python program that outputs something? Agents.</p><p>This drove me a bit nuts. I was aware that the term “agent” has several meanings in different contexts (philosophy, social science, psychology). But when we speak about concrete technology, shouldn’t we stick to the technical terminology?</p><h4>💥 Including AI technology in the more-than-human discourse</h4><p>It makes me deeply uncomfortable to see how the more-than-human discourse is extended to AI systems. Terms like “feral AI” seem to suggest that these systems possess a genuine wildness or autonomy.</p><p>I’ve spent years arguing against AI anthropomorphisation. One of the myths we tackled on aimyths.org was literally “AI has agency”. The more-than-human discourse makes sense to me for actual non-human entities: animals, plants, and ecosystems are genuine actors. But extending it to AI seems to undo years of demystification work: treating prediction machines as autonomous beings and <strong>hiding the agency</strong> of those who make them, such as OpenAI.</p><p>I am probably missing a lot of nuances in the more-than-human (or <em>New Materialism</em> or <em>Object Oriented Ontology</em>) discourse, but applying it to AI seems to me to play right into the hands of the big AI monopoly companies and their AGI and longtermist cult members.</p><h4>💥 Blind Spots in “Critical AI”</h4><p>I was expecting a critical art-bubble discourse about AI, but was surprised by the lack of critical awareness when it came to actually using the tools. Here we were, discussing AI resistance while casually burning through energy-intensive models. Video generation or computation-heavy reasoning processes in “agentic AI” — the ecological implications flew under the radar.
The same goes for the privacy implications of uploading pictures of people’s faces or conversational notes to some “cloud”.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rdJ-UW87G_Xz0nsCucejKg.jpeg" /><figcaption>Organizer Enrique Encinas welcoming everybody for the final presentations on the last day</figcaption></figure><h3>What I’m Taking Away</h3><p>This was a precious event! Despite the tensions described above, or rather because of them, I found this summer school to be incredibly inspiring and productive.</p><p>The Uroboros collective again managed to create a safe space that encourages discovery and experimentation, bringing people from different disciplines to the table. The overall atmosphere was so generous and creative, the participants were incredible, and Enrique’s warm moderation kept everything flowing beautifully.</p><p>Practicing interdisciplinary communication is always quite exhausting, I guess. Parsing everything that is said twice, with different interpretations in mind, is quite energy-demanding. It almost seems like my body enters another metabolic state!</p><p>I definitely want to learn more about the more-than-human discourse and philosophies such as new materialism and object-oriented ontology. And to see how critical AI literacy would fit in there without sacrificing the crucial points.</p><p>And it was wonderful that I could bring my dog Lillet along, who was warmly welcomed despite her sometimes weird and socially awkward behaviour. This openness is just a nice side effect of the organizers’ genuine commitment to multi-species research :-)</p><p><em>The summer school “PermaPrompting Feral AI Agents” (21–24.07.2025) is part of the “Ars Biologica” series of summer schools. The next event will take place in Budweis, CZ.</em></p><h3>Links</h3><ul><li>Open Call and Programme of the summer school: <a href="https://collective.uroboros.design/open-call-permaprompting/">https://collective.uroboros.design/open-call-permaprompting/</a></li><li>Uroboros collective (organizers of the summer school): <a href="https://collective.uroboros.design/">https://collective.uroboros.design/</a></li><li>Lenka Hamosova (founding member of Uroboros collective) on Medium: <a href="https://medium.com/@lenkahamosova">https://medium.com/@lenkahamosova</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=14ac61aa193e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Bot Development for Messenger Platforms: WhatsApp, Telegram and Signal (2025 guide)]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-image"><a href="https://alexasteinbruck.medium.com/bot-development-for-messenger-platforms-whatsapp-telegram-and-signal-2025-guide-50635f49b8c6?source=rss-8e980e537c2b------2"><img src="https://cdn-images-1.medium.com/max/1024/1*RILTSnb_xO5ukUKq_qmWrQ.jpeg" width="1024"></a></p><p class="medium-feed-snippet">What&#x2019;s technically possible? What&#x2019;s legally allowed? An overview of official and unofficial APIs, open source tools and platform policies</p><p class="medium-feed-link"><a href="https://alexasteinbruck.medium.com/bot-development-for-messenger-platforms-whatsapp-telegram-and-signal-2025-guide-50635f49b8c6?source=rss-8e980e537c2b------2">Continue reading on Medium »</a></p></div>]]></description>
            <link>https://alexasteinbruck.medium.com/bot-development-for-messenger-platforms-whatsapp-telegram-and-signal-2025-guide-50635f49b8c6?source=rss-8e980e537c2b------2</link>
            <guid isPermaLink="false">https://medium.com/p/50635f49b8c6</guid>
            <category><![CDATA[telegram]]></category>
            <category><![CDATA[chatbots]]></category>
            <category><![CDATA[signal]]></category>
            <category><![CDATA[product]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Alexa Steinbrück]]></dc:creator>
            <pubDate>Sat, 05 Jul 2025 17:26:01 GMT</pubDate>
            <atom:updated>2025-07-30T09:42:14.613Z</atom:updated>
        </item>
        <item>
            <title><![CDATA[Switching to a new email provider (but keeping your old domain)]]></title>
            <link>https://alexasteinbruck.medium.com/switching-to-a-new-email-provider-but-keeping-your-old-domain-9f2d808afa94?source=rss-8e980e537c2b------2</link>
            <guid isPermaLink="false">https://medium.com/p/9f2d808afa94</guid>
            <category><![CDATA[technology-trends]]></category>
            <category><![CDATA[email]]></category>
            <category><![CDATA[privacy]]></category>
            <category><![CDATA[communication]]></category>
            <category><![CDATA[security]]></category>
            <dc:creator><![CDATA[Alexa Steinbrück]]></dc:creator>
            <pubDate>Tue, 30 Jul 2024 13:53:19 GMT</pubDate>
            <atom:updated>2024-07-30T21:58:00.578Z</atom:updated>
            <content:encoded><![CDATA[<h3>No big deal: Switch your email provider, but keep your old domain</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4Z91T8RDSveAQVE83tXY2A.jpeg" /><figcaption>Proton Mail stores your end-to-end encrypted email under 1,000 meters of granite (close to Attinghausen, Switzerland)</figcaption></figure><p>Email is the <strong>lifeline of my professional existence</strong>. As a freelance software developer, it is the primary way I connect with clients, secure new gigs, manage projects, and, of course, send invoices. Missing an email could lead to lost opportunities and unnecessary confusion. Simply put, email is how I feed myself and my dog!</p><p>Given its importance, switching my email account seemed like a daunting task. I had several concerns: How long would the transition take? Would there be any downtime? Could emails get lost in the process? Would my emails suddenly be marked as spam?</p><p>I wrote this short article to answer these questions and share some best practices. Hopefully, it will also ease some of your worries.<br><strong>TL;DR: It’s fairly easy, the risks are low, and you can do it too!</strong></p><p>This short article is <strong>for you</strong> if you have a custom email address and domain (e.g., yourname@yoursite.com) and want to move it to a new email provider.</p><p>This article is <strong>NOT for you</strong> if you have a Gmail address or similar (yourname@gmail.com) and want to switch email providers.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1ICpEaCxlTSj7svsPmN7CA.jpeg" /></figure><h3>Choosing Proton Mail as my new email provider</h3><p>Recently, I decided to switch my email provider. My requirements were: enhanced privacy, enhanced security for me and my clients, reasonable pricing, and a good user experience (admin interface, mail client). And most importantly, I needed to keep my existing custom domain: <a href="https://studio.alexasteinbruck.com/">alexasteinbruck.com</a></p><p>After some market research and comparison, I chose <a href="https://proton.me/">Proton Mail</a> for these main reasons:<br>1. they put privacy first<br>2. their pricing is reasonable<br>3. their mail client has a great UI<br>4. their code is open source</p><p>Proton Mail prioritizes <strong>privacy</strong> above all else, as stated in their commitment:</p><blockquote>“We provide easy-to-use alternatives to Big Tech services and their surveillance business models. With Proton, your data is protected, not exploited.”</blockquote><p>Their privacy features include end-to-end encryption, ensuring that not even Proton itself can read your emails. Moreover, Proton Mail detects and disables tracking technologies embedded in incoming emails. Beyond these core offerings, Proton Mail includes several other interesting features, like disposable email addresses, a free VPN service and Proton Scribe, an integrated privacy-friendly AI writing assistant.</p><p>Two other impressive facts about Proton:</p><ul><li>Proton was founded by a group of scientists who met at CERN. They were advised by Sir Tim Berners-Lee, who is often called the inventor of the World Wide Web.</li><li>Proton operates two data centers to ensure reliability and security. One is located in Lausanne, while the other serves as a backup in Attinghausen, housed in the former K7 military bunker under 1,000 meters of granite rock.
<a href="https://qz.com/1103310/photos-the-secret-swiss-mountain-bunker-where-millionaires-stash-their-bitcoins">Here’s</a> a Quartz reportage about it.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sP14qDNt0RN5uD02qXukSQ.png" /><figcaption>Migration status displayed in the Proton migration assistant</figcaption></figure><h3>How to do the migration (outline)</h3><p>Disclaimer: Please check the actual website of Proton for more detailed and up-to-date instructions, this is just an overview!</p><h4>Step 1 — Sign up with Proton Mail</h4><p>Sign up with Proton Mail. Then<strong> </strong>go to the “Domain names” settings page and click <strong>Add domain</strong> or<strong> Review</strong>. This will launch a convenient <strong>assistant</strong> that will walk you through the necessary steps!</p><h4>Step 2 — Update DNS records</h4><p>Open a new tab and login at your domain provider. That’s the company where you once registered your custom domain (it might also be where you bought your web space/hosting). It is important to understand that hosting and domains are different things that should always be decoupled: You could cancel your hosting plan, but keep your domain.</p><p>Navigate to the DNS settings of your domain provider. Here you will see a table of your DNS records.</p><ol><li>Add a DNS record of type TXT (this is to prove to Proton that you are indeed the owner of the domain) to your list of DNS records. Copy the values from the Proton assistant website.</li><li>Add 2 MX records — these are the actual records for email routing: mail.protonmail.ch and mailsec.protonmail.ch</li><li>Add a few more DNS records for mail origin verification, security and spam prevention: SPF (that’s a record of type TXT), DKIM (record of type CNAME) and DMARC (record of type TXT)</li></ol><h4><strong>What needs to happen with your old DNS records?</strong></h4><p>That depends on the type of records.<br>Old MX records: You need to delete/overwrite them. Alternatively keep them and give them a priority that has a higher number (meaning lower priority) than the Proton entries.<br>Old SPF/DKIM/DMARC records: A single domain can have only 1 TXT record for SPF! But the value of the SPF record can reference multiple servers in the text content.<br>Check in Protons migration assistant for more recommendations!</p><h4>Step 3 — Wait ⏳</h4><h4>How long will the transition take?</h4><p>This is hard to tell. DNS is an unpredictable beast. It can take up to 72 hours for the new DNS records to update through the system. In my case it took only about 3 hours for the whole procedure to be completed. In your Proton admin interface you can see the status of the migration.</p><h3>FAQs/Questions</h3><h4>Is there any “downtime”?</h4><p>In case that the new MX records have not yet been picked up by the DNS servers worldwide, they will still refer to the old MX records pointing to your old email provider. This means you could still receive emails on your old email server, so don’t shut it down too quickly.</p><h4>Can emails get lost in the process?</h4><p>It is unlikely for incoming emails to get lost, if you have any MX records present in your DNS settings. In the rare case that an email could not be sent to one mail server or the other, they will “bounce” and the sender would be notified by a message: “Undelivered Mail Returned to Sender”.</p><h4>Can both email servers be “active”</h4><p>If you keep both MX entries yes. 
<h4><strong>What needs to happen with your old DNS records?</strong></h4><p>That depends on the type of record.<br>Old MX records: You need to delete/overwrite them. Alternatively, keep them and give them a priority with a higher number (meaning lower priority) than the Proton entries.<br>Old SPF/DKIM/DMARC records: A single domain can have only one TXT record for SPF! But the value of that SPF record can reference multiple servers in its text content.<br>Check Proton’s migration assistant for more recommendations!</p><h4>Step 3 — Wait ⏳</h4><h4>How long will the transition take?</h4><p>This is hard to tell. DNS is an unpredictable beast. It can take up to 72 hours for the new DNS records to propagate through the system. In my case it took only about 3 hours for the whole procedure to be completed. In your Proton admin interface you can see the status of the migration.</p><h3>FAQs/Questions</h3><h4>Is there any “downtime”?</h4><p>As long as the new MX records have not yet been picked up by DNS servers worldwide, those servers will still serve the old MX records pointing to your old email provider. This means you could still receive emails on your old email server, so don’t shut it down too quickly.</p><h4>Can emails get lost in the process?</h4><p>It is unlikely for incoming emails to get lost if you have any MX records present in your DNS settings. In the rare case that an email cannot be delivered to either mail server, it will “bounce” and the sender will be notified by a message: “Undelivered Mail Returned to Sender”.</p><h4>Can both email servers be “active”?</h4><p>If you keep both MX entries, yes. The record with the lowest priority number is tried first, then the higher ones, until one server responds. If they have the same priority, emails are split between the two servers (this is how load balancing is usually done). But a single email will never be delivered to more than one server!</p><h3>More tips to smooth the process</h3><ol><li><strong>Notify Your Key Contacts:</strong> Inform your important contacts about the upcoming migration. This reduces the risk of missed communications.</li><li><strong>Choose the Right Timing:</strong> Plan the migration for a weekend or a period when your email activity is lower.</li><li><strong>Adjust TTL Before Migrating:</strong> Prior to starting the migration, set your MX records to a low TTL (Time to Live). This ensures that DNS servers will update the records more quickly once you switch to the new email server, speeding up the transition and reducing the likelihood of email delivery issues. You can watch the propagation yourself, as shown below.</li></ol>
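<p>For example, with the standard DNS lookup tool <strong>dig</strong> you can check which MX records the rest of the world currently sees (using my own domain as the example here):</p><pre># Ask your resolver for the MX records of the domain<br>dig +short MX alexasteinbruck.com<br><br># Once the switch has propagated, this should print:<br># 10 mail.protonmail.ch.<br># 20 mailsec.protonmail.ch.</pre>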
<h3>Conclusion</h3><p>The transition to a new email provider turned out to be fairly easy and not as risky as I initially thought. This was largely due to Proton Mail’s user-friendly transition assistant, which streamlined the process and minimized potential issues. Overall, the migration went smoothly, confirming that with the right tools and preparation, changing email providers can be a straightforward task.</p><h4><strong>One Last Thought</strong></h4><p>If you prioritize security, it’s crucial to ensure that your domain provider also offers robust security features. Relying solely on a secure email provider isn’t enough. If someone gains access to your domain provider account, they could alter DNS settings and redirect your MX records to a different mail server. This would allow them to impersonate you and intercept emails intended for you. Therefore, securing both your email provider and domain account is essential for comprehensive protection.</p><h3>Appendix / DNS Terminology</h3><p><strong>DNS</strong> = Domain Name System<br><strong>TTL</strong> = Time to live, refers to how long a<a href="https://www.ibm.com/topics/dns-server"> DNS server</a> can serve a cached<a href="https://www.ibm.com/topics/dns-records"> DNS record</a>.</p><h4><strong>Relevant DNS record types</strong></h4><p><strong>MX</strong> — a mail exchange record<br><strong>TXT</strong> — a text record<br><strong>CNAME</strong> — a canonical name record</p><h4>How to refer to the columns in the DNS table:</h4><p>the record <strong>type</strong>, e.g. TXT<br>the <strong>host name</strong>, e.g. @<br>the <strong>value</strong>, e.g. protonmail-verification=xxx.</p><h4><strong>Special records of type TXT</strong></h4><p>- SPF (Sender Policy Framework) — to authenticate the sender and ensure that only authorized servers can send emails from your domain<br>- DMARC (Domain-based Message Authentication, Reporting and Conformance) — to protect against email spoofing</p><h4><strong>Special records of type CNAME</strong></h4><p>- <strong>DKIM</strong> (DomainKeys Identified Mail) — to prevent hackers from tampering with your email, by verifying that messages haven’t been altered in transit<br>- Together, <strong>SPF</strong>, <strong>DKIM</strong> and <strong>DMARC</strong> provide email security and authentication.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9f2d808afa94" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Headless WordPress with Gatsby: How to set up a custom “Gallery” post type for free]]></title>
            <link>https://alexasteinbruck.medium.com/headless-wordpress-for-gatsby-how-to-set-up-a-custom-gallery-post-type-for-free-f407a3512744?source=rss-8e980e537c2b------2</link>
            <guid isPermaLink="false">https://medium.com/p/f407a3512744</guid>
            <category><![CDATA[wordpress]]></category>
            <category><![CDATA[gatsby]]></category>
            <category><![CDATA[jamstack]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[graphql]]></category>
            <dc:creator><![CDATA[Alexa Steinbrück]]></dc:creator>
            <pubDate>Wed, 15 May 2024 12:50:46 GMT</pubDate>
            <atom:updated>2024-05-15T14:12:14.112Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zkduiIrTgKlOwHmXU7pivw.jpeg" /></figure><p>Keywords: headless CMS, headless wordpress, gallery, custom post type, Gatsby, ACF, Pods, Custom meta fields, repeatable fields, gatsby-plugin-image, gatsby-source-wordpress, GraphQL</p><h3>The usecase</h3><p>I am building a website that contains multiple image galleries. I am using Wordpress as the CMS (in a so called “headless CMS” mode) and build the real site with Gatsby.</p><p>This article is about how to customize the Wordpress CMS/Admin interface to include a “Gallery” post type and how to query the resulting data model in Gatsby/Graphql.</p><p>This article is NOT about building or designing a gallery component with React/Gatsby.</p><h4>My requirements/requests/feature requests</h4><p>The gallery should be both a) user friendly on the Wordpress side and b) its data model should integrate harmoniously with the Gatsby/GraphQL universe. This means:</p><p>- it should be free! I don’t want to spend any money on Wordpress plugins (looking at you, ACF Pro)<br>- the gallery should support any number of images<br>- the order of images should be defined by the user, ideally by means of a drag’n’drop UI<br>- it should be compatible with gatsby-source-wordpress (GraphQL based)<br>- it should be compatible with gatsby-plugin-image</p><p>It sounds like a simple use case, but turned out to be harder to realise (if you don’t want to spend money)! I researched 3 solutions, but only the first fulfilled all my requirements: <a href="https://pods.io/">Pods</a>. At the end of the article I share two more alternative solutions.</p><h3>Solution: Pods! (with a little hack)</h3><p>The Pods framework is the open source alternative to Advanced Custom Fields (ACF). 
This tutorial uses <strong>Pods version 3.2.1.</strong></p><p>First, create a pod named “Gallery” and a field group named “Gallery fields”. Then add a field of type “File/Image/Video” named “Images” and set the “upload limit” to “multiple files”.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/950/1*zGaG6gYE-LEfcne1P0gF9Q.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/752/1*uWVWF8A-CAHXIxRUgIy2Lw.png" /><figcaption>Setting up a new pod named “Gallery” with a custom field named “Images”</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/665/1*lU8r7U5V_Lthoq-QT8Rsww.png" /><figcaption>This is how it looks for the CMS user: A nice, clean UI with image thumbnails and reordering capabilities</figcaption></figure><h4>How do I query this in Gatsby?</h4><p>First you need to enable GraphQL visibility for your pod:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1021/1*Wf6KXbdHX-YAZWs1onC4Vg.png" /></figure><p>Then query “wpGallery” (the name of your pod prefixed with “wp”) and the field “images” (the name of your custom field):</p><pre>query MyQuery {<br>  wpGallery(id: {eq: &quot;cG9zdDoyOTA=&quot;}) {<br>    id<br>    images {<br>      nodes {<br>        gatsbyImage(width: 900)<br>      }<br>    }<br>  }<br>}</pre><pre>{<br>  &quot;data&quot;: {<br>    &quot;wpGallery&quot;: {<br>      &quot;id&quot;: &quot;cG9zdDoyOTA=&quot;,<br>      &quot;images&quot;: {<br>        &quot;nodes&quot;: [<br>          {<br>            &quot;gatsbyImage&quot;: {<br>              &quot;images&quot;: {<br>                &quot;sources&quot;: [<br>                  {<br>                    &quot;srcSet&quot;: &quot;/_gatsby/image/358421b5b0035ac93a42cc6d46b7e7ad/2601eded68646ab34949d76d53ad7f2e/wlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.avif?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fwlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg&amp;a=w%3D225%26h%3D150%26fm%3Davif%26q%3D75&amp;cd=e7373efa71b459fa1ff1b97721c3da36 225w,/_gatsby/image/358421b5b0035ac93a42cc6d46b7e7ad/7bb857502b00873ff560c6e47a556b47/wlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.avif?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fwlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg&amp;a=w%3D450%26h%3D300%26fm%3Davif%26q%3D75&amp;cd=e7373efa71b459fa1ff1b97721c3da36 450w,/_gatsby/image/358421b5b0035ac93a42cc6d46b7e7ad/7071a6526579ae261f81b8bc4463365f/wlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.avif?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fwlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg&amp;a=w%3D900%26h%3D600%26fm%3Davif%26q%3D75&amp;cd=e7373efa71b459fa1ff1b97721c3da36 900w,/_gatsby/image/358421b5b0035ac93a42cc6d46b7e7ad/15ac18a8e8615988613cad19537f1c4e/wlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.avif?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fwlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg&amp;a=w%3D1800%26h%3D1200%26fm%3Davif%26q%3D75&amp;cd=e7373efa71b459fa1ff1b97721c3da36 1800w&quot;,<br>                    &quot;type&quot;: &quot;image/avif&quot;,<br>                    &quot;sizes&quot;: &quot;(min-width: 900px) 900px, 100vw&quot;<br>                  },<br>                  {<br>                    &quot;srcSet&quot;:
&quot;/_gatsby/image/358421b5b0035ac93a42cc6d46b7e7ad/ab11f2182071e384443dffa045bdc85a/wlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.webp?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fwlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg&amp;a=w%3D225%26h%3D150%26fm%3Dwebp%26q%3D75&amp;cd=e7373efa71b459fa1ff1b97721c3da36 225w,/_gatsby/image/358421b5b0035ac93a42cc6d46b7e7ad/bcfaf29f3ee3fb1b3ebbc12a78cfd3a1/wlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.webp?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fwlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg&amp;a=w%3D450%26h%3D300%26fm%3Dwebp%26q%3D75&amp;cd=e7373efa71b459fa1ff1b97721c3da36 450w,/_gatsby/image/358421b5b0035ac93a42cc6d46b7e7ad/e4c8d7aecd9714785ba817d01c221dbc/wlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.webp?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fwlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg&amp;a=w%3D900%26h%3D600%26fm%3Dwebp%26q%3D75&amp;cd=e7373efa71b459fa1ff1b97721c3da36 900w,/_gatsby/image/358421b5b0035ac93a42cc6d46b7e7ad/d3a4d104262dc3a57513c01c59ce88ed/wlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.webp?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fwlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg&amp;a=w%3D1800%26h%3D1200%26fm%3Dwebp%26q%3D75&amp;cd=e7373efa71b459fa1ff1b97721c3da36 1800w&quot;,<br>                    &quot;type&quot;: &quot;image/webp&quot;,<br>                    &quot;sizes&quot;: &quot;(min-width: 900px) 900px, 100vw&quot;<br>                  }<br>                ],<br>                &quot;fallback&quot;: {<br>                  &quot;src&quot;: &quot;/_gatsby/image/358421b5b0035ac93a42cc6d46b7e7ad/e9e96822177d1cdf77ea1240c2a11d68/wlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fwlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg&amp;a=w%3D225%26h%3D150%26fm%3Djpg%26q%3D75&amp;cd=e7373efa71b459fa1ff1b97721c3da36&quot;,<br>                  &quot;srcSet&quot;: &quot;/_gatsby/image/358421b5b0035ac93a42cc6d46b7e7ad/e9e96822177d1cdf77ea1240c2a11d68/wlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fwlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg&amp;a=w%3D225%26h%3D150%26fm%3Djpg%26q%3D75&amp;cd=e7373efa71b459fa1ff1b97721c3da36 225w,/_gatsby/image/358421b5b0035ac93a42cc6d46b7e7ad/accbf945a4094cd86d6f5636e23417dd/wlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fwlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg&amp;a=w%3D450%26h%3D300%26fm%3Djpg%26q%3D75&amp;cd=e7373efa71b459fa1ff1b97721c3da36 450w,/_gatsby/image/358421b5b0035ac93a42cc6d46b7e7ad/27448af9126a8394bb236d421215d85d/wlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fwlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg&amp;a=w%3D900%26h%3D600%26fm%3Djpg%26q%3D75&amp;cd=e7373efa71b459fa1ff1b97721c3da36 900w,/_gatsby/image/358421b5b0035ac93a42cc6d46b7e7ad/c059c0edbdd574ab1b7bf2f752b4c647/wlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fwlodzimierz-jaworski-2BbwrlmIaX8-unsplash-scaled.jpg&amp;a=w%3D1800%26h%3D1200%26fm%3Djpg%26q%3D75&amp;cd=e7373efa71b459fa1ff1b97721c3da36 
1800w&quot;,<br>                  &quot;sizes&quot;: &quot;(min-width: 900px) 900px, 100vw&quot;<br>                }<br>              },<br>              &quot;layout&quot;: &quot;constrained&quot;,<br>              &quot;width&quot;: 900,<br>              &quot;height&quot;: 600,<br>              &quot;backgroundColor&quot;: &quot;rgb(184,136,88)&quot;<br>            }<br>          },<br>          {<br>            &quot;gatsbyImage&quot;: {<br>              &quot;images&quot;: {<br>                &quot;sources&quot;: [<br>                  {<br>                    &quot;srcSet&quot;: &quot;/_gatsby/image/36980c936cf4dd8450a45772432b0482/3708ce2a4f25ec91a919c6d9555185f8/image-1.avif?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage-1.jpeg&amp;a=w%3D225%26h%3D180%26fm%3Davif%26q%3D75&amp;cd=97ba0befaf041547ee8d103d2944682b 225w,/_gatsby/image/36980c936cf4dd8450a45772432b0482/802098db786cef50e197ef74f50cb348/image-1.avif?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage-1.jpeg&amp;a=w%3D450%26h%3D360%26fm%3Davif%26q%3D75&amp;cd=97ba0befaf041547ee8d103d2944682b 450w,/_gatsby/image/36980c936cf4dd8450a45772432b0482/fb118e5b30f1e15bea9cc774b966092a/image-1.avif?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage-1.jpeg&amp;a=w%3D900%26h%3D720%26fm%3Davif%26q%3D75&amp;cd=97ba0befaf041547ee8d103d2944682b 900w,/_gatsby/image/36980c936cf4dd8450a45772432b0482/d1de0b1a8153f7c6d4e2633c7f5a7a79/image-1.avif?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage-1.jpeg&amp;a=w%3D1800%26h%3D1440%26fm%3Davif%26q%3D75&amp;cd=97ba0befaf041547ee8d103d2944682b 1800w&quot;,<br>                    &quot;type&quot;: &quot;image/avif&quot;,<br>                    &quot;sizes&quot;: &quot;(min-width: 900px) 900px, 100vw&quot;<br>                  },<br>                  {<br>                    &quot;srcSet&quot;: &quot;/_gatsby/image/36980c936cf4dd8450a45772432b0482/b8f4e899a486e60196419eb35f37bc5a/image-1.webp?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage-1.jpeg&amp;a=w%3D225%26h%3D180%26fm%3Dwebp%26q%3D75&amp;cd=97ba0befaf041547ee8d103d2944682b 225w,/_gatsby/image/36980c936cf4dd8450a45772432b0482/26f0ddc1a5bd4576bce36b80b070c2ad/image-1.webp?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage-1.jpeg&amp;a=w%3D450%26h%3D360%26fm%3Dwebp%26q%3D75&amp;cd=97ba0befaf041547ee8d103d2944682b 450w,/_gatsby/image/36980c936cf4dd8450a45772432b0482/dc03710d3a93c7667d53754ef3f1253b/image-1.webp?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage-1.jpeg&amp;a=w%3D900%26h%3D720%26fm%3Dwebp%26q%3D75&amp;cd=97ba0befaf041547ee8d103d2944682b 900w,/_gatsby/image/36980c936cf4dd8450a45772432b0482/bec98bd98ea6a7bb83efe73b5ec43d9b/image-1.webp?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage-1.jpeg&amp;a=w%3D1800%26h%3D1440%26fm%3Dwebp%26q%3D75&amp;cd=97ba0befaf041547ee8d103d2944682b 1800w&quot;,<br>                    &quot;type&quot;: &quot;image/webp&quot;,<br>                    &quot;sizes&quot;: &quot;(min-width: 900px) 900px, 100vw&quot;<br>                  }<br>                ],<br>                &quot;fallback&quot;: {<br>                  &quot;src&quot;: 
&quot;/_gatsby/image/36980c936cf4dd8450a45772432b0482/6636af1d59a2d8aeba7e72991e51ceaf/image-1.jpg?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage-1.jpeg&amp;a=w%3D225%26h%3D180%26fm%3Djpg%26q%3D75&amp;cd=97ba0befaf041547ee8d103d2944682b&quot;,<br>                  &quot;srcSet&quot;: &quot;/_gatsby/image/36980c936cf4dd8450a45772432b0482/6636af1d59a2d8aeba7e72991e51ceaf/image-1.jpg?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage-1.jpeg&amp;a=w%3D225%26h%3D180%26fm%3Djpg%26q%3D75&amp;cd=97ba0befaf041547ee8d103d2944682b 225w,/_gatsby/image/36980c936cf4dd8450a45772432b0482/8655e58e51785ab73611a2608e69b245/image-1.jpg?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage-1.jpeg&amp;a=w%3D450%26h%3D360%26fm%3Djpg%26q%3D75&amp;cd=97ba0befaf041547ee8d103d2944682b 450w,/_gatsby/image/36980c936cf4dd8450a45772432b0482/d033b7c618abb2f1f62fe01fe409178e/image-1.jpg?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage-1.jpeg&amp;a=w%3D900%26h%3D720%26fm%3Djpg%26q%3D75&amp;cd=97ba0befaf041547ee8d103d2944682b 900w,/_gatsby/image/36980c936cf4dd8450a45772432b0482/582efc2f619e31a1a8a81576ff1f6514/image-1.jpg?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage-1.jpeg&amp;a=w%3D1800%26h%3D1440%26fm%3Djpg%26q%3D75&amp;cd=97ba0befaf041547ee8d103d2944682b 1800w&quot;,<br>                  &quot;sizes&quot;: &quot;(min-width: 900px) 900px, 100vw&quot;<br>                }<br>              },<br>              &quot;layout&quot;: &quot;constrained&quot;,<br>              &quot;width&quot;: 900,<br>              &quot;height&quot;: 720,<br>              &quot;backgroundColor&quot;: &quot;rgb(88,104,56)&quot;<br>            }<br>          },<br>          {<br>            &quot;gatsbyImage&quot;: {<br>              &quot;images&quot;: {<br>                &quot;sources&quot;: [<br>                  {<br>                    &quot;srcSet&quot;: &quot;/_gatsby/image/4fb3b94908b64bb6f750236008c445e6/2601eded68646ab34949d76d53ad7f2e/image.avif?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage.jpeg&amp;a=w%3D225%26h%3D150%26fm%3Davif%26q%3D75&amp;cd=dc54c0b0da4be912cb958b8bdd61c97a 225w,/_gatsby/image/4fb3b94908b64bb6f750236008c445e6/7bb857502b00873ff560c6e47a556b47/image.avif?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage.jpeg&amp;a=w%3D450%26h%3D300%26fm%3Davif%26q%3D75&amp;cd=dc54c0b0da4be912cb958b8bdd61c97a 450w,/_gatsby/image/4fb3b94908b64bb6f750236008c445e6/7071a6526579ae261f81b8bc4463365f/image.avif?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage.jpeg&amp;a=w%3D900%26h%3D600%26fm%3Davif%26q%3D75&amp;cd=dc54c0b0da4be912cb958b8bdd61c97a 900w&quot;,<br>                    &quot;type&quot;: &quot;image/avif&quot;,<br>                    &quot;sizes&quot;: &quot;(min-width: 900px) 900px, 100vw&quot;<br>                  },<br>                  {<br>                    &quot;srcSet&quot;: &quot;/_gatsby/image/4fb3b94908b64bb6f750236008c445e6/ab11f2182071e384443dffa045bdc85a/image.webp?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage.jpeg&amp;a=w%3D225%26h%3D150%26fm%3Dwebp%26q%3D75&amp;cd=dc54c0b0da4be912cb958b8bdd61c97a 
225w,/_gatsby/image/4fb3b94908b64bb6f750236008c445e6/bcfaf29f3ee3fb1b3ebbc12a78cfd3a1/image.webp?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage.jpeg&amp;a=w%3D450%26h%3D300%26fm%3Dwebp%26q%3D75&amp;cd=dc54c0b0da4be912cb958b8bdd61c97a 450w,/_gatsby/image/4fb3b94908b64bb6f750236008c445e6/e4c8d7aecd9714785ba817d01c221dbc/image.webp?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage.jpeg&amp;a=w%3D900%26h%3D600%26fm%3Dwebp%26q%3D75&amp;cd=dc54c0b0da4be912cb958b8bdd61c97a 900w&quot;,<br>                    &quot;type&quot;: &quot;image/webp&quot;,<br>                    &quot;sizes&quot;: &quot;(min-width: 900px) 900px, 100vw&quot;<br>                  }<br>                ],<br>                &quot;fallback&quot;: {<br>                  &quot;src&quot;: &quot;/_gatsby/image/4fb3b94908b64bb6f750236008c445e6/e9e96822177d1cdf77ea1240c2a11d68/image.jpg?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage.jpeg&amp;a=w%3D225%26h%3D150%26fm%3Djpg%26q%3D75&amp;cd=dc54c0b0da4be912cb958b8bdd61c97a&quot;,<br>                  &quot;srcSet&quot;: &quot;/_gatsby/image/4fb3b94908b64bb6f750236008c445e6/e9e96822177d1cdf77ea1240c2a11d68/image.jpg?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage.jpeg&amp;a=w%3D225%26h%3D150%26fm%3Djpg%26q%3D75&amp;cd=dc54c0b0da4be912cb958b8bdd61c97a 225w,/_gatsby/image/4fb3b94908b64bb6f750236008c445e6/accbf945a4094cd86d6f5636e23417dd/image.jpg?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage.jpeg&amp;a=w%3D450%26h%3D300%26fm%3Djpg%26q%3D75&amp;cd=dc54c0b0da4be912cb958b8bdd61c97a 450w,/_gatsby/image/4fb3b94908b64bb6f750236008c445e6/27448af9126a8394bb236d421215d85d/image.jpg?u=http%3A%2F%2Fcms.mywordpresswebsite.de%2Fwp-content%2Fuploads%2F2024%2F05%2Fimage.jpeg&amp;a=w%3D900%26h%3D600%26fm%3Djpg%26q%3D75&amp;cd=dc54c0b0da4be912cb958b8bdd61c97a 900w&quot;,<br>                  &quot;sizes&quot;: &quot;(min-width: 900px) 900px, 100vw&quot;<br>                }<br>              },<br>              &quot;layout&quot;: &quot;constrained&quot;,<br>              &quot;width&quot;: 900,<br>              &quot;height&quot;: 600,<br>              &quot;backgroundColor&quot;: &quot;rgb(88,168,72)&quot;<br>            }<br>          }<br>        ]<br>      }<br>    }<br>  }<br>}</pre><p>Now there is one problem: The order in which the images appear in the output of the GraphQL query is not the order in which the images appear in the CMS!</p><p>This appears to be a bug in the <a href="https://github.com/pods-framework/pods/tree/main/src/Pods/Integrations/WPGraphQL">Pods GraphQL integration</a>.</p><h4>Workaround: Querying the WP REST API to get the correct image order</h4><p>I found this workaround: <br>WordPress also offers a REST API that holds the same data as the GraphQL endpoint. Luckily, in the REST API the order of the images is correctly encoded. So we can make an additional request to this REST API just to get the order of the images and then apply this information to the result of the GraphQL query.</p><p>First, enable the REST API for the pod itself, and then also for the custom fields (!).</p><p>Then you can make the following request, e.g.
server-side in your gatsby-node.js:</p><pre>const response = await fetch(<br>    &quot;http://cms.mywordpresswebsite.de/wp-json/wp/v2/gallery/293&quot;<br>  )</pre><ul><li><strong>/wp-json/wp/v2 </strong>is the default endpoint of the WordPress REST API</li><li><strong>/gallery</strong> is the name of our pod</li><li><strong>/293</strong> is the id of a specific gallery instance</li></ul><p>The JSON output contains an “images” field: an array holding the images in the order in which they were defined in the CMS.</p><p>In the JSON output, every image has an <strong>ID</strong> field that is identical to the <strong>databaseId</strong> in the GraphQL output. This way both can be matched.</p><pre>{<br>  &quot;id&quot;: 293,<br>  &quot;slug&quot;: &quot;gallery-1&quot;,<br>  &quot;type&quot;: &quot;gallery&quot;,<br>  &quot;link&quot;: &quot;http://cms.mywordpresswebsite.de/gallery/gallery-1/&quot;,<br>  &quot;title&quot;: {<br>    &quot;rendered&quot;: &quot;Gallery 1&quot;<br>  },<br>  &quot;content&quot;: {<br>    &quot;rendered&quot;: &quot;&quot;,<br>    &quot;protected&quot;: false<br>  },<br>  &quot;template&quot;: &quot;&quot;,<br>  &quot;acf&quot;: [],<br>  &quot;images&quot;: [<br>    {<br>      &quot;ID&quot;: &quot;291&quot;,<br>      &quot;guid&quot;: &quot;http://cms.mywordpresswebsite.de/wp-content/uploads/2024/05/wlodzimierz-jaworski-2BbwrlmIaX8-unsplash.jpg&quot;,<br>      &quot;post_type&quot;: &quot;attachment&quot;,<br>      &quot;post_mime_type&quot;: &quot;image/jpeg&quot;,<br>      &quot;pod_item_id&quot;: &quot;291&quot;<br>    },<br>    {<br>      &quot;ID&quot;: &quot;281&quot;,<br>      &quot;guid&quot;: &quot;http://cms.mywordpresswebsite.de/wp-content/uploads/2024/05/image-1.jpeg&quot;,<br>      &quot;post_type&quot;: &quot;attachment&quot;,<br>      &quot;pod_item_id&quot;: &quot;281&quot;<br>    },<br>    {<br>      &quot;ID&quot;: &quot;279&quot;,<br>      &quot;guid&quot;: &quot;http://cms.mywordpresswebsite.de/wp-content/uploads/2024/05/image.jpeg&quot;,<br>      &quot;post_type&quot;: &quot;attachment&quot;,<br>      &quot;post_mime_type&quot;: &quot;image/jpeg&quot;,<br>      &quot;pod_item_id&quot;: &quot;279&quot;<br>    }<br>  ]<br>}</pre>
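<p>Putting the two requests together in gatsby-node.js could then look roughly like this. This is a minimal sketch: it assumes the gallery from above, that your GraphQL query also selects <strong>databaseId</strong> on each image node, and “graphqlNodes” is a made-up placeholder for that query result:</p><pre>// Sketch: reorder the GraphQL image nodes using the REST API order<br>const restResponse = await fetch(<br>  &quot;http://cms.mywordpresswebsite.de/wp-json/wp/v2/gallery/293&quot;<br>)<br>const restGallery = await restResponse.json()<br><br>// The REST &quot;images&quot; array is in the order defined in the CMS;<br>// its IDs are strings, while databaseId in GraphQL is a number<br>const orderedIds = restGallery.images.map((image) =&gt; Number(image.ID))<br><br>// Rearrange the GraphQL nodes to match the CMS order<br>const orderedNodes = orderedIds.map((id) =&gt;<br>  graphqlNodes.find((node) =&gt; node.databaseId === id)<br>)</pre><p>“orderedNodes” then holds the Gatsby image data in exactly the order the user arranged in the CMS.</p>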
<p>This is the best solution I could find to build a free gallery admin interface in WordPress for Gatsby. In the last section I share the two other approaches I tried, with their pros and cons.</p><h3>Appendix (Two other solutions)</h3><h4>Solution 2: Advanced Custom Fields (ACF)</h4><p>When it comes to customizing WordPress, the “Advanced Custom Fields” (ACF) plugin is a big one. ACF has a free version and a paid version (ACF Pro). I am going with the free version. This tutorial uses <strong>ACF version 6.2.9.</strong></p><p>After creating a custom post type “Gallery” I am looking for any field types that might be a good fit for a gallery, such as “Image” or “Gallery”. Bummer: “Image” does not have a “repeater” option, and “Gallery” is a paid feature of ACF Pro :-(</p><p>A workaround could be to set up a fixed number of image fields. This is a bit ugly and does not offer the user much flexibility. But it might be okay in some situations.</p><p><strong>How do I query this in Gatsby?<br></strong>First, make sure that you have enabled the “Show in GraphQL” option in the ACF settings in WordPress!</p><p>In the GraphQL query I then request “galleryfields” (or whatever custom name I gave to my field group) and every single image field that I created:</p><pre>query MyQuery {<br>  allWpGallery {<br>    nodes {<br>      id<br>      title<br>      galleryfields {<br>        image1 {<br>          node {<br>            gatsbyImage(width: 900)<br>          }<br>        }<br>        image2 {<br>          node {<br>            gatsbyImage(width: 900)<br>          }<br>        }<br>        image3 {<br>          node {<br>            gatsbyImage(width: 900)<br>          }<br>        }<br>      }<br>    }<br>  }<br>}</pre><p>Tip: It can be useful to use GraphQL fragments to avoid this repetition.</p><p><strong>Pros</strong><br>- Admin interface is constrained to only image fields<br>- In Gatsby I can access every single image separately and build whatever components with it</p><p><strong>Cons</strong><br>- predefined number of image fields<br>- User can’t change the order easily</p><h4>Solution 3: A simple WordPress Page</h4><p>This solution is pragmatic. The thinking goes: A regular WordPress page contains a free editor field which can hold almost anything, including images. So it can be used as a gallery: The user inserts the images in the Gutenberg editor as “blocks” in any order. No custom post types or custom meta fields needed.</p><p><strong>How do I query this in Gatsby?<br></strong>In my GraphQL query I then query the field “content” which returns the whole HTML blob with the images inline.<br>Great: The plugin gatsby-source-wordpress converts even the <em>inline</em> images into Gatsby images! (Detailed documentation is available on <a href="https://github.com/gatsbyjs/gatsby/tree/master/packages/gatsby-source-wordpress/docs">GitHub</a>)<br>Notice how in the response the image source links don’t point to the WordPress instance anymore, but to the Gatsby site instead (relative path: /_gatsby/file/…).</p><p><strong>Pros<br>- </strong>Unlimited, arbitrary number of images<br>- Good UI for inserting images and changing the image order</p><p><strong>Cons<br>- </strong>It’s all one big HTML blob, but I might want to access the images individually to build a completely different markup from them with React<br>- Too little control over layout: WordPress includes its own markup and classes<br>- The standard WordPress block editor offers users too much freedom</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f407a3512744" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Let’s hope “The AI Dilemma” never gets turned into a Netflix series]]></title>
            <link>https://alexasteinbruck.medium.com/lets-hope-the-ai-dilemma-never-gets-turned-into-a-netflix-series-6c4de6a5d282?source=rss-8e980e537c2b------2</link>
            <guid isPermaLink="false">https://medium.com/p/6c4de6a5d282</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[humane-tech]]></category>
            <category><![CDATA[ai-ethics]]></category>
            <category><![CDATA[chatgpt]]></category>
            <category><![CDATA[technology]]></category>
            <dc:creator><![CDATA[Alexa Steinbrück]]></dc:creator>
            <pubDate>Wed, 28 Jun 2023 16:46:22 GMT</pubDate>
            <atom:updated>2023-08-02T14:21:30.285Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*_jNk7QbT18olXgDM1muFKw.jpeg" /></figure><p><em>The new campaign by the “Center of Humane Technology” is dripping with sensationalist X-risk AGI hype and pseudo-science. That’s not how we should educate the public about the problems associated with AI.</em></p><p>In their one-hour talk which has been viewed by more than 2.6M people on <a href="https://www.youtube.com/watch?v=xoVJKj8lcNQ&amp;t=1s">Youtube</a>, Aza Raskin and Tristan Harris request the audience to stop for a moment and take a deep breath. Raskin closes his eyes, and you hear the intimate sound of his breath through the microphone. Then with the serenity of an esoteric mentor, he instructs the people in the audience to practice <em>kindness towards themselves</em>:</p><blockquote>“It’s going to feel almost like the world is gaslighting you. People will say at cocktail parties, you’re crazy, look at all this good stuff [AI] does (…) show me the harm, point me at the harm, and it’s very hard to point at the concrete harm. So really take some self-compassion.”</blockquote><p>“AI Dilemma” is the new campaign by the “Center of Humane Technology” (CHT), a non-profit founded by Aza Raskin and Tristan Harris in 2018 to educate the public and advise legislators about the harmful impact that technology can have on individuals, institutions, and society. In their early years, their primary focus was on social media and the so-called “attention economy” which led to their major involvement in the Netflix documentary “The social dilemma” in 2020.</p><p>The new campaign focuses on the risks associated with Artificial Intelligence — or as they call it “<strong>Gollems”</strong>— a tongue-in-cheek acronym for a group of AI technologies they summarize as “Generative Large Language Multi-Modal Models”. “Gollems” include systems like ChatGPT and like.</p><p>According to Raskin and Harris, there have been two contact points of <em>humanity with AI</em> so far:</p><p>The first contact was due to the proliferation of social media and recommendation algorithms (“curation AI”). The results of this first contact had been disastrous to the psyche of the individual and the functioning of society: Information overload, addiction, doom scrolling, polarization, fake news, etc.</p><p>The second contact of <em>humanity</em> happens now in the year 2023 with “creation AI” — or as they call it “Gollems”. And this is where the Center for Humane Technology steps in: “We should not make the same mistakes as with social media”. Raskin and Harris want to warn us about the dangers of these “Gollem”-type AIs before they get integrated everywhere and become “entangled” with society. “We can still choose the future we want”.</p><p>Raskin and Harris have consulted many experts in the area of AI safety about what the actual problem is with this “Golem AIs”. “We’re talking about how a race dynamic between a handful of companies of these new Golem class AI’s are being pushed into the world as fast as possible” they say. “The reason we are in front of you is that the people who work in this space feel that this is not being done in a safe way”.</p><p>So they’re all about slowing down and democratic dialogue. That sounds good for now. 
even if it is still a little unclear where exactly they locate the problems of these “Gollem” systems.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*uApX7XnNmJBJEQZp" /></figure><h3>“50% of AI researchers believe…”: wrong numbers, no context</h3><p>There is a sentence they show three times throughout their talk — it seems to be the backbone of their alarmist argumentation:</p><p>“50% of AI researchers believe there’s a 10% or greater chance that humans go extinct from our inability to control AI”.</p><p>This jaw-dropping number appears to come from a 2022 survey conducted by an institution called “AI Impacts” (hardly readable in the small print on the slide). Raskin and Harris don’t give the audience any contextual information about these numbers.</p><blockquote>It is utterly irresponsible to speak of “50% of AI researchers” given that it is literally just 80 people from a biased survey.</blockquote><p>But it is even worse: these numbers are plain wrong, and the survey methodology is deeply flawed, as has been explained <a href="https://twitter.com/MelMitchell1/status/1649135315615903759">here</a> and <a href="https://twitter.com/DrTechlash/status/1649268770215702529">here</a>. How the survey was conducted: the organization approached 4271 AI researchers who had published in a specific year at the NeurIPS conference. Only 738 people agreed to participate in the survey (already a self-selected, biased sample). And only 162 of those actually answered the very specific question that the quote refers to. It is utterly irresponsible to speak of “50% of AI researchers” given that <strong>it is literally just 80 people from a biased survey</strong>.</p>
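<p>To make the base-rate problem concrete, here is the arithmetic behind that slide as a quick back-of-the-envelope sketch (using only the counts reported above):</p><pre>approached   = 4271   # NeurIPS authors contacted by AI Impacts
participated = 738    # returned the survey at all (roughly 17%)
answered     = 162    # answered the specific extinction question
half         = answered / 2   # the people behind the "50%" figure: 81

print(half / approached)      # about 0.019, i.e. under 2% of those contacted</pre>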
<p>Raskin and Harris also don’t elaborate on what “uncontrolled” means or <strong>in what way</strong> the extinction of <em>humanity</em> would actually happen.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*7-_Uv44OQZIffDao" /></figure><h3>We’re really NOT talking about the AGI apocalypse. Really, really, really not!</h3><p>One of the preposterous things about their talk is that they <strong>really, really, really</strong> want us to know that they are <strong>not talking</strong> about the “AGI apocalypse” (13:08, 42:20), also known as the “AI takeoff” scenario. They describe this scenario as follows: <em>“AI becomes smarter than humans in a broad spectrum of things, it begins the ability to self-improve, then we ask it to do something — you know the old standard story of ‘be careful what you wish for because it will become true in an unexpected way’ — you wish to be the richest person so the AI kills everyone else”</em></p><p>But remember, this is <strong>not</strong> what they are here to talk to us about!</p><p>This is very confusing, because the “AI apocalypse” or “AI takeoff” scenario is exactly what motivated the <strong>flawed survey</strong> they love so much that they show it three times.</p><p>The organization behind the survey bears the unassuming name <a href="https://aiimpacts.org">“AI Impacts”</a> and belongs to the “Machine Intelligence Research Institute” (MIRI), formerly called the “Singularity Institute for AI”, located in Berkeley, California.</p><p>“As part of the broader Effective Altruism community, we prioritize inquiry into high impact areas like existential risk” reads their <a href="https://aiimpacts.org/jobs/">website</a>, which also lists their sponsors, among them the <a href="https://futureoflife.org/">Future of Life Institute</a> (FLI) and the <a href="https://www.fhi.ox.ac.uk/">Future of Humanity Institute</a> (FHI). The FHI was founded by Nick Bostrom, the author of the famous book “Superintelligence” and inventor of the term “existential risk”.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*jOhNCREfxWTzhClC" /></figure><h3>So what are the actual problems with AI?</h3><p>Here’s a list of things they consider problems caused by AI:</p><p>Reality collapse, Fake everything, Trust collapse, Collapse of law contracts, Automated fake religions, Exponential blackmail, Automated cyberweapons, Automated exploitation of code, Automated lobbying, Biology automation, Exponential scams, A-Z testing of everything, Synthetic relationships, AlphaPersuade</p><p>While some of these points represent actual and even short-term dangers to Western society and will make the internet a more chaotic and hostile place, the listing seems quite arbitrary and, most importantly, it reflects a very privileged and very white view of the effects of AI on <em>humanity</em>.</p><p>The fact that they call social media the “first contact of humanity with AI” is plainly ignorant: people, especially marginalized groups, have been negatively affected by AI algorithms for much longer. The automation of inequality is a reality: biased algorithms in areas such as policing, social welfare, finance and recruiting have had, and are still having, huge impacts on real human lives.</p><p>They also completely ignore the production conditions behind AI and present it as something that comes out of research and then just needs to be deployed by companies. This view ignores the environmental impact of training these huge models, as well as issues with data privacy and IP.</p><p>“The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value.” as <a href="https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey">Ted Chiang</a> puts it in the New Yorker.</p><p>Last but not least, their argumentation does not even make sense.
They don’t explain how the types of problems they care about (synthetic media, manipulation) would lead to the extinction of humanity.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ONdRndIXVLZk38CmCGzjAw.png" /><figcaption>The audience must take a deep breath now</figcaption></figure><h3>Background: X-Risk doomerism — a well-funded brand of “AI safety”</h3><p>There are two camps in how academics, politicians and tech people think about the risks and harms posed by AI, and there’s quite a rift between them.</p><p>On the one hand, there is the group commonly referred to as AI ethicists, who are concerned with the risks and impact of AI systems <strong>in the here and now</strong>. Take, for example, AI researcher Timnit Gebru, a former ethicist at Google, and her <a href="https://dl.acm.org/doi/10.1145/3442188.3445922">paper</a> on the problems with large language models, such as their environmental and financial costs as well as biased and discriminatory outcomes and their potential for deception.</p><p>On the other hand, there is a growing group of people whose main concern is that superintelligent AI might terminate human civilization. This group often refers to itself as the “AI safety” community. A big stakeholder in this ideology is the “Effective Altruism” community, a predominantly white, male group with increasing influence in politics.</p><p>A more extreme version of “Effective Altruism” is “Longtermism”, a <a href="https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo">dangerous ideology</a> that prioritizes the long-term future of humanity and de-prioritizes short-term problems. Their goal for humanity is to become “technologically enhanced digital posthumans inside computer simulations spread throughout our future lightcone” (<a href="https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo">Aeon</a>).</p><p>Climate change and the increasing gap between the rich and the poor are seen as negligible problems. Nick Bostrom, the author of “Superintelligence”, called alleviating global poverty or reducing animal suffering “feel-good projects of suboptimal efficacy”.</p><p>Both the Effective Altruism and Longtermist movements are backed by big money: tech billionaires such as Peter Thiel, Elon Musk and Sam Bankman-Fried have pumped money into Effective Altruism organisations. In 2021 the Effective Altruism movement was backed by <a href="https://80000hours.org/2021/07/effective-altruism-growing/">$46 billion in funding</a>.</p><p>Effective Altruism has an increasing impact on AI research: OpenAI was funded by Elon Musk and Peter Thiel. And last year, <a href="https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/">Sam Bankman-Fried offered $100,000</a> at the AI conference NeurIPS for papers on the topic of “AI safety”.
Timnit Gebru summarizes:</p><p><em>“Research priorities follow the funding, and given the large sums of money being pushed into AI in support of an ideology with billionaire adherents, it is not surprising that the field has been moving in a direction promising an “unimaginably great future” around the corner while proliferating products harming marginalized groups in the now.” (</em><a href="https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/">Wired</a><em>)</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WF_LRWCUnZbHCdQdqsg87g.png" /></figure><h3>More “AGI apocalypse” rhetoric and suggestions</h3><p>Intended or not, the “AI Dilemma” campaign is dripping with fear-inducing suggestions of existential risk (X-Risk) and the AGI apocalypse (something they are <strong>not</strong> talking about, remember?):</p><ol><li>They compare the danger of AI with the danger of <strong>nuclear weapons</strong> multiple times.</li><li>The narrative of “<strong>emergent capabilities</strong>” portrays the AI models as ticking bombs: “Suddenly”, GPT learned to speak Persian — “And no one knows why”, Raskin breathes in the voice of a scary storyteller.</li><li>The phrase “Silently taught themselves research grade chemistry” suggests that these models have the <strong>autonomy</strong> to teach themselves and reinforces <a href="https://www.aimyths.org/ai-has-agency">the myth that AI algorithms have agency</a>.</li><li>The phrase “They make themselves stronger” reinforces the “<strong>AI takeover</strong>” myth. They say this makes them more dangerous than “nukes”.</li><li>When explaining RLHF (Reinforcement Learning from Human Feedback), they say it’s about “how do you make AIs <strong>behave</strong>”. And they compare it to clicker training for dogs. This metaphor is problematic in itself, because it compares an ML model to an intelligent animal, but they go further: when you leave the room, the dog will do what it wants. This suggests that if you leave AI systems alone (“uncontrolled”), they will forget what you told them and go rogue: “As soon as you leave the room they’re gonna not do what you ask them to do” (33:23). This is a wrong and deeply problematic narrative, suggesting that these models have a will of their own and need to be “tamed”.</li><li>Lastly, they chose the cheeky acronym “Gollem” to describe AI technology — in Jewish folklore, a Golem is a human-like being created from inanimate matter. That’s yet another reference to AGI.</li></ol><h3>Conclusion</h3><p>The “AI Dilemma” campaign amalgamates X-Risk-style alarmist rhetoric with a quite one-sided (social-media-rooted) perspective on AI risks, especially regarding generative models. They mention many here-and-now risks, such as the proliferation of fake content and the speed at which these models are released to the public without sufficient assessment of the safety of their outputs. But Raskin and Harris ignore the long history and present reality of the negative effects of AI technology on marginalized groups.</p><p>Raskin and Harris are “tech designers turned media-savvy communicators” (<a href="https://www.wired.com/story/plaintext-how-to-start-an-ai-panic/">Wired</a>). They are masters of storytelling and persuasion. Tristan Harris started his career as a magician and then studied “persuasive technology” at Stanford.
It is quite ironic that the critique of persuasion and manipulation through social media has been a core theme of their work, yet they happily apply the same persuasion mechanisms themselves.</p><p>It’s important to note that Raskin and Harris are not AI specialists. They fall victim to the same hype and misleading AI narratives as the general public does, especially when those narratives are backed by big money. As mentioned earlier, the lobby behind X-Risk AGI doomerism is strong.</p><p>They say that AI is an abstract topic and that we lack metaphors to help us think about it. This is something I can 100% agree with. They say they want to provide the audience with metaphors grounded in real life, to give “a more visceral way of experiencing the exponential curves we are heading into”. If visceral means scaring people and then asking them to do breathing exercises to bring their blood pressure back down, that is a bad approach.</p><p>“The Social Dilemma” has shown that there is an audience for their flavour of one-sided, populist technology criticism. But this is not education. It is highly questionable whether tech criticism needs to be “entertaining”, and we should ask who benefits from this framing.</p><h3>Some ideas on how to stay informed about the problems and risks associated with AI</h3><p>If you want to broaden your knowledge about the real harms of AI, here’s a list of things you can do:</p><ul><li>Subscribe to the newsletter of AlgorithmWatch, a non-profit research and advocacy organization from Germany: <a href="https://algorithmwatch.org">https://algorithmwatch.org</a></li><li>Follow these AI ethics researchers on Twitter: <a href="https://twitter.com/timnitGebru">Timnit Gebru</a>, <a href="https://twitter.com/mmitchell_ai">Margaret Mitchell</a>, <a href="https://twitter.com/emilymbender">Emily Bender</a>, and <a href="https://twitter.com/1Br0wn">Ian Brown</a> on AI regulation/policy</li><li>Watch “Coded Bias” — it’s on Netflix ;-)</li><li>Interactive explanations and technical background for topics such as bias, fairness and privacy: <a href="https://pair.withgoogle.com/explorables/">https://pair.withgoogle.com/explorables/</a></li><li>Read the book “Atlas of AI” by Kate Crawford: <a href="https://www.katecrawford.net/">https://www.katecrawford.net/</a></li><li>Have a look at the AIAAIC database, which holds 1,000+ incidents and controversies driven by and relating to AI, algorithms, and automation: <a href="https://www.aiaaic.org/aiaaic-repository">https://www.aiaaic.org/aiaaic-repository</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6c4de6a5d282" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Opening speech at “Reshape forum for Artificial Intelligence in Art and Design” (May 2023)]]></title>
            <link>https://alexasteinbruck.medium.com/opening-speech-at-reshape-forum-for-artificial-intelligence-in-art-and-design-may-2023-cea7c9f70c81?source=rss-8e980e537c2b------2</link>
            <guid isPermaLink="false">https://medium.com/p/cea7c9f70c81</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[ux]]></category>
            <category><![CDATA[education]]></category>
            <dc:creator><![CDATA[Alexa Steinbrück]]></dc:creator>
            <pubDate>Mon, 05 Jun 2023 16:47:16 GMT</pubDate>
            <atom:updated>2023-06-05T17:14:14.678Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IR_Xs-OsgG8R-_-hlYdlxQ.jpeg" /><figcaption>Day 2 of Reshape Forum at Hochschule für Gestaltung Schwäbisch Gmünd (photo: eignerframes)</figcaption></figure><p><em>In spring 2023 I had the opportunity to curate a conference at the Hochschule für Gestaltung Schwäbisch Gmünd as part of my researcher position at AI+D Lab and KITeGG.</em></p><p><em>From May 10–12, 2023, the third KITeGG summer school took place there. Under the title “reshape — forum for AI in Art and Design” we invited numerous international experts to get an overview of the many ways in which AI is relevant for designers.</em></p><p><em>The following text is a speech I gave at the opening of the conference on May 10 in the auditorium of the HfG!</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*NG-HeX17A8iVjzYn1GNGAw.jpeg" /></figure><h3><strong>reshape (1)</strong></h3><p>reshape is the name of a function in the Python programming language, or more precisely in the <strong>NumPy library</strong>, which is used in virtually all AI programs.</p><p>What you can do with NumPy is number crunching: it contains functions for working with vectors, i.e. lists of numbers and matrices. In machine learning, the whole world is mapped into numbers, words as well as images, sounds and movements. And these vectors are what goes into a neural network. The reshape function can change the shape of these vectors, for example, turn a 1-dimensional vector into a 2-dim vector, or a 3-dim into a 1-dim vector.</p><p>The reshape function <em>“changes the shape without changing the content”</em> says Numpy’s documentation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/533/1*rFgIgmcV4D_N610RvQSY5A.png" /><figcaption>Source:<a href="https://www.w3resource.com/numpy/manipulation/reshape.php"> https://www.w3resource.com/numpy/manipulation/reshape.php</a></figcaption></figure><h3><strong>reshape (2)</strong></h3><p>The conference you are at right now is also called reshape. Our slogan <em>“reshape the landscape of art and design”</em> — fits easily into the <strong>rhetoric</strong> we are surrounded by more often lately when it comes to AI: “disruption”, “revolutionize”, “blowing up”, “turn upside down”, “massive news” — <strong>We are experiencing a new AI hype right now</strong>.</p><p>It’s a bit surprising and feels like <em>Déjà vu</em>, because the last wave of AI hype/AI summer was not that long ago, circa 2014 with the breakthrough of the Deep Learning technique.</p><p>At the center of this new AI summer is “Generative AI” — systems like Stable Diffusion or ChatGPT that are capable of generating realistic artefacts like images or text.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*QO15VRL2aG-1QKsQlR8uMA.gif" /><figcaption>Karl Sims: Evolved Virtual Creatures</figcaption></figure><h3><strong>Generative AI is nothing new</strong></h3><p>“Generative AI” has also been around for a long time, and artists and designers have always worked with it (albeit with changing technical foundations):</p><ul><li><strong>In the 1980s</strong> — <strong>Harold Cohen</strong> &amp; his program “AARON”. The first time that AI technologies were introduced into the world of computer art. The program could understand and generate colors and shapes. 
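<p>A minimal sketch of what that looks like in code (my own illustrative example, not part of the original talk):</p><pre>import numpy as np

v = np.arange(6)       # a 1-dimensional vector: [0 1 2 3 4 5]
m = v.reshape(2, 3)    # the same six numbers, now a 2x3 matrix
w = m.reshape(-1)      # and flattened back into a 1-dimensional vector
# The content stays the same, only the shape changes.</pre>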
<h3><strong>reshape (2)</strong></h3><p>The conference you are at right now is also called reshape. Our slogan <em>“reshape the landscape of art and design”</em> fits easily into the <strong>rhetoric</strong> that has surrounded us more and more lately when it comes to AI: “disruption”, “revolutionize”, “blowing up”, “turn upside down”, “massive news” — <strong>We are experiencing a new AI hype right now</strong>.</p><p>It’s a bit surprising and feels like <em>déjà vu</em>, because the last wave of AI hype/AI summer was not that long ago, circa 2014, with the breakthrough of the Deep Learning technique.</p><p>At the center of this new AI summer is “Generative AI” — systems like Stable Diffusion or ChatGPT that are capable of generating realistic artefacts like images or text.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*QO15VRL2aG-1QKsQlR8uMA.gif" /><figcaption>Karl Sims: Evolved Virtual Creatures</figcaption></figure><h3><strong>Generative AI is nothing new</strong></h3><p>“Generative AI” has also been around for a long time, and artists and designers have always worked with it (albeit with changing technical foundations):</p><ul><li><strong>In the 1980s</strong> — <strong>Harold Cohen</strong> &amp; his program “AARON”: the first time that AI technologies were introduced into the world of computer art. The program could understand and generate colors and shapes. In this picture we see Cohen coloring the generated shapes by hand.</li><li><strong>In the 1990s</strong>: <strong>Karl Sims</strong> — “Evolved Virtual Creatures”. Sims used evolutionary/genetic algorithms.</li><li><strong>From 2015: Deep Learning early adopters</strong>: Addie Wagenknecht, Alex Champandard, Alex Mordvintsev, Alexander Reben, Allison Parrish, Anna Ridler, Gene Kogan, Georgia Ward Dyer, Golan Levin, Hannah Davis, Helena Sarin, Jake Elwes, Jenna Sutela, Jennifer Walshe, Joel Simon, JT Nimoy, Kyle Mcdonald, Lauren McCarthy, Luba Elliott, Mario Klingemann, Mike Tyka, Mimi Onuoha, Parag Mital, Pindar Van Arman, Refik Anadol, Robbie Barrat, Ross Goodwin, Sam Lavigne, Samim Winiger, Scott Eaton, Sofia Crespo, Sougwen Chung, Stephanie Dinkins, Tega Brain, Terence Broad and Tom White.</li></ul><p>The artist &amp; researcher <strong>Memo Akten</strong> was one of those “early adopters”. His video work “Learning to see” (2017) is a good example: here you can see how Akten feeds his video input into a neural network in real time, which then interprets this image data. A kind of semantic style transfer.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F260612034%3Fh%3D1cf903469e%26app_id%3D122963&amp;dntp=1&amp;display_name=Vimeo&amp;url=https%3A%2F%2Fvimeo.com%2F260612034&amp;image=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F1604699068-89cc1952bce06668bac2f0dde8dab2c39eda16184d06fb4ca2d3328742b7fcb4-d_1280&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;type=text%2Fhtml&amp;schema=vimeo" width="1920" height="1080" frameborder="0" scrolling="no"><a href="https://medium.com/media/cdc5210f49355ba6e680ba3b35022e1b/href">https://medium.com/media/cdc5210f49355ba6e680ba3b35022e1b/href</a></iframe><p>The work of artists has also always been an experiment with the <strong>shortcomings, the gaps and the glitches</strong> of these technologies, which often came directly from academic AI research and were “misappropriated” by them.</p><p>They often had to have a <strong>deep technical understanding</strong> of these systems to be able to bend and twist them in such a way. Even when algorithms were available as open source, designers had to dig into them deeply.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TnNQBbeKYtcjfdnFX28OqQ.jpeg" /><figcaption>Transform a low-fidelity website sketch into functional HTML (possible with GPT-4)</figcaption></figure><h3><strong>A revolution of accessibility</strong></h3><p>What the current revolution is all about is a revolution in <strong>accessibility and availability</strong>. The technologies have opened up to a broad public.</p><p>They have been given <strong>a new interface</strong> that is usable by everyone, and that interface is called <strong>natural language</strong>.</p><p>This AI hype now even feels almost justified! Concrete consequences are already noticeable in various areas. It is an enormous surge of innovation at unbelievable speed.</p><p>A few innovations from the last two months (March–April 2023):</p><ul><li>VQGAN-CLIP: Here is a comparison of the quality of generated images one year ago — and the state of the art in April 2023, called “Stable Diffusion XL”.</li><li>NVIDIA video generation: <a href="https://www.youtube.com/watch?v=3A3OuTdsPEk">https://www.youtube.com/watch?v=3A3OuTdsPEk</a> Here a resolution of 2000x1000 pixels is achievable.</li><li>GPT-4 has been released. This language model is multimodal, i.e.
it can “understand” images (for example: sketch of a website → working website code, or even derive complete recipes from photos of meals).</li><li>The Llama language model from Meta (comparable to GPT-3) runs on a laptop CPU, a smartphone and even on a Raspberry Pi (<a href="https://arstechnica.com/information-technology/2023/03/you-can-now-run-a-gpt-3-level-ai-model-on-your-laptop-phone-and-raspberry-pi/">Link</a>)</li></ul><p>At the last <strong>KITeGG summer school</strong> (November 2022, HfG Offenbach), <strong>Stable Diffusion</strong> (released in August 2022) was the technology and the revolution everyone was talking about: finally, everyone could generate arbitrary images simply from a text description.</p><p>Three months later, in November 2022, ChatGPT came out and has since turned everything upside down. With ChatGPT, we have <strong>experienced the “Stable Diffusion moment”</strong>, only for <strong>Large Language Models</strong> (LLMs).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mGLLM67BfADaEHM5Y1qkjA.png" /><figcaption>Ok wow: GPT-3 competitor Llama from Meta even runs on a Raspberry Pi</figcaption></figure><h3><strong>Large Language Models (LLMs)</strong></h3><p>Large Language Models are neural networks with several billion parameters that have been trained on large amounts of text. LLMs find patterns in these large amounts of text and learn the statistical probability with which one word follows another.</p>
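<p>To get a feeling for the “one word follows another” idea, here is a toy sketch (my own minimal example; a real LLM is a neural network with billions of parameters and far more context than a single previous word, but the prediction task is the same kind of thing):</p><pre>from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# Count which word follows which
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

# Probability distribution over the next word after "the"
counts = following["the"]
total = sum(counts.values())
print({word: n / total for word, n in counts.items()})
# {'cat': 0.666..., 'mat': 0.333...}</pre>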
<p>This technique sounds as mundane as the autocomplete function on our cell phones, and yet an astonishing complexity emerges from it:</p><p>LLMs can summarize texts, translate, generate essays, write scientific papers, write working code, and generate ideas. You yourself will probably be able to tell your very own story of how you used ChatGPT and were surprised!</p><p>Some even go so far as to say that these systems are capable of “<strong>reasoning</strong>” — one of the holy grails of AI research.</p><p>These surprising capabilities have led to a number of people now claiming we have achieved, or are at least close to achieving, <strong>“AGI” (Artificial General Intelligence)</strong>. And from this point on, the discourse is hard to distinguish from science fiction. People talk about AI as if it were a being with its own will and its own agenda.</p><p>You may have heard about the open letter “Pause Giant AI Experiments”, signed by prominent people. It calls for the large AI labs to pause their research so that society and regulators can keep up.</p><p>This letter has drawn a lot of criticism; Emily Bender, a well-known AI researcher in the field of Natural Language Processing, wrote on Twitter that it was dripping with AI hype and myths. It also stems from an ideology called “Longtermism”, whose adherents have a very specific agenda for the “future of humanity”.</p><p>If we suspect a real counterpart, a thinking being, in these AI systems of automated pattern recognition, then perhaps it is like animals looking into a mirror. The journalist James Vincent calls this the <a href="https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-intelligent-sentient-mirror-test"><strong>“AI mirror test”</strong></a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*5XHOm7yDvpKuRYpDTfHY6A.gif" /><figcaption>GIF:<a href="https://youtu.be/tz0avWZoqjg"> Xavier Hubert-Brierre</a> via<a href="https://tenor.com/view/funny-animals-monkey-gorilla-mirror-fight-me-gif-7936997"> Tenor</a></figcaption></figure><h3><strong>Do we pass the “AI mirror test”?</strong></h3><p>The mirror test is used in <strong>behavioral psychology</strong> to find out whether a creature/animal has self-awareness. There are a few variations of this test, but the core question is: does the creature recognize itself in the mirror, or does it think it is another creature?</p><p>We as humanity are collectively facing a mirror test right now, and <strong>the mirror is called Large Language Models</strong>.</p><blockquote>“The reflection is humanity’s wealth of language and writing, which has been strained into these models and is now reflected back to us. We’re convinced these tools might be the superintelligent machines from our stories because, in part, they’re trained on those same tales. Knowing this, we should be able to recognize ourselves in our new machine mirrors, but instead, it seems like more than a few people are convinced they’ve spotted another form of life.”</blockquote><blockquote><em>Source: </em><a href="https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-intelligent-sentient-mirror-test"><em>“Introducing the AI mirror test, which very smart people keep failing”, James Vincent, The Verge 02/2023</em></a></blockquote><p>If you compare this kind of discourse about AI with the (technical) reshape concept from the beginning, they are worlds apart! And between these two extreme poles now also stand designers and artists.</p><p>I think it’s a <strong>pretty tumultuous time</strong> for creatives right now.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qM3Gpyl0rb3KK71Ibwq1Dg.jpeg" /></figure><h3><strong>How does this all feel for designers?</strong></h3><p>On the one hand, we would like to see <strong>AI as a tool</strong>: image generators are handy tools to visualize ideas or create renderings. Language models can help interaction designers code so they can build prototypes faster, etc. And new tools/integrations/improvements are popping up every day. The speed of innovation is immense.</p><p>On the other hand, we are repeatedly confronted in the media with the narrative of <strong>a general intelligence</strong>, a precursor to superintelligence, that solves complex design tasks with more effectiveness and creativity than one ever could oneself.
Being <strong>“replaced by AI”</strong> has recently become a concern of creative professionals like designers and software developers.</p><p>It really is a <strong>paradox</strong>: on the one hand, the technology promises <strong>“superpowers for creatives”</strong>; on the other hand, those same creatives fear for their relevance and future.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GOXTLc6BGlY8LkwOJTzwIg.jpeg" /></figure><h3><strong>What does the Reshape Symposium want?</strong></h3><p>With this conference we want to take a more differentiated look at the various points of contact between design and AI technology, away from the newly strengthened AI myths and black-and-white views of AI replacement.</p><p>We would like to look at AI systems and their technical properties in a differentiated way instead of speaking in general terms of “an AI”.</p><p>We ask ourselves: where does the responsibility of designers lie, what is their role, and how can they influence the course of AI development?</p><p>The conference will look at these issues along three axes:</p><ul><li>Designing for AI (Designing AI systems)</li><li>AI for Design (Creative AI)</li><li>Teaching AI (to creatives)</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8TgG1nxmHycRf6oyRIF3rg.jpeg" /></figure><h3><strong>Designing for AI (Designing AI systems)</strong></h3><p>This involves the design of AI-based interfaces and products, for example systems that work via gesture recognition or voice input, or the design of generative interfaces and integrations themselves.</p><p>What are the challenges and opportunities here? What is the broader social context of these technologies? What do designers need to know in order to design these systems responsibly?</p><p>We have prepared a series of talks to address these questions:</p><ul><li><strong>Nadia Piet</strong> — First thing in the morning, Nadia Piet talks about practices for designing the user experience of AI-based systems and interfaces</li><li><strong>Catherine Breslin</strong> — Next up is a talk by Catherine Breslin on conversational design, on how machines and humans converse, and on how LLMs will change the future of voice assistants</li><li><strong>Ploipailin Flynn</strong> — Then Ploipailin Flynn talks about the dark side of pattern recognition, how social patterns like racism are notoriously reproduced by AI-based systems, and design strategies to deal with this</li><li><strong>Emily Saltz</strong> talks about synthetic media, AI-generated artefacts that will become more and more a part of our everyday lives, and what that means for product design</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VhHjvv6Sbyd3bUFT3exgXQ.jpeg" /></figure><h3><strong>AI for Design (Creative AI)</strong></h3><p>How can AI technologies be integrated as tools into the creative toolbox of artists and designers? How can AI serve idea generation and foster human creativity instead of narrowing and flattening it? How do these tools “fit in the palm of your hand”?</p><p>Here we look forward to a talk on Thursday afternoon by <strong>design studio oio</strong>, who work out of a chic tiny house in the middle of London.
In their workflows, they focus on “post-human collaboration” and develop products and tools for a “less boring future”.</p><p>Also, <strong>Tom White</strong> — one of the early adopters of AI technologies for creative use — will tell us about his experiments with machine vision and his latest projects.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*svRcKzpD6VK55Y0lv_4MXg.jpeg" /></figure><h3><strong>Teaching AI (to creatives)</strong></h3><p>That is the central question of the KITeGG project: how can the topic of AI be communicated to creative people, especially to design and art students? What knowledge and skills are important?</p><p>At what level of complexity should we operate? From technical basics (we remember the Python function <em>reshape</em>) to high-level concepts and ethical issues like bias, privacy or IP rights: what depth is realistic to achieve?</p><p>How do we develop an intuition for AI use cases? How do we convey the ability to consciously assess the benefits and risks and to decide when AI-based technologies should not be used?</p><p>To tackle these topics, we have prepared two panels: “AI Industry” and “KITeGG — Learnings from 1 year of AI education at design schools”.</p><p>And not to forget the workshops from earlier this week, the results of which will be presented on Friday.</p><p>I am looking forward to the upcoming 2.5 days of the conference with you!</p><ul><li><strong>Reshape</strong> conference website:<a href="https://reshapeforum.hfg-gmuend.de/"> https://reshapeforum.hfg-gmuend.de/</a></li><li>More about <strong>KITeGG</strong>, a German research project about the integration of AI into art + design education:<a href="https://gestaltung.ai/#/"> https://gestaltung.ai</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=cea7c9f70c81" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Opening speech at “Reshape symposium — forum for AI in Art and Design” (KITeGG, May 2023)]]></title>
            <link>https://alexasteinbruck.medium.com/opening-speech-at-reshape-symposium-forum-for-ai-in-art-and-design-kitegg-may-2023-36a4e73e47c9?source=rss-8e980e537c2b------2</link>
            <guid isPermaLink="false">https://medium.com/p/36a4e73e47c9</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[technology-trends]]></category>
            <category><![CDATA[generative-art]]></category>
            <category><![CDATA[llm]]></category>
            <category><![CDATA[design]]></category>
            <dc:creator><![CDATA[Alexa Steinbrück]]></dc:creator>
            <pubDate>Mon, 05 Jun 2023 12:30:02 GMT</pubDate>
            <atom:updated>2023-06-05T14:03:10.518Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*HXq-FR0y_6hnDBm4HaJfLQ.jpeg" /><figcaption>Tag 2 des Reshape Forums an der Hochschule für Gestaltung Schwäbisch Gmünd (photo: eignerframes)</figcaption></figure><h3>Eröffnungsrede von “Reshape forum for Artificial Intelligence in Art and Design” (Mai 2023)</h3><p><em>Im Frühjahr 2023 hatte ich die Gelegenheit eine Konferenz an der Hochschule für Gestaltung Schwäbisch Gmünd zu kuratieren.</em></p><p><em>Vom 10.-12. Mai 2023 fand dort die dritte KITeGG summer school statt. Unter dem Titel “reshape — forum for AI in Art and Design” luden wir zahlreiche internationale Expert*innen ein um einen Überblick über die zahlreichen Arten zu erhalten, in denen KI für Designer*innen relevant ist.</em></p><p>Der folgende Text ist eine Rede, die ich anlässlich der Eröffnung der Konferenz am 10. Mai in der Aula der HfG hielt!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*NG-HeX17A8iVjzYn1GNGAw.jpeg" /></figure><h3><strong>reshape</strong> <strong>(1)</strong></h3><p>reshape ist der Name einer Funktion in der Programmiersprache Python, genauer gesagt in der Programmierbibliothek <strong>numpy</strong>, die bei so gut wie allen KI-Programmen Einsatz findet.</p><p>Was man mit <strong>numpy</strong> machen kann, ist <strong>Zahlenakrobatik: </strong>Es enthält Funktionen für das Arbeiten mit Vektoren, also Listen von Zahlen und Matritzen. Im Machine Learning wird alles in Zahlen abgebildet, Wörter genauso wie Bilder, Klänge und Bewegungen. Und diese Vektoren sind das, was in ein neuronales Netz hineingeht. Die Funktion reshape kann die Form dieser Vektoren verändern, z.B. aus einem 1-dim Vector, einen 2-dim Vector machen oder einen 3-dim Vector.</p><p>Die Funktion <strong>reshape</strong> <em>“ändert die Form, ohne den Inhalt zu ändern”</em> heißt es in der Dokumentation von numpy.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/533/1*rFgIgmcV4D_N610RvQSY5A.png" /><figcaption>Source: <a href="https://www.w3resource.com/numpy/manipulation/reshape.php">https://www.w3resource.com/numpy/manipulation/reshape.php</a></figcaption></figure><h3><strong>reshape (2)</strong></h3><p>Die Konferenz auf der Sie gerade sind heißt auch reshape — Unser Slogan <em>“reshape the landscape of art and design”</em> — reiht sich umstandslos ein in die <strong>Rhetorik</strong> von der wir in letzter Zeit häufiger umgeben sind wenn es um KI geht: “disruption”, “revolutionize”, “blowing up”, “turn upside down”, “massive news” — <strong>Wir erleben gerade einen neuen KI-Hype.</strong></p><p>Es ist ein wenig verwunderlich und fühlt sich an wie ein Deja Vu, denn die letzte Welle des KI-Hypes/KI-Sommers ist noch gar nicht so lang her, ca. 
2014, with the breakthrough of the Deep Learning technique.</p><p>At the center of this new AI summer is <strong>“Generative AI”</strong>: systems like Stable Diffusion or ChatGPT that are capable of generating realistic artefacts such as images or text.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*QO15VRL2aG-1QKsQlR8uMA.gif" /><figcaption>Karl Sims: Evolved Virtual Creatures</figcaption></figure><h3>Generative AI is nothing new</h3><p>But “Generative AI” has also been around for a long time, and artists and designers have always worked with it (albeit with changing technical foundations):</p><ul><li><strong>1980s</strong>: Harold Cohen + his program “AARON”, the first time that AI technologies were introduced into the world of computer art. The program could understand and generate colors and shapes. Here we see Cohen “coloring in” by hand</li><li><strong>1990s</strong>: <strong>Karl Sims</strong> — Evolved Virtual Creatures (using evolutionary algorithms)</li><li><strong>2015, Deep Learning</strong> early adopters: Addie Wagenknecht, Alex Champandard, Alex Mordvintsev, Alexander Reben, Allison Parrish, Anna Ridler, Gene Kogan, Georgia Ward Dyer, Golan Levin, Hannah Davis, Helena Sarin, Jake Elwes, Jenna Sutela, Jennifer Walshe, Joel Simon, JT Nimoy, Kyle Mcdonald, Lauren McCarthy, Luba Elliott, Mario Klingemann, Mike Tyka, Mimi Onuoha, Parag Mital, Pindar Van Arman, Refik Anadol, Robbie Barrat, Ross Goodwin, Sam Lavigne, Samim Winiger, Scott Eaton, Sofia Crespo, Sougwen Chung, Stephanie Dinkins, Tega Brain, Terence Broad and Tom White.</li></ul><p>The artist and researcher Memo Akten was one of these “early adopters”. His video work “Learning to see” (2017) is a good example: here you can see Akten feeding his video input into a neural network in real time, which then interprets this image data. A kind of semantic style transfer.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F260612034%3Fh%3D1cf903469e%26app_id%3D122963&amp;dntp=1&amp;display_name=Vimeo&amp;url=https%3A%2F%2Fplayer.vimeo.com%2Fvideo%2F260612034&amp;image=https%3A%2F%2Fi.vimeocdn.com%2Fvideo%2F1604699068-89cc1952bce06668bac2f0dde8dab2c39eda16184d06fb4ca2d3328742b7fcb4-d_1280&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=vimeo" width="1920" height="1080" frameborder="0" scrolling="no"><a href="https://medium.com/media/ccd5b7dcbe7fd24058758b030524c307/href">https://medium.com/media/ccd5b7dcbe7fd24058758b030524c307/href</a></iframe><p>The work of artists has also always been an <strong>experiment with the shortcomings</strong>, the gaps and the glitches of these technologies, which often came directly from academic AI research and were “misappropriated” by them.</p><p>They often had to have a <strong>deep technical understanding</strong> of these systems to be able to bend and twist them in this way. Even when algorithms were available as open source, designers still had to nerd their way quite deeply into them.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TnNQBbeKYtcjfdnFX28OqQ.jpeg" /><figcaption>Transform a low-fidelity website sketch into functional HTML (possible with GPT-4)</figcaption></figure><h3>A revolution of accessibility</h3><p>What the <strong>current revolution</strong> is all about is a revolution of <strong>accessibility and availability</strong>.
The technologies have opened up to a broad public.</p><p>They have been given a new <strong>interface</strong> that is usable by everyone, and this interface is called <strong>natural language</strong>.</p><p>This AI hype now even feels almost justified! Concrete consequences are already noticeable in various areas. It is an enormous surge of innovation at unbelievable speed.</p><p>A few innovations from the last two months (March–April 2023):</p><ul><li>VQGAN-CLIP: Here is a comparison of the quality of generated images one year ago — and the state of the art in April 2023, called “Stable Diffusion XL”</li><li>NVIDIA video generation: <a href="https://www.youtube.com/watch?v=3A3OuTdsPEk">https://www.youtube.com/watch?v=3A3OuTdsPEk</a> Here a resolution of 2000x1000 pixels is achievable</li><li>GPT-4 has been released. This language model is multimodal, i.e. it can “understand” images (for example: sketch of a website → working website code, or even derive complete recipes from photos of meals)</li><li>The Llama language model from Meta (comparable to GPT-3) runs on a laptop CPU, a smartphone and even on a Raspberry Pi (<a href="https://arstechnica.com/information-technology/2023/03/you-can-now-run-a-gpt-3-level-ai-model-on-your-laptop-phone-and-raspberry-pi/">Link</a>)</li></ul><p>At the last <strong>KITeGG summer school (November 2022, </strong>HfG Offenbach<strong>)</strong>, <strong>Stable Diffusion</strong> (released in August 2022) was the technology and the revolution everyone was talking about: finally, anyone could generate arbitrary images simply from a text description.</p><p>Three months later, in November 2022, <strong>ChatGPT</strong> came out and has since turned everything upside down. With ChatGPT we have experienced the “Stable Diffusion moment”, only for <strong>Large Language Models (LLMs)</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mGLLM67BfADaEHM5Y1qkjA.png" /><figcaption>Ok wow, GPT-3 competitor Llama from Meta even runs on a Raspberry Pi</figcaption></figure><h3>Large Language Models (LLMs)</h3><p>Large Language Models are neural networks with several billion parameters that have been trained on large amounts of text. LLMs find patterns in these large amounts of text and learn the statistical probability with which one word follows another.</p><p>This technique sounds at first as mundane as the autocomplete function on our phones, and yet an astonishing complexity emerges from it:</p><p>LLMs can summarize texts, translate, generate essays, write scientific papers, write working code, generate ideas, and you yourself will probably be able to tell your very own story of how you used ChatGPT and were surprised!</p><p>Some even go so far as to say that these systems are capable of <strong>“reasoning”</strong> (i.e. drawing logical conclusions) — a holy grail of AI research.</p><p>These surprising capabilities have led to a number of people now claiming that we have achieved <strong>“AGI” (Artificial General Intelligence)</strong> or are at least close to it. And from this point on, the discourse is hard to distinguish from science fiction.
People talk about AI as if it were a being with its own will and its own agenda.</p><p>You may have heard of the open letter “Pause Giant AI Experiments”, signed by prominent people. It demands that the large AI labs pause their research so that society and regulators can keep up.</p><p>This letter has drawn a lot of criticism; Emily Bender, a well-known AI researcher in the field of Natural Language Processing, wrote on Twitter that it was dripping with AI hype and myths. It also stems from an ideology called “Longtermism”, whose adherents have a very particular agenda for the future of humanity.</p><p>If we suspect a real counterpart, a thinking being, in these AI systems of automated pattern recognition, then perhaps it is like animals looking into a mirror. The journalist James Vincent calls this the <strong>“AI mirror test”</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*5XHOm7yDvpKuRYpDTfHY6A.gif" /><figcaption>GIF: <a href="https://youtu.be/tz0avWZoqjg">Xavier Hubert-Brierre</a> via <a href="https://tenor.com/view/funny-animals-monkey-gorilla-mirror-fight-me-gif-7936997">Tenor</a></figcaption></figure><h3>Do we pass the “AI mirror test”?</h3><p>The mirror test is used in <strong>behavioral psychology</strong> to find out whether a creature/animal has self-awareness. There are a few variations of this test, but the core question is: does the creature recognize itself in the mirror, or does it think it is another creature?</p><p>We as humanity are collectively facing a mirror test right now, and the mirror is called Large Language Models.</p><p><em>“The reflection is humanity’s wealth of language and writing, which has been strained into these models and is now reflected back to us. We’re convinced these tools might be the superintelligent machines from our stories because, in part, they’re trained on those same tales. Knowing this, we should be able to recognize ourselves in our new machine mirrors, but instead, it seems like more than a few people are convinced they’ve spotted another form of life.”</em></p><p>If you compare this kind of discourse about AI with the (technical) reshape concept from the beginning, they are worlds apart! And between these two extreme poles now also stand designers and artists.</p><p>I think it is a pretty <strong>turbulent time</strong> for creatives right now.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qM3Gpyl0rb3KK71Ibwq1Dg.jpeg" /></figure><h3>How does all this feel for designers?</h3><p>On the one hand, we would like to see <strong>AI as a tool</strong>: image generators are handy tools for visualizing ideas or creating renderings. Language models can help interaction designers code so that they can build prototypes faster, etc. And new tools/integrations/improvements are popping up every day. The speed of innovation is immense.</p><p>On the other hand, we are repeatedly confronted in the media with the <strong>narrative of a general intelligence, a precursor to superintelligence</strong>, that solves complex design tasks with more effectiveness and creativity than one ever could oneself.
Being “replaced by AI” has recently become a concern of creative professions such as designers and software developers.</p><p>It really is a <strong>paradox</strong>: on the one hand, the technology promises <strong>“superpowers for creatives”</strong>; on the other hand, those very creatives fear for their relevance and future.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GOXTLc6BGlY8LkwOJTzwIg.jpeg" /></figure><h3>What does the Reshape Symposium want?</h3><p>With this conference we want to take a more differentiated look at the various points of contact between design and AI technology, away from the newly strengthened AI myths and black-and-white views of AI replacement.</p><p>We want to look at AI systems and their technical properties in a differentiated way, instead of speaking in general terms of “an AI”.</p><p>We ask ourselves: where does the responsibility of designers lie, what is their role, and how can they help shape the course of AI development?</p><p>The conference will look at these topics along three axes:</p><ol><li>Designing for AI (Designing AI systems)</li><li>AI for Design (Creative AI)</li><li>Teaching AI (to creatives)</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8TgG1nxmHycRf6oyRIF3rg.jpeg" /></figure><h3>Designing for AI (Designing AI systems)</h3><p>This is about the design of AI-based interfaces and products, for example systems that work via gesture recognition or voice input, or the design of generative interfaces and integrations themselves.</p><p>What are the challenges and opportunities here? What is the broader social context of these technologies? What do designers need to know in order to design these systems responsibly?</p><p>For these questions we have prepared a series of talks:</p><ul><li><strong>Nadia Piet</strong> — First thing tomorrow morning, Nadia Piet will talk about practices for designing the user experience of AI-based systems and interfaces</li><li><strong>Catherine Breslin</strong> — Next comes a talk by Catherine Breslin on conversational design, on how machines and humans converse, and on how LLMs will change the future of voice assistants</li><li><strong>Ploipailin Flynn</strong> — Then Ploipailin Flynn will address the dark sides of pattern recognition, how social patterns like racism are notoriously reproduced by AI-based systems, and design strategies for dealing with this</li><li><strong>Emily Saltz</strong> talks about <strong>synthetic media</strong>, i.e. AI-generated artefacts that will increasingly become part of our everyday lives, and what that means for <strong>product design</strong></li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*VhHjvv6Sbyd3bUFT3exgXQ.jpeg" /></figure><h3>AI for Design (Creative AI)</h3><p>How can AI technologies be integrated as tools into the creative toolbox of artists and designers? How can AI serve idea generation and foster human creativity instead of narrowing and flattening it? How do these tools “fit in the palm of your hand”?</p><p>Here we look forward to a talk by design studio <strong>oio</strong> on Thursday afternoon, who work out of a chic tiny house in the middle of London.
In their workflows they rely on “post-human collaboration” and develop products and tools for a “less boring future”.</p><p>In addition, <strong>Tom White</strong> — one of those “early adopters” of AI technologies for creative use — will tell us about his experiments with machine vision and his latest projects.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*svRcKzpD6VK55Y0lv_4MXg.jpeg" /></figure><h3>Teaching AI (to creatives)</h3><p>This is the guiding question of the KITeGG project: how can the topic of AI be conveyed to creatives, especially to design and art students? What knowledge and which skills are important?</p><p>At what level of complexity should one operate? From the technical basics (we remember the Python function reshape) to high-level concepts and ethical questions such as bias, privacy or IP rights: what depth is realistic to achieve?</p><p>How can one develop a feeling for AI use cases? How do you convey the ability to consciously assess the benefits and risks and to decide when AI-based technologies should not be used?</p><p>For this we have prepared two panels: “AI Industry” and “KITeGG — Learnings from 1 year of AI education at design schools”. Not to forget the workshops from the beginning of the week, whose results will be presented on Friday!</p><p>I am looking forward to the coming 2.5 days of the conference with you! To great conversations and exchange between different disciplines!</p><p>Reshape conference website: <a href="https://reshapeforum.hfg-gmuend.de/">https://reshapeforum.hfg-gmuend.de/</a></p><p>More about KITeGG: <a href="https://gestaltung.ai/#/">https://gestaltung.ai</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=36a4e73e47c9" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Learnings after 2.5 years of running an AI lab at an Art University (XLab)]]></title>
            <link>https://alexasteinbruck.medium.com/learnings-after-2-5-years-of-running-an-ai-lab-at-an-art-university-xlab-c4fbd5179ed8?source=rss-8e980e537c2b------2</link>
            <guid isPermaLink="false">https://medium.com/p/c4fbd5179ed8</guid>
            <category><![CDATA[creativeai]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[digital-art]]></category>
            <category><![CDATA[design]]></category>
            <category><![CDATA[education]]></category>
            <dc:creator><![CDATA[Alexa Steinbrück]]></dc:creator>
            <pubDate>Mon, 16 Jan 2023 11:00:21 GMT</pubDate>
            <atom:updated>2023-02-08T13:48:55.562Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zxoLO27N9zbUFFTexmYlgg.png" /><figcaption>The XLab at University of art and design Burg Giebichenstein started in spring 2020 (together with Corona). Simon Maris (left) and Alexa Steinbrück (right)</figcaption></figure><ol><li>🎸 It’s a wild time for Generative AI</li><li>🌄 A new era of (creative) AI has begun</li><li>🗣️ Language input has democratized AI</li><li>🔲 Foundation Models are here to stay</li><li>🖼️ “AI art” is now mainstream</li><li>😎 AI myths/bullshit &amp; how to stay cool</li><li>🧑‍🏫 What to teach? Expertise/Skillset/Literacy</li><li>🧠 Building up ML Intuition</li><li>🦮 Help for self-help</li><li>🤓 Technical soft skills are equally important</li><li>⚔️ Teaching a critical attitude towards AI</li><li>🕹️ The evolution of no-code tools</li><li>🥨 AI education needs constant and diverse integration into teaching</li><li>👩‍🎓 There will always be these 3 types of students</li><li>📚 The problem with knowledge bases</li><li>🌎 The institutional landscape for Creative AI</li><li>Wrap up</li></ol><p>In spring 2020 the University of Art and Design <a href="https://www.burg-halle.de/">Burg Giebichenstein</a> launched the <strong>XLab</strong> — the first dedicated laboratory for Artificial Intelligence &amp; Robotics at an Art University in Germany. My focus was on (creative) AI/ML, while my colleague<a href="https://simonmaris.com/"> Simon Maris</a> was responsible for everything related to (creative) robotics.</p><p>What is a “lab”? When one thinks of a lab, one usually thinks of <strong>research</strong>. This report, however, will be about <strong>teaching</strong>, namely, what it means to teach Artificial Intelligence at a university that trains future artists and designers.</p><p>Before we start — Is there a need to explain<strong> why AI should be taught</strong> at an art school in the first place? The question has many different answers to it, check out my article <a href="https://alexasteinbruck.medium.com/ai-at-the-art-school-critical-and-creative-ai-research-at-the-xlab-4516f053484f">here</a>!</p><p>At XLab we developed different formats, from a <a href="https://open.spotify.com/show/3m3R68eu4hwM8xI3VDUCGJ">podcas</a>t to student lab “residencies”. In the summer of 2022, we exhibited some of our research results at <a href="https://www.burg-halle.de/burglabs-present/">Futurium</a> in Berlin.</p><p><strong><em>UPDATE 11/2022: I am working at the AI+D Lab at Hochschule für Gestaltung Schwäbisch-Gmünd, as part of </em></strong><a href="https://gestaltung.ai"><strong><em>KITeGG</em></strong></a><strong><em>, a joint project of five German universities on the integration of AI in design teaching.</em></strong></p><h3>🎸1. It’s a wild time for Generative AI</h3><p>The (research) field of Generative machine learning has seen a crazy boost in the last 2,5 years. The synthetic creation of text, sound and images is getting more realistic and accessible at an unprecedented speed.</p><ul><li>You can now create photo-realistic images based on your wildest imagination — right on your laptop. (Stable Diffusion)</li><li>You can tell your editor to write code for you instead of typing it yourself, e.g. 
to build a website (CoPilot)</li><li>You can even create realistic video sequences without ever using a camera lens</li><li>Have you been to a <a href="https://twitter.com/alexabruck/status/1587108608482869251">Prompt Battle</a> yet?</li></ul><p>What does this mean for fields like art and design, which are mostly concerned with creating (generating) things?</p><h3>🌄 2. A new era of (creative) AI has begun</h3><p>During the last 2 years, some things have changed fundamentally. It’s a development that started with GPT-3 and culminated with DALLE-2 and Stable Diffusion.</p><ol><li>Language input has democratized AI</li><li>Foundation models</li><li>“AI art” is now mainstream</li></ol><h3>🗣️ 3. Language input has democratized AI</h3><p>The text-to-x paradigm enables a new democratization of AI tooling — the capability of writing in a natural language (mostly English) is all that’s needed (plus, of course, access to a computer and the internet).</p><h3>🔲 4. Foundation Models are here to stay</h3><p>The shift towards <a href="https://arxiv.org/pdf/2108.07258.pdf">foundation models</a> is a game changer. They are big models based on gigantic datasets that no single individual would be able to train on their own. These foundation models will not go away; they will only become more powerful.</p><p>What does this mean for us as creators? In the long term, it might shift our agency towards:</p><ul><li>Finetuning these models</li><li>Combining different models</li><li>Building integrations, interfaces and applications on top of these models</li></ul><h3>🖼️ 5. “AI art” is now mainstream</h3><p>The notion that a computer can generate art has become commonplace. 2022 was the year that AI art* became mainstream — mainly through the breakthrough of text2image tools like DALLE and Stable Diffusion.</p><p>Is this a bad thing? No, but it might require a redefinition of the identity of a lab that calls itself an “AI lab”. Or does it?</p><p>Many people don’t know that AI art existed long before 2022! There is a fascinating history of AI art, and there are entire research fields such as <a href="https://computationalcreativity.net/home/conferences/">Computational Creativity</a> that have existed for decades! One of my goals is to raise awareness of this history and the foundational work that was done before the hype!</p><p>(*) At least the <em>generative side</em> of AI art. There are of course many creative applications that leverage other ML capabilities, like classification, e.g. with sensor data.</p><h3>😎 6. AI myths/bullshit &amp; how to stay cool</h3><p>AI myths are still prevalent — even among the smart and critical species of art students! While the demystification of AI should be a core part of AI education, it’s also important to be patient.</p><p>At the opening presentation of the XLab in 2020, one student in the audience raised his hand and asked: <em>“And where can I now <strong>borrow the AI</strong>?”</em>,
implying that there was an embodied entity sitting in some corner of our lab.</p><p>I admit — it used to put me in a rage every time I heard the phrase <em>“an AI”</em> or <em>“the AI”</em>, because the wording implied that there really was something like “an intelligence” — <a href="https://www.aimyths.org/ai-has-agency">a sentient being that acts according to its own goals and intentions</a>.</p><p>This isn’t just a grammar quirk, but <strong>a categorical error</strong>: It mistakes <strong>narrow AI</strong> (what we have) for <strong>AGI</strong> or <strong>strong AI</strong> (a sci-fi dream). As a consequence, this misconception influences what we expect from AI technology and how we interact with it — also creatively.</p><p>But it’s also important to find the right moments for demystification. Hands-on interaction with AI technology is probably the best way of demystifying it:</p><blockquote>“Oh really? GPT can not even count properly??!”</blockquote><h3>🧑‍🏫 7. What to teach? Expertise/Skillset/Literacy</h3><p>It can&#39;t be the goal to replicate a computer science curriculum at an art university. That’s simply not feasible and also not desirable! So what are the skills and knowledge that need to be taught? What should an art and design student know about AI?</p><p>The fact is: machine learning is <strong>orders of magnitude more complex</strong> than, say, frontend development. Is that even a fair comparison? I think yes, because both are <strong>sets of technologies</strong> instead of just one technology.</p><p>Doing machine learning “from scratch” requires not only some proficiency in the basic software-dev stack (Python, command line, git), but also machine learning skills and some math and statistics (which is what all ML essentially is).</p><p><strong>But what does “from scratch” even mean?</strong> It certainly means something different today than it did 5 years ago. There are always abstractions, in various degrees.</p><p>In the same way as there are drag-and-drop no-code tools to build websites, there are drag-and-drop no-code tools for machine learning. But abstractions come with the drawback of limited freedom/customization.</p><p>In my teaching, I put an emphasis on explaining <strong>general concepts</strong> like supervised learning (lots of <strong>labelled</strong> data, correlations, optimization), instead of going deep into implementation details (gradient descent, etc.).</p><h3>🧠 8. Building up ML Intuition</h3><p>It&#39;s easy to teach the use of AI tools, but it&#39;s quite hard to teach and train intuition for the general capabilities of ML: when it is useful and when it is not.</p><p>I always cite Rebecca Fiebrink at this point. She asks:</p><blockquote>“When and why is it creatively useful to find patterns, make predictions and generate new data?”</blockquote><p>(Rebecca Fiebrink at her 2018 Eyeo Talk, <a href="https://vimeo.com/287094397">Video</a>)</p><p>Students often came to our lab who simply wanted to somehow integrate AI into their projects. Like a vitamin kick. Or, even worse, with the hope that “the AI” will automatically spit out a whole project, like the design for a website, in an end-to-end manner.</p><h3>🦮 9. Help for self-help</h3><p>Often it is helpful to just mention and explain certain terminology; the students can then do an internet search themselves:</p><p><em>Sentiment Analysis, Facial Landmarks, Latent Space Walk</em></p><p>Terminology is key to helping students to navigate the field independently.
And being able to navigate independently is key, because teachers and a lab have limited capacities for individual support.</p><h3>🤓 10. Technical soft skills are equally important</h3><p>Abstracting your use case, using the right terminology to search for tutorials and code for a similar problem, and adapting that code to your own use case.</p><p>Debugging skills, strategic googling, frustration tolerance. Desensitization towards the red colour of error messages.</p><p>These are “soft” skills (or should we call them “essential” skills?) that are equally important.</p><h3>⚔️ 11. Teaching a critical attitude towards AI</h3><p>My goal was and still is to train students to participate in the social discourse about AI as critical thinkers.</p><p>This includes raising awareness about the real problems with AI: Worker rights instead of robot rights. Bias issues and the <a href="https://virginia-eubanks.com/automating-inequality/">automation of inequality</a>.</p><p>For artists and designers this is equally important!</p><h3>🕹️ 12. The evolution of no-code tools</h3><p>In early 2020, <a href="https://help.runwayml.com/hc/en-us/categories/1500001962941-ML-Lab">RunwayML</a> was our go-to starting point. It was the no-code AI tool that was easiest to use. Besides being like an “app store” for pre-trained ML models, RunwayML also offered the possibility of training custom models on your own dataset. That turned out to be a good instrument and platform for hands-on teaching. However, students required more flexibility.</p><p><strong>Google Colab</strong> was a platform that we also used in teaching. Students could run open-source repositories they found on GitHub, without the need for an (often complicated) local setup. Computing power (GPU) could be rented on demand. Drawback: Google technology.</p><p>There is a tradeoff between high-level GUI tools (RunwayML, Teachable Machine) and low-level toolchains (individual Python scripts, Colab notebooks, etc.) in terms of flexibility.</p><p>Now, text2x tools (like GPT and DALLE) are the newest generation of no-code AI tools!</p><h3>🥨 13. AI education needs constant and diverse integration into teaching</h3><p>It isn’t ideal to teach AI as an isolated subject. Instead, it needs constant integration into teaching; it needs consolidation. It should be taught as “salt” for different applications, “rather than its own food group” (as mimi onuoha and mother cyborg put it in their <a href="https://mimionuoha.com/a-peoples-guide-to-ai">“People’s guide to AI”</a>). For example:</p><ul><li>ML for film/video: video editing, semantic footage management, etc.</li><li>ML for drawing: assisted drawing, etc.</li><li>ML for interface design: making sense of sensory data, etc.</li><li>and many more…</li></ul><h3>👩‍🎓 14. There will always be these 3 types of students</h3><ol><li>Those with no knowledge of programming and no ambition to learn it → use no-code tools</li><li>Those with some experience in programming but missing ML foundations → can work at the code level up to a certain degree</li><li>A few who are willing to tenaciously learn what is needed to build custom things</li></ol><p>There will always be individual students vehemently committed to nerding out on something, and who go very far in that respect.</p><h3>📚 15. The problem with knowledge bases</h3><p>Knowledge bases can be a useful building block of AI education. However, they require permanent maintenance because of the speed of development in the field.
An up-to-date Google search usually brings more useful results than outdated knowledge. Persisted knowledge becomes stale.</p><h3>🌎 16. The institutional landscape for Creative AI (Institutions and Ecosystem in Germany and Europe)</h3><p>When we started the lab in 2020, there was only a small number of German institutions with an interest in creative AI, mostly as part of a broader critical AI discourse:</p><ul><li>KIM (Karlsruhe University of Arts and Design)</li><li>Schaufler Lab (Technical University Dresden)</li></ul><p>In the UK, there were</p><ul><li>the Creative AI Lab (a collaboration of Serpentine R&amp;D and King’s College London)</li><li>UAL Creative Computing Institute (with Creative AI education pioneer Rebecca Fiebrink)</li></ul><p>In the US, there were many pioneering institutions and communities that acknowledged the potential of ML for art and design very early on: the School of Poetic Computation, the Frank Ratchye Studio for Creative Inquiry, the Processing community, …</p><p>Not to forget, the online community <a href="https://www.aixdesign.co/">AIxDesign</a> was already thriving, and still is! Check them out :-)</p><p>In December 2021, the German research project <a href="https://gestaltung.ai">KITeGG</a> (“Making AI Tangible and Comprehensible: Connecting Technology and Society Through Design”) was launched as a collaboration of 5 German art universities. One of its outstanding goals is to develop its own computing infrastructure that frees users from being reliant on Google’s cloud infrastructure.</p><h3>17. Wrap up</h3><p>I am far from having all the answers to the question: How do you teach AI at a university of the arts? And the answers have to be found anew, in parallel with the field’s constant development.</p><p>We need to talk about what is considered <strong>basic or essential AI skills</strong> in art and design education:</p><h4><strong>How important are technical implementation skills really?</strong></h4><ul><li>Coding skills and proficiency in coding ecosystems</li><li>Deep learning knowledge, including math and statistics</li></ul><h4><strong>How much should we invest in “discourse skills”?</strong></h4><ul><li>AI ethics (bias, intellectual property, power structures, worker rights, environmental issues)</li><li>Some immunity against AI bullshit in the media</li><li>An intuition for the limits and drawbacks of AI solutions, and when to say ‘no’</li></ul><p>Fortunately, there is <a href="https://gestaltung.ai">KITeGG</a>, where implicit and explicit research is being done on these questions! And I am excited to be a part of that!</p><h3>Appendix</h3><h4>XLab</h4><p>The XLab was part of the EFRE-funded project “BurgLabs”, together with its two sister labs for sustainability (SustainLab) and biotechnology (BioLab), at the University of Art and Design Burg Giebichenstein in Halle, Germany. It started in May 2020.</p><h4>AI+D Lab</h4><p>Lab for AI and Design at the <a href="https://www.hfg-gmuend.de/en/">University of Design Schwäbisch-Gmünd</a>, within the framework of the <a href="https://gestaltung.ai">KITeGG</a> research project.</p><h4>KITeGG</h4><p>A four-year BMBF-funded research project on design and artificial intelligence that started in December 2021. Behind <a href="https://gestaltung.ai">KITeGG</a> are 5 German (art) universities.
Among other things, their research goal is to develop teaching concepts around “Creative Machine Learning”.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Explaining the code of the popular text-to-image algorithm (VQGAN+CLIP in PyTorch)]]></title>
            <link>https://alexasteinbruck.medium.com/explaining-the-code-of-the-popular-text-to-image-algorithm-vqgan-clip-a0c48697a7ff?source=rss-8e980e537c2b------2</link>
            <guid isPermaLink="false">https://medium.com/p/a0c48697a7ff</guid>
            <category><![CDATA[vqganclip]]></category>
            <category><![CDATA[creativeai]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[ai-art]]></category>
            <dc:creator><![CDATA[Alexa Steinbrück]]></dc:creator>
            <pubDate>Mon, 11 Apr 2022 20:24:34 GMT</pubDate>
            <atom:updated>2022-06-08T13:46:54.276Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EXk0kttK8Om3zaRahumVGg.jpeg" /></figure><p>This article explains VQGAN+CLIP, a specific text-to-image architecture.</p><p>You can find a general high-level introduction to VQGAN+CLIP in my previous blog post <a href="https://alexasteinbruck.medium.com/vqgan-clip-how-does-it-work-210a5dca5e52">“VQGAN+CLIP — How does it work?”</a></p><p>Here I am looking at the specific VQGAN+CLIP implementation written by artist/programmer Katherine Crowson (aka @RiversHaveWings) which went viral in the summer of 2021.</p><p>📍 To be exact, I am looking at <a href="https://colab.research.google.com/drive/1_4Jl0a7WIJeqy5LTjPJfZOwMZopG5C-W">this Google Colab notebook</a>. (Be aware that there might be newer versions of this notebook with more cool optimizations.)</p><p>↕️ <strong>Tip:</strong> I suggest turning on <strong>line numbering</strong> in Google Colab: <em>Tools → Settings → Editor → Show line numbers</em></p><p>🤓 <strong>Extra:</strong> There is a little <strong>dictionary of Machine Learning terms</strong> at the bottom of this article: general terms and more specific terms from this implementation. Because I love dictionaries.</p><h3>General facts about this notebook</h3><ul><li>It uses PyTorch, a popular machine learning framework written in Python</li><li>It connects two existing (open-source, pretrained) models: <a href="https://github.com/openai/CLIP">CLIP</a> (OpenAI) and <a href="https://github.com/CompVis/taming-transformers">VQGAN</a> (Esser et al. from Heidelberg University)</li><li>It is structured in the following sections/cells:<br>— Setup, Installing libraries<br> — Selection of models to download<br> — Loading libraries and definitions<br> — Implementation tools<br> — Execution</li></ul><h3>A high-level overview of the algorithm</h3><p>From my older blog post “VQGAN+CLIP: How does it work?”: <em>“CLIP guides VQGAN towards an image that is the best match to a given text.”</em></p><p>Because CLIP is able to represent both text and images in the same feature space, we can easily calculate the distance between these two.</p><p>Here’s a simple visualization of the algorithm. The cycle represents one optimization iteration.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IOOGa1YmHUo0P4ntmzmUjw.png" /><figcaption>A high level overview of the VQGAN+CLIP architecture (image licenced under <a href="https://creativecommons.org/licenses/by/4.0/">CC-BY 4.0</a>)</figcaption></figure><h3>A core concept: Inference-by-optimization</h3><p>In Machine Learning there is this core distinction between training and inference:</p><ul><li><strong>Training</strong> is the optimization process of finding the right weights of your model in order to minimize a loss function.</li><li><strong>Inference</strong> is the process of using a pre-trained model to make predictions</li></ul><p>Training is most often the resource-intensive part requiring a GPU for effective computation. Inference is, for most models, a rather light operation, it could run on a CPU and sometimes even on edge devices (such as a mobile phone or a Raspberry Pi).</p><p>The VQGAN-CLIP architecture kind of blurs the distinction of training-vs-inference, because when we “run” VQGAN-CLIP we’re kind of doing inference, but we’re also optimizing.</p><p>This special case of inference has been called <em>“inference-by-optimization”</em>. 
<h3>Variable naming choices and what they refer to</h3><ul><li>Perceptor → CLIP model</li><li>Model (also sometimes named the “Generator”) → VQGAN model</li><li>Prompt → the model we’re going to train when we run the notebook</li><li>z → A vector as input for VQGAN for synthesizing an image</li><li>iii → A batch of CLIP-encoded image cutouts</li></ul><h3>The notebook step by step</h3><h4>STEP 0. Downloading the pre-trained models (CLIP &amp; VQGAN)</h4><p>First, the CLIP and VQGAN repositories are git-cloned (Cell <em>“Setup, Installing libraries”</em>, lines 6 and 9).</p><p>Then we also download a pre-trained VQGAN model (Cell <em>“Selection of models to download”</em>): For every model there is a .yaml file containing basic model parameters and a .ckpt file that contains the weights of the pre-trained model (called a <em>checkpoint</em>).</p><p>The pretrained CLIP model download is a bit harder to spot: It happens in the clip.load() function (Cell “Execution”, line 16), which is documented in <a href="https://github.com/openai/CLIP#cliploadname-device-jitfalse">CLIP’s GitHub repository</a> as follows: <em>“Returns the model (…). It will download the model as necessary. The name argument can also be a path to a local checkpoint.”</em></p><h4>STEP 1. Generating the initial z vector (Cell “Execution”, lines 29–36)</h4><p>We generate an initial VQGAN-encoded image vector. If we’ve provided a starting image, this will be the VQGAN embedding (model.encode) of that image. If the user hasn’t provided a starting image, it will be a tensor filled with random integers (torch.randint), aka a random noise vector. This VQGAN-encoded image vector is referred to as z.</p><h4>STEP 2. Initializing the optimizer with z (Cell “Execution”, line 39)</h4><p>opt = optim.Adam([z], lr=args.step_size)<br>The first argument in the constructor contains the parameters that you wish to optimize; in our case it’s z, aka the image vector with which we start the optimization process.</p><h4>STEP 3. Instantiating the Prompt models for every text prompt (Cell “Execution”, lines 46–49)</h4><p>For every text prompt provided by the user:<br> — we encode it with CLIP<br> — and with this encoding, we create our own Prompt model. These models are what we’re going to train when we run the notebook<br> — we add this model to an array named pMs (I guess this stands for <em>“prompt models”</em>)</p><h4><strong>STEP 4. The actual optimization loop (Cell “Execution”, lines 134–144)</strong></h4><p>This simple loop does nothing more than call the train() function as many times as defined in max_iterations (as set by you, the user, in the cell named <em>“Parameters”</em>). Note: if it’s set to -1 (the default) the loop will go on forever (until you stop the cell manually or an error occurs).</p>
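<p>Put together, STEPs 1–4 amount to roughly the following scaffold. This is a paraphrase, not the notebook’s code: the dummy loss stands in for the losses returned by ascend_txt(), and the step_size / max_iterations values stand in for what the “Parameters” cell provides.</p><pre>import torch
from torch import optim

step_size = 0.05       # args.step_size in the notebook
max_iterations = 200   # -1 in the notebook means "loop until stopped"

# STEP 1: the initial z vector (random noise here; model.encode(image)
# if the user provided a starting image)
z = torch.randn(1, 256, requires_grad=True)

# STEP 2: the optimizer gets [z] as the only parameter to optimize
opt = optim.Adam([z], lr=step_size)

# STEP 3 (omitted here): build pMs, one Prompt model per CLIP-encoded text prompt

# STEP 4: the loop simply calls one training step over and over
def train(i):
    opt.zero_grad()
    loss = z.pow(2).sum()   # dummy loss; the real one is summed from ascend_txt()
    loss.backward()
    opt.step()

i = 0
while i < max_iterations or max_iterations == -1:
    train(i)
    i += 1</pre>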
<h3>The actual optimization procedure</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*v5wxudC0iRuSgSSmH5CN6w.jpeg" /><figcaption>A more detailed view of the inference/optimization process: forward pass + backward pass. (image licensed under <a href="https://creativecommons.org/licenses/by/4.0/">CC-BY 4.0</a>)</figcaption></figure><p><strong>Forward pass:</strong> We start with z, a VQGAN-encoded image vector, pass it to VQGAN to synthesize/decode an actual image, cut that image into pieces, encode these pieces with CLIP, calculate the distance to the text prompt, and get out some loss(es).</p><p><strong>Backward pass:</strong> We backpropagate through CLIP and VQGAN all the way back to the latent vector z and then use <em>gradient ascent</em> to update z.</p><p>ascend_txt()<br>Synthesizes an image with VQGAN, cuts it into pieces, encodes the pieces with CLIP, passes them to the Prompt model(s) to calculate the loss(es), and finally saves the image to disk. <em>(see detailed explanation of ascend_txt below)</em></p><p>loss.sum()<br>Summing the losses. Remember, loss is an array of tensors. This works in PyTorch: If you have multiple losses, you can sum them up and then call backward() only once.</p><p>loss.backward()<br>Computing the gradients for all losses (it does not yet update the weights).</p><p>opt.step()<br>Calling the optimizer (which we initialized in step 2) to update z.</p><h3>⛰️ What happens in ascend_txt?</h3><p>This function formulates the loss terms for optimization. It returns an array of losses, one per prompt.</p><p>Why is it named ascend_txt? It refers to <em>gradient ascent</em>, which works in the same manner as gradient descent (just with a different goal: maximization instead of minimization of some function).</p><p>Here’s what’s happening in ascend_txt:</p><p>out = synth(z)<br>We’re synthesizing an image with VQGAN (model.decode), based on z, the vector that is being optimized in every training step.</p><p>iii = perceptor.encode_image(normalize(make_cutouts(out))).float()<br>We create a batch of cutouts from this image and encode them with CLIP <em>(see detailed explanation about MakeCutouts below)</em></p><p>for prompt in pMs:<br> result.append(prompt(iii))<br>We go over each of our “Prompt” models (instances of the Prompt class) and pass the cutout batch through it in order to calculate the loss per prompt <em>(see detailed explanation about the Prompt model below)</em></p><p>imageio.imwrite(filename, np.array(img))<br>add_stegano_data(filename)<br>We save the image and add metadata to it via steganography <em>(see detailed explanation below about Steganography)</em></p><p>return result<br>We return an array of losses (one loss per prompt).</p><h3>🔥 Is it CLIP? Is it VQGAN? What exactly is being trained or optimized in this notebook?</h3><p>We’re not training a VQGAN model and we’re also not training a CLIP model. Both models are already pretrained and their weights are frozen during the run of the notebook.</p><p>What’s being optimised (or “trained”) is z, the latent image vector that is being passed as an input to VQGAN.</p><h3>🔥 The Prompt class</h3><p>A model called “Prompt” is the place where we calculate how similar image and text are (the loss). There might be more than one of these models, in case a) there was more than one prompt in the user input or b) a destination image was defined by the user.</p><p>The class Prompt subclasses the PyTorch nn.Module base class. <strong>Note:</strong> When calling an instance (let’s say we named it prompt) of the Prompt class with prompt(), we are actually calling the forward() method of the Prompt class.</p><p>The forward() function of the Prompt model is the core of the algorithm: it calculates how similar image and text are.</p><p>It’s important to understand what is referred to by input, self.embed and dists here! input is the (CLIP-encoded) image, or more accurately: a batch of image cutouts. self.embed is the (CLIP-encoded) text prompt that the model was instantiated with. dists stands for “distance” and refers to the mathematical distance between the embeddings of image and text.</p><p>The return value (the actual loss) of the forward function refers to this distance. It is actually a tensor representing the loss that looks like this:</p><p><strong>tensor(1.0011, device=&#39;cuda:0&#39;, grad_fn=&lt;MulBackward0&gt;)</strong></p><p>(By the way, the grad_fn property means that this tensor was produced by a function (MulBackward0) that gradients can be computed through. History is always maintained in these PyTorch tensors, unless you specify otherwise.)</p>
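<p>Here is a simplified, self-contained stand-in for such a Prompt module. The loss below is a plain squared distance between normalized embeddings (not the notebook’s exact distance formula); it is only meant to make the calling convention and the returned loss tensor tangible.</p><pre>import torch
import torch.nn as nn
import torch.nn.functional as F

class Prompt(nn.Module):
    """Simplified stand-in: stores a CLIP text embedding, returns a loss."""
    def __init__(self, embed):
        super().__init__()
        self.register_buffer('embed', embed)  # the frozen text embedding

    def forward(self, input):
        # Normalize both sides so that only the direction matters, then
        # return the mean squared distance over the batch of image cutouts
        input_normed = F.normalize(input, dim=-1)
        embed_normed = F.normalize(self.embed, dim=-1)
        return (input_normed - embed_normed).pow(2).sum(dim=-1).mean()

prompt = Prompt(torch.randn(1, 512))            # 512 = CLIP ViT-B/32 width
iii = torch.randn(8, 512, requires_grad=True)   # fake batch of encoded cutouts
loss = prompt(iii)   # calls forward(); the result carries a grad_fn</pre>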
<h3>✂️ MakeCutouts</h3><p>CLIP can only deal with low-resolution images as input. However, VQGAN is capable of creating high-resolution images. In order to compromise between the two, we cut the image into pieces and pass them to CLIP as a batch.</p><p>On top of cutting the image into pieces, MakeCutouts applies a couple of image transformations: distortions, horizontal flips, blur, and more. It uses <a href="https://github.com/kornia/kornia">kornia</a> for this, a computer vision library that provides functions for image augmentations.</p><p>Why these transformations? In <a href="https://moultano.wordpress.com/2021/08/23/doorways/">his wonderful blogpost</a> Ryan Moulton explains:</p><p><em>“Much like how digital artists flip their canvas to double check their proportions, and artists in traditional media will rotate around their canvas to view it from different angles as they’re working, giving CLIP randomly rotated, skewed, slightly blurred images produces much better results.”</em></p><h3>Other interesting aspects of this notebook (Steganography, etc.)</h3><h4>🕵️‍♀️ Steganography</h4><p>The notebook uses steganography to add metadata to each generated image saved on disk, such as the text prompt, the type of model, the random seed, the iteration number and more.</p><p>This is handy in case you want to keep track of your experiments, reproduce some of your own results or see how other people created the images they published on the internet.</p><p>Steganography is the practice of concealing messages in a file or a physical object in such a way that they cannot be noticed by the naked eye. This notebook uses the “LSB” steganography technique, which stands for “least significant bit”. The idea is to store the hidden message by overwriting the least significant bit of each pixel of the image. Here’s a quick <a href="https://itnext.io/steganography-101-lsb-introduction-with-python-4c4803e08041">explainer of LSB</a>.</p><p>Want to quickly check which information is hidden in an image? You can use an online stegano decoder like <a href="https://stylesuxx.github.io/steganography/">this one</a>.</p><p>Be aware that once you edit the image (e.g. change the file size or do some Photoshop edits), this hidden information will be gone.</p>
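<p>For intuition, here is a tiny LSB sketch in NumPy. It is illustrative only (not the notebook’s implementation): a message is written into, and read back from, the lowest bit of each pixel value.</p><pre>import numpy as np

def hide(pixels, message):
    """Overwrite the least significant bit of each pixel with message bits."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = pixels.flatten()                      # flatten() returns a copy
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def reveal(pixels, n_bytes):
    """Read the lowest bit of the first n_bytes * 8 pixels back into bytes."""
    bits = pixels.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # fake RGB image
stego = hide(img, b'seed: 42')
assert reveal(stego, 8) == b'seed: 42'</pre><p>This also shows why the hidden data is so fragile: any edit or re-encoding that touches the pixel values wipes out the lowest bits.</p>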
<h3>A little dictionary</h3><h4><strong>Checkpoint</strong></h4><p>A capture of the model’s internal state (weights and other parameters) at a certain time in training. Necessary for inference or for resuming training.</p><h4><strong>Embedding</strong></h4><p>A low-dimensional, learned vector representation into which you can translate high-dimensional vectors. Generally, embeddings make ML models more efficient and easier to work with.</p><h4><strong>Loss</strong></h4><p>A measure of how different a model’s predictions are from the actual labels of the data. Basically a measure of how bad the model is. To determine the loss, a model must define a loss function.</p><h4><strong>Loss function</strong></h4><p>A function for calculating the loss.</p><h4><strong>One-hot encoding</strong></h4><p>A technique for representing categorical data as binary vectors: each category is encoded as a vector with a single 1 and 0s everywhere else.</p><h4><strong>Seed</strong></h4><p>The number used to initialize the state of a (pseudo-) random number generator. If you use the same seed, the generator will produce the same output. This is useful for reproducibility.</p><h4><strong>Steganography</strong></h4><p>The practice of concealing messages in a file or a physical object in such a way that they cannot be noticed by the naked eye.</p><h4><strong>Tensor</strong></h4><p>A type of data structure or mathematical object that is similar to a vector or a matrix. Mathematically, tensors are a superset of vectors. In PyTorch it’s the core data structure: all the inputs and outputs of a model, as well as the model’s parameters and learning weights, are expressed as tensors.</p><ul><li>Video: <a href="https://www.youtube.com/watch?v=f5liqUk0ZTw">Mathematical explanation</a></li><li>Video: <a href="https://www.youtube.com/watch?time_continue=55&amp;v=r7QDUPb2dCM&amp;feature=emb_logo">Tensors in PyTorch</a></li></ul><h4><strong>Vector Quantization</strong></h4><p>A technique for easing computation (which also minimizes carbon footprint). It replaces floating-point values with integers inside the network.</p><h4><strong>Z-Vector</strong></h4><p>A vector containing random values from a Gaussian (normal) distribution. It is usually passed as the input into a pretrained GAN generator, which then generates a realistic-looking fake image.</p>
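<p>Two of these entries are easiest to grasp in a few lines of code; a quick illustration (PyTorch, not from the notebook) of seed and one-hot encoding:</p><pre>import torch
import torch.nn.functional as F

# Seed: the same seed makes the "random" generator produce the same output
torch.manual_seed(42)
a = torch.randn(3)
torch.manual_seed(42)
b = torch.randn(3)
assert torch.equal(a, b)

# One-hot encoding: each category becomes a binary vector with a single 1
labels = torch.tensor([0, 2, 1])
print(F.one_hot(labels, num_classes=4))
# tensor([[1, 0, 0, 0],
#         [0, 0, 1, 0],
#         [0, 1, 0, 0]])</pre>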
<h3>Cool Resources</h3><ul><li>Blogpost by Ryan Moulton on how he tweaked the code of VQGAN+CLIP to generate interesting panorama images: <a href="https://moultano.wordpress.com/2021/08/23/doorways/">https://moultano.wordpress.com/2021/08/23/doorways/</a></li><li>An explainer on the CLIP+BigGAN implementation by Hao Hao Tan: <a href="https://wandb.ai/gudgud96/big-sleep-test/reports/Image-Generation-Based-on-Abstract-Concepts-Using-CLIP-BigGAN--Vmlldzo1MjA2MTE">https://wandb.ai/gudgud96/big-sleep-test/reports/Image-Generation-Based-on-Abstract-Concepts-Using-CLIP-BigGAN--Vmlldzo1MjA2MTE</a></li><li>“VQGAN+CLIP — How does it work?” by Alexa Steinbrück: <a href="https://alexasteinbruck.medium.com/vqgan-clip-how-does-it-work-210a5dca5e52">https://alexasteinbruck.medium.com/vqgan-clip-how-does-it-work-210a5dca5e52</a></li><li>“Alien Dreams: An Emerging Art Scene” by Charlie Snell: <a href="https://ml.berkeley.edu/blog/posts/clip-art/">https://ml.berkeley.edu/blog/posts/clip-art/</a></li><li>Interview (video) with Ryan Murdock (aka @advadnoun) by Derrick Schultz (Artificial Images): <a href="https://www.youtube.com/watch?v=OYWGSDeQYlc">https://www.youtube.com/watch?v=OYWGSDeQYlc</a></li><li>Twitter thread by Tanishq Mathew Abraham: <a href="https://twitter.com/iScienceLuvr/status/1468115406858514433">https://twitter.com/iScienceLuvr/status/1468115406858514433</a></li><li>“The illustrated VQGAN” by Lj Miranda, especially the Appendix with a diagram of the “family tree” of VQGAN+CLIP and how it came to be: <a href="https://ljvmiranda921.github.io/notebook/2021/08/08/clip-vqgan/">https://ljvmiranda921.github.io/notebook/2021/08/08/clip-vqgan/</a></li></ul>]]></content:encoded>
        </item>
    </channel>
</rss>