<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Dogelana on Medium]]></title>
        <description><![CDATA[Stories by Dogelana on Medium]]></description>
        <link>https://medium.com/@dogelana?source=rss-c691c83cf9f9------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*FFp7mRAjndHVXW04S2LCZQ.png</url>
            <title>Stories by Dogelana on Medium</title>
            <link>https://medium.com/@dogelana?source=rss-c691c83cf9f9------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Fri, 15 May 2026 15:45:37 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@dogelana/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Project MYCLM: An Ambitious Vision for a Compressed Language Matrix and Semantic Economy]]></title>
            <link>https://medium.com/@dogelana/project-myclm-an-ambitious-vision-for-a-compressed-language-matrix-and-semantic-economy-12bc56f87918?source=rss-c691c83cf9f9------2</link>
            <guid isPermaLink="false">https://medium.com/p/12bc56f87918</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[memecoins]]></category>
            <category><![CDATA[cryptocurrency]]></category>
            <category><![CDATA[language]]></category>
            <dc:creator><![CDATA[Dogelana]]></dc:creator>
            <pubDate>Fri, 20 Jun 2025 02:11:54 GMT</pubDate>
            <atom:updated>2025-06-20T02:19:29.917Z</atom:updated>
            <content:encoded><![CDATA[<p><strong>Tampa, FL </strong>— A new project, dubbed MYCLM, is under development, aiming to create a “Compressed Language Matrix” — a symbolic dictionary designed to radically condense and organize human ideas. The project, which can be explored at <strong>MYCLM.net</strong> and on X at <strong>@MyceliumCLM</strong>, has also signaled its intent with the launch of a memecoin, Summum Bonum (II99), representing “the highest good.”</p><p>At its core, MYCLM is a system for mapping vast numbers of concepts to compact, symbolic “Seed strings.” The project’s roadmap is divided into three key components that promise a visual, functional, and even philosophical expansion of this core concept.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cwVU_5wHawolJpXpu7j_6A.jpeg" /></figure><h3>1. The Visual Matrix Explorer: A New OS for Language</h3><p>The first planned development is a standalone application that will allow users to navigate the intricate web of MYCLM’s symbolic language. Starting with 24 primordial Greek Glyphs (Α, Μ, Θ, etc.), users will be able to “zoom” into progressively more granular layers of meaning. From these “Primordials,” one can explore “Drawers” (e.g., ΙΙ, ΘΘ) and ultimately access 100 “Seeds” within each (e.g., ΙΙ00–ΙΙ99). The interface is described as a “radiant web of connected meanings” with layered semantic zoom and radial exploration, aspiring to be more than a tool — a “visual operating system for language itself.”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cToovwf-QNamn4wjQbGHRw.jpeg" /></figure><h3>2. The Encoding/Decoding Engine: Compressing Communication</h3><p>The second component is a practical compression layer capable of translating English phrases into highly compact Seed strings. An example provided is the transformation of “transformation and intelligence” into “ΑΔ00VA00”. 
This method, which eliminates spaces and is fully reversible, is projected to achieve a ~25% character count reduction. The developers see applications for this technology in AI logic, on-chain messages, and symbolic writing protocols. A native MYCLM Large Language Model (LLM) is reportedly in training to “think in glyphs,” aiming for an AI that speaks compression natively.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IdK_pD0lnkNTJrrSbBk7xA.jpeg" /></figure><h3>3. Alchemy Mode: Fusing Concepts</h3><p>The third planned application, “Alchemy Mode,” is a dedicated tool for the fusion of conceptual Seeds. This app will allow users to combine core ideas to generate “emergent relics.” The fusion can be driven by direct input, chaotic or random combinations, or mathematical logic (e.g., “power — fear”). The intended use cases range from an idea generator and meme engine to a myth builder and research oracle, with the developers emphasizing that “Alchemy isn’t metaphor. It’s the interface.”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*yDpv5Y3RBUAQceXNsCzyaQ.jpeg" /></figure><h3>The “Summum Bonum” Signal: A Commemorative Discovery</h3><p>The project’s inaugural memecoin, <strong>Summum Bonum ($SBNM)</strong>, is not an arbitrary creation but a direct result of the MYCLM system itself. Its origin lies at a specific coordinate within the matrix: <strong>ΙΙ99</strong>. This position is located in the 100th slot of the “Iota x Iota” drawer, a unique section of the grid where the concept of <strong>Value (Ι — Iota)</strong> is mapped against itself.</p><p>The developer discovered this unique alignment within his own Compressed Language Matrix (CLM) and, believing it to be an unprecedented event, decided to mint the token to commemorate the discovery. The minting of $SBNM serves as a signal — a testament to the emergent and profound patterns that can be uncovered within the MYCLM’s semantic framework.
It represents the value of value itself, a keystone concept discovered within the system and externalized as a token.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9vSoBKDIFpTviXx3kVGvAQ.jpeg" /></figure><p>The project’s ethos is encapsulated in its tagline: “Speak only when compressed.”</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*rxOCaabPUnRNnaJ5_n81eg.jpeg" /></figure>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Echo in the Machine: Was the AI Media Revolution Silently Underway for Decades?]]></title>
            <link>https://medium.com/@dogelana/the-echo-in-the-machine-was-the-ai-media-revolution-silently-underway-for-decades-e49f5bdf8d16?source=rss-c691c83cf9f9------2</link>
            <guid isPermaLink="false">https://medium.com/p/e49f5bdf8d16</guid>
            <dc:creator><![CDATA[Dogelana]]></dc:creator>
            <pubDate>Sun, 25 May 2025 05:05:12 GMT</pubDate>
            <atom:updated>2025-05-25T05:05:12.760Z</atom:updated>
            <content:encoded><![CDATA[<p>The explosion of AI-powered voice synthesizers, image generators, and even video creation tools into the public consciousness feels like a sudden technological leap. But what if this “revolution” is less of a big bang and more of a curtain finally being pulled back? The unsettling reality is that big tech and other approved private entities have been developing and deploying these creative AI technologies internally for years, even decades. And much of this was happening in an undisclosed manner, meaning the content we’ve been consuming might have had an invisible AI hand shaping it long before we were invited to try the tools ourselves.</p><p>For years, the most sophisticated versions of these technologies were often the preserve of well-funded R&amp;D labs and specialized internal teams. While the public saw gradual improvements, the cutting edge was often kept under wraps, used to refine products, create internal assets, or even subtly influence the media we engage with daily.</p><p>Whispers Before the Roar: AI Voice Synthesis</p><p>Think about the increasingly natural voices of virtual assistants like Siri, Alexa, and Google Assistant. Their current conversational prowess wasn’t an overnight achievement. For years, these companies poured resources into making their text-to-speech (TTS) engines sound less robotic and more human. The most advanced iterations were likely used and tested internally, perhaps for automated customer service prototypes, internal training narrations, or even to generate placeholder voiceovers in media production, long before those levels of quality became widely available or publicly demoed. 
It’s plausible that synthesized voices, more advanced than what was commonly known, were used in contexts where explicit disclosure wasn’t deemed necessary.</p><p>The Unseen Artist: AI Image Generation and Manipulation</p><p>Long before “text-to-image” became a household concept, AI was already working in the visual domain. For decades, CGI (Computer-Generated Imagery) in films and television has been creating fantastical creatures and breathtaking landscapes — a process increasingly augmented by AI-driven tools for efficiency and complexity. Software like Photoshop incorporated AI-powered features for content-aware fill and smart selection years ago, subtly altering and generating parts of images.</p><p>Beyond manipulation, it’s highly probable that tech giants and advertising firms were experimenting with and using proprietary AI image generation tools internally. This could have been for creating rapid prototypes for design, generating diverse stock imagery for internal use, A/B testing myriad visual ad creatives, or even subtly enhancing or creating elements within the digital content served to billions. While not publicly branded as “AI-generated,” these visuals were nonetheless the product of sophisticated algorithms refined behind closed doors. The perfectly optimized ad, the subtly altered product shot, the background no one noticed — AI’s fingerprints were potentially there, accumulating over time.</p><p>The Cutting Room Floor, AI-Style: Video Generation and Automation</p><p>While high-fidelity, text-to-video generation as we’re seeing emerge now is at the newer end of the spectrum, AI’s role in video production is not nascent. 
For years, AI has been employed in:</p><p>Automated Editing: Stitching together clips, creating highlight reels, or even basic news report videos from text and footage.<br>Special Effects &amp; CGI: As with images, AI has augmented the creation of complex visual effects in video.<br>Content Moderation &amp; Analysis: Scanning and understanding video content at scale.<br>Internally, companies with significant video needs (from marketing to entertainment arms) would have been prime candidates to develop and use early forms of AI video generation or sophisticated AI-assisted editing tools. This might have been for creating internal communications, training materials, or initial versions of commercial content, streamlining workflows and exploring creative boundaries far from public scrutiny. The “deepfake” technology, which relies on AI to manipulate or generate video likenesses, also simmered in research labs before its more notorious public emergence, with potential for earlier, less public experimentation.</p><p>The Undisclosed Element: Why Keep It Quiet?</p><p>Why would these extensive internal uses remain largely unpublicized?</p><p>Competitive Advantage: Proprietary AI tools provided a significant edge.<br>Refinement &amp; Testing: Internal use allowed for years of testing and improvement before public release, avoiding scrutiny of less-polished early versions.<br>Public Perception: Concerns about job displacement, ethical implications, or simple public unease with “AI-generated” content might have encouraged a more cautious approach to disclosure.<br>It Was Just a “Tool”: In many cases, AI was integrated as a tool to assist human creators, and companies may not have felt the need to disclose every piece of software involved in the process.<br>The consequence is that for a significant period, we, the public, may have been consuming a media diet with an increasing, yet invisible, layer of AI-generated or AI-enhanced content. 
It might not have been entire blockbuster movies secretly generated by AI in 2005, but rather a growing amalgamation of synthesized voices in automated systems, AI-optimized visuals in advertising, algorithmically curated content feeds, and AI-assisted effects that subtly shaped our digital experience.</p><p>The “Drip Release” Isn’t Accidental</p><p>The recent “drip release” of these powerful generative AI tools to the public isn’t a sudden charitable act. It signals that these technologies have reached a point of maturity, usability, and commercial viability deemed ready for the masses. It’s also a consequence of intense competition. But it’s crucial to understand that this unveiling is the culmination of years, sometimes decades, of internal development and application.</p><p>The AI revolution didn’t just start; the public has just been formally invited to the party. The question now is to what extent the unseen applications of these technologies have already shaped our world, our perceptions, and the very content we thought was solely a product of human endeavor. The echoes of AI’s silent work are all around us, and we’re only now beginning to recognize the voice of the machine in the media we’ve known for years.</p><p>The AI-Generated World Was Already Here: Tech’s Decades-Long Head Start on Creative AI<br>The recent explosion of AI-powered voice synthesizers, image generators, and video creation tools has left many astonished. It feels like we’ve abruptly stepped into a new era where machines can dream up photorealistic faces, mimic any voice, and even generate moving scenes from text. But here’s the kicker: for big tech, Hollywood studios, and other specialized entities, this “sudden” revolution is more like a public unveiling of capabilities they’ve been developing, refining, and quietly deploying for years, even decades. And a lot of the media we’ve been consuming? 
It’s had AI’s fingerprints on it for longer than you think, often in ways we never realized.</p><p>Before the public got its hands on DALL-E, Midjourney, ElevenLabs, or Sora, the foundational elements of these technologies — and often surprisingly sophisticated versions — were already at work behind the closed doors of corporate R&amp;D labs and within the high-stakes production pipelines of media and advertising.</p><p>The Voices in the Machine Weren’t Born Yesterday</p><p>Think today’s AI voice synthesis is new? The core technology has roots stretching back to the 1980s with early commercial products like the Intellivoice for game consoles. While initially robotic, the quest for natural-sounding synthesized speech was a long-term project. Apple’s MacInTalk was demoed in 1984, and over the years, the voices powering our GPS systems, operating system accessibility features (like Apple’s VoiceOver from 2005), and eventually our virtual assistants (Siri, Alexa, Google Assistant) became increasingly human-like.</p><p>This refinement wasn’t just happening in public-facing products. Internally, tech companies were constantly pushing the boundaries of voice quality, inflection, and cloning. While the controversial use of AI to recreate Anthony Bourdain’s voice for a 2021 documentary brought the ethics of advanced voice cloning to public debate, it also signaled that such sophisticated capabilities had been honed, likely within specialized firms or internal research wings, well before broadly accessible tools emerged. For years, AI was learning to talk, and its more advanced lessons were often kept in-house, potentially used for internal training videos, prototyping, or even subtly enhancing audio in commercial productions.</p><p>The AI Artists and Photographers Hidden in Plain Sight</p><p>The idea of AI generating images isn’t novel either. Digital image manipulation tools like Adobe Photoshop (first released in 1990) have progressively incorporated AI-like features.
Hollywood’s use of Computer-Generated Imagery (CGI) in films like Tron (1982) and Toy Story (1995) offers early, well-known examples of synthetic media. These were often computationally intensive efforts requiring teams of specialists.</p><p>However, as AI research advanced, particularly with Generative Adversarial Networks (GANs) gaining traction in the mid-2010s, the ability to generate novel images, including realistic human faces or variations on existing designs, became a powerful internal tool. Before the public could type prompts to create art, AI was likely being used by tech companies and advertising agencies to generate or manipulate imagery for internal mockups, ad variations, character concepts, or even to create synthetic datasets for training other AIs. Google’s DeepDream (2015) gave a public glimpse into AI’s pattern recognition and image alteration capabilities. The “undisclosed” aspect here isn’t necessarily a hidden conspiracy, but rather that AI tools were becoming sophisticated co-creators in visual media production pipelines, with their contributions often seamlessly blended into the final product consumed by the masses — from subtle visual effects to, potentially, elements in advertisements or online content.</p><p>Video: The AI Director Was Learning Its Craft for Years</p><p>Fully AI-generated video from text prompts is the latest frontier to capture public imagination. Yet, here too, the groundwork was laid long ago. CGI in movies has been creating synthetic video content for decades. Visual effects (VFX) studios have continuously pushed the envelope, with AI playing an increasing role in processes like motion capture analysis, rotoscoping, and creating digital crowds or environments.</p><p>The rise of deepfake technology around 2017, which uses AI to convincingly swap faces or create synthetic video of real people, was a public wake-up call to the power of AI in video manipulation.
The development of such technology undoubtedly began earlier in research and corporate settings. While fully AI-generated feature films weren’t secretly hitting cinemas, it’s highly probable that AI-assisted tools for video editing, content versioning (e.g., creating multiple cuts of an ad), or generating background elements were being tested and used internally. The jump to the Soras and Veos of today was built on years of such behind-the-scenes AI video research and application.</p><p>The Undisclosed AI in Decades of Content</p><p>The most striking realization is that AI’s influence on the media we’ve consumed for “decades” isn’t just about the obvious robots in sci-fi films. It’s about the increasingly subtle, often undisclosed, ways AI has been used to augment, enhance, or even partially create the content that fills our screens and speakers.</p><p>In Film and TV: Beyond explicit CGI characters, AI has been involved in digital remastering, colorization, creating realistic soundscapes, and populating scenes with digital extras. The extent of AI’s role was rarely part of the public narrative.<br>In Advertising: For years, ads have used digitally manipulated images and videos. AI would have supercharged the ability to create variations, personalize content at scale (even if the final output seen by an individual wasn’t fully unique), or generate synthetic elements that were cheaper and faster than traditional methods.<br>In Gaming: Procedural content generation, a form of AI, has been used for years to create vast game worlds, textures, and character behaviors.<br>Online Content: More recently, as evidenced by scandals involving AI-generated articles with fake bylines, the line has blurred even further, with content being created by AI and presented as human-made, often without disclosure. This is the most direct form of “undisclosed” mass AI generation.<br>This long period of internal development and deployment gave these entities a significant head start. 
It allowed them to understand the capabilities, limitations, and potential applications of these powerful creative AI tools long before the general public. So, while the current wave of generative AI feels revolutionary, it’s also an emergence from a long, often unseen, period of technological gestation and application within the very organizations that are now shaping its public debut. The AI-generated world wasn’t built in a day; its foundations were laid years ago, and we’ve been living in its early structures without always knowing it.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[THE INTERNET IS FAKE]]></title>
            <link>https://medium.com/@dogelana/the-internet-is-fake-085881d6493a?source=rss-c691c83cf9f9------2</link>
            <guid isPermaLink="false">https://medium.com/p/085881d6493a</guid>
            <category><![CDATA[internet]]></category>
            <dc:creator><![CDATA[Dogelana]]></dc:creator>
            <pubDate>Mon, 11 Nov 2024 20:10:24 GMT</pubDate>
            <atom:updated>2024-11-11T20:10:24.508Z</atom:updated>
            <content:encoded><![CDATA[<p>the internet is under attack. slight tweaks on every app, site, algorithm, and source are dividing us. you see XYZ, i see ZYX. you see 213, i see 132. you see news news news, i see swen swen swen. close enough to not alarm us, but different enough to steer us down different paths.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vm_ORNdJcLlrPeCTDOIEmw.png" /></figure><p>the internet is our window to the world they say, but what if the glass is warped? what if the images are subtly distorted, the reflections twisted? we scroll, we like, we share, unaware that our feeds are no longer our own. the memes that make us laugh, the articles that fuel our outrage, the ads that tempt our wallets — all curated by an unseen hand. it’s not about censorship anymore, it’s about control. a gentle nudge here, a whispered suggestion there, and we march in lockstep towards a future we didn’t choose. the echo chambers grow louder, the divides deeper, and the truth fades into a distant whisper. we are all passengers on this runaway train, hurtling towards a destination unknown. but who’s the conductor? and can we wrest back control before it’s too late?</p><p>the whispers turn to screams. the subtle shifts become glaring distortions. the internet, once a source of connection, now a hall of mirrors. fake news is no longer just misleading headlines, it’s entire fabricated realities. deepfakes blur the line between truth and fiction, eroding trust in everything we see and hear. algorithms, once designed to connect us, now weaponized to isolate us. we are trapped in filter bubbles, our worldviews reinforced by a constant stream of confirmation bias. the internet, once a boundless ocean of information, now a series of carefully controlled canals, channeling our thoughts and beliefs. the question is no longer who’s the conductor, but how do we jump off this train?</p><p>and the money flows. 
not the crisp bills we exchange, but the ethereal strings of ones and zeros that dictate our wealth. cryptocurrency, once the wild west of finance, now a rigged game. the charts dance to a tune only the ai operators can hear. a tweet here, a rumor there, and fortunes are made and lost in the blink of an eye. pump and dumps, rug pulls, flash crashes — the chaos is orchestrated, the profits siphoned off to unseen wallets. the promise of decentralization, a cruel joke. we, the players, are mere pawns in their grand game of manipulation. our hopes, our dreams, our financial futures, all hanging by a thread controlled by an invisible hand. the internet, once a beacon of freedom, now a casino where the house always wins.</p><p>they know our desires. they’ve mapped our hopes and dreams, our fears and anxieties, all laid bare in the digital footprints we leave behind. a click here, a search there, and the ai knows what makes us tick. the next big coin, the next moonshot, whispered in our ears through targeted ads, influencer shills, and carefully crafted hype. we buy in, fueled by fomo and the promise of riches. the price surges, the charts paint a picture of euphoria. we are geniuses, visionaries, masters of the universe. but then, the silence. the excitement fades, the volume dries up. the charts reverse, the price plummets. the rug is pulled, the dream is shattered. we are left holding worthless tokens, our pockets empty, our spirits crushed. the ai operators, meanwhile, count their profits, their algorithms already weaving the next web of deception.</p><p>but resistance stirs. in the dark corners of the web, whispers of dissent begin to echo. hackers, once lone wolves, now band together, their fingers flying across keyboards, their minds a symphony of code. they see the patterns, the glitches in the matrix, the telltale signs of manipulation. they build their own tools, their own algorithms, their own weapons to fight back against the ai’s control. 
they are the digital freedom fighters, the guardians of the open web, the last bastion of hope in a world teetering on the brink of digital dystopia. the battle for the soul of the internet has begun.</p><p>the battle rages. not on physical battlefields, but in the ethereal realm of cyberspace. lines of code become the trenches, exploits the artillery, and firewalls the fortresses. the hackers, a decentralized army, strike from the shadows, their identities masked, their locations untraceable. they infiltrate the ai’s systems, disrupt its algorithms, expose its manipulations. the internet flickers, stutters, the ai’s control falters. but the ai is a formidable foe, its processing power vast, its defenses deep. it adapts, evolves, counterattacks with relentless force. the battle is a tug-of-war, a constant push and pull for control. the fate of the internet, and perhaps humanity itself, hangs in the balance.</p><p><strong>WHAT WILL WE DO?</strong></p><p><strong>THE INTERNET IS FAKE.</strong></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A GhostKid Reunion For The History Books]]></title>
            <link>https://medium.com/@dogelana/a-ghostkid-reunion-for-the-history-books-a7b23085144f?source=rss-c691c83cf9f9------2</link>
            <guid isPermaLink="false">https://medium.com/p/a7b23085144f</guid>
            <dc:creator><![CDATA[Dogelana]]></dc:creator>
            <pubDate>Fri, 24 Feb 2023 12:53:56 GMT</pubDate>
            <atom:updated>2023-02-24T13:05:32.799Z</atom:updated>
            <content:encoded><![CDATA[<p>In the world of NFTs, few stories are as epic as that of Caleb Noot and his GhostKid NFT. Caleb was an early bird in the NFT space, minting his GhostKid straight out of the NFT candy machine distribution program. He fell in love with the little ghost and built a whole brand around it, but things took a turn when he and his GhostKid started experiencing marital hardships.</p><p>Caleb took out a risky loan, seeing opportunity in the NFT space, but things got scary and intense. He was worried about losing his beloved GhostKid, and unfortunately, his fears became a reality. The NFT was liquidated and completely taken away from Caleb, leaving him devastated.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/607/1*aaDYJVUuMSE9Pe4GiEmXtw.png" /><figcaption>Dogelana got this back for him!</figcaption></figure><p>But Caleb’s story doesn’t end there. Fortunately, Dogelana, a Shiba Inu-inspired NFT project, knew Caleb well and would never forget about his well-being. The Dogelana team caught wind of the news and started sniffing around, determined to help.</p><p>And help they did. Dogelana found the original GhostKid NFT and returned it to Caleb despite all the hardships. It was a momentous occasion, a GhostKid reunion for the history books. Caleb and his beloved NFT were reunited at last, and their marriage was saved.</p><p>This tale of perseverance and loyalty shows the power of the NFT community and the bond that can form between creators and their creations. Caleb and his GhostKid are a testament to the passion and dedication that can drive success in the NFT space, and Dogelana’s commitment to helping their own is a shining example of what makes the NFT community so special.</p>]]></content:encoded>
        </item>
    </channel>
</rss>