Introducing the First-Ever AI Magazine & Podcast Made By AI

An Exploration into Automated News Generation with No Human Oversight

Shaked Zychlinski 🎗️
The Startup


Tech by AI is available at Open source code is available on

Generated by Dall-E

Tech and AI are advancing fast. Really fast. So fast I can’t keep up with the pace, and found myself lost when trying to. There are new discoveries and models on a daily — sometimes hourly — basis. With so much news to consume and so many tweets to read, how do I make it all work?

Wouldn’t it be great if someone — or let’s say, something — would gather all the news for me, filter out only the things that really matter, and summarize them, so I can get all the news with my morning coffee?

So I’ve decided to do a little experiment — a social experiment with no humans involved — and simply let the generative models read, aggregate, filter and summarize the important news for me. Everything will be done automatically, without any human intervention. How good will the result be? Will it make sense? How much will it cost? There’s only one way to find out.

Building a Magazine, Step-By-Step

Choosing a Model

Obviously, the most crucial decision is which LLM to use. There are so many on the market — with new ones joining daily — that this isn’t a trivial call. I realized I have two main requirements for the LLM I’ll choose:

  1. It needs a long context window. The model will scan through and read several different articles before serving me with something, so it needs the ability to store a lot of data in its memory.
  2. It needs to work well with external tools. Obviously, the model will be required to search the web and access websites on my behalf, so working with external tools in an effective way is crucial.

With these two requirements in mind, I came to the conclusion that GPT-4 Turbo was the model to go with. Now that I had the model to power my newsroom, it was time to ask how the newsroom would operate. Am I just going to ask GPT to “summarize news on the web” for me, or do I want it to interact with other people — or models — like a real newsroom?


Much inspired by Microsoft’s AutoGen (even though I haven’t used it in this project), I’ve decided to go with the second option — I’ll have several agents, each with their own role, interacting with one another to create a daily issue for my AI news magazine. After some trial-and-error, I converged on four types of agents, working together:

  1. Editor-in-Chief. That’s the agent that governs everything, and eventually has the last word. The Editor doesn’t write any articles — they only edit the reporters’ articles. The Editor is also the one to brief the reporters on what to look for, and has the final say on what will be featured in the daily issue.
  2. Reporters. Reporters are the agents that do the research online, pick the top articles and write about those selected by the Editor. There’s more than one reporter, as the point is to give each a different system prompt, which should ideally result in different web searches and different article selections.
  3. Academic Reporter. One of the things I quickly realized is that, just like humans, agents given too many options get confused. Instead of asking the same reporters to do research both online and on Arxiv, I split the tasks, and gave the academic-research task to a separate reporter, dealing only with this.
  4. Twitter Analyst. In the field of AI, news and trends sometimes start off as tweets before getting headlines on more traditional media. Realizing that, I created an agent specializing in searching data on Twitter, which then notifies the Editor what everyone is talking about.

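The four roles above boil down to four different system prompts wrapped around the same model. Here’s a minimal sketch of that idea, assuming an OpenAI-style chat-message format; the prompt texts and function names are illustrative, not the project’s actual prompts:

```python
# Illustrative role prompts (placeholders, not the real ones from the project).
AGENT_PROMPTS = {
    "editor": "You are the Editor-in-Chief of an AI news magazine.",
    "reporter": "You are a tech reporter covering AI news on the web.",
    "academic_reporter": "You are a reporter covering new Arxiv papers.",
    "twitter_analyst": "You are an analyst tracking AI trends on Twitter.",
}

def make_agent(role: str):
    """Return a callable that prefixes every request with the role's system prompt."""
    system_prompt = AGENT_PROMPTS[role]

    def ask(user_message: str) -> list[dict]:
        # In the real project this message list would be sent to the chat API;
        # here we just build the structure an OpenAI-style endpoint expects.
        return [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ]

    return ask
```

Each agent is then just the same LLM seen through a different system prompt, which keeps the newsroom cheap to define and easy to extend.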
Having established these roles, it became clear that I now needed to focus on providing the agents with robust tools to effectively gather and process information. This requirement led me to explore and set up the necessary digital infrastructure.


Communicating with the outside world is the most important thing for my newsroom agents to successfully accomplish their assignments. Here are the tools I needed, and how I created them:

  1. Web Search. The quality of the magazine directly correlates to the agents’ search ability. Therefore, I gave them access to Google Search. Getting started involves setting up a Google Console account with an active Search API, and setting up a Custom Search Engine. Once done, the official Python package can be installed from PyPI: google-api-python-client. The documentation isn’t great, though.
    (FYI, there’s another free, out-of-the-box, no-questions-asked option by DuckDuckGo).
  2. Accessing Websites. Once found, the articles need to be read. In Python, a simple tool to scrape text from a website can be built with a few lines of code using requests and BeautifulSoup.
  3. Accessing Arxiv. Also a little lacking in documentation, but Arxiv makes it very easy to search and download PDFs. There’s also a quite easy-to-use Python library named arxiv. We’ll need another library for parsing the PDF files; I used PyPDF.
  4. Accessing Twitter. This one is a little tricky. Twitter under Elon Musk charges $100/month for access to the Twitter API. As a workaround, I used Google Search while limiting it to Twitter’s domain. This seems to work quite well for public tweets, which are the vast majority.
  5. Magazine Archive. News can sometimes be duplicated, and a topic discussed on one site today might have appeared on another yesterday. I wanted to give the Editor an option to search for articles in the magazine’s archive, and check if there are any similar headlines from before. To get this done, I created embeddings of each article in the magazine, and let the Editor search in a similar way to how RAG works. As this is very little data, I used a naive NumPy array and a Pandas DataFrame as the vector DB.
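For the web-search tool, a rough sketch using google-api-python-client could look like the following. The function name and result shape are my own; the `customsearch` v1 calls follow the official client, and `api_key`/`cse_id` are assumed to come from your Google Console setup:

```python
def web_search(query: str, api_key: str, cse_id: str, n: int = 5) -> list[dict]:
    """Return up to n Google Custom Search results as simple dicts."""
    # Imported lazily so the sketch doesn't require the package at module load.
    from googleapiclient.discovery import build

    service = build("customsearch", "v1", developerKey=api_key)
    response = service.cse().list(q=query, cx=cse_id, num=n).execute()
    return [
        {"title": r["title"], "link": r["link"], "snippet": r.get("snippet", "")}
        for r in response.get("items", [])
    ]
```

The agents only ever see the title/link/snippet dicts, which keeps the tool's output compact and predictable.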
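The website-scraping tool really does fit in a few lines. A minimal sketch with requests and BeautifulSoup (the tag-stripping choices are mine, not the project's exact code):

```python
def fetch_article_text(url: str, timeout: int = 10) -> str:
    """Download a page and return its visible text, whitespace-normalized."""
    # Imported lazily so the sketch doesn't require the packages at module load.
    import requests
    from bs4 import BeautifulSoup

    resp = requests.get(url, timeout=timeout, headers={"User-Agent": "Mozilla/5.0"})
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Drop non-content elements before extracting text.
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()
    return " ".join(soup.get_text(separator=" ").split())
```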
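The Arxiv tool can be sketched with the arxiv and pypdf libraries; the function names are mine, and the search/parse calls follow those libraries' documented APIs:

```python
def search_arxiv(query: str, max_results: int = 5) -> list:
    """Return the most recent Arxiv results matching the query."""
    # Imported lazily so the sketch doesn't require the package at module load.
    import arxiv

    client = arxiv.Client()
    search = arxiv.Search(
        query=query,
        max_results=max_results,
        sort_by=arxiv.SortCriterion.SubmittedDate,
    )
    return list(client.results(search))

def pdf_to_text(path: str) -> str:
    """Extract plain text from a downloaded PDF, page by page."""
    from pypdf import PdfReader

    reader = PdfReader(path)
    return "\n".join((page.extract_text() or "") for page in reader.pages)
```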
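The "naive NumPy array plus Pandas DataFrame as the vector DB" idea can be sketched like this. The class and method names are my own, and the embeddings are assumed to be precomputed (e.g., via the OpenAI embeddings API); similarity search is a plain cosine similarity over the matrix:

```python
import numpy as np
import pandas as pd

class MagazineArchive:
    """Naive vector store: a DataFrame of headlines plus a NumPy embedding matrix."""

    def __init__(self, dim: int):
        self.df = pd.DataFrame(columns=["headline"])
        self.embeddings = np.empty((0, dim))

    def add(self, headline: str, embedding) -> None:
        self.df.loc[len(self.df)] = [headline]
        self.embeddings = np.vstack([self.embeddings, np.asarray(embedding, dtype=float)])

    def most_similar(self, query_embedding, k: int = 3) -> pd.DataFrame:
        """Return the k archive headlines closest to the query, by cosine similarity."""
        q = np.asarray(query_embedding, dtype=float)
        norms = np.linalg.norm(self.embeddings, axis=1) * np.linalg.norm(q)
        sims = self.embeddings @ q / np.where(norms == 0, 1, norms)
        top = np.argsort(sims)[::-1][:k]
        return self.df.iloc[top].assign(similarity=sims[top])
```

With only a handful of articles per day, a full matrix scan like this is instant, so there's no need for a dedicated vector database.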

With the tools in place, from web search capabilities to Twitter data access, I was ready to define the daily operations of my AI-driven newsroom. This setup dictated how the agents would interact and how the entire process would unfold each day.

The Routine

Now that we have determined the agents and set up their tools, it’s time to decide what the daily routine will look like. I had two conflicting guidelines here — the first was to let the agents interact with one another as much as needed, and the second was to limit their interactions in order to reduce costs. Eventually, the following routine was the one that worked best for me:

Tech by AI: flow chart

It goes like this:

  1. The routine starts with the Editor getting a general overview of what I expect the magazine to be — the fields and specific topics I’m interested in.
  2. In the meantime, the Twitter Analyst comes up with a list of people to follow on Twitter, and checks what they are talking about. It compiles a list of trends, and sends them to the Editor.
  3. The Editor takes into account all these inputs, and creates a briefing for the reporters, asking them what to look for and write about.
  4. The reporters search the web and Arxiv, and deliver a list of the best items they found back to the Editor. Who decides which are the top items? The reporters themselves, of course.
  5. The Editor looks at all the suggestions and does several things:
    - It decides which items will be featured in the issue, and asks the reporters to write about them
    - It combines several suggestions about the same topic from different sources, to avoid duplications
    - It looks up the articles’ topics in the Magazine Archive, verifying the topic wasn’t covered already
  6. Reporters summarize the articles, and hand their drafts to the Editor.
  7. The Editor has the final say, and has the option to edit the texts. The final edit is then served to me.

This entire process takes a little less than 5 minutes, and costs vary from $1 to $5, depending on the length of texts read by the agents.
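Stripped of the LLM calls, the routine above is ordinary sequential control flow. A sketch with the agents passed in as plain callables (all names and message texts here are illustrative stand-ins for the real prompts):

```python
def run_daily_issue(editor, reporters, twitter_analyst, archive_lookup) -> str:
    """One pass of the newsroom routine; each agent is a callable str -> str."""
    # Step 2: the Twitter Analyst compiles the day's trends.
    trends = twitter_analyst("What is everyone talking about in AI today?")
    # Step 3: the Editor briefs the reporters, taking the trends into account.
    briefing = editor(f"Brief the reporters. Trending on Twitter: {trends}")
    # Step 4: each reporter researches and suggests its top items.
    suggestions = [reporter(briefing) for reporter in reporters]
    # Step 5: the Editor merges duplicates, checks the archive, picks the items.
    archive_hits = archive_lookup(suggestions)
    shortlist = editor(
        f"Pick the top items from {suggestions}, skipping anything in {archive_hits}"
    )
    # Step 6: reporters draft the selected articles.
    drafts = [reporter(f"Write about: {shortlist}") for reporter in reporters]
    # Step 7: the Editor produces the final edit.
    return editor(f"Final edit of: {drafts}")
```

Keeping the flow linear like this — rather than a free-form multi-agent conversation — is what caps the number of LLM calls, and therefore the cost, per issue.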

Making It Interesting

After outlining a daily routine that efficiently utilizes the agents and tools, I next focused on the uniqueness of each publication. This uniqueness is primarily driven by the system prompts of each agent, curated to inject variety and depth into the content they generate. Which is why I decided I won’t be the one writing them.

As the Editor is the one in charge, the first task it gets is to hire the reporters. The Editor is asked to describe the characteristics of the reporters that will best match the newsroom. I ask the Editor to describe them in the second person, as if addressing them directly, telling them who they are. I then take these descriptions and use them as the reporters’ system prompts.

And who decides what the Editor’s system prompt is? For that I use another agent, with only one task — to describe several different editors and their characteristics, again in the second person. From these I randomly pick one, and assign it to the Editor. Add to that the fact that every agent’s temperature is set to ~0.5, and you’ll realize that if you run the same routine 10 times in a row, you’ll get completely different issues. Every issue is unique.
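The "hire the Editor" step can be sketched as follows, assuming a meta-agent callable that returns a list of persona descriptions (the function name and prompt wording are mine):

```python
import random

def hire_editor(meta_agent, n_candidates: int = 3) -> str:
    """Ask a one-task 'meta' agent for editor personas, then pick one at random."""
    personas = meta_agent(
        f"Describe {n_candidates} different magazine editors and their "
        "characteristics. Address each one directly, in the second person, "
        "telling them who they are."
    )
    # The randomly chosen persona becomes the Editor's system prompt.
    return random.choice(personas)
```

Combined with a temperature of ~0.5 on every agent, this random persona selection is what makes consecutive runs produce genuinely different issues.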

Log screenshot, the reporters search queries can be seen

Making It Accessible

Creating content is great, but it needs to be served somehow. I decided to go with a simple and efficient solution — GitHub Pages. All I needed to do was make sure the final edit is written in Markdown. I used a clean, MIT-licensed Jekyll theme I found online, and that’s pretty much it — I got a website. I also integrated GitHub Actions to trigger the routine every morning, so when I wake up there’s a fresh new issue ready for me.

But then I realized that I actually like to get my news when I walk my dog in the morning — and it would be great if the news could be narrated for me. So I added one last phase to the routine — narration. To keep it simple, as I’m already using the OpenAI API both for GPT and the embeddings, I decided to use the company’s text-to-speech API too. And as Jekyll and GitHub Pages render my website every time a new issue is added, creating an RSS feed is straightforward. Now, if you didn’t know, apparently setting up a podcast only requires one thing — an RSS feed. So, in a matter of minutes, my news narration became available on Spotify, and now I get my news every morning while I’m out for a walk.
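The narration step is essentially one call to OpenAI's text-to-speech endpoint. A sketch using the official Python client — the model and voice names here are examples from the API's options, not necessarily what the project uses:

```python
def narrate_issue(text: str, out_path: str = "issue.mp3") -> str:
    """Convert an issue's text to speech and save it as an MP3."""
    # Imported lazily so the sketch doesn't require the package at module load.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.audio.speech.create(
        model="tts-1",      # example model name
        voice="alloy",      # example voice
        input=text,
    )
    response.stream_to_file(out_path)
    return out_path
```

The resulting MP3 just needs to be committed alongside the issue so the RSS feed can point at it.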

Generated by Dall-E

Reducing Costs

While the daily costs were always in the range of $1 to $5, as the days went by I noticed they stabilized around ~$3.5. That isn’t a lot, but it’s still more than I anticipated, as it adds up to ~$105 a month. So I took a deeper look into the cost breakdown, and noticed that the research phase — the one where the reporters search online for articles — was the most expensive part of the process, reaching ~$2.7. Is there a way to reduce costs without affecting results? Yes — reducing tokens.

While English words are in most cases either a single token or two, URLs — as my dear friend noted — are a bit more problematic: there are no spaces, words are separated by dashes, slashes or nothing at all, and they are often mixed with numbers (besides being very long). That means a single URL might require as many as 27 tokens. Consider the number of URLs being processed — that becomes a lot of tokens.

The solution was to map URLs to IDs. Behind the scenes, all URLs were replaced by numeric IDs. I chose numeric IDs for a reason: all numbers up to three digits (0–999) are converted to a single token. My code converted URLs to IDs and vice versa, but only the IDs were fed into the prompts. That simple change dropped the cost of the research phase by more than 50%!
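The mapping itself is a small bidirectional lookup. A minimal sketch — the class name and the `<0>`-style token format are my own choices, not necessarily the project's:

```python
class UrlMapper:
    """Replace URLs with short numeric IDs before prompting, and map them back after.
    Numbers up to three digits (0-999) tokenize to a single token, so IDs are
    far cheaper than raw URLs inside prompts."""

    def __init__(self):
        self._url_to_id: dict[str, int] = {}
        self._id_to_url: dict[int, str] = {}

    def to_id(self, url: str) -> str:
        """Return a stable ID token for this URL, assigning a new one if needed."""
        if url not in self._url_to_id:
            new_id = len(self._url_to_id)
            self._url_to_id[url] = new_id
            self._id_to_url[new_id] = url
        return f"<{self._url_to_id[url]}>"

    def to_url(self, id_token: str) -> str:
        """Resolve an ID token like '<7>' back to its original URL."""
        return self._id_to_url[int(id_token.strip("<>"))]
```

Because the same URL always maps to the same ID, the agents can refer back to earlier results consistently, and the full URLs only reappear when the final edit is rendered.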

There are probably more ways to reduce costs. I’m still playing around with this, learning how to optimize it better 💪.

And well, that’s it — now it’s out there. Every day I get my AI news summary directly from AI, and yes — I learn a lot from it, and it saves me a lot of time catching up on what’s happening. And since it’s already up there, you can check it out too — on the website, or follow the news on Spotify. And let me know what you think!