
A practical n8n workflow example from A to Z — Part 1: Use Case, Learning Journey and Setup

19 min read · Jan 5, 2025


This is part 1 of a planned 3-part series explaining, from A to Z, the setup and details of an n8n.io workflow that solves a concrete problem across several cloud services via automation and the integration of AI summarization and agent capabilities.
Part 1 explains the background, the use case, the motivation and the setup.
Part 2 will show the first workflow in detail: how to collect, summarize and store newsletter articles.

Part 3 will explain the second workflow in all detail: from database summaries to LinkedIn post.

The Use Case

It’s been a while since I last wrote. Busy times — and especially in AI, with all the great improvements in models, multi-modality and all that.
However: AI is NOT my job. My job is (corporate) finance & accounting (…no, not (investment) banking, portfolio optimization and all the quant stuff). It’s the grunt work of getting company figures straight and reporting them, both for internal decision making and external communications.

In this domain, “creativity” is more of a problem than a solution most of the time (…occasional exceptions to this rule may apply, though ;-)).

What counts in this domain is traceability, reliability, efficiency and the ability to reconcile systems (due to the high level of tool fragmentation, which seems to be an inevitable problem over time). Not exactly the strong points of LLMs, TXT2IMG models etc., which thrive more on their ability to “correctly guess” and be imaginative.
The “agentic approach” to tackling everyday problems with AI looked very promising to me from the start. And as things look right now, agentic systems will be THE HOT subject in 2025. The agent approach strikes a good balance between the LLM’s creative problem-solving abilities and the guard rails of clear supervision through workflow rules.

I am aware of the ongoing debate that workflows are NOT agents. My take on this: especially in the finance domain, systems will be equipped with all kinds of guard rails for some time to come, and I do not foresee “agentic systems fully let loose” anytime soon. Accountability and compliance rules will prevent full agent autonomy and require a human in the loop. Needless to say, this will also be the case in even more sensitive domains, such as medical care.

The thing that interested me: could I define and solve a sample use case that wraps it all up, tackling a real problem with AI support and across several IT systems, in a maybe not fully agentic but at least fully automated way?

The Use Case: What?

The use case I came up with:
- (1) Automate a “knowledge acquisition process” by aggregating newsletter info into a database (Notion, for convenience’s sake, in this use case). Not too surprisingly, I chose AI-related newsletters as the focus domain for this article.
- (2) Use this database of newsletter summaries to further aggregate it into an actionable result with the help of an agentic approach. In this case: create a LinkedIn post on the latest developments in the covered domain that is automatically generated and pushed to LinkedIn… after being translated into German (as an added level of complexity that would have mostly defied automation before the AI advances of the last year or two).
The following sequence diagram describes the entire use case (where IMAP denotes the mail server that newsletters are retrieved from and n8n is the chosen automation tool; more details on this later):

As said: this first part will not yet dive into the technicalities of the two workflows themselves. But, as a teaser, here are the screenshots showing both workflows that translate the above sequence diagram into action. Explaining them in detail will be the role of Parts 2 and 3 of this series:

Full Workflow 1: from newsletters to Notion database with summaries

and

Full Workflow 2: from newsletter summaries in Notion to LinkedIn blog post

The Use Case: Why?

The “Why?” question has different aspects:

Why? The Learning Intent

The intent of setting up this use case as a PoC had several aspects:
- Simply to learn how to use an automation tool
- Make sure the automation tool allows for an “agentic workflow” that flexibly takes advantage of new AI capabilities.
- The translation part stands as a proxy for an additional problem that would have defied automation only a few months back, but can now be automated thanks to the proper combination of agents and agent prompts.
- The interaction between different systems (automation tool, Notion online database and LinkedIn) serves as a proxy for the typical IT-landscape fragmentation.
- The outcome satisfies several real needs:
— keep abreast of the progress in a field that evolves incredibly fast (AI in the given case, but the same approach may be applied to many other domains)
— transform this information in some kind of “actionable result”, e.g. a LinkedIn post highlighting only the most important recent developments; within a corporate context, this might also be e.g. a continuously updated dashboard with important company KPIs.

Why? The Learning Journey

If you’re not interested in the details of this journey, but only in the technical details, you can stop right here and wait for (or head over to, depending on when you’re reading this) Part 2 and Part 3 of this series. But please be aware that this part explains the different technical systems involved, their components and the setup needed to get n8n up and running in the first place.

And for everybody else: well, it’s called the “Learning Journey” because the journey is the destination. I wanted to learn how I could eventually put a combination of the latest (AI) advances to good use in my corporate work environment.

After doing a lot of reading, I recalled the good old adage that you remember 10% of what you read, 50% of what you do and 90% of what you teach.
I recently found on LinkedIn another very good visualization of this aspect of actually building AI apps (obviously focused on the AI agents angle), which I liked a lot. So I’ll just reproduce it here:

This does not fit my case 100%: not all points are applicable (e.g. “Start a platform or community” is pretty irrelevant for me; plus, even though I also usually prefer Python, this given use case made me adapt to JavaScript instead).

But you get the point: to learn, I wanted to get my hands dirty on an actual project. Dirty on data. And dirty on (low) code. It’s in the engine room that you learn how the engine truly runs.

And after I had fumbled my way to success with the described use case, I thought it would be a good idea to summarize the entire learning experience. This not only to retain the additional 40% of what I had learned in the process, but also simply to share (because there are not too many practical n8n tutorials on medium.com that go beyond the mere basics). So that’s my motivation for writing here on medium.com.

An interesting side aspect: as much as I am fascinated by AI, it’s still worth the time to sit down and synthesize one’s thoughts oneself. So everything you will read here is 100% AI-free (I swear!) and based on my own reflections instead of LLM inference.

Long story short: in the graphic below I try to summarize all the different topics I touched on during my learning journey for this project, and that made sure that the time spent on the project was time well invested:

Aspects of the project’s learning journey

I’ll elaborate on all of the branches of the mindmap in the later parts. But here are some quick first takes on what was part of my learning journey:

n8n.io automation: learning how to work with this automation tool

Docker: the environment in which n8n’s AI Starter Kit runs; a very useful tool. And if you are serious about getting your hands dirty on data and code (as required above), knowing more about Docker is time well invested.

JavaScript: coming from Python, this was all new to me (Python support in n8n is still in beta). But the lesson learned was that one no longer has to be an expert coder to get things done (…to a certain extent, that is…). Hence the next branch, “Claude Support”: Claude currently excels at (free) support on coding questions, and I was able to make the necessary code adjustments with just a bit of “transfer learning” from Python and proper prompting of the free Claude chat version.
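Just to give a feel for the level of coding involved, here is a minimal, purely hypothetical sketch of what an n8n Code node might look like (the field names are assumptions for illustration only; the actual workflow code follows in Part 2):

```javascript
// Hypothetical n8n Code node sketch - field names are illustrative assumptions.
// Keep only the fields needed downstream and cap the text length before summarization.
const results = [];
for (const item of $input.all()) {
  results.push({
    json: {
      from: item.json.from,                              // sender, later used as the entry title
      date: item.json.date,                              // newsletter date
      text: (item.json.textPlain || '').slice(0, 8000),  // truncate very long newsletters
    },
  });
}
return results;
```

The point being: with a bit of transfer learning from Python (and Claude on standby), snippets of this size are very manageable.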

Integrations: testing how n8n helps to combine different platforms and systems, emulating, as explained above, an often fragmented corporate IT landscape. For the given use case, the workflows must integrate with Notion, LinkedIn and an IMAP mail server.

LLMs: part of the n8n automation works with “AI nodes”, and these need to be connected to specific LLMs in order to function properly. In the context of this project, I gained some valuable insights into the quality of working with different LLMs via different APIs.

Prompting: as some central functionalities of the workflow are based on AI nodes or even Agent nodes, they must be provided with proper prompts… which has become quite an art in itself. An art I was able to hone thanks to this project.

Algorithms: I found some standard workflows that provided a good basis for what I wanted to achieve. But I found out that it makes quite a difference whether a workflow is “live”, receiving updates regularly, or whether it runs in test mode and is executed in batch mode. These differences made several important adjustments necessary compared to the “plain vanilla” solution found online.

Why? The Tool Choice

Tool Choice: n8n.io

n8n covers, according to my understanding (and now experience), the very interesting middle ground between an automation focus (with competing tools being Zapier or make.com) and an AI agent focus (with agent frameworks like LangGraph, CrewAI, Flowise, AutoGen etc.):

Positioning of n8n between pure automation and pure agent frameworks

n8n attracted my attention because I was looking for this mix of process automation and flexible AI integration.

Add to this the fact that it has a free, dockerized “AI Starter Kit” (details to follow), is free for basic use and apparently considerably cheaper in a large-scale roll-out than the other automation alternatives, and you have a very interesting solution for getting started with automating standard business processes. Plus: you can leverage, at the same time, the advantages of AI agents for more process flexibility and scale up your solution, if it fits your needs, at a comparably low price.

Tool Choice: Docker

That wasn’t really my choice, but it was good to get first exposure to this quasi IT standard through the n8n AI Starter Kit.

I had heard and read about Docker before. But for me, it was “something for the techies”, managing and scaling services on cloud servers well beyond my modest PoC needs.

I wasn’t aware that (a) it has become a de-facto standard for many services, (b) a very good desktop version exists that makes its functionality accessible to not-so-specialists (see the following screenshot) and (c) other interesting frameworks are now often offered in this format, so that learning to work with Docker promises to pay off over time.

Docker Desktop App with running n8n AI Starter Kit container

If you are not familiar with Docker: dockerization serves roughly the same purpose as creating environments when writing e.g. Python code. You create a specific configuration of packages, services etc. that you need to get the job done. Only Docker goes a step further, in the sense that it basically creates a computer system running on your computer system. That’s what makes it so useful, from what I understood, for cloud deployment: you can quickly deploy Docker containers on a running server to scale up and down. And each container is a self-contained entity that brings with it everything needed for the specific job (…once it’s properly configured, that is) AND is secure thanks to this self-containment.

The set-up process is based on Docker images and rather descriptive YAML files: if you are halfway familiar with object-oriented programming, the best comparison seems to be to regard the Docker image as the class definition and the container as the instance of the class. So the Docker image is a sort of blueprint and the container the running computer system based on this blueprint. Finally, it’s often the case that you need several services running, so a compose setup spins up several containers side by side, one per image (where each image represents a required service, e.g. a running database to store workflow information). And the YAML file is the blueprint for how to configure the instantiation of the different Docker images when the containers are built. I hope true Docker experts who read this don’t run away screaming, but that’s my basic understanding.
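To make the blueprint analogy concrete, here is a heavily stripped-down, illustrative sketch of what such a compose file roughly looks like. This is NOT the Starter Kit’s actual file, which defines more services and parameters; use the one from the linked GitHub repository.

```yaml
# Illustrative sketch only - the real file ships with the AI Starter Kit repo.
services:
  postgres:                                      # image #1: database storing n8n's workflow data
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}    # value pulled in from the .env file
    volumes:
      - postgres_data:/var/lib/postgresql/data   # keep the data when the container is re-created
  n8n:                                           # image #2: the n8n automation tool itself
    image: n8nio/n8n
    ports:
      - "5678:5678"                              # expose the n8n editor UI on localhost:5678
    depends_on:
      - postgres
volumes:
  postgres_data:
```

Each top-level entry under “services” references an image (the blueprint) plus the parameters used when its container (the running instance) is created.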

Tool Choice: Notion

I absolutely wanted to store the retrieved and summarized newsletter info in a database and be able to access it outside of the n8n workflow, too. There are a lot of database connectors and choices available in n8n. But the free ones are often ephemeral (i.e. they only store content while the container is running, not permanently), or they are online databases that require APIs and paid plans, and they mostly do not offer very user-friendly interfaces to access the info (…and if I’m wrong with this view, I’m happy to take note of alternatives!).

Enter Notion: a note-taking and online-sharing service I have been using for ages. They are a rather big player now in the online / productivity tool market.

I already happened to know their database approach (a rather basic one) from other use cases. And when I saw that n8n offered Notion integration, it was a natural choice to go this easy route, with the added advantage that the interface for online access to the database data is VERY user friendly, as you can see from the following screenshot if you’re not familiar with Notion yet:

Typical Notion Database View

Tool Choice: LinkedIn

I’m certainly not the first person to publish newsletter summaries on specific subjects on LinkedIn. But the motivation here lay in the full automation of the process (instead of e.g. copy-pasting the newsletter text manually into an LLM chat app like ChatGPT, Claude or similar and having it summarized there; or using an agent framework to automate the generation of the summary and save it as a text file… to be copied manually again to create a LinkedIn post).

And there were additional motivations beyond the automation test:

(1) The added translation. This reflects the fact that almost all AI-related content is published in English, which adds to the bubble effect AI enthusiasts live in: even though anyone who reads this probably believes that AI will at least considerably change the way we work and live, it doesn’t mean that the majority of the world population has already grasped the full impact this technology will have. And a large number of people are even cut off from being able to know: if you don’t speak English, it’s truly difficult to keep up with the developments in this domain. So providing a short summary of the major developments (as featured in the selected newsletters) in a different language like German can give more people access to relevant information and thus be useful.

(2) Connecting technically to LinkedIn represents a PoC for connecting different (cloud-hosted) IT-services, showing how to bridge a fragmented IT-landscape with a single automation tool. BTW, the same is of course true for connecting the automation workflow with Notion.

(3) Last but not least: LinkedIn is full of tongue-in-cheek comments about the ratio of “people doing AI” vs. “people talking about AI”. So this project was also an effort to tilt the ratio in the favor of the “doing AI” side ;-)

Why this project? A Relatable & Adjustable Subject!

Using AI to automatically write about AI is a bit self-referential. But the translation part described above adds a new layer of usefulness to the AI product “newsletter summarization”, creating better accessibility for some people otherwise left out of the game.

More importantly, though: the approach is domain-agnostic. The same workflow could be used to aggregate information and further condense it into regular posts in any domain where regular newsletters help those active in the field keep up to date on the latest developments.
And, btw, if the source is not newsletters but information on websites, replacing the IMAP email-retrieval component of the first workflow with e.g. a web-scraping component or an RSS-feed node would also be possible…

But enough theory and explanation about the whys… it’s time to move on to the how.

The Technical Setup

I will not go very much into the setup details, but rather provide sources that will enable most readers to quickly get up to speed on the actual two-part workflow that solves the described use case. This use case requires:
- Being able to run Docker
- Having the n8n AI Starter Kit configured to run as Docker container
- Having set up a mail inbox you have access to via the IMAP protocol, with some newsletter subscriptions hitting this inbox at more or less regular intervals
- Creating a Notion database and allowing access from external apps
- Enabling automation on your LinkedIn account

The Technical Setup: Docker on Mac

The following link should be good enough to get you started with installing the Docker Desktop app on your computer (Mac in my case, but there’s also an app for Windows):
https://www.docker.com/blog/getting-started-with-docker-desktop/

The Technical Setup: the n8n AI Starter Kit

If you have some IT experience, you will probably find everything you need to get started with the AI Starter Kit in n8n’s GitHub repository, where they make the kit available for everyone:
https://github.com/n8n-io/self-hosted-ai-starter-kit

And how to host it on Docker is explained here:
https://docs.n8n.io/hosting/installation/docker/

I also bookmarked a very good YouTube video with a step-by-step description of how to set up the Starter Kit; it also contains some details that are not explicitly mentioned on the GitHub page linked above.
https://www.youtube.com/watch?v=V_0dNE-H2gw

Mind the Mac specificities if you want to work with Ollama locally:
https://github.com/n8n-io/self-hosted-ai-starter-kit?tab=readme-ov-file#for-mac--apple-silicon-users

I am not 100% sure I have understood this correctly. But it seems that you can, if you want, run Ollama within the Docker container. However, n8n advises against it, because you could not make use of your Apple Silicon GPU cores. If you instead run Ollama as a service on your Mac in parallel to the Starter Kit in its container (i.e. after a standalone, standard Ollama install on your Mac), you can work with this Ollama outside of the container, and it is able to make use of your Apple Silicon GPU cores. However, when you configure an Ollama node as part of your n8n workflow, you must then reference it differently (because it is running outside of the container):

http://host.docker.internal:11434/
instead of simply:
http://localhost:11434

Sidenote: the YAML file and the .env file for the Starter Kit:
I will not go into the details here. Suffice it to say that the YAML file provided in the AI Starter Kit GitHub repo should be sufficient to get you going.
What the YAML file does: it defines which images will constitute the setup when the containers are composed, and then it defines, per image, the parameters used as the start parametrization of the concrete instance when the containers are brought up.
From what I understood, the only intervention needed concerned the kind of language model Ollama could work with; but as I finally did most of the work with the OpenAI API, this did not become relevant.
Why certain parameters are defined in a second .env file and not in the YAML file escapes my understanding. But I didn’t mind, as long as everything worked smoothly with both the YAML and the .env file provided on the AI Starter Kit GitHub.

Sidenote: updating the Starter Kit
I also had a hard time at first understanding how to update the Starter Kit (the need for updates being a good sign, as it shows that n8n continues to improve the AI Starter Kit). It is not enough to simply update a Docker image. In order to make the changes come through, one also needs to force the re-creation of the container that uses the image. This finally started to make sense once I had grasped the comparison with object-oriented programming mentioned above.
Luckily, the process is also explained in detail in the n8n documentation:
https://docs.n8n.io/hosting/installation/docker/#updating
and the best (i.e. easiest) solution seems to be to work with “Docker Compose” (at least it works for me):
https://docs.n8n.io/hosting/installation/docker/#docker-compose
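In case you just want the short version: the update essentially boils down to three commands, run in the folder that contains the Starter Kit’s compose file (check the linked documentation for the authoritative, current version):

```bash
docker compose pull        # fetch the newest images (the updated "blueprints")
docker compose down        # stop and remove the currently running containers
docker compose up -d       # re-create the containers from the updated images
```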

Technical Setup: the mail inbox for newsletter subscriptions

An obvious requirement: if you want to aggregate newsletters, you better make sure you subscribe to them and receive them in a mailbox for which you know the access credentials and which supports the IMAP protocol.

There are also e.g. Gmail connectors available in n8n. Actually, I first tried to use a Gmail address for this use case. But the administrative burden Google puts on authorizing programmatic access to its mailboxes surprised me. It actually made me wonder whether they might have employed some former German state employees to design this process, which shines with useless complexity rather than user friendliness… but I digress. Since I do have access to an IMAP email account, I found it way simpler to go down the IMAP path.

Technical Setup: Notion and LinkedIn access

This step can best be solved with the n8n documentation itself:
- For Notion: https://docs.n8n.io/integrations/builtin/credentials/notion/
- For LinkedIn: https://docs.n8n.io/integrations/builtin/credentials/linkedin/
This should explain what needs to be done. I am not a huge expert in these matters, so it required a bit more back-and-forth with Google and Perplexity — but you can get it running eventually.

n8n is very user friendly in the sense that it provides a “Connect” or “Try” button in the credentials edit menu of every node that establishes a connection to an external third-party service like Notion or LinkedIn. In this part of the node configuration, you enter the credentials necessary to access these services (the links above describe in detail what is required and how to obtain it), and clicking these buttons and seeing them turn green gives you immediate feedback that you’re set to go.

You should, of course, prepare a Notion database with property fields that correspond to your use case. I opted for the FROM field as the database entry title, a DATE field (indicating the date of the newsletter) and a GLANCE field, allowing a first glance at the summary without having to open the database page (details on that in Part 2). Overall, you need to know what you want in your database and synchronize the property names with the entries in the corresponding n8n nodes. n8n makes your life easier by simply presenting you with a dropdown list of possible properties once you’ve connected your workflow to a specific Notion database.
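As a purely illustrative sketch (the property names are the ones described above, the values are made up), an item arriving at the Notion node could look roughly like this before being mapped onto the three database properties:

```javascript
// Hypothetical item shape ahead of the Notion node - values are made up.
return [{
  json: {
    From:   'The AI Newsletter <news@example.com>',   // becomes the page title
    Date:   '2025-01-03',                             // mapped to the DATE property
    Glance: 'Short one-line teaser of the summary',   // mapped to the GLANCE property
  },
}];
```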

n8n Basics and the Need for Two Workflows

To understand the further approach, some n8n basics need to be understood first: the workflows n8n creates are supposed to be hosted online (e.g. in a Docker container) and hence to be permanently available.

A workflow consists of a sequence of nodes, with every node representing some sort of manipulation of, or decision on, data. So-called trigger nodes start the workflow, and outside data enters the workflow via these trigger nodes (marked with a flash symbol).

The IMAP node is such a trigger node: once the workflow is actually deployed online and set to status “active”, this IMAP trigger node will continuously (at set intervals) check the IMAP inbox for new mails (or based on criteria other than the “UNSEEN” flag… but checking for new mails is probably the most common case). The email content is the data that enters via this node to be further manipulated down the process line.

However, my workflow is running locally. I could set the workflow to status “active”, but my computer would need to run continuously to make it work like a server. For my case, it’s easier to operate the workflow in “batch mode”, where I as a user trigger the trigger-node execution manually. This is, by the way, also more helpful in the bug-fixing phase. And n8n provides very good visibility into the status and content of each node after execution, which makes the development and debugging process quite easy and fun.

As you may recall from the sequence diagram at the very start of the article, there are two flows which are desynchronized: Workflow 1 retrieves the emails, has the content summarized and writes it to the Notion database. A realistic check interval for an online, active workflow could be in the range of several times per hour to “once a day”.

Workflow 2 reads the stored summaries from the Notion database, compiles a LinkedIn post from this content and pushes the post to LinkedIn. This process is independent of the first one. The interval here could be e.g. daily or weekly, depending on how often such a summary should be posted to LinkedIn. The only things one needs to make sure of: in Workflow 1, mails should only be read, summarized and written to Notion once; and in Workflow 2, summarized articles must not enter the “blog post aggregation process” more than once. This can be realized in different ways, e.g. with a database flag or an updatable date filter (see the sketch below).
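As a minimal sketch of the flag variant (assuming a hypothetical “Posted” checkbox property in the Notion database; this is just one of the options mentioned above, not necessarily the exact mechanism used in Part 3):

```javascript
// Hypothetical de-duplication step in Workflow 2:
// keep only Notion rows that have not yet been used in a LinkedIn post.
return $input.all().filter(item => item.json.Posted !== true);
```

After a successful post, the same rows would then be updated to Posted = true so they are skipped in the next run.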

As will become obvious in the following parts, the manual “batch triggering” made several additional filter nodes necessary to allow the manual synchronization between the two flows; these might eventually be skipped in a “real” online version of the workflows. In an online version, both flows can exist in parallel, because they would have two different continuous time-trigger nodes with different intervals, independent from one another. But for the batch processing, I found it clearer to separate the two flows completely.

This is the end of Part 1. The real action, with all the details on the flows, will take place in Parts 2 and 3 (with links provided at the top of this article).

If you bore with me until here, I appreciate your dedication to the subject — and would feel overjoyed if you left a clap … or two.
