<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Santiago on Medium]]></title>
        <description><![CDATA[Stories by Santiago on Medium]]></description>
        <link>https://medium.com/@barbieri.santiago?source=rss-8a7f528dad2------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*dqoJADee3S_TEj7iWLQiaw.png</url>
            <title>Stories by Santiago on Medium</title>
            <link>https://medium.com/@barbieri.santiago?source=rss-8a7f528dad2------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 17 May 2026 03:11:24 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@barbieri.santiago/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Artificial Intelligence]]></title>
            <link>https://medium.com/@barbieri.santiago/artificial-intelligence-a490fdd6ba8a?source=rss-8a7f528dad2------2</link>
            <guid isPermaLink="false">https://medium.com/p/a490fdd6ba8a</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[Santiago]]></dc:creator>
            <pubDate>Sat, 16 May 2026 18:19:45 GMT</pubDate>
            <atom:updated>2026-05-16T18:19:45.758Z</atom:updated>
            <content:encoded><![CDATA[<h3>Technical Foundations for Understanding Predictive AI, Machine Learning and Generative AI</h3><h3>1. Definition of Artificial Intelligence</h3><p>Artificial intelligence is a branch of computer science focused on building systems capable of performing tasks that are normally associated with human intelligence. These tasks may include recognizing patterns, classifying information, making decisions, solving problems, interpreting language, predicting behavior or generating new content.</p><p>This does not mean that a machine thinks, understands or has consciousness. An AI system can produce a useful response without truly understanding what it is doing. For this reason, it is important to distinguish between <strong>intelligent behavior</strong> and <strong>real understanding</strong>.</p><p>A machine may appear intelligent because it answers correctly, classifies data accurately or predicts outcomes with precision. However, internally, it operates through rules, data, statistical models or machine learning techniques.</p><p>In simple terms, an AI system receives an input, processes it through a model and produces an output. That output may be a prediction, a classification, a recommendation, a written response, an image, an alert or an action in a physical environment.</p><h3>2. The Difficulty of Defining Intelligence</h3><p>Before defining artificial intelligence, it is important to understand that human intelligence itself is difficult to define. One person may be highly skilled at mathematical reasoning but struggle with writing. Another may learn languages easily but have less ability in logical analysis. Others may excel in creativity, music, communication or spatial perception.</p><p>For this reason, artificial intelligence should not be understood as a direct copy of the human mind. In practice, current AI does not attempt to reproduce all forms of human intelligence. Instead, it focuses on building systems that can solve specific tasks efficiently.</p><p>AI is usually evaluated by its external results. If a system detects fraud, translates text, answers questions or recommends content effectively, we say that it performs an intelligent task. However, this does not prove that the system understands the meaning of what it is doing.</p><p>This distinction is fundamental: modern AI can simulate intelligent behavior, but it does not possess human understanding.</p><h3>3. Symbolic AI and Rule-Based Systems</h3><p>The first approaches to artificial intelligence tried to represent reasoning through symbols and rules. This approach is known as <strong>symbolic AI</strong>.</p><p>The basic idea was that a problem could be described through an initial state, a set of rules and a final goal. If the system applied the rules correctly, it could reach a solution. Within this approach, Allen Newell and Herbert Simon developed the <strong>General Problem Solver</strong>, an early attempt to create a machine capable of solving problems through formal reasoning.</p><p>Expert systems also emerged from this tradition. In these systems, the knowledge of human specialists was translated into explicit rules. For example, in a financial system, an expert could define rules to evaluate credit risk based on income, debt, payment history and other variables. The system might appear to reason, but in reality, it was only executing instructions written in advance.</p>
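<p>As a concrete illustration, here is a minimal sketch of what such hand-written rules might look like in code. The class name, thresholds and variable names are invented for this example; they do not come from any real credit system.</p><pre>// Minimal sketch of a rule-based credit check. Every rule is written<br>// by a human in advance; the program only executes those instructions.<br>public class CreditRules {<br><br>    public static String evaluate(double income, double debt,<br>                                  int missedPayments) {<br>        // Rule 1: heavy debt relative to income -&gt; reject<br>        if (debt &gt; income * 0.5) {<br>            return &quot;REJECT&quot;;<br>        }<br>        // Rule 2: poor payment history -&gt; manual review<br>        if (missedPayments &gt; 2) {<br>            return &quot;REVIEW&quot;;<br>        }<br>        // Default rule: no red flags -&gt; approve<br>        return &quot;APPROVE&quot;;<br>    }<br><br>    public static void main(String[] args) {<br>        System.out.println(evaluate(50000, 10000, 0)); // APPROVE<br>    }<br>}</pre>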
<p>This approach was important because it showed that machines could solve useful tasks through rules. However, it also had a clear limitation: it depended on humans anticipating and programming possible situations beforehand.</p><h3>4. The Limits of Symbolic AI</h3><p>Symbolic AI works well when the environment is simple, stable and controlled. The problem appears when the real world introduces exceptions, ambiguity, incomplete data and too many possible combinations.</p><p>A rule-based system requires someone to write in advance what the system should do in each situation. This may work for small problems, but it becomes unmanageable when there are thousands or millions of possible scenarios. This problem is known as <strong>combinatorial explosion</strong>.</p><p>The conclusion was important: it was not practical to manually program all intelligence into a system. Instead of writing every possible rule, researchers needed systems capable of learning patterns from examples.</p><p>This shift led to the rise of <strong>machine learning</strong>.</p><h3>5. Machine Learning: Learning from Data</h3><p>Machine learning is an approach within artificial intelligence in which a system learns patterns from data instead of depending only on rules written by humans.</p><p>In a symbolic system, knowledge is programmed directly. In machine learning, knowledge is acquired through training. The system receives examples, identifies regularities and adjusts its behavior to improve its results.</p><p>An important historical example is Arthur Samuel’s checkers program from 1959. The system played games, observed which moves produced better outcomes and adjusted its strategy over time. It was not programmed with every possible move. Instead, it improved through experience.</p><p>The core idea is simple: if a system receives enough examples, it can identify patterns that allow it to make better decisions in new situations.</p><p>This does not mean that the machine understands what it is doing. It means that it can detect useful statistical relationships within the data.</p>
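<p>The following toy sketch makes the contrast with hand-written rules concrete. Nobody tells the program the rule (here, simply y = 3x); it adjusts a single weight by comparing its predictions with the expected answers. This illustrates the training idea only; it is not production machine learning code.</p><pre>// Toy illustration of learning from examples: instead of a human<br>// writing the rule, the program adjusts a weight to reduce its error.<br>public class LearnFromData {<br><br>    public static void main(String[] args) {<br>        // Examples: inputs x and the &quot;correct&quot; answers y (here y = 3 * x).<br>        double[] x = {1, 2, 3, 4};<br>        double[] y = {3, 6, 9, 12};<br><br>        double weight = 0.0;        // starts with no knowledge<br>        double learningRate = 0.01;<br><br>        // Repeatedly predict, compare with the expected answer and<br>        // nudge the weight in the direction that reduces the error.<br>        for (int epoch = 0; epoch &lt; 1000; epoch++) {<br>            for (int i = 0; i &lt; x.length; i++) {<br>                double prediction = weight * x[i];<br>                double error = prediction - y[i];<br>                weight -= learningRate * error * x[i];<br>            }<br>        }<br>        System.out.println(&quot;Learned weight: &quot; + weight); // close to 3.0<br>    }<br>}</pre>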
<h3>6. Massive Data and the Growth of Machine Learning</h3><p>For decades, machine learning was limited by the lack of digital data. The idea existed, but systems did not have enough information to learn complex patterns effectively.</p><p>This changed with the rise of the internet and large-scale digitalization. People and organizations began generating enormous amounts of data: documents, images, audio, video, purchases, searches, user interactions, forms, financial records, application activity and platform behavior.</p><p>This transformed machine learning into one of the dominant approaches in modern AI. A system can analyze volumes of information that would be impossible for humans to review manually and identify useful patterns for organizations.</p><p>A more technical example would be an online education platform. Each user interaction can generate data: how long a student watches a lesson, where they pause, where they stop watching, which module they repeat, which course they complete and where they lose interest. Manually analyzing millions of records would be impossible. A machine learning model can detect dropout patterns, identify difficult content, recommend modules or improve the course structure.</p><p>The value is not simply in storing data. The real value appears when data helps improve decisions, automate processes or anticipate behavior.</p><h3>7. Artificial Neural Networks</h3><p>One of the most important approaches within machine learning is the artificial neural network. These are computational models loosely inspired by the structure of the brain, although they are not real copies of the human brain.</p><p>A neural network receives input data, processes it through internal layers, and produces an output. The first layer is called the <strong>input layer</strong>. The last layer is the <strong>output layer</strong>. Between them are <strong>hidden layers</strong>, which progressively transform the information.</p><p>Learning occurs by adjusting internal values called <strong>weights</strong>. At the beginning, these weights do not represent useful knowledge. During training, the network processes examples, compares its output with the expected answer and modifies its weights to reduce error. This process is repeated many times until the model improves.</p><p>A technical example is financial fraud detection. The model’s input may include the transaction amount, time of day, type of business, approximate location, device used, customer history and transaction frequency. The output may be a risk probability. The network does not “understand” fraud as a human analyst would. Instead, it learns statistical combinations that commonly appear in normal or suspicious transactions.</p><p>Neural networks are useful because they can detect complex patterns in large amounts of data. However, they also introduce a challenge: the more complex the network becomes, the harder it may be to explain exactly how it reached a conclusion.</p><h3>8. The Black Box Problem</h3><p>Many advanced models work like a <strong>black box</strong>. They receive data, perform internal transformations and produce an output, but the intermediate process may be difficult for a person to interpret.</p><p>This is not always a serious problem. If a system recommends the wrong movie, the impact is low. But in areas such as healthcare, credit, insurance, security, or autonomous driving, accuracy is not enough. It also matters why the system made a decision.</p><p>For this reason, the use of AI in sensitive contexts requires human oversight, error evaluation, responsibility criteria, and explanation mechanisms. A model may find patterns that humans cannot easily perceive, but that does not mean all of its conclusions should be accepted automatically.</p><h3>9. Natural Language Processing</h3><p>Natural language processing, or <strong>NLP</strong>, is the area of artificial intelligence that allows machines to work with human language. Its goal is to enable a system to receive text or speech, interpret the user’s intention, and produce a useful output.</p><p>Human language is difficult for machines because it does not work like an exact command. People use incomplete sentences, ambiguity, context, mistakes, indirect expressions, and meanings that depend on the situation.</p><p>Older search systems depended heavily on keywords. If a user typed certain terms, the system looked for matches. Modern NLP systems try to go further: they analyze relationships between words, context, intention, and probable meaning.</p><p>NLP is used in automatic translation, virtual assistants, chatbots, message classification, document summarization, sentiment analysis, speech transcription, and response generation.</p><p>Even so, the technical point remains the same: the system does not understand language as a human being does. It learns statistical patterns from language use and applies those patterns to produce useful responses.</p>
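<p>A toy example helps show what “statistical patterns instead of understanding” means in practice. The scorer below associates a few words with hand-picked weights and adds them up. A real NLP model learns far richer representations from data, but the underlying point is the same: the program matches patterns; it does not grasp meaning.</p><pre>import java.util.Map;<br><br>// Toy sentiment scorer with hand-picked word weights (illustrative only;<br>// a real model would learn its parameters from large amounts of text).<br>public class ToySentiment {<br><br>    static final Map&lt;String, Integer&gt; WEIGHTS = Map.of(<br>            &quot;excellent&quot;, 2, &quot;good&quot;, 1,<br>            &quot;bad&quot;, -1, &quot;terrible&quot;, -2);<br><br>    static int score(String text) {<br>        int total = 0;<br>        for (String word : text.toLowerCase().split(&quot;\\s+&quot;)) {<br>            total += WEIGHTS.getOrDefault(word, 0);<br>        }<br>        return total;<br>    }<br><br>    public static void main(String[] args) {<br>        System.out.println(score(&quot;good course but terrible audio&quot;)); // -1<br>    }<br>}</pre>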
<h3>10. Robotics and AI in the Physical World</h3><p>Robotics allows machines to perform physical tasks. Not all robotics uses advanced AI. Many industrial robots operate through traditional programming: they repeat precise movements, assemble parts, transport objects, or execute tasks in controlled environments.</p><p>The difference appears when the robot must act in a changing environment. In that case, it may need sensors, cameras, machine learning models, and decision-making capabilities.</p><p>An autonomous vehicle illustrates this difference well. Turning the wheel or applying the brakes is not the central problem. The difficult part is deciding when to do it. The system must process signals from the environment, identify obstacles, pedestrians, vehicles, lanes, traffic lights, and unexpected situations. At that point, robotics is no longer only a mechanical problem; it also becomes a data problem.</p><p>However, in the physical world, errors have a higher cost. A bad digital recommendation may be annoying. A bad robotic decision can cause real damage. For this reason, many robotic systems still use simple rules when the environment is controlled and safety is a priority.</p><h3>11. The Internet of Things and AI</h3><p>The Internet of Things, or <strong>IoT</strong>, refers to physical objects connected to the internet that include sensors and can send data. These may include smart watches, industrial sensors, cameras, appliances, vehicles, medical devices, or security systems.</p><p>The importance of IoT for AI is that it turns the physical world into a constant source of data. In the past, many systems mainly analyzed digital data: searches, clicks, online purchases, or documents. With IoT, systems can also collect information about location, movement, temperature, physical activity, heart rate, human presence, machine usage, or environmental conditions.</p><p>The relationship is direct: the device captures data, the system processes it, and AI detects patterns. Based on those patterns, it can anticipate failures, detect anomalies, adjust behavior, or activate automatic responses.</p><p>The benefit is clear: more data from the physical world can lead to better predictions and automation. However, there is also an important risk: privacy. Connected devices may collect sensitive information about habits, health, location, and behavior. For this reason, combining IoT with AI requires special care regarding what data is collected, who uses it, and for what purpose.</p><h3>12. Weak AI and Strong AI</h3><p><strong>Weak AI</strong> is artificial intelligence designed to perform specific tasks. This is the type of AI that exists today. It can be highly effective in a particular domain, but it does not possess general understanding or consciousness.</p><p>A recommendation system, a fraud detector, an automatic translator, a chatbot, or an image generator are examples of weak AI. They can produce useful results, but they do not understand the world as a human does.</p><p><strong>Strong AI</strong> would be an artificial intelligence with real understanding, general reasoning, conceptual autonomy, and possibly consciousness. It would not merely simulate intelligence; it would possess genuine intelligence.</p><p>Strong AI does not currently exist. Even the most advanced modern models are still weak AI because they operate through data, patterns, and statistical models.</p><h3>13. Predictive AI</h3><p>Predictive AI uses historical data to estimate future outcomes. It does not truly predict the future. It calculates probabilities based on patterns found in past data.</p>
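<p>As a schematic of what “calculating probabilities from past patterns” can look like, the sketch below combines a few input features with weights and maps the result to a value between 0 and 1. The features and weights are invented for illustration; in a real system they would be learned from historical data.</p><pre>// Schematic predictive scoring: weighted features mapped to a probability.<br>// The weights below are invented; training would normally produce them.<br>public class ChurnScore {<br><br>    static double probability(double weeksInactive, double supportTickets) {<br>        double z = -2.0 + 0.8 * weeksInactive + 0.5 * supportTickets;<br>        return 1.0 / (1.0 + Math.exp(-z)); // logistic function<br>    }<br><br>    public static void main(String[] args) {<br>        // An inactive user with several complaints scores as high risk.<br>        System.out.printf(&quot;Churn risk: %.2f%n&quot;, probability(4.0, 3.0));<br>    }<br>}</pre>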
<p>This approach is used to recommend products, detect fraud, estimate credit risk, anticipate customer churn, personalize content, forecast demand, or classify behavior.</p><p>The logic is usually similar: the system observes previous data, identifies repeated patterns, and estimates which result is most likely.</p><p>Its strength is that it works very well for clearly defined problems. Its limitation is that each model is usually optimized for a specific task. A system trained to detect fraud is not necessarily useful for answering general questions or generating content.</p><h3>14. Generative AI</h3><p>Generative AI is a type of artificial intelligence capable of producing new content from patterns learned in large amounts of data.</p><p>It can generate text, images, code, audio, video, summaries, answers, or simulations. It does not create from nothing. It generates content by combining patterns learned during training.</p><p>In language models, the system learns relationships between words, phrases, topics, and contexts. When it responds, it generates text step by step, estimating which fragment is most likely according to the user’s input and the patterns it has learned.</p><p>In generative image models, the system learns visual relationships between objects, shapes, styles, lighting, and composition. It can then produce a new image based on those relationships.</p><p>Generative AI became possible through the combination of three factors: large amounts of digital data, deep learning models, and greater computing power.</p><p>The main difference between predictive AI and generative AI is the type of output. Predictive AI estimates probabilities about an outcome. Generative AI produces new content.</p><h3>15. The Relationship Between Data, Patterns, and Value</h3><p>Modern AI is built around one central idea: transforming data into useful outputs.</p><p>Symbolic AI used explicit rules. Machine learning learns patterns from data. Neural networks detect more complex relationships. Predictive AI uses those patterns to estimate future outcomes. Generative AI uses them to produce new content. NLP applies these methods to human language. Robotics and IoT bring AI into the physical world.</p><p>The value of AI is not that the machine “thinks.” Its value is that it can process information at a scale impossible for humans and identify useful patterns for decision-making, automation, or content generation.</p><h3>Final Summary</h3><p>Artificial intelligence is a branch of computer science that builds systems capable of performing tasks associated with human intelligence. However, this does not mean that these systems truly understand what they are doing.</p><p>Symbolic AI was one of the first approaches. It used symbols and explicit rules. It worked well for simple problems but struggled when too many possible combinations appeared.</p><p>Machine learning emerged as an alternative. Instead of programming every rule manually, the system learns patterns from data.</p><p>Artificial neural networks are machine learning models that process data through layers and adjust internal weights during training. They are useful for complex patterns, but they can be difficult to interpret.</p><p>The black box problem appears when a model produces a useful output, but it is not easy to explain how it reached that conclusion.</p><p>Natural language processing allows machines to work with human text or speech. 
It does not understand language like a person, but it identifies patterns in language and intention.</p><p>Robotics shows how AI can act in the physical world. In this context, errors are more serious, and safety becomes essential.</p><p>IoT turns physical objects into sources of data. Sensors and connected devices allow AI to analyze real-world patterns.</p><p>Weak AI is the AI that exists today: systems useful for specific tasks, without consciousness or real understanding. Strong AI would be an artificial intelligence with genuine understanding, but it does not exist yet.</p><p>Predictive AI uses historical data to estimate future probabilities. Generative AI uses learned patterns to produce new content.</p><p>The main idea is this: modern AI does not think like a human being. It processes data, detects patterns, and produces useful outputs through statistical models.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Artificial Intelligence]]></title>
            <link>https://medium.com/@barbieri.santiago/inteligencia-artificial-d65c356bf43b?source=rss-8a7f528dad2------2</link>
            <guid isPermaLink="false">https://medium.com/p/d65c356bf43b</guid>
            <category><![CDATA[inteligencia-artificial]]></category>
            <dc:creator><![CDATA[Santiago]]></dc:creator>
            <pubDate>Sat, 16 May 2026 17:13:42 GMT</pubDate>
            <atom:updated>2026-05-16T17:13:42.984Z</atom:updated>
            <content:encoded><![CDATA[<h3>Technical Foundations for Understanding Predictive AI, Machine Learning and Generative AI</h3><h3>1. Definition of Artificial Intelligence</h3><p>Artificial intelligence is a branch of computer science that studies how to build systems capable of performing tasks we normally associate with human intelligence. This includes recognizing patterns, classifying information, making decisions, solving problems, interpreting language, predicting behavior or generating content.</p><p>This definition does not mean that a machine thinks, understands or has consciousness. An AI system can produce a useful response without truly understanding what it is doing. What matters is distinguishing between <strong>intelligent behavior</strong> and <strong>real understanding</strong>. A machine may seem intelligent because it answers well, classifies correctly or predicts accurately, but internally it operates through rules, data, statistical models or machine learning.</p><p>In simple terms, an AI system receives an input, processes it through some model and produces an output. That output may be a prediction, a classification, a recommendation, a written response, an image, an alert or an action within a physical environment.</p><h3>2. The Problem of Defining Intelligence</h3><p>Before talking about artificial intelligence, it is necessary to understand that human intelligence is not easy to define either. One person may be very good at solving mathematical problems and bad at writing. Another may pick up languages easily but have little ability for logical reasoning. Another may stand out in creativity, music, communication or spatial perception.</p><p>For this reason, when speaking about artificial intelligence, it is best not to imagine a copy of the human mind. In practice, current AI does not seek to reproduce all of human intelligence, but to build systems capable of solving concrete tasks efficiently.</p><p>AI is evaluated mainly by its external results. If a system detects fraud, translates text, answers questions or recommends content with good results, it is said to perform an intelligent task. But that does not prove that the system understands the meaning of what it does.</p><p>This difference is fundamental to this entire document: <strong>current AI can simulate intelligent capabilities, but it does not possess human understanding</strong>.</p><h3>3. Symbolic AI and Rule-Based Systems</h3><p>The first approaches to artificial intelligence tried to represent reasoning through symbols and rules. This school of thought is known as <strong>symbolic AI</strong>.</p><p>The idea was that a problem could be described through an initial state, a set of rules and a goal. If the system applied those rules correctly, it could reach a solution. Within this approach appeared the <strong>General Problem Solver</strong>, developed by Allen Newell and Herbert Simon, which attempted to build a machine capable of solving problems through formal reasoning.</p><p><strong>Expert systems</strong> also emerged. In them, the knowledge of human specialists was transformed into explicit rules. For example, in a financial system, an expert could define rules to evaluate credit risk according to income, debt, payment history and other factors. The system seemed to make reasoned decisions, but in reality it only executed previously written instructions.</p><p>This approach was important because it showed that a machine could solve useful tasks through rules. However, it had a clear limit: it depended on humans anticipating and programming the possible situations.</p><h3>4. The Limit of Symbolic AI</h3><p>Symbolic AI works well when the environment is simple, stable and controlled. The problem appears when the real world introduces exceptions, ambiguities, incomplete data and too many possible combinations.</p><p>A rule-based system needs someone to write in advance what it should do in each situation. That may work for small problems, but it becomes unmanageable when there are thousands or millions of possible scenarios. This problem is known as <strong>combinatorial explosion</strong>.</p><p>The conclusion was important: it was not practical to manually program all the intelligence into the system. Instead of writing every rule, it was necessary to create systems capable of learning patterns from examples.</p><p>That is where the central shift toward <strong>machine learning</strong> appears.</p><h3>5. Machine Learning: Learning from Data</h3><p>Machine learning is an AI approach in which the system learns patterns from data instead of depending solely on rules written by humans.</p><p>In a symbolic system, knowledge is programmed directly. In machine learning, knowledge is obtained through training. The system receives examples, identifies regularities and adjusts its behavior to improve its results.</p><p>An important historical case was Arthur Samuel’s checkers program in 1959. The system played against itself, observed which moves produced better results and adjusted its strategy. It was not programmed with every possible move. It learned to improve through experience.</p><p>The underlying idea is simple: if a system receives enough examples, it can find patterns that allow it to make better decisions in new cases.</p><p>This does not mean that the machine understands what it does. It means that it can detect useful statistical relationships within the data.</p><h3>6. Massive Data and the Growth of Machine Learning</h3><p>For decades, machine learning was limited by the lack of digital data. The idea existed, but systems did not have enough information to learn complex patterns.</p><p>That situation changed with the internet and massive digitalization. People and organizations began generating enormous amounts of data: documents, images, audio, video, purchases, searches, interactions, forms, financial records, activity in applications and behavior within platforms.</p><p>This turned machine learning into a dominant approach within modern AI. A system can analyze volumes of information impossible to review manually and find patterns useful to an organization.</p><p>A more technical example is an online education platform. Each user interaction can generate data: how long a student watches a class, where they pause, where they drop off, which module they repeat, which course they finish and at what point they lose interest. Manually analyzing millions of records would be unfeasible. A machine learning model can detect dropout patterns, identify difficult content, recommend modules or improve the course structure.</p><p>The value is not only in storing data. The value appears when that data makes it possible to improve decisions, automate processes or anticipate behavior.</p><h3>7. Artificial Neural Networks</h3><p>Within machine learning, one of the most important approaches is the <strong>artificial neural network</strong>. These are computational models loosely inspired by the structure of the brain, although they are not a real copy of the human brain.</p><p>A neural network receives input data, processes it through internal layers and produces an output. The first layer is called the <strong>input layer</strong>. The last one is called the <strong>output layer</strong>. Between them sit the <strong>hidden layers</strong>, which progressively transform the information.</p><p>Learning happens by adjusting internal values called <strong>weights</strong>. At the beginning, those weights do not represent useful knowledge. During training, the network processes examples, compares its output with the expected answer and modifies its weights to reduce the error. That process is repeated many times until the model improves.</p><p>A technical example can be financial fraud detection. The model’s input may include the amount of the operation, the time, the type of business, the approximate location, the device used, the customer’s history and the frequency of transactions. The output may be a risk probability. The network does not “understand” fraud the way a human analyst would; it learns statistical combinations that tend to appear in normal or suspicious operations.</p><p>Neural networks are useful because they can detect complex patterns in large volumes of data. But they also introduce a problem: the more complex the network, the harder it can be to explain exactly how it reached a conclusion.</p><h3>8. The Black Box Problem</h3><p>Many advanced models work like a <strong>black box</strong>. They receive data, perform internal transformations and produce an output, but the intermediate process can be difficult for a person to interpret.</p><p>This is not always serious. If a system recommends the wrong movie, the harm is low. But in areas such as healthcare, credit, insurance, security or autonomous driving, it is not enough for the system to be right most of the time. It also matters to understand why it made a decision.</p><p>For this reason, using AI in sensitive contexts requires human control, error evaluation, responsibility criteria and explanation mechanisms. A model may find patterns that humans do not perceive, but that does not mean that all of its conclusions should be accepted automatically.</p><h3>9. Natural Language Processing</h3><p>Natural language processing, or <strong>NLP</strong>, is the area of AI that allows machines to work with human language. Its goal is for a system to be able to receive text or speech, interpret the user’s intention and produce a useful output.</p><p>Human language is difficult for a machine because it does not work like an exact command. People use incomplete sentences, ambiguities, context, mistakes, indirect expressions and meanings that depend on the situation.</p><p>Older search systems depended heavily on keywords. If the user typed certain terms, the system looked for matches. Modern NLP systems try to go further: they analyze relationships between words, context, intention and probable meaning.</p><p>NLP is used in automatic translation, virtual assistants, chatbots, message classification, document summarization, sentiment analysis, speech transcription and response generation.</p><p>Even so, the technical point remains the same: the system does not understand language the way a person does. It learns statistical patterns of language use and applies them to produce useful responses.</p><h3>10. Robotics and AI in the Physical World</h3><p>Robotics allows a machine to perform physical tasks. Not all robotics uses advanced AI. Many industrial robots work through traditional programming: they repeat precise movements, assemble parts, transport objects or execute tasks in controlled environments.</p><p>The difference appears when the robot must act in a changing environment. In that case, it may need sensors, cameras, machine learning models and decision-making capability.</p><p>An autonomous vehicle shows this difference well. Turning the wheel or braking is not the central problem. The difficult part is interpreting when to do it. The system must process signals from the environment and distinguish obstacles, pedestrians, vehicles, lanes, traffic lights and unexpected situations. At that point, robotics stops being only a mechanical problem and also becomes a data problem.</p><p>However, in the physical world, errors carry a higher cost. A bad digital recommendation may be annoying; a bad decision by a robot can cause real damage. For this reason, many robotic systems continue to use simple rules when the environment is controlled and safety is the priority.</p><h3>11. The Internet of Things and AI</h3><p>The Internet of Things, or <strong>IoT</strong>, refers to physical objects connected to the internet that incorporate sensors and can send data. They may be smart watches, industrial sensors, cameras, home appliances, vehicles, medical devices or security systems.</p><p>The importance of IoT for AI lies in the fact that it turns the physical world into a constant source of data. Previously, many systems mainly analyzed digital data: searches, clicks, online purchases or documents. With IoT, it is also possible to record location, movement, temperature, physical activity, heart rate, the presence of people, machine usage or environmental conditions.</p><p>The relationship is direct: the device captures data, the system processes it and the AI detects patterns. With those patterns, it can anticipate failures, detect anomalies, adjust behaviors or trigger automatic responses.</p><p>The benefit is clear: more data from the physical world enables better predictions and automation. But an important risk also appears: privacy. Connected devices can record very sensitive information about habits, health, location and behavior. For this reason, IoT combined with AI requires special care about what data is collected, who uses it and for what purpose.</p><h3>12. Weak AI and Strong AI</h3><p>Weak AI is artificial intelligence designed to perform specific tasks. It is the AI that exists today. It can be very efficient in a concrete domain, but it possesses neither general understanding nor consciousness.</p><p>A recommendation system, a fraud detector, an automatic translator, a chatbot or an image-generating model are examples of weak AI. They can produce useful results, but they do not understand the world the way a person does.</p><p>Strong AI would be an artificial intelligence with real understanding, general reasoning, conceptual autonomy and possibly consciousness. It would not simulate intelligence: it would have genuine intelligence.</p><p>Strong AI does not currently exist. Even the most advanced modern models are still weak AI, because they operate through data, patterns and statistical models.</p><h3>13. Predictive AI</h3><p>Predictive AI uses historical data to estimate future outcomes. It does not guess the future. It calculates probabilities based on patterns from the past.</p><p>This approach is used to recommend products, detect fraud, estimate credit risk, anticipate customer churn, personalize content, forecast demand or classify behaviors.</p><p>The logic is always similar: the system observes previous data, identifies repeated patterns and estimates which outcome is most likely.</p><p>Its strength is that it works very well on well-defined problems. Its limit is that each model is normally optimized for a concrete task. A system trained to detect fraud is not necessarily useful for answering general questions or generating content.</p><h3>14. Generative AI</h3><p>Generative AI is a type of artificial intelligence capable of producing new content from patterns learned in large volumes of data.</p><p>It can generate text, images, code, audio, video, summaries, answers or simulations. It does not create from nothing. It generates content by combining patterns learned during training.</p><p>In language models, the system learns relationships between words, phrases, topics and contexts. When it responds, it generates text step by step, estimating which fragment is most likely according to the user’s input and the learned patterns.</p><p>In generative image models, the system learns visual relationships between objects, shapes, styles, lighting and composition. It can then produce a new image from those relationships.</p><p>Generative AI became possible through the combination of three factors: large amounts of digital data, deep learning models and greater computing power.</p><p>Its difference from predictive AI lies in the type of output. Predictive AI estimates probabilities about an outcome. Generative AI produces new content.</p><h3>15. The Relationship Between Data, Patterns and Value</h3><p>All of modern AI revolves around one central idea: turning data into useful outputs.</p><p>Symbolic AI used explicit rules. Machine learning learns patterns from data. Neural networks make it possible to detect more complex relationships. Predictive AI uses those patterns to estimate future outcomes. Generative AI uses them to produce new content. NLP applies these methods to human language. Robotics and IoT take AI into the physical world.</p><p>The value of AI is not that the machine “thinks”, but that it can process information at a scale impossible for a person and find useful patterns for making decisions, automating tasks or generating content.</p><h3>Final Summary</h3><p>Artificial intelligence is a branch of computer science that builds systems capable of performing tasks associated with human intelligence, but that does not mean those systems understand what they do.</p><p>Symbolic AI was one of the first approaches. It used symbols and explicit rules. It worked on simple problems, but it failed when too many possible combinations appeared.</p><p>Machine learning emerged as an alternative: instead of programming every rule, the system learns patterns from data.</p><p>Artificial neural networks are machine learning models that process data in layers and adjust internal weights during training. They are useful for complex patterns, but they can be difficult to interpret.</p><p>The black box problem appears when a model produces a useful output, but it is not easy to explain how it reached that conclusion.</p><p>Natural language processing allows machines to work with human text or speech. It does not understand like a person, but it identifies patterns of language and intention.</p><p>Robotics shows how AI can act in the physical world. In that context, errors are more costly and safety becomes central.</p><p>IoT turns physical objects into sources of data. Sensors and connected devices allow AI to analyze patterns from the real world.</p><p>Weak AI is today’s AI: systems useful for specific tasks, without consciousness or real understanding. Strong AI would be an artificial intelligence with genuine understanding, but it does not exist yet.</p><p>Predictive AI uses historical data to estimate future probabilities. Generative AI uses learned patterns to produce new content.</p><p>The main idea of this whole topic is this: <strong>modern AI does not think like a human being; it processes data, detects patterns and produces useful outputs through statistical models.</strong></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Inversion of Control (IoC) and Container-Based Runtime Architecture
Structure, Behavior and…]]></title>
            <link>https://medium.com/@barbieri.santiago/inversion-of-control-ioc-and-container-based-runtime-architecture-structure-behavior-and-07fe30de527d?source=rss-8a7f528dad2------2</link>
            <guid isPermaLink="false">https://medium.com/p/07fe30de527d</guid>
            <category><![CDATA[spring]]></category>
            <category><![CDATA[inversion-of-control]]></category>
            <category><![CDATA[dependency-injection]]></category>
            <category><![CDATA[java]]></category>
            <dc:creator><![CDATA[Santiago]]></dc:creator>
            <pubDate>Wed, 13 May 2026 00:49:51 GMT</pubDate>
            <atom:updated>2026-05-13T00:49:51.622Z</atom:updated>
            <content:encoded><![CDATA[<h3><strong>Inversion of Control (IoC) and Container-Based Runtime Architecture</strong><br><em>Structure, Behavior and Execution Model</em></h3><h3><strong>Document Structure Overview</strong></h3><p>This document progresses from conceptual foundations to runtime execution. Each section builds on the previous one, moving from principles (IoC) to implementation (Container, DI), behavioral extension (AOP), infrastructure concerns (Transactions, Persistence) and finally to the complete runtime execution model.</p><h3>1. Inversion of Control (IoC)</h3><p>1.1 Definition of IoC<br>1.2 Limitations of Traditional Architecture<br>1.3 Control Inversion Model<br>1.4 Core Principles of IoC<br> 1.4.1 Separation of Responsibilities<br> 1.4.2 Dependency Externalization<br> 1.4.3 Dependency Inversion<br> 1.4.4 Centralized Composition<br> 1.4.5 Controlled Lifecycle Management<br> 1.4.6 Managed Object Scope</p><h3>2. IoC Container</h3><p>2.1 Definition of the IoC Container<br>2.2 Responsibilities of the Container<br>2.3 Container Configuration<br> 2.3.1 Annotation-Based Configuration<br> 2.3.2 Java-Based Configuration<br> 2.3.3 External Configuration<br>2.4 Bootstrapping Process</p><h3>3. Object Management</h3><p>3.1 Definition of Object Management<br>3.2 Bean Lifecycle<br> 3.2.1 Instantiation<br> 3.2.2 Dependency Injection Phase<br> 3.2.3 Initialization Phase<br> 3.2.4 Ready State<br> 3.2.5 Destruction Phase<br>3.3 Bean Scope<br> 3.3.1 Singleton Scope<br> 3.3.2 Prototype Scope<br> 3.3.3 Contextual Scopes (Request, Session)</p><h3>4. Dependency Injection</h3><p><em>(Operational realization of IoC, its concrete implementation)</em></p><p>4.1 DI as Implementation of IoC<br>4.2 Core Idea of Dependency Injection<br>4.3 Forms of Dependency Injection<br> 4.3.1 Constructor Injection<br> 4.3.2 Setter Injection<br> 4.3.3 Field Injection<br>4.4 Dependency Resolution Process<br>4.5 Relationship with IoC Container</p><h3>5. Aspect-Oriented Programming (AOP)</h3><p>5.1 Definition of AOP<br>5.2 Separation of Cross-Cutting Concerns<br>5.3 Interception Mechanism<br>5.4 Proxy-Based Execution Model<br>5.5 Aspects<br> 5.5.1 Pointcut<br> 5.5.2 Advice<br> 5.5.3 Weaving<br>5.6 Execution Flow with AOP<br>5.7 Relationship with IoC Container</p><h3>6. Transaction Management</h3><p>6.1 Definition of Transaction Management<br>6.2 Transactions as Aspects (AOP-Based Model)<br>6.3 Transaction Interception Flow<br>6.4 Transaction Lifecycle<br> 6.4.1 Transaction Start<br> 6.4.2 Execution Phase<br> 6.4.3 Commit Phase<br> 6.4.4 Rollback Phase<br>6.5 Declarative Transaction Model<br>6.6 Relationship with IoC and Dependency Injection</p><h3>7. Persistence Integration</h3><p>7.1 Definition of Persistence Integration<br>7.2 Persistence Components<br> 7.2.1 Data Source<br> 7.2.2 Entity Manager / Session<br> 7.2.3 Repositories / Data Access Layer<br>7.3 Persistence Context<br>7.4 Integration with Container-Managed Transactions<br>7.5 Execution Flow with Persistence<br>7.6 Abstraction of Data Access</p><h3>8. Runtime Execution Model</h3><p><em>(Integration of all previous layers into a unified execution pipeline)</em></p><p>8.1 Definition of Runtime Execution Model<br>8.2 Structural Phase vs Runtime Phase<br>8.3 End-to-End Execution Flow<br>8.4 Unified Execution Pipeline<br> 8.4.1 IoC Container<br> 8.4.2 Dependency Injection<br> 8.4.3 AOP<br> 8.4.4 Transaction Management<br> 8.4.5 Persistence Integration<br>8.5 Transparency of Execution<br>8.6 Determinism and Consistency</p><h3>1. 
Inversion of Control (IoC)</h3><h4>1.1 Definition of IoC</h4><p>Inversion of Control (IoC) is a software design principle in which the responsibility for object creation, dependency resolution, configuration, and lifecycle management is delegated from application code to an external system, typically referred to as a container or framework.</p><p>In a conventional design model, application components are responsible for constructing their own dependencies and controlling how objects are created and connected. This results in a system where object relationships and infrastructure concerns are embedded directly within business logic.</p><p>IoC reverses this control model. Application components no longer manage their own construction or dependency wiring. Instead, they declare their required dependencies and an external container is responsible for managing the complete object graph at runtime.</p><p>As a result, application code is decoupled from infrastructure concerns and can focus on domain-specific behavior, while the container assumes responsibility for system-level concerns.</p><h4>1.2 Limitations of Traditional Architecture</h4><p>In traditional architectures, object creation and dependency management are handled explicitly within application code.</p><p>First, tight coupling emerges between components. When a class directly instantiates its dependencies, it becomes bound to specific implementations, making substitution or extension difficult without modifying the original code.</p><p>Second, infrastructure concerns such as configuration, lifecycle handling and dependency wiring become intertwined with business logic. This mixing of responsibilities reduces clarity and makes the system harder to maintain and evolve.</p><p>Third, the absence of centralized control over object composition leads to fragmentation. Object relationships are defined across multiple classes, preventing a unified view of the system structure.</p><p>Finally, testability is significantly reduced. Since dependencies are created internally, replacing them with mock or alternative implementations requires invasive changes or complex workarounds.</p><p>These limitations highlight the need for a model in which object management is externalized and controlled in a consistent and centralized manner.</p><h4>1.3 Control Inversion Model</h4><p>The core idea of IoC lies in the inversion of control over object creation and composition.</p><p>In a traditional execution model, control flows from the application code outward:</p><ul><li>Classes instantiate their dependencies</li><li>Objects define how they are connected</li><li>Execution is driven directly by application logic</li></ul><p>Under IoC, this flow is reversed:</p><ul><li>The container constructs objects</li><li>Dependencies are resolved externally</li><li>The application operates within a container-managed environment</li></ul><p>This inversion transforms the role of application code. Instead of being responsible for system assembly, it becomes declarative in nature: components specify what they require, not how those requirements are fulfilled.</p>
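<p>A minimal sketch of the two models side by side (the class names are invented for illustration):</p><pre>// Hypothetical collaborators used only to illustrate the inversion.<br>class ReportRepository { }<br><br>// Traditional model: the class constructs its own dependency and is<br>// permanently bound to that concrete implementation.<br>class TraditionalReportService {<br>    private final ReportRepository repository = new ReportRepository();<br>}<br><br>// Inverted model: the class only declares the dependency it requires;<br>// an external container supplies a suitable instance at runtime.<br>class ManagedReportService {<br>    private final ReportRepository repository;<br><br>    ManagedReportService(ReportRepository repository) {<br>        this.repository = repository; // provided from outside<br>    }<br>}</pre>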
<p>The container, in turn, acts as the central execution environment. It builds the object graph, enforces configuration rules, and ensures that all components are correctly initialized and connected before use.</p><p>This model introduces a clear separation between:</p><ul><li><strong>Structure</strong> → defined and managed by the container</li><li><strong>Behavior</strong> → implemented by application components</li></ul><p>By shifting control outward, IoC establishes the foundation for all higher-level mechanisms, including dependency injection, lifecycle management and runtime behavior interception.</p><h4>1.4 Core Principles of IoC</h4><p>IoC is not a single mechanism but a set of architectural principles that define how responsibilities are distributed within a system.</p><h4>1.4.1 Separation of Responsibilities</h4><p>IoC enforces a strict separation between business logic and infrastructure concerns.</p><p>Application components are responsible only for implementing domain behavior. All concerns related to object creation, dependency wiring, configuration and lifecycle management are delegated to the container.</p><p>This separation ensures that business logic remains isolated, improving clarity, maintainability and architectural consistency.</p><h4>1.4.2 Dependency Externalization</h4><p>Under IoC, objects do not construct their own dependencies.</p><p>Instead, dependencies are provided externally by the container. This shifts the responsibility for object composition out of application code and into a centralized system.</p><p>As a result, classes no longer control their internal structure, which reduces coupling and ensures consistent dependency management across the application.</p><h4>1.4.3 Dependency Inversion</h4><p>IoC enforces dependency inversion at the architectural level.</p><p>High-level components depend on abstractions rather than concrete implementations. This decouples business logic from specific technologies or implementations, allowing components to be replaced or extended without modifying core logic.</p><p>This principle also enhances testability, as dependencies can be substituted with alternative implementations in isolation.</p><h4>1.4.4 Centralized Composition</h4><p>The structure of the application is defined in a centralized and declarative manner.</p><p>Instead of distributing object creation logic across multiple classes, the container maintains a unified definition of the object graph through configuration metadata.</p><p>This provides a single source of truth for component relationships, making the system easier to understand, modify and evolve.</p><h4>1.4.5 Controlled Lifecycle Management</h4><p>The lifecycle of application objects is fully managed by the container.</p><p>This includes instantiation, dependency injection, initialization and destruction. Application code does not explicitly control these phases.</p><p>By centralizing lifecycle management, the system ensures consistent behavior and removes repetitive infrastructure logic from business components.</p><h4>1.4.6 Managed Object Scope</h4><p>IoC introduces the concept of scope to control how objects are created and reused.</p><p>The container determines whether objects are shared across the application, created per request or instantiated on demand. This behavior is defined declaratively and enforced at runtime.</p><blockquote>Inversion of Control is the foundational principle that redefines how applications are structured.</blockquote><blockquote>It removes responsibility for object creation from application code and delegates it to an external container. 
This shift establishes a clear separation between structure and behavior, enabling a modular, maintainable and extensible architecture.</blockquote><blockquote>All subsequent mechanisms (Dependency Injection, Object Management, AOP, Transaction Management and Persistence Integration) are built upon this foundational model.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*KrJKEr_LL1_viDYyuf-CJA.png" /></figure><h3>2. IoC Container</h3><h4>2.1 Definition of the IoC Container</h4><p>The IoC Container is the runtime component responsible for realizing the Inversion of Control principle. It acts as the central execution system that constructs, configures and manages the complete set of application objects during runtime.</p><p>Instead of allowing application components to instantiate and wire their own dependencies, the container assumes full responsibility for assembling the object graph based on configuration metadata. This includes creating objects, resolving their dependencies and ensuring that all components are properly initialized before they are used.</p><p>The container defines the execution environment in which the application operates. Application components do not exist independently; they are created, connected and managed within the context of the container. As a result, structural concerns such as object relationships, lifecycle transitions and configuration are externalized and handled in a controlled and consistent manner.</p><h4>2.2 Responsibilities of the Container</h4><p>The IoC Container is responsible exclusively for infrastructure-level concerns related to system construction.</p><p>At its core, the container manages object instantiation. It creates application components based on configuration metadata, determining when and how each object should be constructed.</p><p>Once objects are instantiated, the container performs dependency resolution. It identifies required dependencies for each component and injects the appropriate instances, ensuring that all objects are correctly connected.</p><p>The container also controls the lifecycle of managed objects. It governs initialization and destruction.</p><p>Another key responsibility is maintaining the consistency of the object graph. The container ensures that dependencies are resolved deterministically, preventing duplication, misconfiguration or inconsistent state across the system.</p><p>In addition, the container may enhance objects at runtime through mechanisms such as proxy creation. This enables interception of method calls and supports higher-level features like aspect-oriented behavior and transaction management.</p><h4>2.3 Container Configuration</h4><p>Container Configuration defines how the IoC Container is instructed to construct and manage the application. Since the container has no inherent knowledge of which objects exist or how they are related, it relies on configuration metadata that describes the system structure.</p><p>This configuration acts as a blueprint of the application’s object graph. 
It specifies which components must be managed, how they are instantiated and how dependencies between them should be resolved.</p><p>Regardless of the approach used, configuration serves a single purpose: providing the container with a complete and consistent definition of the system.</p><h4>2.3.1 Annotation-Based Configuration</h4><p>In annotation-based configuration, components are defined directly within the source code using metadata annotations.</p><p>The container scans the codebase, detects annotated classes and automatically registers them as managed components. Dependencies and configuration details are also expressed through annotations, allowing the container to resolve relationships without external definitions.</p><p>This approach reduces explicit configuration and keeps structural information close to the implementation, while still delegating control to the container.</p><pre>@Repository<br>public class UserRepository {<br><br>    public String findUser() {<br>        return &quot;User from database&quot;;<br>    }<br>}<br><br>@Service<br>public class UserService {<br><br>    private final UserRepository repository;<br><br>    @Autowired<br>    public UserService(UserRepository repository) {<br>        this.repository = repository;<br>    }<br><br>    public String getUser() {<br>        return repository.findUser();<br>    }<br>}<br><br>@RestController<br>public class UserController {<br><br>    private final UserService service;<br><br>    @Autowired<br>    public UserController(UserService service) {<br>        this.service = service;<br>    }<br><br>    @GetMapping(&quot;/user&quot;)<br>    public String user() {<br>        return service.getUser();<br>    }<br>}</pre><pre>@ComponentScan<br>        ↓<br>Detect annotated classes<br>        ↓<br>Create beans automatically<br>        ↓<br>Resolve dependencies<br>        ↓<br>Inject dependencies<br>        ↓<br>Application ready</pre><h4>2.3.2 Java-Based Configuration</h4><p>In Java-based configuration, the structure of the application is defined through dedicated configuration classes.</p><p>These classes explicitly declare managed objects and their dependencies using methods. 
The container processes these definitions and constructs the object graph accordingly.</p><p>This approach provides fine-grained control over object creation and is particularly useful when default behavior needs to be customized or when complex wiring logic is required.</p><pre>public class UserRepository {<br><br>    public String findUser() {<br>        return &quot;User from database&quot;;<br>    }<br>}<br><br>public class UserService {<br><br>    private final UserRepository repository;<br><br>    public UserService(UserRepository repository) {<br>        this.repository = repository;<br>    }<br><br>    public String getUser() {<br>        return repository.findUser();<br>    }<br>}<br><br>@Configuration<br>public class AppConfig {<br><br>    @Bean<br>    public UserRepository userRepository() {<br>        return new UserRepository();<br>    }<br><br>    @Bean<br>    public UserService userService() {<br>        return new UserService(userRepository());<br>    }<br>}</pre><pre>Read @Configuration class<br>        ↓<br>Execute @Bean methods<br>        ↓<br>Create objects manually<br>        ↓<br>Register beans in container<br>        ↓<br>Resolve dependencies</pre><h4>2.3.3 External Configuration</h4><p>In external or file-based configuration, the structure of the system is defined outside of the application code.</p><p>The container reads configuration files and uses them to construct and connect application components. This approach fully separates configuration from implementation, allowing changes to system structure without modifying source code.</p><p>Although less common in modern systems, it remains relevant in environments where strict separation between configuration and code is required.</p><pre>public class UserRepository {<br><br>    public String findUser() {<br>        return &quot;User from database&quot;;<br>    }<br>}<br><br>public class UserService {<br><br>    private UserRepository repository;<br><br>    public void setRepository(UserRepository repository) {<br>        this.repository = repository;<br>    }<br><br>    public String getUser() {<br>        return repository.findUser();<br>    }<br>}</pre><pre>&lt;beans&gt;<br><br>    &lt;bean id=&quot;userRepository&quot;<br>          class=&quot;com.example.UserRepository&quot;/&gt;<br><br>    &lt;bean id=&quot;userService&quot;<br>          class=&quot;com.example.UserService&quot;&gt;<br><br>        &lt;property name=&quot;repository&quot;<br>                  ref=&quot;userRepository&quot;/&gt;<br><br>    &lt;/bean&gt;<br><br>&lt;/beans&gt;</pre><pre>Read XML file<br>        ↓<br>Parse bean definitions<br>        ↓<br>Create objects<br>        ↓<br>Inject dependencies<br>        ↓<br>Build object graph</pre><h4>2.4 Bootstrapping Process</h4><p>Bootstrapping is the process of initializing the IoC Container and preparing it to manage the application.</p><p>It represents the transition from a static codebase to a fully operational runtime system. During this phase, the container is created and begins processing configuration metadata.</p><p>The container loads component definitions, scans for managed elements, and builds an internal representation of the application structure. It determines how objects should be instantiated, resolves dependencies and prepares the infrastructure required for lifecycle and execution management.</p><p>In many cases, the container performs early initialization of certain components during this phase, particularly those required for core system functionality, such as singleton instances or internal services.</p><p>Once all definitions are processed and the internal state is fully constructed, the container reaches a stable state. At this point, it is capable of creating objects on demand, injecting dependencies and managing lifecycle transitions.</p><p>Bootstrapping is not part of business execution. It is a preparatory phase that establishes the runtime environment in which the application will operate.</p>
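<p>A minimal sketch of this phase, assuming a Spring-style container and reusing the AppConfig class from section 2.3.2 (the Main class itself is illustrative):</p><pre>import org.springframework.context.annotation.AnnotationConfigApplicationContext;<br><br>public class Main {<br><br>    public static void main(String[] args) {<br>        // Bootstrapping: create the container and let it process the<br>        // configuration metadata declared in AppConfig.<br>        AnnotationConfigApplicationContext context =<br>                new AnnotationConfigApplicationContext(AppConfig.class);<br><br>        // The container is now stable: beans can be requested on demand,<br>        // fully constructed and with their dependencies injected.<br>        UserService service = context.getBean(UserService.class);<br>        System.out.println(service.getUser());<br><br>        // Closing the container triggers the destruction phase.<br>        context.close();<br>    }<br>}</pre>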
It determines how objects should be instantiated, resolves dependencies and prepares the infrastructure required for lifecycle and execution management.</p><p>In many cases, the container performs early initialization of certain components during this phase, particularly those required for core system functionality, such as singleton instances or internal services.</p><p>Once all definitions are processed and the internal state is fully constructed, the container reaches a stable state. At this point, it is capable of creating objects on demand, injecting dependencies and managing lifecycle transitions.</p><p>Bootstrapping is not part of business execution. It is a preparatory phase that establishes the runtime environment in which the application will operate.</p><blockquote>The IoC Container is the execution engine of the application.</blockquote><blockquote>It transforms a set of class definitions and configuration metadata into a fully connected and managed system. It does not implement business behavior but provides the structural and operational foundation required for all higher-level mechanisms.</blockquote><blockquote>Without the container, IoC remains a design principle. With the container, IoC becomes a functioning runtime architecture.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ilkvsHsvfCEqKEymm6p_Pg.png" /></figure><h3>3. Object Management</h3><h4>3.1 Definition of Object Management</h4><p>Object Management defines how the IoC Container controls the creation, initialization, usage and destruction of application objects (commonly referred to as beans) during runtime.</p><p>Once the container has completed the bootstrapping phase and processed configuration metadata, it becomes responsible for managing all application components. These components do not exist independently; they are instantiated, configured and maintained within the container’s execution context.</p><p>Object Management is the concrete realization of key IoC principles, particularly controlled lifecycle management and managed object scope. It ensures that object creation is consistent, dependencies are correctly resolved and runtime behavior remains predictable.</p><p>Rather than simply instantiating objects, the container applies a structured lifecycle model and enforces scope rules that determine how objects are reused and shared across the system.</p><h4>3.2 Bean Lifecycle</h4><p>The Bean Lifecycle describes the sequence of stages that a managed object passes through from its creation to its destruction within the IoC Container.</p><p>Object instantiation is only the initial step in a broader lifecycle fully controlled by the container. Before a bean becomes available for use, it must pass through several phases that ensure it is properly configured and operational.</p><h4>3.2.1 Instantiation</h4><p>During the instantiation phase, the container creates the object instance.</p><p>This is typically performed through constructor invocation or reflection mechanisms. At this stage, the object exists in memory but is not yet fully functional, as its dependencies have not been injected.</p><h4>3.2.2 Dependency Injection Phase</h4><p>After instantiation, the container performs dependency injection.</p><p>All required dependencies are resolved and injected into the object. 
This establishes the relationships between components and completes the structural composition of the bean.</p><p>At the end of this phase, the object has all required collaborators, but it may still require initialization logic before it can be used safely.</p><h4>3.2.3 Initialization Phase</h4><p>In the initialization phase, the container applies any required initialization logic.</p><p>This may include invoking lifecycle callbacks, executing initialization methods or applying internal post-processing mechanisms. The goal of this phase is to ensure that the object reaches a valid and stable state.</p><p>Only after initialization is complete is the object considered ready for use.</p><h4>3.2.4 Ready State</h4><p>Once initialization is completed, the bean enters the ready state.</p><p>In this state, the object is fully configured and available for use by other components in the application. It participates in normal application execution and remains in this state for a duration defined by its scope.</p><h4>3.2.5 Destruction Phase</h4><p>When the container determines that the bean is no longer needed, it enters the destruction phase.</p><p>During this phase, the container executes any cleanup logic associated with the object, such as releasing resources, closing connections or invoking destruction callbacks.</p><p>After this process is complete, the object is removed from the container’s management context.</p><h4>3.3 Bean Scope</h4><p>Bean Scope defines how the IoC Container controls the creation, reuse and visibility of bean instances within the application.</p><p>While the lifecycle describes how an object evolves over time, scope defines how many instances exist and how they are shared.</p><h4>3.3.1 Singleton Scope</h4><p>In singleton scope, the container creates a single instance of a bean and shares it across the entire application context.</p><p>Every request for that bean returns the same instance. This model is memory-efficient and suitable for stateless or shared components.</p><h4>3.3.2 Prototype Scope</h4><p>In prototype scope, the container creates a new instance of the bean each time it is requested.</p><p>Unlike singleton beans, prototype instances are not fully managed after creation. The container is responsible for instantiation and dependency injection, but it does not manage the complete lifecycle beyond that point.</p><h4>3.3.3 Contextual Scopes (Request, Session)</h4><p>In environments that define execution contexts (such as web applications), additional scopes are available.</p><p>A request scope creates a new bean instance for each incoming request, ensuring isolation between requests. A session scope maintains a separate instance for each user session, allowing state to be preserved across multiple interactions.</p><p>These scopes align object lifetime with external execution contexts and enable more granular control over resource usage and state management.</p>
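<p>To make both mechanisms concrete, the sketches below use standard Spring annotations on hypothetical beans. The first registers lifecycle callbacks through @PostConstruct and @PreDestroy:</p><pre>@Service<br>public class CacheService {<br><br>    private Map&lt;String, String&gt; cache;<br><br>    @PostConstruct<br>    public void init() {<br>        // runs once dependency injection has completed<br>        cache = new HashMap&lt;&gt;();<br>    }<br><br>    @PreDestroy<br>    public void cleanup() {<br>        // runs before the container discards the bean<br>        cache.clear();<br>    }<br>}</pre><p>The second declares a non-default scope. Beans are singletons unless stated otherwise; a prototype bean only requires an explicit annotation:</p><pre>@Component<br>@Scope(&quot;prototype&quot;)<br>public class ReportBuilder {<br><br>    // a fresh instance is created every time<br>    // this bean is requested from the container<br>}</pre>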
<blockquote>Object Management is where IoC becomes operational.</blockquote><blockquote>The IoC Container does not merely construct objects; it actively manages their entire existence. Through lifecycle control and scope definition, it ensures that all components are created, configured, reused and destroyed in a consistent and predictable manner.</blockquote><blockquote>This allows application code to remain focused on business behavior, while the container guarantees structural integrity and runtime stability.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GOE8Q0PbGefv4RhS82HRbA.png" /></figure><h3>4. Dependency Injection</h3><p><em>(Operational realization of IoC, concrete implementation mechanism)</em></p><h3>4.1 DI as Implementation of IoC</h3><p>Dependency Injection (DI) is the operational mechanism through which Inversion of Control (IoC) is realized in practice.</p><p>IoC defines the architectural principle that object creation and dependency management must be delegated to an external system. Dependency Injection is the concrete technique used by the IoC Container to perform that delegation.</p><p>Without DI, IoC remains a conceptual model. With DI, the container becomes capable of constructing application components, resolving their dependencies and assembling the complete object graph during runtime.</p><p>Under traditional object-oriented design, classes instantiate their collaborators directly. This causes object composition logic to become embedded within business code.</p><p>Example of traditional architecture:</p><pre>public class UserService {<br><br>    private UserRepository repository =<br>            new UserRepository();<br><br>}</pre><p>In this model:</p><ul><li>the class controls dependency creation</li><li>object composition is decentralized</li><li>implementations are tightly coupled</li><li>substitution becomes difficult</li></ul><p>Dependency Injection reverses this responsibility.</p><p>Instead of creating dependencies internally, application components declare the dependencies they require, while the container provides those dependencies externally.</p><p>Example using Spring:</p><pre>@Service<br>public class UserService {<br><br>    private final UserRepository repository;<br><br>    @Autowired<br>    public UserService(UserRepository repository) {<br>        this.repository = repository;<br>    }<br>}</pre><p>Here:</p><ul><li>UserService does not create UserRepository</li><li>the dependency is declared</li><li>the container resolves and injects it</li><li>object composition becomes centralized</li></ul><p>DI therefore transforms application components from object creators into declarative units that describe required collaborators without controlling how they are obtained.</p><p>This introduces a fundamental architectural separation:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/765/1*UEAmlExyUcty6i3mkTcsow.png" /></figure><p>Dependency Injection is therefore not an independent architectural concept separate from IoC. It is the primary implementation strategy used by container-based frameworks such as Spring to operationalize IoC at runtime.</p><h3>4.2 Core Idea of Dependency Injection</h3><p>The core idea of Dependency Injection is that objects should not construct or locate their own dependencies. 
Instead, dependencies must be provided externally by the container.</p><p>Under DI, components become passive recipients of collaborators.</p><p>A component only declares:</p><ul><li>what it requires</li><li>which abstractions it depends on</li><li>which collaborators are necessary for operation</li></ul><p>The container assumes responsibility for:</p><ul><li>locating dependencies</li><li>constructing objects</li><li>resolving relationships</li><li>injecting the correct instances</li></ul><p>This model shifts the system from imperative composition to declarative composition.</p><p>Traditional model:</p><pre>Class → creates dependencies</pre><p>Dependency Injection model:</p><pre>Container → injects dependencies into class</pre><p>This inversion produces several architectural benefits.</p><h4>Reduced Coupling</h4><p>Classes no longer depend on concrete implementations.</p><p>Example:</p><pre>public class UserService {<br><br>    private final UserRepository repository;<br><br>    public UserService(UserRepository repository) {<br>        this.repository = repository;<br>    }<br>}</pre><p>The service depends on the abstraction of a collaborator, not on its creation process.</p><h4>Centralized Composition</h4><p>All object relationships are controlled by the container.</p><p>The complete application structure becomes externally manageable and deterministic.</p><h4>Improved Testability</h4><p>Dependencies can easily be replaced with mocks or alternative implementations.</p><p>Example:</p><pre>UserRepository mockRepository =<br>        Mockito.mock(UserRepository.class);<br><br>UserService service =<br>        new UserService(mockRepository);</pre><p>Because dependencies are injected externally, the service can be tested in isolation.</p><h4>Lifecycle Consistency</h4><p>The container controls:</p><ul><li>instantiation</li><li>initialization</li><li>reuse</li><li>destruction</li><li>scope behavior</li></ul><p>Application code remains focused exclusively on domain logic.</p><p>Dependency Injection therefore represents a structural composition model in which application components are assembled externally by the container rather than internally by application code.</p><h4>4.3 Forms of Dependency Injection</h4><p>Dependency Injection can be implemented through multiple injection strategies.</p><p>The IoC Container supports several mechanisms for supplying dependencies to managed objects.</p><p>The three primary forms are:</p><ul><li>Constructor Injection</li><li>Setter Injection</li><li>Field Injection</li></ul><p>Although all three achieve the same goal, they differ in:</p><ul><li>lifecycle semantics</li><li>immutability guarantees</li><li>visibility of dependencies</li><li>architectural safety</li></ul><h4>4.3.1 Constructor Injection</h4><p>Constructor Injection provides dependencies through the class constructor.</p><p>The container invokes the constructor and supplies all required collaborators during object creation.</p><p>Example:</p><pre>@Service<br>public class UserService {<br><br>    private final UserRepository repository;<br><br>    @Autowired<br>    public UserService(UserRepository repository) {<br>        this.repository = repository;<br>    }<br><br>    public String getUser() {<br>        return repository.findUser();<br>    }<br>}</pre><p>Execution process:</p><pre>Container creates UserRepository<br>        ↓<br>Container invokes UserService constructor<br>        ↓<br>Dependency injected during instantiation<br>        ↓<br>Fully initialized immutable object</pre><h4>Mandatory 
Dependencies</h4><p>A dependency required in the constructor becomes mandatory.</p><p>The object cannot exist in an invalid state.</p><h4>Immutability</h4><p>Dependencies can be declared final.</p><pre>private final UserRepository repository;</pre><p>This guarantees that injected collaborators cannot change after construction.</p><h4>Explicit Object Structure</h4><p>All required collaborators are visible directly in the constructor signature.</p><p>This improves readability and architectural clarity.</p><h4>Preferred Strategy in Spring</h4><p>Constructor Injection is generally considered the recommended approach in modern Spring applications because it promotes:</p><ul><li>immutability</li><li>explicit dependencies</li><li>safer initialization</li><li>easier testing</li></ul><h4>4.3.2 Setter Injection</h4><p>Setter Injection provides dependencies through setter methods after object creation.</p><p>The container:</p><ol><li>creates the object,</li><li>then invokes setter methods to inject collaborators.</li></ol><p>Example:</p><pre>@Service<br>public class UserService {<br><br>    private UserRepository repository;<br><br>    @Autowired<br>    public void setRepository(UserRepository repository) {<br>        this.repository = repository;<br>    }<br><br>    public String getUser() {<br>        return repository.findUser();<br>    }<br>}</pre><p>Execution process:</p><pre>Container instantiates object<br>        ↓<br>Object exists without dependency<br>        ↓<br>Container invokes setter method<br>        ↓<br>Dependency assigned after creation</pre><h4>Characteristics</h4><h4>Optional Dependencies</h4><p>Setter Injection is useful when dependencies are optional rather than mandatory.</p><h4>Mutable Structure</h4><p>Dependencies can be modified after object creation.</p><p>This introduces more flexibility but reduces immutability guarantees.</p><h4>Two-Phase Initialization</h4><p>The object exists temporarily in a partially initialized state before dependency injection completes.</p><h4>Historical Usage</h4><p>Setter Injection was more common in older Spring applications and XML-based configurations.</p><p>Modern systems typically prefer Constructor Injection except for optional collaborators.</p><h4>4.3.3 Field Injection</h4><p>Field Injection injects dependencies directly into object fields using reflection.</p><p>Example:</p><pre>@Service<br>public class UserService {<br><br>    @Autowired<br>    private UserRepository repository;<br><br>    public String getUser() {<br>        return repository.findUser();<br>    }<br>}</pre><p>Execution process:</p><pre>Container creates object<br>        ↓<br>Container accesses private field<br>        ↓<br>Dependency injected via reflection</pre><h4>Characteristics</h4><h4>Minimal Boilerplate</h4><p>Field Injection requires less code.</p><p>No constructor or setter is necessary.</p><h4>Hidden Dependencies</h4><p>Dependencies are not explicitly visible in constructors.</p><p>This reduces transparency of object requirements.</p><h4>Reduced Testability</h4><p>Testing becomes more difficult because dependencies cannot easily be supplied manually.</p><h4>Reflection-Based Injection</h4><p>The container must bypass encapsulation through reflection mechanisms.</p><h4>Discouraged in Modern Spring Design</h4><p>Although widely used historically, Field Injection is generally discouraged in modern Spring architecture because it:</p><ul><li>hides dependencies</li><li>reduces immutability</li><li>complicates testing</li><li>weakens explicit object 
design</li></ul><h4>4.4 Dependency Resolution Process</h4><p>Dependency Resolution is the process through which the IoC Container determines which objects must be injected into a component.</p><p>This process occurs during bean creation and is fully managed by the container.</p><p>The resolution pipeline typically follows these stages:</p><h4>1. Bean Definition Discovery</h4><p>The container identifies managed components through:</p><ul><li>annotations</li><li>configuration classes</li><li>external configuration metadata</li></ul><p>Example:</p><pre>@Service<br>public class UserService</pre><h4>2. Dependency Analysis</h4><p>The container analyzes:</p><ul><li>constructors</li><li>fields</li><li>setter methods</li><li>injection metadata</li></ul><p>It determines which dependencies are required.</p><p>Example:</p><pre>public UserService(UserRepository repository)</pre><p>Required dependency:</p><ul><li>UserRepository</li></ul><h4>3. Bean Lookup</h4><p>The container searches its internal registry for a compatible bean.</p><p>Resolution may occur:</p><ul><li>by type</li><li>by qualifier</li><li>by bean name</li><li>by primary designation</li></ul><h4>4. Dependency Construction</h4><p>If the dependency does not yet exist, the container recursively creates it.</p><p>This process may trigger additional dependency chains.</p><p>Example:</p><pre>UserController<br>    ↓<br>UserService<br>    ↓<br>UserRepository</pre><h4>5. Injection</h4><p>Once resolved, the dependency is injected into the target object.</p><h4>6. Initialization Completion</h4><p>After all dependencies are injected:</p><ul><li>initialization callbacks execute</li><li>post-processors run</li><li>the bean becomes operational.</li></ul><p>The Dependency Resolution Process therefore represents the dynamic assembly phase in which the container builds the complete runtime object graph.</p><h4>4.5 Relationship with IoC Container</h4><p>Dependency Injection cannot exist independently from the IoC Container.</p><p>The container is the execution engine that makes DI possible.</p><p>DI itself is not responsible for:</p><ul><li>object creation</li><li>lifecycle management</li><li>scope handling</li><li>proxy generation</li><li>runtime interception</li></ul><p>Those responsibilities belong to the container.</p><p>The relationship can therefore be understood as follows:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/775/1*jjZJDdX4Vtl8pRmtTcnmkg.png" /></figure><p>The IoC Container performs DI as part of its broader object management responsibilities.</p><p>Execution flow:</p><pre>Configuration Metadata<br>        ↓<br>IoC Container Bootstrapping<br>        ↓<br>Bean Discovery<br>        ↓<br>Object Instantiation<br>        ↓<br>Dependency Resolution<br>        ↓<br>Dependency Injection<br>        ↓<br>Initialization<br>        ↓<br>Ready State</pre><p>Dependency Injection is therefore one phase inside the larger container-managed lifecycle.</p><p>Without the container:</p><ul><li>dependency injection cannot occur automatically</li><li>object graphs cannot be centrally managed</li><li>runtime composition becomes fragmented</li></ul><p>The IoC Container transforms Dependency Injection from a simple coding technique into a complete runtime composition model capable of supporting large-scale modular architectures.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*XfuX36teuiZjj5tuVRG9lg.png" /></figure><h3>5. 
Aspect-Oriented Programming (AOP)</h3><p>(Behavioral extension layer over the container-managed runtime model)</p><h4>5.1 Definition of AOP</h4><p>Aspect-Oriented Programming (AOP) is a programming paradigm that introduces a structured mechanism for modularizing behaviors that affect multiple parts of an application simultaneously.</p><p>In traditional object-oriented design, application logic is primarily organized around business entities and services. However, certain types of behavior do not belong exclusively to a single component. Instead, they appear repeatedly across multiple layers of the system.</p><p>Examples include:</p><ul><li>transaction management</li><li>logging</li><li>security validation</li><li>auditing</li><li>caching</li><li>monitoring</li><li>performance measurement</li><li>exception handling</li></ul><p>These concerns are commonly referred to as <strong>cross-cutting concerns</strong> because they cut across the primary object model of the application.</p><p>Without AOP, these behaviors become duplicated throughout the codebase.</p><p>Example without AOP:</p><pre>@Service<br>public class UserService {<br><br>    public void createUser() {<br><br>        log.info(&quot;Starting transaction&quot;);<br><br>        try {<br><br>            // business logic<br><br>            log.info(&quot;Commit transaction&quot;);<br><br>        } catch (Exception e) {<br><br>            log.error(&quot;Rollback transaction&quot;);<br>        }<br>    }<br>}</pre><p>In this model:</p><ul><li>business logic and infrastructure logic become mixed</li><li>duplication increases</li><li>maintenance becomes difficult</li><li>behavioral consistency is harder to enforce</li></ul><p>AOP introduces a different execution model.</p><p>Instead of embedding infrastructure behavior directly into application code, the system externalizes these behaviors into independent modules called <strong>aspects</strong>.</p><p>The container then applies these aspects dynamically during runtime execution.</p><p>Example using Spring AOP:</p><pre>@Aspect<br>@Component<br>public class LoggingAspect {<br><br>    @Before(&quot;execution(* com.example.service.*.*(..))&quot;)<br>    public void logBefore() {<br><br>        System.out.println(&quot;Method execution intercepted&quot;);<br>    }<br>}</pre><p>Here:</p><ul><li>logging behavior is separated from business logic</li><li>the aspect defines where interception occurs</li><li>the container applies the behavior automatically</li><li>business components remain focused on domain functionality</li></ul><p>AOP therefore extends the IoC model from structural management into behavioral management.</p><p>IoC externalizes:</p><ul><li>object creation</li><li>dependency wiring</li><li>lifecycle management</li></ul><p>AOP externalizes:</p><ul><li>runtime behavioral concerns</li><li>execution interception</li><li>infrastructural processing</li></ul><p>This introduces a second major architectural separation:</p><ul><li>Structure → managed by the IoC Container</li><li>Behavior → dynamically enhanced through AOP</li></ul><p>Aspect-Oriented Programming is therefore not a replacement for object-oriented programming, but an extension layer that complements it by modularizing behaviors that cannot be cleanly encapsulated through traditional class hierarchies alone.</p><h4>5.2 Separation of Cross-Cutting Concerns</h4><p>The central objective of AOP is the separation of cross-cutting concerns from core business logic.</p><p>A cross-cutting concern is a behavior that:</p><ul><li>affects 
multiple components</li><li>is not tied to a single business responsibility</li><li>must be applied consistently throughout the system</li></ul><p>In layered applications, these concerns often appear repeatedly.</p><p>Example:</p><pre>@Service<br>public class PaymentService {<br><br>    public void processPayment() {<br><br>        securityCheck();<br><br>        startTransaction();<br><br>        logRequest();<br><br>        // business logic<br><br>        commitTransaction();<br>    }<br>}</pre><p>The actual business operation becomes surrounded by infrastructural behavior.</p><p>This creates several architectural problems.</p><p>Code Duplication<br>The same infrastructure logic appears across multiple services.</p><p>Tight Coupling<br>Business components become dependent on technical infrastructure.</p><p>Reduced Clarity<br>Core domain behavior becomes obscured by repetitive operational logic.</p><p>Maintenance Complexity<br>Changing infrastructure policies requires modifications across many classes.</p><p>AOP solves this problem by extracting these behaviors into centralized aspects.</p><p>Example:</p><pre>@Aspect<br>@Component<br>public class TransactionAspect {<br><br>    // startTransaction(), commitTransaction() and rollbackTransaction()<br>    // are illustrative helpers, not Spring API<br>    @Around(&quot;@annotation(org.springframework.transaction.annotation.Transactional)&quot;)<br>    public Object manageTransaction(<br>            ProceedingJoinPoint joinPoint) throws Throwable {<br><br>        try {<br><br>            startTransaction();<br><br>            Object result = joinPoint.proceed();<br><br>            commitTransaction();<br><br>            return result;<br><br>        } catch (Exception e) {<br><br>            rollbackTransaction();<br>            throw e;<br>        }<br>    }<br>}</pre><p>Business logic becomes simplified:</p><pre>@Service<br>public class PaymentService {<br><br>    @Transactional<br>    public void processPayment() {<br><br>        // pure business logic<br>    }<br>}</pre><p>The business component no longer controls:</p><ul><li>transaction boundaries</li><li>logging</li><li>security checks</li><li>monitoring logic</li></ul><p>Those concerns are externalized into reusable behavioral modules managed by the container.</p><p>This produces a clean architectural division:</p><ul><li>Core concerns → business/domain behavior</li><li>Cross-cutting concerns → infrastructural runtime behavior</li></ul><p>The separation of cross-cutting concerns is therefore the primary architectural motivation behind AOP.</p><h4>5.3 Interception Mechanism</h4><p>AOP operates through an interception mechanism.</p><p>The fundamental idea is that method execution can be intercepted dynamically before, during or after invocation.</p><p>Instead of invoking a target object directly, the container introduces an intermediate execution layer capable of:</p><ul><li>observing method calls</li><li>injecting additional behavior</li><li>modifying execution flow</li><li>handling exceptions</li><li>controlling method continuation</li></ul><p>Conceptually:</p><pre>Caller<br>   ↓<br>Proxy / Interceptor<br>   ↓<br>Target Object</pre><p>The interception layer acts as a runtime behavioral gateway.</p><p>When a method call occurs:</p><ol><li>the proxy intercepts the invocation</li><li>the aspect logic executes</li><li>the original method may continue</li><li>additional post-processing may occur</li></ol><p>Example:</p><pre>@Aspect<br>@Component<br>public class LoggingAspect {<br><br>    @Before(&quot;execution(* com.example.service.*.*(..))&quot;)<br>    public void beforeExecution() {<br><br>        System.out.println(&quot;Before method 
execution&quot;);<br>    }<br>}</pre><p>Execution flow:</p><pre>Client invokes method<br>        ↓<br>Proxy intercepts invocation<br>        ↓<br>@Before advice executes<br>        ↓<br>Target method executes<br>        ↓<br>Result returned</pre><p>Interception enables:</p><ul><li>transparent behavioral enhancement</li><li>centralized runtime policies</li><li>dynamic execution control</li><li>reusable infrastructure logic</li></ul><p>Importantly, the target object is often unaware that interception is occurring.</p><p>From the perspective of the application component:</p><ul><li>execution appears normal</li><li>no infrastructure code is required</li><li>interception remains externalized</li></ul><p>This transparency is one of the defining characteristics of AOP-based runtime systems.</p><h4>5.4 Proxy-Based Execution Model</h4><p>In Spring AOP, interception is implemented primarily through proxies.</p><p>A proxy is a runtime-generated object that wraps the original target object and intercepts method invocations before delegating execution to the real component.</p><p>Instead of exposing the original bean directly, the container exposes the proxy.</p><p>Conceptual model:</p><pre>Client<br>   ↓<br>Proxy Object<br>   ↓<br>Target Bean</pre><p>The proxy becomes the externally visible object inside the container.</p><p>When a method is called:</p><ol><li>the proxy receives the invocation</li><li>matching aspects execute</li><li>the target method is invoked</li><li>post-processing logic executes</li><li>the result returns to the caller</li></ol><p>Example:</p><pre>@Service<br>public class UserService {<br><br>    @Transactional<br>    public void registerUser() {<br><br>        // business logic<br>    }<br>}</pre><p>Runtime behavior:</p><pre>Client calls registerUser()<br>        ↓<br>Transactional proxy intercepts<br>        ↓<br>Transaction starts<br>        ↓<br>Target method executes<br>        ↓<br>Transaction commits<br>        ↓<br>Result returned</pre><p>The target class itself contains no transaction management code.</p><p>Spring commonly uses two proxy strategies.</p><p>JDK Dynamic Proxies</p><ul><li>used when interfaces are available</li><li>proxy implements the same interfaces as the target object</li></ul><p>CGLIB Proxies</p><ul><li>used when no interface exists</li><li>proxy subclasses the concrete class dynamically</li></ul><p>This model allows the container to enhance behavior without modifying the original source code.</p><p>Proxy-based execution is therefore the operational foundation of Spring AOP.</p>
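<p>The first strategy can be illustrated outside of Spring entirely. A minimal sketch using the JDK’s built-in Proxy API shows the mechanism the container automates (UserApi and its implementation are illustrative):</p><pre>public interface UserApi {<br>    String getUser();<br>}<br><br>public class UserApiImpl implements UserApi {<br>    public String getUser() {<br>        return &quot;user&quot;;<br>    }<br>}<br><br>UserApi target = new UserApiImpl();<br><br>// every call made through the proxy passes through the handler<br>UserApi proxy = (UserApi) Proxy.newProxyInstance(<br>        UserApi.class.getClassLoader(),<br>        new Class&lt;?&gt;[] { UserApi.class },<br>        (p, method, args) -&gt; {<br>            System.out.println(&quot;Before &quot; + method.getName());<br>            Object result = method.invoke(target, args);<br>            System.out.println(&quot;After &quot; + method.getName());<br>            return result;<br>        });<br><br>proxy.getUser(); // interception is invisible to the caller</pre><p>Conceptually, the container builds an equivalent interceptor chain for each advised bean and exposes the resulting proxy instead of the raw object.</p>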
<h4>5.5 Aspects</h4><p>An aspect is a modular unit that encapsulates cross-cutting behavior.</p><p>It defines:</p><ul><li>where behavioral interception occurs</li><li>what additional behavior executes</li><li>how execution flow is modified</li></ul><p>An aspect combines:</p><ul><li>Pointcuts</li><li>Advice</li><li>Weaving rules</li></ul><p>Example:</p><pre>@Aspect<br>@Component<br>public class SecurityAspect {<br><br>    @Before(<br>      &quot;execution(* com.example.service.*.*(..))&quot;<br>    )<br>    public void validateSecurity() {<br><br>        System.out.println(&quot;Security validation&quot;);<br>    }<br>}</pre><p>The aspect itself does not belong to business logic.<br> Instead, it defines behavioral policies applied externally by the container.</p><p>Aspects therefore function as reusable runtime behavior modules.</p><h4>5.5.1 Pointcut</h4><p>A pointcut defines where interception should occur.</p><p>It identifies:</p><ul><li>methods</li><li>classes</li><li>packages</li><li>annotations</li><li>execution patterns</li></ul><p>that should be intercepted by the aspect.</p><p>Example:</p><pre>execution(* com.example.service.*.*(..))</pre><p>Meaning:</p><ul><li>any method</li><li>inside the service package</li><li>regardless of return type</li><li>regardless of parameters</li></ul><p>Pointcuts act as behavioral selectors.</p><p>Without pointcuts:</p><ul><li>the container would not know where to apply aspects</li><li>interception would become uncontrolled</li></ul><p>Example using annotations (the annotation type must be fully qualified inside the expression):</p><pre>@Pointcut(&quot;@annotation(org.springframework.transaction.annotation.Transactional)&quot;)<br>public void transactionalMethods() {}</pre><p>This selects all methods annotated with @Transactional.</p><p>Pointcuts therefore define the structural targeting rules for runtime interception.</p><h4>5.5.2 Advice</h4><p>Advice defines the behavior executed when a pointcut matches a method invocation.</p><p>It represents the actual logic applied during interception.</p><p>Spring supports multiple advice types.</p><p>Before Advice</p><p>Executes before the target method.</p><pre>@Before(&quot;execution(* service.*.*(..))&quot;)<br>public void before() {<br>    System.out.println(&quot;Before execution&quot;);<br>}</pre><p>After Advice</p><p>Executes after method completion.</p><pre>@After(&quot;execution(* service.*.*(..))&quot;)<br>public void after() {<br>    System.out.println(&quot;After execution&quot;);<br>}</pre><p>After Returning Advice</p><p>Executes only if the method completes successfully.</p><pre>@AfterReturning(<br>    pointcut = &quot;execution(* service.*.*(..))&quot;,<br>    returning = &quot;result&quot;<br>)<br>public void afterReturning(Object result) {<br>}</pre><p>After Throwing Advice</p><p>Executes when an exception occurs.</p><pre>@AfterThrowing(<br>    pointcut = &quot;execution(* service.*.*(..))&quot;,<br>    throwing = &quot;ex&quot;<br>)<br>public void handleException(Exception ex) {<br>}</pre><p>Around Advice</p><p>Provides complete control over execution flow.</p><pre>@Around(&quot;execution(* service.*.*(..))&quot;)<br>public Object around(<br>        ProceedingJoinPoint joinPoint)<br>        throws Throwable {<br><br>    System.out.println(&quot;Before&quot;);<br><br>    Object result = joinPoint.proceed();<br><br>    System.out.println(&quot;After&quot;);<br><br>    return result;<br>}</pre><p>@Around advice is the most powerful form because it can:</p><ul><li>continue execution</li><li>block execution</li><li>modify arguments</li><li>alter return values</li><li>manage transactions</li><li>measure performance</li></ul><p>Advice therefore defines the runtime behavior injected into the execution pipeline.</p><h4>5.5.3 Weaving</h4><p>Weaving is the process of combining aspects with application code.</p><p>It is the mechanism through which the system integrates:</p><ul><li>target objects</li><li>pointcuts</li><li>advice</li><li>interception logic</li></ul><p>into a unified execution model.</p><p>Conceptually:</p><pre>Target Object<br>      +<br>Aspect Definition<br>      ↓<br>Woven Runtime Object</pre><p>Spring performs weaving primarily at runtime through proxy creation.</p><p>Execution stages:</p><pre>Bean creation<br>        ↓<br>Aspect detection<br>        ↓<br>Proxy generation<br>        ↓<br>Advice binding<br>        ↓<br>Runtime interception enabled</pre><p>Other AOP frameworks may support:</p><ul><li>compile-time weaving</li><li>bytecode weaving</li><li>load-time weaving</li></ul><p>However, Spring AOP focuses 
mainly on lightweight proxy-based runtime weaving.</p><p>Weaving therefore represents the integration phase where behavioral enhancement becomes operational.</p><h4>5.6 Execution Flow with AOP</h4><p>The AOP execution model introduces an additional behavioral layer into the normal application execution pipeline.</p><p>Standard execution without AOP:</p><pre>Client<br>   ↓<br>Target Object<br>   ↓<br>Method Execution</pre><p>Execution with AOP:</p><pre>Client<br>   ↓<br>Proxy<br>   ↓<br>Aspect Interception<br>   ↓<br>Advice Execution<br>   ↓<br>Target Method<br>   ↓<br>Post-Processing<br>   ↓<br>Return Result</pre><p>Complete runtime example:</p><pre>HTTP Request<br>        ↓<br>Controller<br>        ↓<br>AOP Proxy<br>        ↓<br>Transaction Aspect<br>        ↓<br>Security Aspect<br>        ↓<br>Logging Aspect<br>        ↓<br>Business Method<br>        ↓<br>Commit Transaction<br>        ↓<br>Return Response</pre><p>This execution model allows multiple infrastructural behaviors to be layered dynamically around the same business operation without modifying application code.</p><p>AOP therefore transforms runtime execution into a composable behavioral pipeline.</p><h4>5.7 Relationship with IoC Container</h4><p>AOP depends directly on the IoC Container.</p><p>Without the container:</p><ul><li>aspects cannot be discovered automatically</li><li>proxies cannot be generated centrally</li><li>interception cannot be coordinated</li><li>runtime weaving cannot occur transparently</li></ul><p>The IoC Container provides:</p><ul><li>bean lifecycle management</li><li>proxy creation</li><li>aspect registration</li><li>dependency injection</li><li>runtime object substitution</li></ul><p>AOP operates as an extension layer built on top of container-managed object management.</p><p>Execution relationship:</p><pre>Configuration Metadata<br>        ↓<br>IoC Container Bootstrapping<br>        ↓<br>Bean Discovery<br>        ↓<br>Aspect Detection<br>        ↓<br>Proxy Generation<br>        ↓<br>Dependency Injection<br>        ↓<br>Runtime Interception Enabled<br>        ↓<br>Application Ready</pre><p>The container determines:</p><ul><li>which beans require proxies</li><li>which aspects apply</li><li>how interception chains are assembled</li><li>how runtime behavior is enhanced</li></ul><p>Importantly, proxied objects remain fully managed beans inside the container.</p><p>This means:</p><ul><li>AOP integrates naturally with Dependency Injection</li><li>transactional behavior can be injected transparently</li><li>aspects themselves can receive dependencies</li><li>runtime behavior becomes centrally orchestrated</li></ul><p>Aspect-Oriented Programming is therefore not an isolated mechanism separate from IoC.<br> It is a behavioral extension of the container-managed runtime architecture.</p><p>IoC manages:</p><ul><li>structure</li><li>object composition</li><li>lifecycle</li></ul><p>AOP manages:</p><ul><li>runtime behavior</li><li>execution interception</li><li>infrastructural processing</li></ul><p>Together, they form the structural and behavioral foundation of modern Spring-based enterprise architectures.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vcdaFcSUgtERDaPcUZ0ibA.png" /></figure><h3>6. Transaction Management</h3><p>Transaction Management is the infrastructure mechanism responsible for guaranteeing consistency during the execution of business operations that interact with persistent resources. 
In enterprise applications, a single operation often involves multiple database actions that must behave as one atomic unit of work. If one operation fails while others succeed, the application may enter an inconsistent state. Transaction management solves this problem by ensuring that all operations are either committed successfully or completely reverted through rollback. In Spring-based architectures, this responsibility is externalized from business logic and managed by the container through declarative configuration and runtime interception.</p><h4>6.1 Definition of Transaction Management</h4><p>A transaction defines a protected execution boundary around a business operation. All actions executed inside this boundary belong to the same unit of work and remain coordinated until execution finishes. If the operation completes successfully, the transaction is committed and all changes become permanent. If an exception occurs, the transaction is rolled back and the previous consistent state is restored. This model guarantees atomicity, consistency and reliability during runtime execution without forcing the application to manage low-level persistence infrastructure manually.</p><h4>6.2 Transactions as Aspects (AOP-Based Model)</h4><p>In Spring, transaction management is implemented primarily through Aspect-Oriented Programming (AOP). Instead of embedding transaction control directly inside business methods, transactional behavior is applied externally through proxies and interceptors managed by the container. The @Transactional annotation acts as metadata that marks a method for transactional interception. When the method is invoked, the proxy intercepts execution, opens a transaction, delegates execution to the target method and finally decides whether to commit or roll back depending on the execution result. This approach keeps business logic isolated from infrastructural concerns and allows transaction policies to be centralized and reusable across the entire application.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/951/1*nCHIUXFLG0mH7ANykhLJiA.png" /></figure><h4>6.3 Transaction Interception Flow</h4><p>When a transactional method is called, execution first passes through a container-managed proxy rather than directly reaching the target object. The proxy detects the transactional metadata and activates the transaction interceptor. The interceptor requests a transaction from the transaction manager, initializes the transactional context and then invokes the business method inside that boundary. After execution completes, the interceptor evaluates the result. Successful execution leads to commit, while exceptions trigger rollback. Finally, all transactional resources are released and control returns to the caller. This entire process occurs transparently at runtime without explicit transaction management inside application code.</p><h4>6.4 Transaction Lifecycle</h4><p>The transaction lifecycle represents the sequence of stages through which a transaction passes during execution. Once a transactional method is intercepted, the transaction manager creates the transactional context and binds the necessary persistence resources to the current execution thread. The business logic then executes within the active transaction boundary while all modifications remain temporary until completion. If execution finishes correctly, the transaction enters the commit phase and all changes are permanently persisted. 
If an error occurs, the transaction transitions into rollback, reverting all operations executed during the transaction in order to preserve consistency and prevent partial updates.</p><h4>6.4.1 Transaction Start</h4><p>During the start phase, the transaction manager initializes the transactional context, obtains the required persistence resources and establishes the execution boundary that will contain the business operation.</p><h4>6.4.2 Execution Phase</h4><p>During the execution phase, the target business method runs inside the active transactional boundary previously created by the transaction manager. All persistence operations executed during this stage become part of the same logical unit of work and remain coordinated until the transaction reaches its final state. Changes performed against the database are not permanently persisted immediately, since they still depend on the final outcome of the transaction. If execution completes successfully, the transaction may proceed to the commit phase; otherwise, any runtime failure or exception may trigger a rollback operation that cancels all modifications executed during the transaction scope. This phase represents the core operational stage of the transactional lifecycle because it is where the actual business logic executes under transactional protection.</p><h4>6.4.3 Commit Phase</h4><p>If the business operation completes successfully, the transaction enters the commit phase. During this stage, the transaction manager permanently persists all changes executed within the transactional boundary and synchronizes the final state with the underlying database. Once the commit operation is completed, the transaction is considered successful and the associated resources are released. This phase guarantees that all operations executed during the transaction become permanently visible and consistent from the perspective of the system.</p><h4>6.4.4 Rollback Phase</h4><p>If an exception occurs during execution, the transaction enters the rollback phase. In this stage, the transaction manager cancels all operations performed inside the transactional context and restores the previous consistent state of the system. Rollback prevents partially completed operations from being permanently persisted and guarantees atomic execution behavior. This mechanism is essential for maintaining reliability and consistency during runtime failures, especially when multiple persistence operations are executed as part of the same business process.</p><h4>6.5 Declarative Transaction Model</h4><p>In Spring-based architectures, transaction management is commonly implemented through a declarative model based on metadata annotations such as @Transactional. Instead of manually controlling transaction boundaries inside application code, developers declare transactional behavior at the method or class level while the container manages the underlying infrastructure automatically. This approach simplifies application design by separating business logic from persistence and transaction coordination concerns. The declarative model also centralizes transaction policies and integrates directly with the AOP interception system responsible for applying transactional behavior during runtime execution.</p>
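<p>A minimal sketch of this declarative style, using hypothetical TransferService, Account and AccountRepository types, shows how a single annotation defines the transactional boundary:</p><pre>@Service<br>public class TransferService {<br><br>    private final AccountRepository accounts;<br><br>    public TransferService(AccountRepository accounts) {<br>        this.accounts = accounts;<br>    }<br><br>    @Transactional<br>    public void transfer(Long fromId, Long toId, BigDecimal amount) {<br><br>        Account from = accounts.findById(fromId).orElseThrow();<br>        Account to = accounts.findById(toId).orElseThrow();<br><br>        // both modifications belong to one unit of work:<br>        // they commit together or roll back together<br>        from.withdraw(amount);<br>        to.deposit(amount);<br>    }<br>}</pre><p>With default settings, a runtime exception thrown by either step causes the interceptor described above to roll back the entire operation, so no partial transfer is ever persisted.</p>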
<h4>6.6 Relationship with IoC and Dependency Injection</h4><p>Transaction Management is deeply integrated with the IoC Container and the Dependency Injection model. The container creates and manages transactional proxies, injects the required infrastructural dependencies and coordinates the runtime interception process used to apply transaction behavior dynamically. Dependency Injection allows business components to remain independent from transaction APIs and persistence infrastructure, while IoC provides the lifecycle and proxy management required for transactional execution. Together, IoC, Dependency Injection and AOP form the infrastructural foundation that enables transparent and container-managed transaction coordination in modern enterprise applications.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Y4iXVzWDfi8jnfHRUhWJug.png" /></figure><h3>7. Persistence Integration</h3><p>Persistence Integration is the infrastructural layer responsible for connecting the application runtime with persistent storage systems such as relational databases. In enterprise architectures, business operations constantly require the creation, retrieval, modification and deletion of persistent data. Instead of allowing business components to interact directly with low-level database APIs, modern frameworks introduce an abstraction layer that centralizes persistence access, resource management and transaction coordination. In Spring-based systems, persistence integration is deeply connected with the IoC Container, Dependency Injection, Transaction Management and the ORM infrastructure, allowing persistence operations to be executed transparently inside the container-managed runtime model.</p><h4>7.1 Definition of Persistence Integration</h4><p>Persistence Integration is the process through which the runtime environment coordinates application components, persistence frameworks and database resources in order to provide consistent and abstracted access to persistent data. Its objective is to isolate business logic from low-level persistence concerns such as connection management, SQL execution, transaction synchronization and object-relational mapping.</p><p>In traditional applications, persistence logic was frequently embedded directly inside business components, forcing developers to manually manage JDBC connections, SQL statements and resource cleanup. This approach produced tightly coupled architectures where business logic became dependent on infrastructural APIs and persistence implementation details.</p><p>Modern container-managed architectures externalize these responsibilities into dedicated persistence layers integrated with the runtime container. Instead of interacting directly with database infrastructure, business components operate through repositories, entity managers and persistence contexts managed by the framework itself. This separation improves maintainability, scalability and consistency while reducing boilerplate infrastructure code.</p><p>Persistence Integration therefore acts as the bridge between the object-oriented domain model and the underlying persistent storage system, enabling enterprise applications to manipulate persistent state while remaining structurally independent from database implementation details.</p><h4>7.2 Persistence Components</h4><p>Persistence Integration is composed of multiple infrastructural components that collaborate during runtime execution. 
Each component has a specialized responsibility inside the persistence architecture and together they form the persistence execution pipeline used by the container.</p><p>The Data Source provides physical connectivity with the database and manages connection allocation during execution. The Entity Manager or ORM Session controls the persistence context and coordinates entity state transitions during runtime operations. Repositories and Data Access Layers provide abstraction over persistence access by exposing simplified interfaces used by business services to interact with persistent data without directly manipulating low-level persistence APIs.</p><p>These components are fully integrated into the IoC Container and are typically injected automatically into business components through Dependency Injection.</p><h4>7.2.1 Data Source</h4><p>The Data Source represents the infrastructural component responsible for managing physical database connectivity. Instead of manually opening and closing JDBC connections inside application code, enterprise frameworks centralize connection management through a container-managed Data Source.</p><p>The Data Source maintains connection pools, allocates connections during execution and releases them once transactional operations complete. This model improves performance, scalability and resource efficiency because connections can be reused across multiple requests instead of being constantly recreated.</p><p>In Spring-based architectures, the Data Source is generally configured as a managed bean inside the container and becomes the foundational resource used by the persistence infrastructure, transaction manager and ORM framework during runtime execution.</p><p>The business layer remains completely isolated from direct connection handling, allowing persistence operations to occur transparently through the container-managed execution pipeline.</p>
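<p>A minimal sketch of such a bean definition, assuming HikariCP as the pooling library and purely illustrative connection settings:</p><pre>@Configuration<br>public class DataSourceConfig {<br><br>    @Bean<br>    public DataSource dataSource() {<br><br>        HikariConfig config = new HikariConfig();<br>        config.setJdbcUrl(&quot;jdbc:postgresql://localhost:5432/app&quot;);<br>        config.setUsername(&quot;app&quot;);<br>        config.setPassword(&quot;secret&quot;);<br>        config.setMaximumPoolSize(10);<br><br>        // the pool becomes a managed bean, reused by the<br>        // transaction manager and the ORM infrastructure<br>        return new HikariDataSource(config);<br>    }<br>}</pre>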
<h4>7.2.2 Entity Manager / Session</h4><p>The Entity Manager, or Session in Hibernate terminology, is the central runtime component responsible for managing persistent entities inside the persistence context. It acts as the intermediary between the application domain model and the database layer.</p><p>When entities are loaded, modified or removed during execution, the Entity Manager tracks their state transitions and synchronizes those changes with the database when the transaction reaches the commit phase. This mechanism allows developers to manipulate ordinary Java objects while the persistence infrastructure automatically coordinates SQL generation and synchronization internally.</p><p>The Entity Manager also controls the lifecycle of managed entities, including entity retrieval, dirty checking, caching and persistence synchronization. Instead of writing explicit SQL for every operation, developers interact with domain entities while the ORM framework translates object state transitions into database operations during runtime execution.</p><p>This model significantly reduces infrastructural complexity and enables the persistence layer to remain integrated with container-managed transactions and transactional synchronization.</p><h4>Example: Entity Management and Automatic Persistence Synchronization</h4><pre>@Entity<br>public class User {<br><br>    @Id<br>    private Long id;<br><br>    private String name;<br><br>    public void setName(String name) {<br>        this.name = name;<br>    }<br>}<br><br>@Transactional<br>public void updateUser() {<br><br>    User user = entityManager.find(User.class, 1L);<br><br>    user.setName(&quot;John Updated&quot;);<br>}<br></pre><p>After the entity is loaded through the Entity Manager, it becomes managed by the Persistence Context. Any modification performed on the entity is automatically tracked by the ORM framework through dirty checking. When the transaction reaches the commit phase, Hibernate synchronizes the updated entity state with the database by generating the corresponding SQL UPDATE statement automatically, without requiring explicit SQL manipulation inside application code.</p><h4>7.2.3 Repositories / Data Access Layer</h4><p>Repositories and Data Access Layers provide the abstraction layer used by business services to access persistent data. Their purpose is to isolate persistence operations from business logic by encapsulating data retrieval and storage responsibilities into dedicated components.</p><p>Instead of embedding persistence queries directly inside service classes, enterprise applications centralize persistence access through repositories that expose domain-oriented operations such as entity retrieval, persistence and deletion. This separation improves architectural clarity and prevents business components from becoming dependent on persistence implementation details.</p><p>In Spring Data architectures, repositories are commonly declared as interfaces managed automatically by the container. The framework dynamically generates the underlying persistence implementation during runtime, allowing developers to work with high-level persistence abstractions instead of manually implementing repetitive database access code.</p><p>This repository-based model reinforces the separation between business behavior and infrastructural persistence concerns while maintaining consistency across the application architecture.</p><h4>Example: Repository Abstraction with Spring Data JPA</h4><pre>@Repository<br>public interface UserRepository<br>        extends JpaRepository&lt;User, Long&gt; {<br><br>}<br><br>@Service<br>public class UserService {<br><br>    private final UserRepository userRepository;<br><br>    public UserService(UserRepository userRepository) {<br><br>        this.userRepository = userRepository;<br>    }<br><br>    public User loadUser(Long id) {<br><br>        return userRepository.findById(id)<br>                .orElse(null);<br>    }<br>}</pre><p>The service layer interacts only with the repository abstraction and remains completely independent from JDBC APIs, SQL statements and persistence implementation details. 
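</p><p>The same mechanism extends to query derivation. Assuming the User entity exposes a name attribute, a finder method can be declared with no implementation at all:</p><pre>public interface UserRepository<br>        extends JpaRepository&lt;User, Long&gt; {<br><br>    // implementation and query are derived from<br>    // the method name at runtime<br>    List&lt;User&gt; findByName(String name);<br>}</pre><p>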
Spring Data JPA dynamically generates the repository implementation during runtime, allowing the application to access persistent data through high-level domain-oriented operations instead of manually implementing database access logic.</p><h4>7.3 Persistence Context</h4><p>The Persistence Context represents the runtime environment in which persistent entities are managed and tracked during execution. It acts as an internal workspace maintained by the Entity Manager where entity instances remain synchronized with the current transactional state.</p><p>When an entity is loaded into the persistence context, it becomes managed by the ORM infrastructure. Any modifications performed on that entity are automatically detected and coordinated internally by the persistence framework. Instead of immediately synchronizing changes with the database, modifications remain temporarily stored inside the persistence context until the transaction reaches completion.</p><p>This mechanism enables automatic dirty checking, identity management and transactional synchronization. Multiple operations executed during the same transaction therefore interact with a consistent in-memory representation of persistent entities before final synchronization occurs.</p><p>The Persistence Context is tightly integrated with Transaction Management because its lifecycle is commonly bound to the active transaction scope managed by the container.</p><h4>7.4 Integration with Container-Managed Transactions</h4><p>Persistence Integration is deeply connected with the container-managed transaction system. During runtime execution, persistence operations execute inside transactional boundaries coordinated by the transaction manager and the AOP interception infrastructure.</p><p>When a transactional method begins execution, the container creates the transactional context and associates the persistence resources required during the operation. The Entity Manager, persistence context and database connection become synchronized with the active transaction, ensuring that all persistence operations belong to the same consistent unit of work.</p><p>As business logic executes, the persistence infrastructure tracks entity modifications while the transaction remains active. If execution completes successfully, the transaction manager coordinates the commit operation and the persistence context synchronizes all pending changes with the database. If an exception occurs, rollback cancels the transaction and discards all pending modifications in order to preserve consistency.</p><p>This integration between Persistence Integration and Transaction Management allows enterprise applications to execute complex persistence operations while remaining isolated from low-level synchronization and transaction coordination concerns.</p><h4>7.5 Execution Flow with Persistence</h4><p>The persistence execution flow begins when a business service invokes a repository or persistence operation during runtime execution. Instead of interacting directly with the database, the request passes through the container-managed persistence infrastructure.</p><p>The repository delegates the operation to the Entity Manager, which interacts with the persistence context responsible for managing entity state during the active transaction. 
The persistence provider then translates object-oriented operations into SQL statements executed against the database through the Data Source connection infrastructure.</p><p>Throughout execution, the persistence context tracks entity modifications and maintains synchronization with the transactional boundary. Once execution completes, the transaction manager determines whether the operation should be committed or rolled back depending on the execution outcome.</p><p>This execution pipeline allows business logic to remain focused on domain behavior while the container transparently coordinates persistence synchronization, SQL generation, resource management and transactional consistency internally.</p><h4>7.6 Abstraction of Data Access</h4><p>One of the primary objectives of Persistence Integration is the abstraction of data access infrastructure from business logic. Enterprise applications should not depend directly on JDBC APIs, SQL statements or database-specific implementations because doing so introduces tight coupling between the application domain and the persistence technology.</p><p>Persistence abstraction allows developers to work with domain entities, repositories and business-oriented operations instead of low-level persistence mechanisms. The framework internally translates those operations into database interactions while maintaining transactional consistency and persistence synchronization transparently.</p><p>This abstraction model improves portability, maintainability and architectural separation because business components remain independent from persistence implementation details. Changes in database providers, ORM frameworks or persistence configurations can therefore occur with minimal impact on the business layer.</p><p>Persistence Integration consequently represents the persistence coordination layer of the container-managed runtime architecture. Together with IoC, Dependency Injection, AOP and Transaction Management, it forms part of the infrastructural foundation that enables scalable, maintainable and consistent enterprise application execution.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3yFGVvS7a4LV3ZAST0f4YA.png" /></figure><h3>8. Runtime Execution Model</h3><p><em>(Integration of all previous layers into a unified execution pipeline)</em></p><h4>8.1 Definition of Runtime Execution Model</h4><p>The Runtime Execution Model represents the coordinated execution architecture through which all infrastructural layers of the framework collaborate during application runtime. Instead of operating as isolated mechanisms, the IoC Container, Dependency Injection, Aspect-Oriented Programming, Transaction Management and Persistence Integration form a unified execution pipeline responsible for managing object creation, dependency coordination, runtime interception, transactional consistency and persistence synchronization.</p><p>In container-managed enterprise architectures, application execution is not limited to direct method invocation between objects. Every operation executes inside an infrastructural environment controlled by the runtime container, where multiple framework services participate transparently during execution. 
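</p><p>A minimal sketch reusing names from the earlier examples: the method below looks like plain Java, yet invoking it through the container triggers proxy interception, transaction creation and persistence synchronization:</p><pre>@Service<br>public class UserService {<br><br>    private final UserRepository userRepository;<br><br>    public UserService(UserRepository userRepository) {<br>        this.userRepository = userRepository;<br>    }<br><br>    @Transactional<br>    public void renameUser(Long id, String newName) {<br><br>        User user = userRepository.findById(id).orElseThrow();<br><br>        // No explicit SQL or transaction code: dirty checking<br>        // flushes this change when the transaction commits<br>        user.setName(newName);<br>    }<br>}</pre><p>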
Business components therefore interact primarily with the domain model while the container orchestrates infrastructural behavior automatically behind the execution flow.</p><p>The Runtime Execution Model externalizes infrastructural responsibilities from application code and centralizes them inside the container-managed runtime environment. This approach enables enterprise systems to maintain consistency, scalability, modularity and deterministic execution behavior while reducing coupling between business logic and infrastructural concerns.</p><p>The runtime model consequently acts as the operational foundation that integrates all previous architectural layers into a single coordinated execution system.</p><h4>8.2 Structural Phase vs Runtime Phase</h4><p>Container-managed architectures operate through two complementary execution dimensions: the structural phase and the runtime phase. Although both belong to the same infrastructural system, each phase has different responsibilities inside the application lifecycle.</p><p>The structural phase occurs during container initialization and application bootstrapping. During this stage, the IoC Container scans configuration metadata, discovers components, creates bean definitions, resolves dependencies and builds the internal object graph required by the application. Proxy generation, dependency wiring and infrastructural configuration also occur during this initialization process before business execution begins.</p><p>At this stage, the framework prepares the execution environment but business operations have not yet started. The container therefore constructs the structural architecture required for runtime execution.</p><p>The runtime phase begins once the application starts processing business operations and method invocations. During execution, proxies intercept calls, aspects execute infrastructural behavior, transactions are coordinated, persistence contexts become active and runtime synchronization mechanisms manage consistency across the application.</p><p>While the structural phase defines how the application is assembled, the runtime phase defines how the application behaves during execution.</p><p>This distinction is fundamental because modern enterprise frameworks separate architectural construction from operational execution. The container first builds the execution environment and later orchestrates runtime behavior dynamically through the infrastructural layers previously configured during initialization.</p><h4>8.3 End-to-End Execution Flow</h4><p>The end-to-end execution flow represents the complete runtime path followed by a business request as it traverses the container-managed infrastructure. Instead of executing directly between application objects, requests move through multiple infrastructural layers coordinated transparently by the runtime environment.</p><p>Execution commonly begins when an external client invokes an application endpoint. The request first reaches a container-managed component that may already be wrapped by runtime-generated proxies responsible for infrastructural interception.</p><p>If Aspect-Oriented Programming features such as transaction management are active, the proxy intercepts execution before the target business method executes. 
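</p><p>Conceptually, the interception works like the following simplified sketch (an illustration of the idea only, not the framework's actual generated code):</p><pre>// Simplified view of what a transactional proxy does around a method;<br>// transactionManager and definition are assumed surrounding fields<br>public Object invoke(MethodInvocation invocation) throws Throwable {<br><br>    TransactionStatus tx =<br>            transactionManager.getTransaction(definition);<br><br>    try {<br>        Object result = invocation.proceed(); // target business method<br>        transactionManager.commit(tx);        // success path<br>        return result;<br>    } catch (RuntimeException ex) {<br>        transactionManager.rollback(tx);      // failure path: discard changes<br>        throw ex;<br>    }<br>}</pre><p>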
Transactional boundaries are initialized, persistence resources are associated with the execution context and the runtime environment becomes synchronized with the active transaction scope.</p><p>The business method then executes while interacting with repositories, persistence abstractions and domain entities. Persistence operations are delegated to the Entity Manager and coordinated internally through the Persistence Context, where entity state transitions remain synchronized during the active transaction.</p><p>As execution completes, the transaction manager evaluates the execution outcome. Successful execution triggers commit and persistence synchronization with the database, while exceptions trigger rollback and cancellation of pending modifications.</p><p>Finally, infrastructural resources are released, the execution context is finalized and control returns to the client.</p><p>This entire execution pipeline occurs transparently through coordinated interaction between the container, proxies, transactional infrastructure and persistence mechanisms without requiring explicit infrastructural coordination inside business code.</p><h4>8.4 Unified Execution Pipeline</h4><p>The Runtime Execution Model unifies all infrastructural layers into a coordinated execution pipeline where each architectural component contributes a specialized responsibility during runtime execution. Rather than functioning independently, the container-managed infrastructure behaves as an integrated operational system.</p><p>The IoC Container provides object lifecycle management and establishes the structural foundation of the application. Dependency Injection connects application components and supplies infrastructural dependencies required during execution. Aspect-Oriented Programming introduces runtime interception capabilities that allow infrastructural behavior to execute transparently around business operations.</p><p>Transaction Management coordinates execution consistency by defining transactional boundaries, synchronizing persistence operations and controlling commit or rollback behavior during execution. Persistence Integration manages entity synchronization, persistence contexts, repository abstraction and communication with the underlying database infrastructure.</p><p>Together, these layers form a continuous execution pipeline where business operations move through multiple infrastructural stages before execution completes. The runtime environment therefore becomes responsible not only for executing business logic, but also for coordinating consistency, persistence synchronization, infrastructural interception and resource management automatically.</p><p>The unified execution pipeline consequently transforms the framework into a fully coordinated runtime orchestration system rather than a simple dependency management mechanism.</p><h4>8.4.1 IoC Container</h4><p>The IoC Container acts as the structural core of the Runtime Execution Model. It is responsible for discovering components, creating managed objects, controlling bean lifecycles and maintaining the internal dependency graph required by the application.</p><p>During application initialization, the container analyzes configuration metadata, registers bean definitions and prepares the infrastructural environment used during runtime execution. 
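</p><p>A minimal sketch of this initialization (AppConfig is a hypothetical configuration class): bootstrapping the container performs the structural work, after which ordinary bean lookups and method calls run inside the prepared environment:</p><pre>// Structural phase: scan metadata, build the object graph,<br>// wire dependencies and generate any required proxies<br>ApplicationContext context =<br>        new AnnotationConfigApplicationContext(AppConfig.class);<br><br>// Runtime phase: business operations execute through<br>// container-managed (possibly proxied) beans<br>UserService userService = context.getBean(UserService.class);<br>User user = userService.loadUser(1L);</pre><p>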
The container also manages proxy creation and integrates infrastructural services such as transaction management and persistence coordination into the execution architecture.</p><p>Without the IoC Container, the runtime infrastructure would not have a centralized mechanism capable of coordinating object management, lifecycle control and infrastructural orchestration consistently across the application.</p><h4>8.4.2 Dependency Injection</h4><p>Dependency Injection provides the connectivity mechanism that links all managed components inside the runtime architecture. Instead of allowing components to manually create infrastructural dependencies, the container injects required collaborators automatically during initialization.</p><p>This model enables business services to remain structurally independent from infrastructural implementations such as repositories, transaction managers or persistence providers. Components therefore depend on abstractions rather than directly controlling object creation or infrastructural coordination.</p><p>Dependency Injection also enables the runtime infrastructure to dynamically replace implementations, inject proxies and coordinate execution behavior transparently without modifying business logic.</p><h4>8.4.3 AOP</h4><p>Aspect-Oriented Programming introduces runtime behavioral interception into the execution pipeline. Through proxies and interceptors, the framework can execute infrastructural logic before, after or around business method execution without modifying the business components themselves.</p><p>AOP externalizes cross-cutting concerns such as transaction management, logging, security validation and monitoring into dedicated infrastructural aspects managed by the container. During runtime execution, proxies intercept method invocations and dynamically apply the required infrastructural behavior around the target operation.</p><p>This mechanism allows runtime behavior to remain centralized, reusable and transparently integrated into the execution flow while preserving clean business-oriented application design.</p><h4>8.4.4 Transaction Management</h4><p>Transaction Management coordinates consistency and reliability throughout the runtime execution pipeline. During transactional execution, the transaction manager establishes execution boundaries, synchronizes persistence resources and controls commit or rollback behavior depending on the execution outcome.</p><p>The transactional infrastructure remains deeply integrated with AOP interception because transactional behavior is commonly applied through runtime proxies surrounding business methods. Once execution begins, the transaction manager coordinates persistence synchronization, resource allocation and transactional consistency across all participating persistence operations.</p><p>This model guarantees atomic execution behavior while allowing business components to remain independent from low-level transaction APIs and infrastructural synchronization concerns.</p><h4>8.4.5 Persistence Integration</h4><p>Persistence Integration represents the runtime persistence coordination layer responsible for synchronizing domain entities, persistence contexts and database resources during execution.</p><p>Repositories provide abstraction over persistence access, while the Entity Manager coordinates entity lifecycle management and persistence synchronization. 
The Persistence Context maintains a consistent in-memory representation of managed entities throughout the active transaction scope.</p><p>During execution, the ORM infrastructure translates object-oriented state transitions into database operations while the transaction manager coordinates synchronization and consistency with the underlying persistent storage system.</p><p>Persistence Integration therefore allows enterprise applications to manipulate domain entities while the runtime infrastructure transparently coordinates SQL generation, persistence synchronization and transactional consistency internally.</p><h4>8.5 Transparency of Execution</h4><p>One of the defining characteristics of the Runtime Execution Model is execution transparency. Although enterprise applications may involve complex infrastructural coordination internally, business components interact primarily with simple domain-oriented abstractions while the runtime infrastructure manages operational complexity automatically.</p><p>Developers typically invoke ordinary methods, manipulate entities and interact with repositories without explicitly controlling transactions, persistence synchronization, proxy execution or infrastructural resource management. The container transparently orchestrates these responsibilities through the runtime execution pipeline.</p><p>This transparency is possible because infrastructural concerns are externalized into container-managed services coordinated dynamically during execution. Proxies intercept method invocations, aspects apply runtime behavior, transaction managers synchronize execution boundaries and persistence frameworks coordinate entity state transitions without exposing infrastructural complexity directly to the business layer.</p><p>The result is a programming model where application code remains focused on business behavior while the runtime environment manages infrastructural orchestration transparently behind the execution flow.</p><h4>8.6 Determinism and Consistency</h4><p>The Runtime Execution Model provides deterministic and consistent execution behavior across enterprise applications by centralizing infrastructural coordination inside the container-managed runtime environment.</p><p>Because object lifecycles, dependency wiring, runtime interception, transaction boundaries and persistence synchronization are all controlled by the framework, execution behavior becomes predictable and standardized throughout the application architecture. 
Operations execute under coordinated infrastructural rules rather than relying on manual resource handling or inconsistent application-level coordination.</p><p>Transactional boundaries guarantee atomic execution, persistence contexts maintain synchronized entity state and container-managed execution pipelines ensure that infrastructural behavior executes consistently across all business operations.</p><p>This deterministic model significantly improves reliability, scalability and maintainability because enterprise applications operate within a controlled runtime environment capable of coordinating complex infrastructural interactions systematically.</p><p>The Runtime Execution Model therefore represents the final integration layer of the container-managed architecture, unifying IoC, Dependency Injection, AOP, Transaction Management and Persistence Integration into a single coordinated execution system responsible for reliable enterprise application behavior.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=07fe30de527d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[JVM Garbage Collection (GC)]]></title>
            <link>https://medium.com/@barbieri.santiago/jvm-garbage-collection-gc-d6544d448330?source=rss-8a7f528dad2------2</link>
            <guid isPermaLink="false">https://medium.com/p/d6544d448330</guid>
            <category><![CDATA[garbage-collector]]></category>
            <category><![CDATA[java]]></category>
            <dc:creator><![CDATA[Santiago]]></dc:creator>
            <pubDate>Sat, 02 May 2026 21:41:10 GMT</pubDate>
            <atom:updated>2026-05-02T21:41:10.660Z</atom:updated>
            <content:encoded><![CDATA[<p>Java Garbage Collection (GC) is a core mechanism of the JVM responsible for managing heap memory. It identifies objects that are no longer reachable from the application and reclaims the memory they occupy so it can be reused for new allocations.</p><p>Objects are created in the heap and over time many of them become unreachable. Instead of requiring manual memory management, the JVM automatically detects these unused objects and frees their memory.</p><p>Understanding how GC works helps you write applications that use memory more efficiently and behave more predictably under load, especially in systems where performance and responsiveness matter.</p><h3>JVM Heap</h3><p>The heap is divided into generations:</p><h4>Young Generation</h4><p>Where new objects are created:</p><ul><li>Eden Space (most objects die here)</li><li>Survivor Spaces (S0 / S1)</li></ul><h4>Old Generation (Tenured)</h4><p>Long-lived objects move here.</p><h4>Metaspace</h4><p>Stores class metadata (Java 8+ uses native memory, not heap).<br>Before Java 8, this area was called <strong>PermGen (Permanent Generation)</strong>.</p><blockquote>Every GC algorithm balances:<br><strong>Throughput: </strong>How much work the app does<br><strong>Latency: </strong>How long GC pauses are<br>You cannot maximize both at the same time.</blockquote><h3>GC Algorithms</h3><h4>Serial GC</h4><p>Serial GC uses one single thread to perform all garbage collection tasks.<br>When garbage collection starts, the JVM pauses all application threads (<strong>Stop-the-World</strong>) and executes the cleanup process sequentially using only one thread.<br>It is called <strong>Serial GC</strong> because every garbage collection phase runs one after another in a single GC thread.<br>Because there is no coordination between multiple GC threads, Serial GC has low overhead and works well on small systems. However, pause times become longer as heap size grows.</p><h4>How Serial GC Works</h4><h4><strong>Mark</strong></h4><p>The JVM starts from <strong>GC Roots</strong> (local variables in stacks, static fields, active threads, JNI references) and follows object references.<br>Every object that can still be reached is marked as alive. Objects that are not reached are considered garbage.</p><h4><strong>Sweep</strong></h4><p>After marking is complete, unreachable objects become reclaimable and their memory can be reused.</p><h4><strong>Copy (Young Generation)</strong></h4><p>The <strong>Young Generation</strong> is the heap area where new objects are created. 
Most of these objects are temporary and die quickly.<br>This area is commonly divided into:</p><ul><li><strong>Eden</strong>: where new objects are allocated</li><li><strong>Survivor From</strong>: survivor space used in the current cycle</li><li><strong>Survivor To</strong>: empty survivor space used as destination</li></ul><h4>When a <strong>Minor GC</strong> occurs:</h4><ol><li>The JVM checks objects in <strong>Eden</strong> and <strong>Survivor From</strong>.</li><li>Objects that are still alive are copied into <strong>Survivor To</strong>.</li><li>Dead objects are ignored and left behind.</li><li>After copying finishes, Eden and Survivor From are cleared.</li><li>Survivor spaces switch roles for the next collection.</li></ol><p>Instead of deleting dead objects one by one, the JVM copies only the live objects and reuses the previous memory areas.<br>This is efficient because usually only a small number of young objects survive.</p><h4><strong>Compact</strong></h4><p>During <strong>Full GC</strong>, surviving objects in the Old Generation may be moved together into continuous memory blocks.<br>This removes empty gaps between objects (<strong>fragmentation</strong>) and makes future memory allocation easier and faster.</p><h4><strong>Memory Strategy</strong></h4><ul><li><strong>Young Generation:</strong> uses copying collection</li><li><strong>Old Generation:</strong> uses mark-sweep-compact</li><li><strong>All GC work:</strong> handled by one single thread</li></ul><h4>Minor GC vs Full GC</h4><ul><li><strong>Minor GC:</strong> processes the Young Generation</li><li><strong>Full GC:</strong> processes the entire heap and causes longer pauses</li></ul><h4>When to Use</h4><ul><li>Small applications</li><li>Containers with strict CPU limits</li><li>Single-core or low-resource machines</li></ul><h4>When to Avoid</h4><ul><li>Large heap sizes</li><li>Multi-core servers</li><li>Low-latency systems</li><li>High-traffic backend services</li></ul><h4>Enable</h4><pre>-XX:+UseSerialGC</pre><h4>Pros</h4><ul><li>Simple design</li><li>Low memory overhead</li></ul><h4>Cons</h4><ul><li>Long Stop-the-World pauses</li><li>Poor scalability</li><li>Does not take advantage of multiple CPUs</li></ul><blockquote>Serial GC uses <strong>one single thread</strong> to perform all garbage collection tasks.<br>When GC starts, the JVM pauses all application threads (<strong>Stop-the-World</strong>) and runs the collection process sequentially.</blockquote><blockquote>Young Generation → Minor GC<br>The Young Generation is divided into:<br><strong>Eden<br>Survivor 0 (S0)<br>Survivor 1 (S1)<br></strong>New objects are created in <strong>Eden</strong>.<br>When Eden becomes full, a <strong>Minor GC</strong> starts. 
<br>Serial GC uses a <strong>Copying Collector</strong>.<br>Process:<br>The JVM checks objects in <strong>Eden</strong>.<br>Live objects are copied to the empty Survivor space.<br>Dead objects are discarded.<br>Eden is cleared.<br>Survivor spaces switch roles (<strong>S0 ↔ S1</strong>).<br>Each surviving object increases its age.<br>Objects that survive enough GC cycles are <strong>promoted</strong> to the Old Generation.<br>Example:<br>Eden → S0<br>Next GC: Eden + S0 → S1<br>Next GC: Eden + S1 → S0<br>After enough survival cycles → Old Generation<br>This is efficient because most young objects die quickly.</blockquote><blockquote>Old Generation → Full GC<br>The Old Generation stores long-lived objects promoted from Young Generation.<br>When memory becomes insufficient, Serial GC performs a <strong>Full GC</strong> using <strong>Mark-Sweep-Compact</strong>.<br>Process:<br><strong>Mark</strong><br>Reachable objects are identified from GC Roots.<br><strong>Sweep</strong><br>Unreachable objects are removed.<br><strong>Compact</strong><br>Remaining live objects are moved together into contiguous memory blocks.<br>This removes fragmentation and creates free continuous space.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/939/1*-yNDETawzMQ0xiiQ4JuYZw.png" /><figcaption>High-Level Flow</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*i8jkatTIuSng9lXtsSMiDQ.png" /><figcaption>Copying Collector</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/498/1*G_ozERSaH10577EvyYTDkg.png" /><figcaption>Old Generation — Mark-Sweep-Compact</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*29p04DpWV-fnEqcf0FaRvA.png" /></figure><h4>Parallel GC</h4><p>Parallel GC uses multiple threads to perform garbage collection.<br>When garbage collection starts, the JVM pauses all application threads (<strong>Stop-the-World</strong>) and distributes the collection work across several CPU cores.<br>Its main objective is <strong>throughput</strong>, meaning it tries to maximize the total amount of application work completed over time.<br>Compared with Serial GC, it completes garbage collection faster on multi-core machines.<br>For many years, Parallel GC was the default collector for many server workloads.<br>Parallel GC is optimized for <strong>throughput</strong>, not for low pause times.</p><h4>How Parallel GC Works</h4><h4><strong>Young Generation</strong></h4><p>The Young Generation stores newly created objects, most of which die quickly.<br>Parallel GC uses a <strong>parallel copying collector</strong> for this area.<br>During a <strong>Minor GC</strong>:</p><ol><li>Multiple GC threads scan Eden and Survivor spaces.</li><li>Live objects are copied into an empty Survivor space.</li><li>Dead objects are discarded.</li><li>The reclaimed memory becomes available for new allocations.</li></ol><p>Because most young objects do not survive long, this phase is usually fast and efficient.</p><h4><strong>Old Generation</strong></h4><p>Objects that survive several Young Generation collections are promoted to the Old Generation.<br>Parallel GC usually uses <strong>parallel mark-sweep-compact</strong> for this area.</p><h4>During a <strong>Full GC</strong>:</h4><ol><li>Live objects are identified.</li><li>Unreachable objects are reclaimed.</li><li>Surviving objects may be compacted to remove fragmentation.</li></ol><p>This phase is heavier and causes longer pauses.</p><h4><strong>Core 
Behavior</strong></h4><ul><li><strong>Young Generation:</strong> parallel copying collection</li><li><strong>Old Generation:</strong> parallel mark-sweep-compact</li><li><strong>Minor GC:</strong> frequent and faster</li><li><strong>Full GC:</strong> less frequent, slower and processes the entire heap</li></ul><h4>When to Use</h4><ul><li>Multi-core servers</li><li>Batch processing systems</li></ul><h4><strong>When to Avoid</strong></h4><ul><li>Low-latency systems</li><li>Real-time applications</li><li>Systems sensitive to long pauses</li></ul><h4><strong>Enable</strong></h4><pre>-XX:+UseParallelGC</pre><h4><strong>Pros</strong></h4><ul><li>High throughput</li><li>Good CPU utilization</li><li>Faster than Serial GC on multi-core hardware</li><li>Good choice for throughput-focused workloads</li></ul><h4><strong>Cons</strong></h4><ul><li>Stop-the-World pauses still occur</li><li>Full GC can be expensive on large heaps</li><li>Not ideal for latency-sensitive systems</li></ul><blockquote>Parallel GC uses <strong>multiple GC threads</strong> to perform garbage collection work in parallel.<br>When GC starts, the JVM pauses all application threads (<strong>Stop-the-World</strong>), but the cleanup work is divided across several CPU cores.</blockquote><blockquote>Young Generation → Minor GC<br>Parallel GC uses a <strong>Parallel Copying Collector</strong>.<br>Process:<br>Multiple GC threads scan <strong>Eden</strong> and the active Survivor space.<br>Live objects are copied in parallel to the empty Survivor space.<br>Dead objects are discarded.<br>Eden is cleared.<br>Survivor spaces swap roles (<strong>S0 ↔ S1</strong>).<br>Surviving objects increase their age.<br>Objects that survive enough cycles are <strong>promoted</strong> to Old Generation.<br>This is similar to Serial GC, but faster because several GC threads work at the same time.</blockquote><blockquote>Old Generation → Full GC<br>When more space is needed, Parallel GC performs a <strong>Full GC</strong> using <strong>Parallel Mark-Sweep-Compact</strong>.<br>Process:<br><strong>Mark</strong><br>Multiple threads identify reachable objects from GC Roots.<br><strong>Sweep</strong><br>Unreachable objects are reclaimed.<br><strong>Compact</strong><br>Surviving objects are moved together to eliminate fragmentation.<br>This creates larger continuous free memory blocks.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*5jBZ3h9zUw8PwUxaIv1a8Q.png" /><figcaption>Serial vs Parallel</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/793/1*5XJKV9ILyp35mMPr25c42w.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QKUogSlQKUWynRQDAQmmwA.png" /><figcaption>Stop The World + Parallel Workers</figcaption></figure><h4>CMS GC (Concurrent Mark Sweep)</h4><p>CMS GC uses multiple threads and performs most garbage collection work <strong>concurrently with the application threads</strong>.</p><p>Its main goal is to <strong>reduce long Stop-the-World pauses</strong>, making response times more stable.<br>Unlike <strong>Serial GC</strong> and <strong>Parallel GC</strong>, which pause the entire application during most collection phases, CMS performs much of the marking and sweeping while the application continues running.</p><p>Because of this, CMS became popular in <strong>latency-sensitive server applications</strong>, where shorter pauses were more important than maximum throughput.</p><p>However, CMS had important limitations:</p><ul><li>It <strong>does not compact memory by
default</strong>, which can cause <strong>fragmentation</strong>.</li><li>It uses extra CPU while running concurrently.</li><li>Large heaps could still trigger long fallback Full GC pauses.</li></ul><p>CMS was <strong>deprecated in Java 9</strong> and later removed, because newer collectors like G1 Garbage Collector provided lower pauses with better memory management and simpler operation.</p><h4>How CMS Works</h4><h4>Initial Mark</h4><p>The JVM pauses the application for a very short time.<br>In this phase, it marks Old Generation objects that are reachable from GC Roots, such as:</p><ul><li>local variables in thread stacks</li><li>static variables</li><li>active threads</li><li>JNI references</li></ul><p>This is only the starting point, so the pause is short.</p><h4>Concurrent Mark</h4><p>The application resumes execution.<br>While the program continues running, CMS follows references starting from the objects marked in the previous phase.<br>It identifies all reachable objects in the <strong>Old Generation</strong>.<br>This is usually the longest phase, but the application keeps running.</p><h4>Remark</h4><p>The JVM pauses the application again for a short time.<br>During Concurrent Mark, the application may have changed references between objects.<br>Remark updates the marking information so the JVM knows exactly which objects are still alive.</p><h4>Concurrent Sweep</h4><p>The application continues running.<br>CMS removes unreachable objects from the <strong>Old Generation</strong> and reclaims that memory.<br>Dead objects are deleted while the application remains active.</p><h4>Important Limitation: No Compaction</h4><p>CMS normally uses <strong>Mark + Sweep</strong>, but not <strong>Compact</strong>.<br>That means live objects are not moved together after cleanup.</p><p>Example:</p><p>Before Sweep:</p><pre>[A][dead][B][dead][C]</pre><p>After Sweep:</p><pre>[A][free][B][free][C]</pre><p>Free memory remains separated in small empty spaces.<br>This is called <strong>fragmentation</strong>.<br>Over time, fragmentation can create allocation problems for large objects.</p><blockquote>Memory Strategy<br><strong>Young Generation:</strong> Minor GC<br><strong>Old Generation:</strong> CMS<br><strong>Goal:</strong> reduce pause times</blockquote><h4><strong>Enable</strong></h4><pre>-XX:+UseConcMarkSweepGC</pre><h4><strong>Pros</strong></h4><ul><li>Lower pause times than Parallel GC</li><li>Much of the work runs concurrently</li><li>Good for latency-sensitive workloads</li></ul><h4><strong>Cons</strong></h4><ul><li>Deprecated and removed from modern JVMs</li><li>Can suffer from memory fragmentation</li><li>More CPU overhead than Serial / Parallel GC</li><li>May fall back to Full GC in some situations</li></ul><blockquote>CMS GC uses <strong>multiple threads</strong> and performs most garbage collection work <strong>concurrently with the application</strong>.<br>Its main goal is to reduce long <strong>Stop-the-World pauses</strong>.<br>CMS works mainly on the <strong>Old Generation</strong>.<br>The <strong>Young Generation</strong> is collected with <strong>Minor GC</strong>.</blockquote><blockquote>CMS Collection Stages (Old Generation)<br>1. Initial Mark<br>Application pause.<br>The JVM marks Old Generation objects reachable from <strong>GC Roots</strong>.<br>2. Concurrent Mark<br>The application continues running.<br>GC threads trace references and identify all reachable objects in the <strong>Old Generation</strong>.<br>3. 
Remark<br>Application pause.<br>The JVM updates reference changes that happened while Concurrent Mark was running.<br>4. Concurrent Sweep<br>The application continues running.<br>GC threads remove unreachable objects from the <strong>Old Generation</strong> and reclaim memory.<br>5. No Compaction<br>CMS does <strong>not</strong> move surviving objects together after sweep.<br>This can produce <strong>memory fragmentation</strong>.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1We2dyvL7X6beVe_PnPQug.png" /></figure><h4>G1 GC (Garbage First)</h4><p>G1 GC is designed to deliver a balance between <strong>throughput</strong>, <strong>pause time</strong> and <strong>scalability</strong>.<br>It was built for multi-core machines and large heaps, where older collectors often struggled with long pauses.<br>Because it performs well across many workloads, G1 became the default collector in modern versions of the Java Virtual Machine.</p><h4>Why It Is Called Garbage First</h4><p>The name comes from its strategy.<br>Instead of cleaning memory in one massive operation, G1 identifies the heap areas containing the most reclaimable garbage and prioritizes those first.<br>This allows the JVM to recover memory while keeping pauses more controlled.</p><h4>Region-Based Memory Layout</h4><p>Traditional collectors divide the heap into fixed contiguous spaces such as Young Generation and Old Generation.<br>G1 uses a different model.<br>The heap is split into many small, equally sized <strong>regions</strong>. These regions are managed independently, and the JVM can dynamically assign them as:</p><ul><li>Eden</li><li>Survivor</li><li>Old</li><li>Free space</li></ul><p>This design gives G1 much more flexibility than older collectors.</p><h4>How G1 GC Works</h4><h4>Young Collection</h4><p>When Eden regions fill up:</p><ul><li>The application pauses briefly.</li><li>Multiple GC threads process those regions.</li><li>Live objects are copied to Survivor or Old regions.</li><li>Empty regions are reclaimed.</li></ul><p>This is similar to a Minor GC in traditional generational collectors.</p><h4>Concurrent Marking</h4><p>G1 periodically starts a background marking cycle to analyze old regions.<br>Its purpose is to determine:</p><ul><li>which objects are still alive</li><li>which regions contain the most garbage</li></ul><p>Most of this work happens while the application continues running, which helps reduce long stop-the-world pauses.</p><h4>Mixed Collection</h4><p>After marking finishes, G1 continues doing Young collections, but it also includes selected Old regions with high garbage content.<br>Instead of reclaiming the entire Old Generation at once, G1 cleans old memory over multiple cycles.<br>This is one of the reasons G1 usually avoids long pauses.</p><h4>Compaction</h4><p>Because objects are copied between regions during collection, memory is compacted.</p><p>That means:</p><ul><li>fewer fragmented gaps</li><li>better memory organization</li><li>easier allocation of large objects</li></ul><h4>Collection Types</h4><p><strong>Young GC</strong><br>Processes Eden and Survivor regions. Short pauses.</p><p><strong>Mixed GC</strong><br>Processes Young regions plus selected Old regions. Moderate pauses.</p><p><strong>Full GC</strong><br>Processes the entire heap. Used as a fallback and expensive.</p>
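<p>As an illustrative launch configuration (the application name and the pause-time value are placeholders), G1 can be combined with its pause-time goal and GC logging to observe these collection types in practice:</p><pre>java -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -Xlog:gc MyApplication</pre>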
<blockquote>fast cleanup of short-lived objects<br>gradual cleanup of old memory<br>predictable pause times<br>solid throughput performance</blockquote><h4>When to Use G1</h4><p>G1 is a strong choice for:</p><ul><li>Large server applications</li><li>Multi-core environments</li><li>Medium to large heaps</li><li>Systems needing balanced latency and throughput</li></ul><h4>When to Avoid G1</h4><p>G1 may be less ideal for:</p><ul><li>Very small applications</li><li>Ultra-low-latency systems where Z Garbage Collector or Shenandoah GC can be better choices</li></ul><h4>Enable G1 GC</h4><pre>-XX:+UseG1GC</pre><h4>Advantages</h4><ul><li>Good balance between throughput and pause time</li><li>Predictable pause behavior</li><li>Handles large heaps well</li><li>Reduces fragmentation through compaction</li></ul><h4>Disadvantages</h4><ul><li>Full GC is still expensive if triggered</li><li>Not ideal for extreme low-latency systems</li></ul><blockquote>G1 GC balances throughput, pause times and scalability. It was created for multi-core and large heaps, where older collectors often produced long pauses.</blockquote><blockquote>Unlike traditional collectors that split memory into fixed Young and Old spaces, G1 divides the heap into many equal-sized regions. These regions are managed independently.</blockquote><blockquote>Its name, <strong>Garbage First</strong>, comes from its strategy of prioritizing the regions with the highest amount of reclaimable memory. By collecting the most valuable regions first, G1 improves memory recovery while keeping pauses more predictable.</blockquote><blockquote>G1 combines several mechanisms to achieve this. It performs regular Young collections for short-lived objects, runs concurrent marking in the background to analyze old regions and executes mixed collections that gradually reclaim old memory instead of processing the whole heap at once.</blockquote><blockquote>Because objects are copied between regions during collection, memory is also compacted, which reduces fragmentation and helps future allocations.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*p-MijBd_Rg9qLUYiAZEUHQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sXnMYN8gnUiv_v_eYHadHw.png" /></figure><h4>ZGC (Z Garbage Collector)</h4><p>ZGC is a garbage collector designed to keep pause times short, even when the application uses large amounts of heap memory.<br>Traditional collectors perform significant work during Stop-the-World phases. As heap size grows, those pauses become more noticeable because the JVM must process more memory during collection cycles. ZGC was created to move most of that work into concurrent phases that run while the application continues executing.<br>Its main objective is low latency: reduce long GC interruptions while maintaining efficient memory reclamation.</p><h4>Core Design of ZGC</h4><p>ZGC is built on three main principles:</p><ul><li><strong>Region-based heap</strong></li><li><strong>Mostly concurrent collection</strong></li><li><strong>Concurrent relocation of live objects</strong></li></ul><p>Instead of treating the heap as one large area, ZGC divides memory into many smaller regions. These regions can be processed independently, which improves scalability and allows reclamation in smaller steps.</p><h4>Heap Layout</h4><p>The heap is split into multiple regions.</p><p>Each region contains allocated objects or free space.
During collection, ZGC selects specific regions to inspect, relocate or reclaim without processing the entire heap at once.</p><p>This is one reason ZGC behaves well with large heaps.</p><h4>Main Collection Phases</h4><h4>1. Pause Mark Start</h4><p>A short Stop-the-World pause.</p><p>The JVM pauses application threads to capture <strong>GC Roots</strong>, such as:</p><ul><li>thread stack references</li><li>CPU register references</li><li>static field references</li><li>JNI references</li></ul><p>These roots are the starting points used to discover reachable objects.<br>Only this initial root capture pauses the application, so the phase remains short.</p><h4>2. Concurrent Mark</h4><p>The application resumes running.</p><p>GC threads traverse references starting from the captured roots and mark all reachable objects as live.</p><p>This is usually the largest marking phase, but it happens concurrently with normal application execution.</p><h4>3. Pause Mark End</h4><p>Another short Stop-the-World pause.</p><p>The JVM finalizes marking and accounts for reference changes that occurred while concurrent marking was running.</p><p>This gives the collector a consistent view of which objects are still alive.</p><h4>4. Concurrent Relocation</h4><p>ZGC selects regions that contain reclaimable garbage and moves surviving objects to other regions.</p><p>This relocation happens while the application continues running.</p><p>Older collectors often compact memory during long pauses. ZGC performs object movement concurrently instead.</p><h4>5. Concurrent Reclaim</h4><p>After relocation finishes, regions that no longer contain live objects are released.</p><p>Those regions become available for future allocations.</p><h4>Colored Pointers</h4><p>ZGC uses <strong>colored pointers</strong>, meaning object references include metadata bits used internally by the JVM.</p><p>These bits help track object state during the GC cycle, such as whether an object has been marked or relocated.</p><p>This mechanism supports concurrent movement of objects.</p><h4>Load Barriers</h4><p>ZGC uses <strong>load barriers</strong> when reading references.</p><p>If the application accesses an object that was moved during relocation, the JVM can detect the old reference, resolve the new location, and continue safely.</p><p>This allows relocation without long global pauses.</p><h4>Fragmentation Control</h4><p>Because live objects are regularly relocated, free memory is consolidated over time.</p><p>This reduces fragmentation and helps future allocations find usable contiguous space.</p><h4>Generational ZGC</h4><p>Early ZGC versions were non-generational.</p><p>Newer JVM versions introduced <strong>Generational ZGC</strong>, which separates:</p><ul><li>young objects (usually short-lived)</li><li>old objects (long-lived)</li></ul><p>This improves efficiency while preserving the same low-pause model.</p><h4>When ZGC Is Useful</h4><p>ZGC is commonly chosen for:</p><ul><li>APIs with latency requirements</li><li>systems sensitive to pause spikes</li><li>applications with very large heaps</li></ul><h4>When ZGC May Be Unnecessary</h4><p>ZGC may be excessive for:</p><ul><li>small applications</li><li>tiny heaps</li></ul><h4>Enable ZGC</h4><pre>-XX:+UseZGC</pre><blockquote>ZGC keeps pauses short by limiting Stop-the-World work to brief synchronization steps. Marking, relocation and memory reclamation happen mostly while the application continues running.
Its region-based design, load barriers and concurrent relocation make it a strong collector for large, latency-sensitive JVM systems.</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*NVpanm8YS87rWNhj2XjYtQ.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EA-A8KMy9lvnJylP69O4Fw.png" /></figure><h4>Shenandoah GC</h4><p>Shenandoah GC uses multiple threads and performs most garbage collection work <strong>concurrently</strong> with the application.<br>Its main objective is to provide <strong>low pause times</strong>, especially on large heaps.<br>Like ZGC, Shenandoah was designed to keep pause times low regardless of heap size by moving heavy GC work out of Stop-the-World phases.<br>Because of this, Shenandoah is a strong choice for systems that require consistent responsiveness and low latency.<br>Shenandoah was originally developed by <strong>Red Hat</strong> and later integrated into OpenJDK.</p><h4>How Shenandoah GC Works</h4><h4><strong>Region-Based Heap</strong></h4><p>Shenandoah divides the heap into independent regions.<br>This allows the JVM to reclaim memory in smaller units instead of processing the whole heap at once.</p><h4><strong>Concurrent Mark</strong></h4><p>Shenandoah identifies live objects while the application continues running.<br>Most of the marking phase happens concurrently, reducing long pauses.</p><h4><strong>Concurrent Evacuation</strong></h4><p>Instead of waiting for a long pause to compact memory, Shenandoah moves live objects to new regions while the application is still running.<br>References are updated using read/write barriers.</p><h4><strong>Short Pause Phases</strong></h4><p>Very short Stop-the-World pauses are still required for synchronization and update steps.<br>These pauses are designed to remain minimal.</p><h4><strong>Compaction</strong></h4><p>Because objects are continuously evacuated to new regions, fragmentation is greatly reduced.</p><h4><strong>Young / Old Generation Behavior</strong></h4><p>Early Shenandoah versions were non-generational.<br>Newer versions introduced <strong>generational Shenandoah</strong>, allowing separate handling of young and old objects for better efficiency.<br>Its main design goal remains minimizing pause times across the heap.</p><h4><strong>When to Use</strong></h4><ul><li>Large heap applications</li><li>Low-latency backend systems</li><li>Modern OpenJDK environments</li></ul><h4><strong>When to Avoid</strong></h4><ul><li>Very small applications</li><li>Simple workloads where G1 is sufficient</li><li>JVM distributions without Shenandoah support</li></ul><h4><strong>Enable</strong></h4><pre>-XX:+UseShenandoahGC</pre><h4><strong>Pros</strong></h4><ul><li>Very low pause times</li><li>Scales well with large heaps</li><li>Concurrent compaction reduces fragmentation</li><li>Good for latency-sensitive workloads</li></ul><h4><strong>Cons</strong></h4><ul><li>Higher runtime complexity</li><li>May incur more CPU overhead than simpler collectors</li><li>Not always available in every JVM distribution</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1m1ojhvjbyC6jw_yEv6QhQ.png" /></figure><h3>Technical Summary</h3><p>Garbage collectors differ in <strong>how they reclaim unused objects</strong> and <strong>how memory is organized</strong>.</p><p>Older collectors use a <strong>classic generational heap</strong>, illustrated in the sketch after this list:</p><ul><li><strong>Young Generation</strong>: where new objects are created</li><li><strong>Old Generation</strong>: where long-lived objects are promoted</li></ul>
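<p>A toy allocation pattern (class name, sizes and counts are arbitrary) showing why this split pays off: the short-lived arrays die in the Young Generation, while the retained ones survive and are eventually promoted; running with a flag such as -Xlog:gc makes the resulting Minor GCs visible:</p><pre>public class GenerationalDemo {<br><br>    // Long-lived data survives several Minor GCs and is promoted<br>    static final java.util.List&lt;byte[]&gt; retained = new java.util.ArrayList&lt;&gt;();<br><br>    public static void main(String[] args) {<br>        for (int i = 0; i &lt; 100_000; i++) {<br>            byte[] shortLived = new byte[1024]; // dies young, in Eden<br>            if (i % 1_000 == 0) {<br>                retained.add(new byte[1024]);   // survives and is promoted<br>            }<br>        }<br>    }<br>}</pre>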
<p>Newer collectors often use a <strong>region-based heap</strong>, where memory is split into many equal regions managed independently.</p><h4>Serial GC</h4><p>Serial GC uses a single GC thread for all collection work.</p><h4>Young Generation</h4><p>When Eden fills, a Minor GC runs:</p><ol><li>Scan objects in Eden and active Survivor space</li><li>Copy reachable objects to empty Survivor space</li><li>Discard unreachable objects</li><li>Swap Survivor spaces</li><li>Promote aged objects to Old Generation</li></ol><h4>Old Generation</h4><p>When Old memory is full, Full GC runs:</p><ol><li><strong>Mark</strong> reachable objects</li><li><strong>Sweep</strong> unreachable objects</li><li><strong>Compact</strong> live objects together to remove gaps</li></ol><h4>Characteristics</h4><ul><li>single-threaded</li><li>all GC phases stop the application</li></ul><h4>Parallel GC</h4><p>Parallel GC uses the same collection model as Serial GC, but with multiple GC threads.</p><h4>Young Generation</h4><p>Minor GC uses parallel copying:</p><ol><li>Scan Eden + Survivor spaces in parallel</li><li>Copy live objects</li><li>Discard dead objects</li></ol><h4>Old Generation</h4><p>Full GC runs the same phases in parallel:</p><ol><li>Mark</li><li>Sweep</li><li>Compact</li></ol><h4>Characteristics</h4><ul><li>multiple GC threads</li><li>Stop-The-World during GC</li><li>optimized for throughput</li></ul><h4>CMS (Concurrent Mark Sweep)</h4><p>CMS reduces pause times in the Old Generation.</p><h4>Young Generation</h4><p>Uses traditional Minor GC copying collection.</p><h4>Old Generation</h4><p>Main stages:</p><h4>Initial Mark</h4><p>Short pause. Marks objects reachable from roots.</p><h4>Concurrent Mark</h4><p>Application continues running while collector marks reachable objects.</p><h4>Remark</h4><p>Short pause. Updates changes that happened during concurrent mark.</p><h4>Concurrent Sweep</h4><p>Application continues running while dead objects are reclaimed.</p><h4>Characteristics</h4><ul><li>Old Generation collected mostly concurrently</li><li>memory fragmentation can appear</li></ul><h4>G1 GC (Garbage First)</h4><p>G1 uses a region-based heap.</p><p>The heap is divided into many equal regions.
Regions can act as:</p><ul><li>Eden</li><li>Survivor</li><li>Old</li><li>Free</li></ul><h4>Main Stages</h4><h4>Young GC</h4><p>Collects Young regions and copies live objects elsewhere.</p><h4>Initial Mark</h4><p>Short pause to start marking.</p><h4>Concurrent Mark</h4><p>Marks live objects while application runs.</p><h4>Remark</h4><p>Short pause to finalize marking.</p><h4>Cleanup</h4><p>Selects regions with high reclaimable garbage.</p><h4>Mixed GC</h4><p>Collects Young regions plus selected Old regions.</p><h4>Characteristics</h4><ul><li>region-based design</li><li>compaction through evacuation</li><li>balanced pause time and throughput</li></ul><h4>ZGC</h4><p>ZGC is designed for very low pause times.</p><p>Uses a region-based heap.</p><h4>Main Stages</h4><h4>Pause Mark Start</h4><p>Very short pause to capture roots.</p><h4>Concurrent Mark</h4><p>Marks live objects while application runs.</p><h4>Pause Mark End</h4><p>Short pause to finalize marking.</p><h4>Concurrent Relocation</h4><p>Moves live objects to new regions while application runs.</p><h4>Concurrent Reclaim</h4><p>Releases empty regions.</p><h4>Characteristics</h4><ul><li>most work is concurrent</li><li>very short pauses</li><li>uses load barriers and colored pointers</li></ul><h4>Shenandoah GC</h4><p>Shenandoah is also focused on low pause times.</p><p>Uses a region-based heap.</p><h4>Main Stages</h4><h4>Initial Mark</h4><p>Short pause.</p><h4>Concurrent Mark</h4><p>Marks live objects while application runs.</p><h4>Final Mark</h4><p>Short pause.</p><h4>Concurrent Evacuation</h4><p>Moves live objects while application runs.</p><h4>Update References</h4><p>Fixes references to moved objects.</p><h4>Cleanup</h4><p>Reclaims free regions.</p><h4>Characteristics</h4><ul><li>concurrent compaction</li><li>low pauses</li><li>region-based design</li></ul><h3>Technical Evolution</h3><h4>Serial GC</h4><p>Single-threaded full-stop collector.</p><h4>Parallel GC</h4><p>Same model with multiple threads.</p><h4>CMS</h4><p>Introduced concurrent collection for Old Generation.</p><h4>G1</h4><p>Introduced region-based incremental collection.</p><h4>ZGC / Shenandoah</h4><p>Move and reclaim memory concurrently with minimal pauses.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d6544d448330" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[JVM Internals Overview]]></title>
            <link>https://medium.com/@barbieri.santiago/jvm-internals-overview-d76364b3e54c?source=rss-8a7f528dad2------2</link>
            <guid isPermaLink="false">https://medium.com/p/d76364b3e54c</guid>
            <category><![CDATA[java]]></category>
            <dc:creator><![CDATA[Santiago]]></dc:creator>
            <pubDate>Sat, 25 Apr 2026 20:18:13 GMT</pubDate>
            <atom:updated>2026-04-25T20:19:04.278Z</atom:updated>
            <content:encoded><![CDATA[<p>The Java Virtual Machine (JVM) is the runtime environment that executes Java bytecode. JVM internals encompass the core components and mechanisms that make Java programs run efficiently and safely. Key areas include:</p><ul><li><strong>Class Loading</strong>: The process of loading, linking, and initializing classes. Linking consists of three phases: verification (ensuring bytecode is safe and follows JVM specifications), preparation (allocating memory for class variables and setting them to default values), and resolution (converting symbolic references to direct references when needed). Initialization executes static initializers and sets static fields to their initial values.</li><li><strong>Memory Management</strong>: The JVM manages memory through three main areas: the heap (shared memory pool for object allocation), the stack (per-thread memory for method execution including local variables and call frames), and garbage collection (automatic reclamation of memory occupied by objects that are no longer reachable, preventing memory leaks and managing heap space efficiently).</li><li><strong>Execution Engine</strong>: The component responsible for executing Java bytecode. It uses two main approaches: interpretation (executing bytecode instructions one by one, providing portability but slower performance) and Just-In-Time (JIT) compilation (converting frequently executed bytecode to native machine code at runtime for optimal performance). The JVM typically starts with interpretation and switches to JIT compilation for hot methods.</li><li><strong>Thread Management</strong>: Creating and scheduling threads</li><li><strong>Security</strong>: The JVM implements a security model called “sandboxing” that restricts untrusted code from performing potentially harmful operations. This includes bytecode verification to ensure code safety, security managers that enforce access policies, and class loaders that isolate code from different sources. Access control is managed through permissions and security policies that define what operations (like file access, network connections, or system property modifications) are allowed for different code sources.</li><li><strong>Performance Optimization</strong>: The JVM employs Just-In-Time (JIT) compilation to translate frequently executed bytecode into native machine code, significantly improving performance. Adaptive optimization dynamically analyzes runtime behavior to apply optimizations like method inlining, loop unrolling, dead code elimination, and escape analysis. The JVM uses profiling data to decide which methods to compile and at what optimization level, balancing compilation time with execution speed.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*O7KuWDjWu02x0WbeQER1YA.png" /></figure><blockquote>The Java Virtual Machine (JVM) is the environment that runs Java bytecode. 
Its main internal parts are:</blockquote><blockquote>Class Loading: Loads classes into memory, verifies bytecode safety, prepares variables, resolves references, and runs static initialization.<br>Memory Management: Uses the heap for objects, the stack for method calls and local variables, and garbage collection to free unused memory automatically.<br>Execution Engine: Runs bytecode through interpretation (slower, portable) or JIT compilation (faster, converts hot code into native machine code).<br>Thread Management: Handles creation, scheduling, and execution of multiple threads.<br>Security: Protects the system with sandboxing, bytecode verification, class loaders, and permission-based access control.<br>Performance Optimization: Improves speed using JIT techniques like method inlining, loop optimization, dead code removal, and runtime profiling.</blockquote><h3>Key Components</h3><ul><li><strong>Class Loader Subsystem</strong>: Loads classes from various sources</li><li><strong>Runtime Data Areas</strong>: Memory areas for program execution</li><li><strong>Execution Engine</strong>: Executes bytecode instructions</li><li><strong>Native Interface</strong>: Interacts with native libraries</li><li><strong>Garbage Collector</strong>: Manages memory automatically</li></ul><h3>JVM Memory Model</h3><p>The JVM Memory Model defines how threads interact through memory and what behaviors are allowed in concurrent programs. It specifies:</p><ul><li><strong>Atomicity</strong>: Which operations are indivisible</li><li><strong>Visibility</strong>: When changes made by one thread are visible to others</li><li><strong>Ordering</strong>: The order in which operations appear to execute</li></ul><h3>Happens-Before Relationship</h3><p>The memory model is based on the “happens-before” relationship, which defines the order of operations. If operation A happens-before operation B, then the effects of A are visible to B.</p><p>Common happens-before relationships:</p><ul><li><strong>Program Order</strong>: Within a single thread, operations appear to execute in the order they appear in the program code. This ensures that a thread sees its own operations in the expected sequence.</li><li><strong>Monitor Lock</strong>: When a thread unlocks a monitor (exits a synchronized block), this happens-before any subsequent thread locks the same monitor (enters a synchronized block). This ensures proper synchronization between threads using the same lock.</li><li><strong>Volatile Variables</strong>: A write to a volatile variable happens-before any subsequent read of the same volatile variable by another thread. This provides visibility guarantees for volatile fields across threads.</li><li><strong>Thread Start/Join</strong>: Calling start() on a thread guarantees that everything done before start() is visible to the new thread when it begins execution. Likewise, when join() returns successfully, all actions performed by that thread are guaranteed to be visible to the thread that called join(). 
<h3>Code Examples</h3><h4>Volatile variables for visibility</h4><pre>public class SharedState {<br>    private volatile boolean ready = false;<br>    private int value;<br>    <br>    public void write() {<br>        value = 42;      // Write value<br>        ready = true;    // Volatile write - establishes happens-before<br>    }<br>    <br>    public int read() {<br>        while (!ready) { // Volatile read<br>            // Spin wait<br>        }<br>        return value;    // Guaranteed to see 42<br>    }<br>}</pre><h4>Synchronized blocks for atomicity</h4><pre>public class Counter {<br>    private int count = 0;<br>    <br>    public synchronized void increment() {<br>        count++;  // Atomic operation<br>    }<br>    <br>    public synchronized int getCount() {<br>        return count;  // Atomic read<br>    }<br>}</pre><blockquote>The JVM Memory Model defines how threads share data in concurrent programs.</blockquote><blockquote>Atomicity: operations that happen completely or not at all.<br>Visibility: when one thread can see changes made by another.<br>Ordering: the sequence in which operations appear to happen.</blockquote><blockquote>It uses the happens-before rule: if action A happens-before action B, then B can see the effects of A.</blockquote><blockquote>Program Order: instructions run in order inside one thread.<br>Monitor Lock: releasing a synchronized lock is visible to the next thread that acquires it.<br>Volatile: writing to a volatile variable is visible to later reads.<br>Start/Join: start() and join() ensure safe synchronization between threads.</blockquote><h3><strong>Thread Stack</strong></h3><p>Each Java thread has its own private stack, used during method execution. It stores:</p><ul><li><strong>Method frames:</strong> data for each active method call, including local variables and temporary values used by the method.</li><li><strong>Return addresses:</strong> the point where execution continues after a method finishes.</li><li><strong>Intermediate results:</strong> values produced during calculations before the final result is obtained.</li></ul><p>The stack works with a <strong>push/pop</strong> mechanism: when a method is called, a new frame is added to the stack; when the method ends, that frame is removed. This helps the JVM manage method calls while keeping each thread’s execution separate.</p><h3>Stack Frame Structure</h3><p>Each stack frame contains:</p><ul><li><strong>Local Variable Array</strong>: Method parameters and local variables</li><li><strong>Operand Stack:</strong> Temporary memory area used by the JVM to perform calculations and store intermediate results during method execution.</li><li><strong>Frame Data:</strong> Additional method execution information, such as references to the constant pool and data used for exception handling.</li></ul><h4>Stack Size Configuration</h4><pre>// JVM options for stack size<br>// -Xss256k          // 256KB per thread<br>// -Xss1m            // 1MB per thread</pre><p><strong>Common Stack Issues</strong></p><ul><li><strong>StackOverflowError:</strong> Occurs when a thread stack exceeds its maximum size, usually caused by deep or infinite recursion. Each method call adds a new frame to the stack, and if calls continue without returning, the stack eventually runs out of space.</li><li><strong>OutOfMemoryError:</strong> Happens when the JVM cannot create new stack frames or allocate memory for additional thread stacks.
This may occur when the system has insufficient memory available or when too many threads are created.</li></ul><h3>Code Examples</h3><h4>Recursive method and stack usage</h4><pre>public class Factorial {<br>    public static long factorial(int n) {<br>        if (n &lt;= 1) {<br>            return 1;<br>        }<br>        return n * factorial(n - 1);  // Each call adds frame to stack<br>    }<br>    <br>    public static void main(String[] args) {<br>        // Deep recursion may cause StackOverflowError<br>        try {<br>            System.out.println(factorial(10000));<br>        } catch (StackOverflowError e) {<br>            System.out.println(&quot;Stack overflow occurred&quot;);<br>        }<br>    }<br>}</pre><blockquote>Each stack frame stores the data needed for a method call:</blockquote><blockquote>Local Variable Array: parameters and local variables.<br>Operand Stack: temporary area for calculations and intermediate results.<br>Frame Data: extra information like constant pool references and exception handling data.</blockquote><blockquote>The stack size can be configured with JVM options such as -Xss256k or -Xss1m (default depends on the JVM).</blockquote><blockquote>Common problems:<br>StackOverflowError: stack becomes full, usually due to deep or infinite recursion.<br>OutOfMemoryError: JVM cannot allocate more stack memory.</blockquote><h3>Heap</h3><p>The heap is the runtime data area where objects are allocated. It’s shared among all threads and managed by the garbage collector. Key characteristics:</p><ul><li><strong>Dynamic Allocation</strong>: Objects are created at runtime using the new keyword and allocated on the heap. Memory is freed when objects are no longer referenced.</li><li><strong>Shared Memory</strong>: All threads in the JVM process access the same heap, allowing threads to share objects and data. This requires synchronization mechanisms to prevent data races and ensure thread safety.</li><li><strong>Garbage Collected</strong>: The JVM reclaims memory occupied by unreachable objects through garbage collection.</li><li><strong>Generational</strong>: The heap is divided into generations (Young, Old, and Metaspace) based on object lifetime patterns. Most objects die young, so frequent collection of young generation is efficient. Objects that survive multiple GC cycles are moved to older generations for less frequent collection, optimizing collection pause times.</li></ul><p><strong>Generational Levels</strong>:</p><ul><li><strong>Young Generation</strong> (also called <strong>Nursery</strong>): Where newly created objects are allocated. Contains Eden Space (initial allocation) and two Survivor Spaces. Collected frequently with minor GC because most objects become garbage quickly.</li><li><strong>Old Generation</strong> (also called <strong>Tenured</strong>): Contains long-lived objects that survived multiple minor GC cycles. Collected less frequently with major GC because most objects here are still alive.</li><li><strong>Permanent Generation (Java 7 and earlier)</strong> or <strong>Metaspace (Java 8+)</strong>: Stores class metadata, method data, code cache, and interned strings. Collected during full GC. In Java 7 and earlier, PermGen was part of the heap with a fixed size, often causing OutOfMemoryError when too many classes were loaded. In Java 8+, Metaspace was introduced to address this limitation. Metaspace grows dynamically from <strong>native memory</strong> (off-heap system memory managed by the operating system) rather than from the JVM heap. 
This means Metaspace can grow as needed up to the system’s available memory, eliminating fixed-size limitations and allowing applications to load more classes without heap space concerns.</li></ul><p>The heap size is configured with JVM options:</p><ul><li>-Xms: Initial heap size</li><li>-Xmx: Maximum heap size</li></ul><h3>Heap Structure</h3><ul><li><strong>Young Generation</strong>: Newly created objects allocated in this generation. Contains:</li><li><strong>Eden Space</strong>: Where all new objects are initially created. Most objects die here within one garbage collection cycle.</li><li><strong>Survivor Space (S0 and S1)</strong>: Objects that survive garbage collection are moved here. Default ratio is 8:1 (Eden:Survivor).</li><li><strong>Old Generation</strong>: Contains long-lived objects that have survived multiple minor garbage collections. Objects move here when they reach a promotion age threshold (default 15 for parallel GC).</li><li><strong>Perm/Metaspace</strong>: Stores class structures and metadata that describe loaded classes and methods. Java 8+ uses Metaspace which resides in native memory instead of heap.</li></ul><h3>Code Examples</h3><h4>Object allocation patterns</h4><pre>public class ObjectAllocation {<br>    public static void main(String[] args) {<br>        // Allocate objects in young generation<br>        List&lt;String&gt; list = new ArrayList&lt;&gt;();<br>        for (int i = 0; i &lt; 100000; i++) {<br>            list.add(&quot;Object &quot; + i);<br>        }<br>        <br>        // Objects become eligible for GC when method exits<br>        // list goes out of scope here<br>        System.gc();  // Suggest garbage collection<br>    }<br>}</pre><h3>Heap Areas</h3><p>The JVM heap is divided into several areas for efficient memory management:</p><ul><li><strong>Young Generation</strong>: Where new objects are allocated</li><li><strong>Eden Space</strong>: Initial allocation area</li><li><strong>Survivor Spaces (S0, S1)</strong>: Objects that survive minor GC</li><li><strong>Old Generation (Tenured)</strong>: Long-lived objects that survive multiple GC cycles</li><li><strong>Permanent Generation/Metaspace</strong>: Class metadata, method data, interned strings</li><li>Replaced by Metaspace in Java 8+</li></ul><h3>Young Generation</h3><ul><li>Objects start in Eden space</li><li>Minor GC moves survivors to survivor spaces</li><li>After multiple survivals, objects move to old generation</li></ul><h3>Old Generation</h3><ul><li>Contains objects that have survived many GC cycles</li><li>Collected during major GC (full GC)</li><li>Usually larger than young generation</li></ul><h3>Metaspace (Java 8+)</h3><ul><li>Stores class metadata</li><li>Grows dynamically (no fixed size limit)</li><li>Can cause OutOfMemoryError if unlimited</li></ul><h3>Code Examples</h3><h4>Configuring heap areas</h4><pre>// JVM options for heap configuration<br>// -Xms512m -Xmx2g           // Heap size: 512MB initial, 2GB max<br>// -Xmn256m                  // Young generation: 256MB<br>// -XX:NewRatio=2            // Old:Young ratio = 2:1<br>// -XX:SurvivorRatio=8       // Eden:Survivor ratio = 8:1<br>// -XX:MaxMetaspaceSize=256m // Metaspace limit</pre><blockquote>JVM Heap Memory<br>The JVM heap is divided into areas for efficient memory management:<br>Young Generation: where new objects are created.<br>Eden Space: initial area where most new objects are allocated. 
Many objects die here quickly.<br>Survivor Spaces (S0, S1): objects that survive minor garbage collection are moved here.<br>Old Generation (Tenured): stores long-lived objects that survived several garbage collection cycles.<br>Metaspace (Java 8+): stores class metadata and method information in native memory (replaced PermGen). It grows dynamically but can cause OutOfMemoryError if unrestricted.<br>Garbage Collection Flow<br>New objects start in Eden.<br>Surviving objects move to Survivor spaces.<br>After multiple survivals, they are promoted to Old Generation.<br>Common JVM Options<br>-Xms / -Xmx → initial and maximum heap size<br>-Xmn → young generation size<br>-XX:SurvivorRatio → Eden/Survivor ratio<br>-XX:MaxMetaspaceSize → Metaspace limit</blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vu7n9ewbDUE9Qa4_ov4nvQ.png" /></figure><p>The <strong>Java Virtual Machine (JVM)</strong> is the core runtime environment that makes Java programs portable, secure, and efficient. It executes Java bytecode independently of the operating system, allowing the same program to run on multiple platforms.</p><p>The main internal areas of the JVM were analyzed:</p><ul><li><strong>Class Loading</strong>, responsible for loading, linking, and initializing classes.</li><li><strong>Memory Management</strong>, using the heap, stack, and garbage collection.</li><li><strong>Execution Engine</strong>, which runs bytecode through interpretation and JIT compilation.</li><li><strong>Thread Management</strong>, enabling concurrent execution.</li><li><strong>Security</strong>, through sandboxing, bytecode verification, and access control.</li><li><strong>Performance Optimization</strong>, using runtime profiling and adaptive compilation.</li></ul><p>The <strong>JVM Memory Model</strong> was also covered, explaining how threads interact safely through concepts such as <strong>atomicity, visibility, ordering</strong>, and the <strong>happens-before relationship</strong>.</p><p>In addition, the two most important runtime memory areas were studied:</p><ul><li><strong>Stack</strong>, used for method calls, local variables, and thread execution flow.</li><li><strong>Heap</strong>, used for object allocation and divided into Young Generation, Old Generation, and Metaspace.</li></ul><p>Understanding JVM internals is essential for Java developers because it helps improve:</p><ul><li>Application performance</li><li>Memory usage</li><li>Multithreading correctness</li><li>Error diagnosis</li><li>Efficient code design</li></ul><p>The JVM is much more than a program runner: it is a complete execution platform that combines memory management, concurrency control, security, and optimization to make Java reliable and powerful.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d76364b3e54c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Processes and Threads in Java: Model, Communication and Concurrency]]></title>
            <link>https://medium.com/@barbieri.santiago/diferencias-entre-procesos-y-hilos-en-java-comunicaci%C3%B3n-local-y-en-red-dcc95d4df3ec?source=rss-8a7f528dad2------2</link>
            <guid isPermaLink="false">https://medium.com/p/dcc95d4df3ec</guid>
            <category><![CDATA[concurrency]]></category>
            <category><![CDATA[java]]></category>
            <dc:creator><![CDATA[Santiago]]></dc:creator>
            <pubDate>Sun, 19 Apr 2026 21:36:19 GMT</pubDate>
            <atom:updated>2026-04-19T21:37:25.590Z</atom:updated>
            <content:encoded><![CDATA[<h3>Processes and threads: isolation and inter-process communication</h3><p>A process has its own independent memory space, which makes it more isolated and secure, but also heavier in terms of resources and communication. Threads, by contrast, run inside a process and share the same memory; they are lighter, but they need synchronization to avoid conflicts.<br>Pipes and sockets are used for communication between processes, while shared memory allows several processes to access the same memory region.<br>A pipe is a communication mechanism between processes (generally on the same machine), letting them exchange data in a simple way through the operating system. A socket, in contrast, is used for communication between processes that may be on different machines, across a network. So pipes are usually used for local communication between processes, while sockets are used for communication over a network.<br>Threads run inside the same process and share the same memory area, which makes them lighter and more resource-efficient, because they do not have to duplicate the memory space the way processes do. However, since they share memory, they need synchronization to prevent conflicts or inconsistent data. In short, they are faster and consume fewer resources, but they require a bit more care with synchronization.</p><p>Resources such as CPU time, memory, file handles, locks and network connections are key aspects of thread management. Threads usually compete for CPU time and, since they share data, synchronization must be handled with care.</p><h4>The life cycle of a thread in Java: from the new state to terminated</h4><p>A thread begins in the “new” state when it is created. Then, when the start() method is called, it moves to the “runnable” state, which means it is ready to run. In this state the thread may be running or ready to run, waiting for CPU. In Java there is no separate state called “running”; all of that is “runnable”, waiting for the scheduler to assign CPU time.</p><p>If it tries to access a synchronized resource that is busy, it can move to the “blocked” state. It can also enter waiting states such as “waiting” or “timed waiting” if it needs some event to occur or has to wait for a certain amount of time.</p><p>Finally, when it finishes executing or an error occurs, it moves to the “terminated” state.</p>
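<p>A minimal sketch of these transitions using Thread.getState(); the intermediate states printed may vary slightly with scheduling:</p><pre>class StateDemo {<br>  public static void main(String[] args) throws Exception {<br>    Thread t = new Thread(() -&gt; {<br>      try { Thread.sleep(100); }<br>      catch (InterruptedException e) { Thread.currentThread().interrupt(); }<br>    });<br>    System.out.println(t.getState()); // NEW<br>    t.start();<br>    System.out.println(t.getState()); // usually RUNNABLE<br>    Thread.sleep(50);<br>    System.out.println(t.getState()); // usually TIMED_WAITING (inside sleep)<br>    t.join();<br>    System.out.println(t.getState()); // TERMINATED<br>  }<br>}</pre>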
<h4>Creating and running threads with lambda expressions</h4><pre>class Example {<br>  public static void main(String[] args) throws Exception {<br>    Thread t = new Thread(() -&gt; {<br>      // work<br>      for (int i = 0; i &lt; 3; i++) {}<br>    });<br>    t.start();<br>    t.join(); // wait for completion<br>  }<br>}</pre><p>In the example, we create a new thread using a lambda expression and run a loop inside it. Then we start the thread with start() and, in the main method, we use join() to wait for it to finish before continuing. This ensures the thread completes before the program moves on.</p><h4>Daemon threads</h4><pre>class DaemonDemo {<br>  public static void main(String[] args) {<br>    Thread t = new Thread(() -&gt; { while (true) {} });<br>    t.setDaemon(true);<br>    t.start();<br>  }<br>}</pre><p>A daemon thread is a kind of thread that does not keep the Java Virtual Machine (JVM) alive on its own. That is, if the only threads left are daemons, the JVM shuts down. They are used for background tasks, such as cleanup work, timers, or processes that do not need to finish before the program ends. Be careful, though: they are not guaranteed to run to completion, so they are ideal for non-critical tasks.</p><p>A timer is basically a task that runs after a certain time interval, or repeatedly. For example, if you want an action to run every so often, such as cleaning a cache periodically, you would use a timer. Daemon threads are perfect for this kind of task, since they do not interfere with application shutdown.</p>
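<p>A minimal sketch of that idea: a ScheduledExecutorService whose worker thread is marked as a daemon, running an illustrative cleanup task once per second (the task and the timings are placeholders):</p><pre>import java.util.concurrent.*;<br><br>class CacheCleanerDemo {<br>  public static void main(String[] args) throws Exception {<br>    ScheduledExecutorService scheduler =<br>        Executors.newSingleThreadScheduledExecutor(r -&gt; {<br>          Thread t = new Thread(r);<br>          t.setDaemon(true); // does not keep the JVM alive<br>          return t;<br>        });<br>    scheduler.scheduleAtFixedRate(<br>        () -&gt; System.out.println(&quot;cleaning cache...&quot;), 1, 1, TimeUnit.SECONDS);<br>    Thread.sleep(3500); // when main ends, the daemon dies with the JVM<br>  }<br>}</pre>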
<h4>Thread pools (ExecutorService)</h4><pre>class PoolDemo {<br>  public static void main(String[] args) throws Exception {<br>    ExecutorService pool = Executors.newFixedThreadPool(4);<br>    Future&lt;Integer&gt; f = pool.submit(() -&gt; 40 + 2);<br><br>    int answer = f.get(); // blocks<br>    pool.shutdown();      // stop accepting new tasks<br>  }<br>}</pre><p>ExecutorService is an interface that lets us manage a group of threads more efficiently. Instead of creating one thread per task, we use a pool of threads that get reused. The idea is precisely to avoid the cost of creating a new thread for every task, which can be expensive. Instead, a pool is used: a set of threads that are kept alive and reused.</p><h4>Concurrent collections (when many threads access collections)</h4><p>Concurrent collections are useful when we work with multiple threads, because they let us avoid concurrency problems such as race conditions. Classes like ConcurrentHashMap, CopyOnWriteArrayList and others are designed to be safe in multithreaded environments.</p><pre>class ConcurrentMapDemo {<br>  static final ConcurrentHashMap&lt;String, Integer&gt; hits = new ConcurrentHashMap&lt;&gt;();<br><br>  static void record(String key) {<br>    hits.merge(key, 1, Integer::sum); // atomic update<br>  }<br>}</pre><p>The first parameter is the key we want to update or add in the map. The second parameter is the value to use if the key does not exist. And the third parameter is a function that defines how to combine the current value with the new one. In this case, the function simply adds one to the current value, or initializes it to one if the key does not exist. This way, every time this method is called, the value associated with that key is incremented by one safely and atomically.</p><h4>Synchronization &amp; locks</h4><p>Synchronization makes sure that several threads do not access the same resources at the same time, avoiding race conditions. When we speak of “mutual exclusion”, we mean that only one thread can access a shared resource at a time. “Visibility” ensures that the changes made by one thread are visible to the others. The “monitor lock” is the mechanism used with the synchronized keyword to enter and leave a critical section. The term “happens-before” refers to a consistency principle guaranteeing that the effects of one thread’s actions are visible to another thread’s subsequent actions.</p><pre>class Counter {<br>  private int value;<br>  public synchronized void inc() { value++; }<br>  public synchronized int get() { return value; }<br>}</pre><p>In this example, by using the synchronized modifier on the inc and get methods, only one thread can access those methods at a time. This prevents two threads from trying to modify or read the value simultaneously, avoiding a race condition. The volatile keyword is not necessary here, because synchronized guarantees both visibility and mutual exclusion. This way, value is always up to date and consistent.</p><h4>wait/notify (intrinsic condition waiting)</h4><pre>class OneSlotBuffer {<br>  private Integer slot = null;<br><br>  public synchronized void put(int x) throws InterruptedException {<br>    while (slot != null) wait();<br>    slot = x;<br>    notifyAll();<br>  }<br><br>  public synchronized int take() throws InterruptedException {<br>    while (slot == null) wait();<br>    int x = slot;<br>    slot = null;<br>    notifyAll();<br>    return x;<br>  }<br>}</pre><p>The main idea is that when a thread needs to wait for a certain condition to hold, it calls the wait method inside a synchronized block, and the thread that changes that condition calls notify() or notifyAll() to wake up the waiting threads. It is important that wait is always used inside a loop, because the condition could change for other reasons; this ensures that when the thread wakes up, the condition actually holds. The concept of a “spurious wakeup” refers to the fact that a thread can sometimes wake up without the condition actually having been met, which is why the loop is always used to re-check it.</p><h4>ReentrantLock (explicit lock)</h4><pre>class LockCounter {<br>  private final ReentrantLock lock = new ReentrantLock();<br>  private int value;<br><br>  void inc() {<br>    lock.lock();<br>    try { value++; }<br>    finally { lock.unlock(); }<br>  }<br>}</pre><p>ReentrantLock is an alternative to synchronized blocks and offers more advanced features. It is reentrant, which means the same thread can acquire the lock several times without blocking itself. It also provides tryLock, which attempts to acquire the lock without blocking the thread. There is a fairness option, which guarantees threads fair access, and it supports multiple conditions, that is, different wait conditions.<br>Having multiple conditions means that, within the same lock, you can create different condition objects that let threads wait for different conditions or events. For example, suppose several threads are waiting for a specific resource to become free. With multiple conditions, you can have each thread wait on its own condition, so they are not all woken at once, but only when the condition that concerns them is met.</p>
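<p>A minimal sketch of multiple conditions on a single lock, in the spirit of the bounded buffer shown in the java.util.concurrent.locks documentation (the buffer size and names are illustrative):</p><pre>import java.util.concurrent.locks.*;<br><br>class BoundedBuffer {<br>  private final ReentrantLock lock = new ReentrantLock();<br>  private final Condition notFull = lock.newCondition();<br>  private final Condition notEmpty = lock.newCondition();<br>  private final int[] items = new int[10];<br>  private int count, putIdx, takeIdx;<br><br>  void put(int x) throws InterruptedException {<br>    lock.lock();<br>    try {<br>      while (count == items.length) notFull.await(); // producers wait here<br>      items[putIdx] = x;<br>      putIdx = (putIdx + 1) % items.length;<br>      count++;<br>      notEmpty.signal(); // wake only consumers<br>    } finally { lock.unlock(); }<br>  }<br><br>  int take() throws InterruptedException {<br>    lock.lock();<br>    try {<br>      while (count == 0) notEmpty.await(); // consumers wait here<br>      int x = items[takeIdx];<br>      takeIdx = (takeIdx + 1) % items.length;<br>      count--;<br>      notFull.signal(); // wake only producers<br>      return x;<br>    } finally { lock.unlock(); }<br>  }<br>}</pre>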
<h4>Volatile</h4><pre>class StopFlag {<br>  private volatile boolean stop;<br>  void requestStop() { stop = true; }<br>  void runLoop() { while (!stop) { /* spin */ } }<br>}</pre><p>The volatile keyword is used to guarantee the visibility of variables between threads. That is, when one thread modifies a variable marked volatile, the change is immediately visible to the other threads. However, volatile does not guarantee the atomicity of compound operations, such as incrementing, which remains non-atomic. In the example, the stop variable is marked volatile so that when one thread sets it to true, the other thread spinning in the while loop sees it immediately and can exit the loop. But compound operations, such as count++, do not become atomic just because of volatile.</p><h4>Atomic variables</h4><p>AtomicInteger, AtomicLong, etc. provide lock-free atomic operations.</p><pre>class AtomicDemo {<br>  private final AtomicInteger count = new AtomicInteger(0);<br><br>  void inc() { count.incrementAndGet(); }<br>  int get() { return count.get(); }<br>}</pre><p>Atomic variables, such as AtomicInteger, are very useful because they allow atomic operations without synchronization blocks. This means the increment and the read of the value happen safely, even in multithreaded environments. In the example, the inc method increments count in a single atomic operation, which avoids concurrency problems, and the get method simply returns the current value safely.</p><h4>CountDownLatch</h4><pre>class LatchDemo {<br>  public static void main(String[] args) throws Exception {<br>    CountDownLatch latch = new CountDownLatch(2);<br><br>    new Thread(() -&gt; { latch.countDown(); }).start();<br>    new Thread(() -&gt; { latch.countDown(); }).start();<br><br>    latch.await(); // waits for 2 countDown calls<br>  }<br>}</pre><p>CountDownLatch is a tool for synchronizing threads: it lets one or more threads wait until a certain number of events have occurred. In the example, the CountDownLatch is initialized with a count of 2, which means the main thread will wait until two events have completed. The threads that call latch.countDown() decrease that count, and the main thread, by calling latch.await(), waits until the count reaches zero. This ensures that the main thread does not move on until the two secondary threads have finished.</p><h4>Interruption (how to stop threads correctly)</h4><pre>class InterruptDemo {<br>  public static void main(String[] args) throws Exception {<br>    Thread t = new Thread(() -&gt; {<br>      try {<br>        while (!Thread.currentThread().isInterrupted()) {<br>          Thread.sleep(50);<br>        }<br>      } catch (InterruptedException e) {<br>        Thread.currentThread().interrupt(); // restore<br>      }<br>    });<br>    t.start();<br>    t.interrupt();<br>    t.join();<br>  }<br>}</pre><p>Thread interruption is a way of telling a thread that it should stop running, but it is not a forced stop. When a thread is interrupted, it must periodically check whether it has been interrupted and decide how to act.</p><p>If the thread is blocked in an operation such as sleep() or wait(), being interrupted makes it throw an InterruptedException. This exception clears the interrupt flag, so that signal is lost.</p><p>Good practice is that, when the exception is caught, the interrupt flag is restored if the thread is not going to terminate, so that it can leave its blocked state cleanly.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Java Concurrency. Threads, Synchronization & Patterns (Quick Guide)]]></title>
            <link>https://medium.com/@barbieri.santiago/java-concurrency-threads-synchronization-patterns-quick-guide-56d66e6a3940?source=rss-8a7f528dad2------2</link>
            <guid isPermaLink="false">https://medium.com/p/56d66e6a3940</guid>
            <category><![CDATA[java]]></category>
            <category><![CDATA[threading-in-java]]></category>
            <dc:creator><![CDATA[Santiago]]></dc:creator>
            <pubDate>Sun, 19 Apr 2026 21:21:22 GMT</pubDate>
            <atom:updated>2026-04-19T21:21:22.297Z</atom:updated>
            <content:encoded><![CDATA[<h3>Process vs Thread vs Resources</h3><p>Process: separate memory space; heavier; communication via OS (pipes/sockets/shared memory).<br>Thread: runs inside a process; shares heap with other threads; cheaper; needs synchronization.<br>Resources: CPU time, memory, file handles, locks, network connections. Threads mostly compete for CPU + shared data.</p><h4>Thread lifecycle</h4><p>NEW: created, not started.<br>RUNNABLE: eligible to run (may be running or waiting for CPU).<br>BLOCKED: waiting to enter a synchronized block/method (monitor lock).<br>WAITING / TIMED_WAITING: waiting (wait(), join(), park(), sleep() with time).<br>TERMINATED: finished run() (or crashed).</p><h4>Creating and running threads</h4><pre>class Example {<br>  public static void main(String[] args) throws Exception {<br>    Thread t = new Thread(() -&gt; {<br>      // work<br>      for (int i = 0; i &lt; 3; i++) {}<br>    });<br>    t.start();<br>    t.join(); // wait for completion<br>  }<br>}</pre><h4>Daemon threads</h4><p>A daemon thread does not keep the JVM alive; when only daemons remain, the JVM can exit.<br>Use for background helpers (timers/cleanup). Avoid for tasks that must finish (no guarantees at shutdown).</p><pre>class DaemonDemo {<br>  public static void main(String[] args) {<br>    Thread t = new Thread(() -&gt; { while (true) {} });<br>    t.setDaemon(true);<br>    t.start();<br>  }<br>}</pre><h4>Thread pools (ExecutorService)</h4><p>Avoid “one thread per task”. Use pools for bounded concurrency, reuse, and queueing.</p><pre>class PoolDemo {<br>  public static void main(String[] args) throws Exception {<br>    ExecutorService pool = Executors.newFixedThreadPool(4);<br>    Future&lt;Integer&gt; f = pool.submit(() -&gt; 40 + 2);<br><br>    int answer = f.get(); // blocks<br>    pool.shutdown();      // stop accepting new tasks<br>  }<br>}</pre><h4>Concurrent collections (when many threads access collections)</h4><p>Use ConcurrentHashMap, CopyOnWriteArrayList, ConcurrentLinkedQueue, BlockingQueue, etc.<br>They provide thread-safe operations with better scalability than synchronizing a whole HashMap.</p><pre>class ConcurrentMapDemo {<br>  static final ConcurrentHashMap&lt;String, Integer&gt; hits = new ConcurrentHashMap&lt;&gt;();<br><br>  static void record(String key) {<br>    hits.merge(key, 1, Integer::sum); // atomic update<br>  }<br>}</pre><h4>Synchronization &amp; locks</h4><p>synchronized (monitor lock)<br>Ensures mutual exclusion and visibility (enter/exit creates happens-before edges).</p><pre>class Counter {<br>  private int value;<br>  public synchronized void inc() { value++; }<br>  public synchronized int get() { return value; }<br>}</pre><h4>wait/notify (intrinsic condition waiting)</h4><p>Must be used inside synchronized on the same object.<br>Always wait in a loop (spurious wakeups).</p><pre>class OneSlotBuffer {<br>  private Integer slot = null;<br><br>  public synchronized void put(int x) throws InterruptedException {<br>    while (slot != null) wait();<br>    slot = x;<br>    notifyAll();<br>  }<br><br>  public synchronized int take() throws InterruptedException {<br>    while (slot == null) wait();<br>    int x = slot;<br>    slot = null;<br>    notifyAll();<br>    return x;<br>  }<br>}</pre><h4>ReentrantLock (explicit lock)</h4><p>More features: tryLock, fairness option, multiple conditions.</p><pre>class LockCounter {<br>  private final ReentrantLock lock = new ReentrantLock();<br>  private int value;<br><br>  void inc() {<br>    
lock.lock();<br>    try { value++; }<br>    finally { lock.unlock(); }<br>  }<br>}</pre><h4>Volatile</h4><p>volatile gives visibility (reads see the latest write) and orders operations.<br>It does not make compound actions atomic (e.g., count++ is still not atomic).</p><pre>class StopFlag {<br>  private volatile boolean stop;<br>  void requestStop() { stop = true; }<br>  void runLoop() { while (!stop) { /* spin */ } }<br>}</pre><h4>Atomic variables</h4><p>AtomicInteger, AtomicLong, etc. provide lock-free atomic operations.</p><pre>class AtomicDemo {<br>  private final AtomicInteger count = new AtomicInteger(0);<br><br>  void inc() { count.incrementAndGet(); }<br>  int get() { return count.get(); }<br>}</pre><h4>CountDownLatch</h4><p>One-time gate: wait until N events happen.</p><pre>class LatchDemo {<br>  public static void main(String[] args) throws Exception {<br>    CountDownLatch latch = new CountDownLatch(2);<br><br>    new Thread(() -&gt; { latch.countDown(); }).start();<br>    new Thread(() -&gt; { latch.countDown(); }).start();<br><br>    latch.await(); // waits for 2 countDown calls<br>  }<br>}</pre><h4>Interruption (how to stop threads correctly)</h4><p>interrupt() is a request, not a force-stop.<br>If a thread is blocked in sleep/wait/join, it throws InterruptedException.<br>Good practice: on catch, restore the interrupt flag or exit.</p><pre>class InterruptDemo {<br>  public static void main(String[] args) throws Exception {<br>    Thread t = new Thread(() -&gt; {<br>      try {<br>        while (!Thread.currentThread().isInterrupted()) {<br>          Thread.sleep(50);<br>        }<br>      } catch (InterruptedException e) {<br>        Thread.currentThread().interrupt(); // restore<br>      }<br>    });<br>    t.start();<br>    t.interrupt();<br>    t.join();<br>  }<br>}</pre><h4>Thread-related exceptions</h4><p>IllegalThreadStateException: calling start() twice.<br>InterruptedException: blocking method interrupted (wait/sleep/join/await).<br>RejectedExecutionException: task submitted to a shutdown executor.<br>ExecutionException: a Future task threw an exception (wrapped).</p>
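<p>A quick sketch of the first case, starting the same Thread object twice:</p><pre>class StartTwiceDemo {<br>  public static void main(String[] args) throws Exception {<br>    Thread t = new Thread(() -&gt; {});<br>    t.start();<br>    t.join();<br>    try {<br>      t.start(); // a Thread object can only be started once<br>    } catch (IllegalThreadStateException e) {<br>      System.out.println(&quot;cannot restart a thread: &quot; + e);<br>    }<br>  }<br>}</pre>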
<h4>Concurrency patterns</h4><p>Producer–Consumer: use BlockingQueue to connect stages.<br>Immutable objects: share freely without locks.<br>Thread confinement: keep state on one thread (no sharing).<br>Guarded suspension: wait until a condition becomes true (while (!cond) wait()).<br>Fork/Join: split work; use ForkJoinPool for CPU-bound parallelism.</p><pre>class ProducerConsumer {<br>  private final BlockingQueue&lt;Integer&gt; q = new ArrayBlockingQueue&lt;&gt;(10);<br><br>  void start() {<br>    new Thread(() -&gt; { // producer<br>      try { for (int i = 0; i &lt; 100; i++) q.put(i); }<br>      catch (InterruptedException e) { Thread.currentThread().interrupt(); }<br>    }).start();<br><br>    new Thread(() -&gt; { // consumer<br>      try { while (true) q.take(); }<br>      catch (InterruptedException e) { Thread.currentThread().interrupt(); }<br>    }).start();<br>  }<br>}</pre><h3>Overall summary</h3><p>Java concurrency deals with multiple threads executing within a process and competing for shared resources such as CPU time and memory. Threads are lightweight compared to processes, but introduce complexity due to shared state and the need for coordination.</p><p>A thread goes through a defined lifecycle (NEW, RUNNABLE, BLOCKED, WAITING/TIMED_WAITING, TERMINATED) and can be created and managed directly or through higher-level abstractions like thread pools (ExecutorService), which handle execution, reuse, and task scheduling more efficiently.</p><p>Concurrency requires controlling access to shared data. This can be achieved using:</p><p>synchronized blocks and intrinsic locks for mutual exclusion and visibility<br>explicit locks (ReentrantLock) for more advanced control<br>volatile variables for visibility guarantees<br>atomic variables for lock-free operations</p><h4>Thread communication and coordination are handled through:</h4><p>wait/notify mechanisms inside synchronized contexts<br>higher-level utilities such as CountDownLatch and BlockingQueue<br>thread interruption, which provides a cooperative way to stop execution</p><p>Java also provides concurrent collections (e.g., ConcurrentHashMap) that allow safe and scalable access to shared data without full synchronization.</p><p>Common issues in concurrent programs include race conditions, improper synchronization, and incorrect handling of thread lifecycle or interruption. Exceptions such as InterruptedException, IllegalThreadStateException, and ExecutionException are part of normal concurrency control flow.</p><p>Concurrency patterns like producer–consumer, thread confinement, immutability, and task parallelism (Fork/Join) help structure solutions and reduce complexity.</p><p>In all cases, the objective is to ensure correct execution, safe data sharing, and proper coordination between threads while making efficient use of system resources.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Java OOP: Domain-Problem Mapping]]></title>
            <link>https://medium.com/@barbieri.santiago/java-oop-domain-problem-mapping-73d4a19531d7?source=rss-8a7f528dad2------2</link>
            <guid isPermaLink="false">https://medium.com/p/73d4a19531d7</guid>
            <category><![CDATA[domain-driven-design]]></category>
            <category><![CDATA[object-oriented]]></category>
            <dc:creator><![CDATA[Santiago]]></dc:creator>
            <pubDate>Sat, 18 Apr 2026 19:21:39 GMT</pubDate>
            <atom:updated>2026-04-18T22:48:18.701Z</atom:updated>
            <content:encoded><![CDATA[<p>Object-oriented programming (OOP) lets us model a real-world domain using classes, objects and relationships.</p><p><strong>Key steps:<br>Identify the domain entities:</strong> Which concepts matter (for example, Order, Customer, Stock).<br><strong>Define the relationships between entities:</strong> How they interact with each other.<br><strong>Establish the business rules:</strong> Which behaviors and constraints exist (for example, invariants that must always hold).</p><p><strong>Key concepts:<br>Ubiquitous language:</strong> Terms shared with all stakeholders.<br><strong>Entities and value objects:</strong> A clear definition of each element of the domain.<br><strong>Domain services and aggregates:</strong> How rules are grouped and managed.<br><strong>Business rules:</strong> Invariant conditions that must always hold.<br><strong>Use cases:</strong> System operations (for example, pay, cancel).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-LhkDbrPTEdGCsbP0VPW0Q.png" /></figure><h4>Object-Oriented Concepts</h4><p><strong>Abstraction:</strong> The idea is to focus on what really matters; we model behavior and responsibilities.<br><strong>Encapsulation: </strong>The internal state of objects is protected, and changes are only allowed through methods that respect the defined rules.<br><strong>Inheritance: </strong>Allows code reuse through specialization, but with care, since it can create coupling. In general, composition is preferable.<br><strong>Polymorphism: </strong>We program against interfaces, which lets us swap implementations without affecting the code that uses them.<br><strong>Composition: </strong>Consists of building objects out of other objects, which is usually more flexible and allows greater reuse.</p><h4>Design Principles</h4><p><strong>Single Responsibility Principle (SRP): </strong>A class should have a single reason to change. This means each class must have a well-defined responsibility and not mix in other responsibilities.<br><strong>Open/Closed Principle (OCP): </strong>Software entities (such as classes or modules) should be open for extension but closed for modification. This allows adding new functionality without altering existing code.<br><strong>Dependency Inversion Principle (DIP): </strong>We should depend on abstractions, not on concrete implementations. This makes it easy to swap implementations without affecting the rest of the system.</p><h4><strong>Classes and Objects:</strong></h4><p><strong>Class:</strong> Defines structure and behavior. It is like a template.<br><strong>Object:</strong> An instance of a class, that is, a concrete entity with specific values.<strong><br>Superclass and subclass:</strong> The subclass inherits the state and behavior of the superclass.<br><strong>The “is-a” relationship:</strong> The subclass is a specialized type of the superclass.<br><strong>Method overriding: </strong>A subclass can override a superclass method to modify its behavior.<br><strong>Avoid deep hierarchies: </strong>It is advisable to keep hierarchies simple and clear so the code is easier to maintain.</p><h4>Abstract and Final Classes</h4><p><strong>Abstract classes: Cannot be instantiated:</strong> An abstract class serves as a base for other classes, but an object cannot be created directly from it.<br><strong>Can have state:</strong> Abstract classes can define attributes and concrete methods.<br><strong>Abstract methods:</strong> They can declare methods without an implementation that subclasses must implement.</p><p><strong>Final classes: Cannot be extended:</strong> A final class cannot be inherited from.<br><strong>Final methods:</strong> A final method cannot be overridden by subclasses.<br><strong>Final variables:</strong> A final variable cannot be reassigned after initialization.</p><p><strong>Note:<br></strong>Although an object referenced by a final variable cannot be reassigned, its internal attributes may still be mutable.</p><h4>Interfaces</h4><p>An interface defines a contract that classes implement, specifying which methods must be present.<br><strong>Default methods: </strong>Since Java 8, interfaces can have methods with an implementation, which makes it easier to evolve interfaces without breaking the classes that implement them.</p><h4>Casting in Java: Upcast and Downcast</h4><p><strong>Upcast: </strong>Assigning an instance of a subclass to a variable of its superclass.<br><strong>Implicitly safe:</strong> No explicit cast is needed, because the subclass “is a” type of the superclass.<br><strong>Example:</strong> If we have a class A and a class B that extends A, we can assign a B object to a variable of type A without any problem.</p><p><strong>Downcast: </strong>Assigning a variable of a superclass type to a subclass.<br><strong>Not implicitly safe:</strong> It can fail at runtime if the object is not actually of that type.<br><strong>Using instanceof:</strong> Before performing a downcast, it is advisable to use the instanceof operator to make sure the object is of the expected type.<br><strong>Pattern matching (since Java 16):</strong> Simplifies the check and the cast into a single line.</p><p><strong>Care with downcasts: </strong>If the downcast is incorrect, a ClassCastException is thrown at runtime.</p>
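<p>A minimal sketch of both casts (the Animal/Dog names are illustrative):</p><pre>class Animal { void makeSound() {} }<br>class Dog extends Animal { void fetch() {} }<br><br>class CastDemo {<br>  public static void main(String[] args) {<br>    Animal a = new Dog();      // upcast: implicit and always safe<br>    if (a instanceof Dog d) {  // pattern matching (Java 16+)<br>      d.fetch();               // safe downcast via the binding variable<br>    }<br><br>    Animal plain = new Animal();<br>    // Dog bad = (Dog) plain;  // compiles, but throws ClassCastException at runtime<br>  }<br>}</pre>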
<h4>Inner Classes in Java</h4><p><strong>Inner class (non-static inner class):<br>Reference to the outer instance:</strong> Non-static inner classes have direct access to the members of the outer class, even its private members.<br><strong>Needs an instance of the outer class:</strong> To create an instance of a non-static inner class, you first need an instance of the outer class.</p><p><strong>Static nested class:<br>No access to the outer instance:</strong> Unlike non-static inner classes, static nested classes do not have implicit access to the members of the outer class.<br><strong>Behaves like a normal class:</strong> It behaves like an independent class, although it still lives within the context of the outer class.</p><p><strong>Anonymous class:<br></strong>Used to create a class on the spot, generally to implement an interface or an abstract class.<br><strong>No name:</strong> It has no name and is declared right where it is needed.<br><strong>Common use:</strong> Before lambdas, they were widely used to handle events or create quick implementations.</p>
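<p>A minimal sketch of the three variants (names are illustrative):</p><pre>class Outer {<br>  private int value = 42;<br><br>  class Inner {                    // non-static: tied to an Outer instance<br>    int read() { return value; }   // direct access to outer members<br>  }<br><br>  static class Nested {            // static: no outer instance required<br>    int ten() { return 10; }<br>  }<br><br>  public static void main(String[] args) {<br>    Outer outer = new Outer();<br>    Outer.Inner inner = outer.new Inner();     // needs the outer instance<br>    Outer.Nested nested = new Outer.Nested();  // created like a normal class<br><br>    Runnable anon = new Runnable() {           // anonymous class<br>      @Override public void run() { System.out.println(&quot;inline impl&quot;); }<br>    };<br>    anon.run();<br>    System.out.println(inner.read() + nested.ten());<br>  }<br>}</pre>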
<h4>Domain Mapping: Identifying Entities and Use Cases</h4><p>First, we extract the nouns and verbs from the requirement statements. Nouns are usually candidates to become entities, for example: the order, the customer, the product. Verbs, on the other hand, are the actions or use cases, such as paying an order, cancelling, or reserving stock.<br>Then we classify these elements: entities are those that have an identity and a life cycle, like the order. Value objects are immutable and compared by value, like money. And domain services contain logic that does not belong to a single entity, like a pricing service.<br>With this method we get a very clear and organized structure of the domain.</p><p>Invariants are business rules that must always remain true. They are the conditions that guarantee the consistency and validity of the data.<br>For example, the rule that a paid order cannot be cancelled is an invariant because, once the order is paid, that condition must not change. Likewise, the rule that a product&#39;s quantity must be greater than zero is an invariant.<br>In short, invariants are the rules that ensure the system is always in a valid state.</p>
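<p>A minimal sketch of those two invariants enforced inside the entity itself (a simplified, illustrative Order):</p><pre>class Order {<br>  enum Status { DRAFT, PAID, CANCELLED }<br>  private Status status = Status.DRAFT;<br><br>  void addLine(int quantity) {<br>    if (quantity &lt;= 0) throw new IllegalArgumentException(&quot;quantity must be &gt; 0&quot;);<br>    // add the line...<br>  }<br><br>  void cancel() {<br>    if (status == Status.PAID)<br>      throw new IllegalStateException(&quot;a paid order cannot be cancelled&quot;);<br>    status = Status.CANCELLED;<br>  }<br><br>  void pay() {<br>    if (status == Status.CANCELLED)<br>      throw new IllegalStateException(&quot;a cancelled order cannot be paid&quot;);<br>    status = Status.PAID;<br>  }<br>}</pre>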
<h4>Aggregates in Domain Design</h4><p>An aggregate is a group of objects treated as a single unit for persistence and for business rules.<br>Each aggregate has a root entity, which is the only one exposed to the outside and through which the other elements of the aggregate are accessed. For example, in the case of an order, the root entity is the order itself.<br>The aggregate is responsible for maintaining its own internal consistency, that is, making sure all business rules hold inside it; it must never be left in an inconsistent state.<br>In the case of an order, the aggregate root is the order itself, and inside it there may be the product line items, the quantities, and so on, but everything is managed through the order.<br>In this way, aggregates help preserve the integrity and coherence of the model.</p><h4>Layer Separation</h4><p><strong>Domain Layer<br>Model and rules:</strong> Here we define the entities, aggregates, value objects and business rules. It is the heart of the domain, where the logic that represents the business rules lives.<br><strong>Application Layer<br>Use-case orchestration:</strong> Here the operations to be performed are coordinated, but without containing business logic. Its role is to orchestrate the workflow and the use cases.<br><strong>Infrastructure Layer<br>Persistence and communication:</strong> Handles data persistence, such as databases, and communication, such as HTTP or messaging.</p><h4>Summary of the Domain Model</h4><p>Object-oriented programming applied to domain-problem mapping makes it possible to build software models that represent a real domain in a clear, structured way.<br>Starting from the identification of entities, relationships and business rules, a model is defined in which each component has a specific responsibility.<br>At the domain-design level, identifying entities and use cases, together with the use of invariants, aggregates and domain services, ensures consistency, clarity and a correct distribution of responsibilities.<br>Finally, the separation into layers (domain, application, infrastructure) organizes the system so that the domain concentrates the business logic, the application coordinates the use cases, and the infrastructure solves the technical concerns.<br>In this way, the system ends up coherent, maintainable and aligned with the real problem it aims to solve.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ue0_AQWyywsAF1UAhQcVTg.png" /></figure>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Java OOP + Domain Problem Mapping]]></title>
            <link>https://medium.com/@barbieri.santiago/java-oop-domain-problem-mapping-399d3dfa3f2f?source=rss-8a7f528dad2------2</link>
            <guid isPermaLink="false">https://medium.com/p/399d3dfa3f2f</guid>
            <category><![CDATA[object-oriented]]></category>
            <category><![CDATA[domain-driven-design]]></category>
            <dc:creator><![CDATA[Santiago]]></dc:creator>
            <pubDate>Sat, 18 Apr 2026 14:05:33 GMT</pubDate>
            <atom:updated>2026-04-18T23:06:27.462Z</atom:updated>
            <content:encoded><![CDATA[<p>This note summarizes core Object-Oriented Programming (OOP) concepts and how to apply them to model a real business domain and translate it into classes, relationships and rules.</p><h4>Goal of “domain-problem mapping”</h4><p>Turn a real-world problem (domain) into a software model:<br>Ubiquitous language: shared terms with stakeholders (Order, Customer, Money, Stock).<br>Concepts → types: entities, value objects, domain services, aggregates.<br>Rules → invariants: conditions that must always hold.<br>Use cases → application layer: system operations (place order, pay, cancel).</p><h4>Essential OOP concepts</h4><p>Abstraction: keep what matters; model behavior + responsibilities, not everything.<br>Encapsulation: protect state; only allow changes through methods that enforce rules.<br>Inheritance: reuse by specialization (“is-a”). Use carefully; composition is often better.<br>Polymorphism: program to interfaces; swap implementations (Strategy, PaymentMethod).<br>Composition: build objects from other objects (“has-a”), typically more flexible.</p><h4>Common design principles you’ll see in good solutions:</h4><p>SRP (Single Responsibility Principle): one primary reason for a class to change.<br>OCP (Open/Closed Principle): open for extension, closed for modification.<br>DIP (Dependency Inversion Principle): depend on abstractions, not concretions.</p><h4>Additional Java OOP concepts</h4><p>Classes and objects: a class defines structure/behavior; an object is an instance.<br>Superclass / Subclass: inheritance via extends (“is-a” relationship).<br>Inheritance: reuse/extend behavior; avoid deep hierarchies unless they clearly help.<br>super: reference to the superclass (call constructors/methods):</p><pre>class Base { Base(int x) {} void hello() {} }<br>class Child extends Base {<br>  Child() { super(1); }<br>  @Override void hello() { super.hello(); }<br>}</pre><p>Abstract class: cannot be instantiated; can hold state + abstract/concrete methods.<br>final class: cannot be extended.<br>final method: cannot be overridden.<br>final variable: cannot be reassigned (note: referenced objects may still be mutable).<br>Interfaces (capabilities): model capabilities/contracts via implements:</p><pre>interface Payable {<br>  void pay();<br>  default boolean isPayable() { return true; } // default method<br>}<br>class CardPayment implements Payable {<br>  @Override public void pay() {}<br>}</pre><p>Default method (default): an interface method with an implementation (useful for API evolution).<br>Overloading: same name, different parameter list (signature) in the same class.<br>Overriding: a subclass redefines a superclass/interface method (@Override).</p><pre>class Printer {<br>  void print(String s) {}      // overloading<br>  void print(int n) {}         // overloading<br>}<br>class A { void run() {} }<br>class B extends A { @Override void run() {} } // overriding</pre><p>Casting:<br>Upcast: safe, implicit (B → A).<br>Downcast: explicit, can fail at runtime (use instanceof).</p><pre>A a = new B();          // upcast<br>if (a instanceof B b) { // pattern matching (Java 16+)<br>  b.run();              // safe downcast<br>}</pre><p>Inner classes:<br>Non-static inner: holds a reference to the outer instance.<br>Static nested: behaves more like a regular nested type.<br>Anonymous classes: inline implementations (common pre-lambdas):</p><pre>Runnable r = new Runnable() {<br>  @Override public void run() {}<br>};</pre><p>Variable scopes:<br>Local: inside a method/block.<br>Instance: non-static fields (per object).<br>Class (static): shared across instances.<br>Access: private, protected, package-private, public.<br>Mutable vs immutable:<br>String is immutable; StringBuilder is mutable.<br>Collections are often mutable (ArrayList) unless wrapped/created immutable.<br>Cloning: clone() is often tricky (shallow copies, awkward contract).<br>Prefer copy constructors, factories, or explicit copying.</p><pre>class User {<br>  final String name;<br>  User(String name) { this.name = name; }<br>  User(User other) { this.name = other.name; } // copy constructor<br>}</pre>
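<p>A tiny sketch of the mutable vs immutable point above (String vs StringBuilder):</p><pre>class MutabilityDemo {<br>  public static void main(String[] args) {<br>    String s = &quot;a&quot;;<br>    s.concat(&quot;b&quot;);            // returns a new String; s itself is unchanged<br>    System.out.println(s);    // prints &quot;a&quot;<br><br>    StringBuilder sb = new StringBuilder(&quot;a&quot;);<br>    sb.append(&quot;b&quot;);           // mutates in place<br>    System.out.println(sb);   // prints &quot;ab&quot;<br>  }<br>}</pre>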
<h4>Quick mapping workflow</h4><p>Nouns → class candidates: Order, Customer, Product, Money.<br>Verbs → methods/use cases: placeOrder, pay, cancel, reserveStock.<br>Classify types:<br>Entity: has identity + lifecycle (Order).<br>Value Object: immutable, compared by value (Money).<br>Domain Service: logic that doesn’t naturally belong to a single entity (PricingService).<br><strong>Invariants (hard business rules):</strong><br>“A paid order cannot be cancelled.”<br>“Quantity must be &gt; 0.”<br><strong>Aggregates:</strong><br>An aggregate has a root (e.g. Order) that protects internal consistency.<br><strong>Separate layers:</strong><br>Domain: model + rules.<br>Application: orchestrates use cases.<br>Infrastructure: persistence, HTTP, messaging.</p><p>Domain example: Orders<br>Value Object: Money (immutable)</p><pre>public record Money(double amount, String currency) {<br>  public Money {<br>    if (amount &lt; 0) throw new IllegalArgumentException(&quot;amount &lt; 0&quot;);<br>    if (currency == null) throw new IllegalArgumentException(&quot;currency required&quot;);<br>  }<br>  Money add(Money o) {<br>    if (!currency.equals(o.currency)) throw new IllegalArgumentException(&quot;currency mismatch&quot;);<br>    return new Money(amount + o.amount, currency);<br>  }<br>}</pre><p>Immutability (Money is a value object) prevents “half-valid” states.<br>Validation in the constructor enforces invariants early.<br>Entity + Aggregate Root: Order</p><pre>public class Order {<br>  enum Status { DRAFT, PAID, CANCELLED }<br>  private Status status = Status.DRAFT;<br><br>  void addLine(int qty) {<br>    if (status != Status.DRAFT) throw new IllegalStateException(&quot;only DRAFT&quot;);<br>    if (qty &lt;= 0) throw new IllegalArgumentException(&quot;qty &lt;= 0&quot;);<br>  }<br>  void pay()    { if (status == Status.CANCELLED) throw new IllegalStateException(); status = Status.PAID; }<br>  void cancel() { if (status == Status.PAID)      throw new IllegalStateException(); status = Status.CANCELLED; }<br>}</pre><p>Encapsulation: state changes only via rule-checking methods.<br>Invariants: enforced in addLine, cancel.<br>Aggregate root: Order controls consistency of lines and status.</p><h4>Application layer + REST</h4><pre>@Service<br>class PlaceOrderService {<br>  UUID place(PlaceOrderCommand cmd) { return UUID.randomUUID(); } // build+save omitted<br>}<br><br>public record PlaceOrderCommand(UUID customerId, List&lt;Line&gt; lines) {<br>    public record Line(UUID productId, int quantity, Money unitPrice) {}<br>}<br><br>public interface OrderRepository {<br>    void save(Order order);<br>    Optional&lt;Order&gt; findById(UUID id);<br>}<br><br>@RestController<br>@RequestMapping(&quot;/orders&quot;)<br>public class OrderController {<br>  private final PlaceOrderService placeOrderService;<br>  public OrderController(PlaceOrderService s) { 
this.placeOrderService = s; }<br><br>  @PostMapping<br>  ResponseEntity&lt;Void&gt; place(@RequestBody PlaceOrderCommand body) {<br>    UUID id = placeOrderService.place(body);<br>    return ResponseEntity.created(URI.create(&quot;/orders/&quot; + id)).build();<br>  }<br>}</pre><p>The domain should not depend on Spring.<br>Spring stays in API/application/infrastructure.</p><h4>Common mapping mistakes</h4><p>Anemic domain model: entities are just getters/setters; rules live elsewhere.<br>Inheritance by default: using inheritance where composition fits better.<br>Validation only in controllers: invariants must live in the domain too.<br>DB-driven modeling: modeling tables first instead of business rules and behavior.</p><h3>Final thoughts: From OOP to a meaningful domain model</h3><p>At first glance, Object-Oriented Programming concepts (abstraction, encapsulation, inheritance, polymorphism) may seem like isolated technical tools. However, their real value appears when they are used together to model a real-world problem with clarity and precision.</p><h4>Domain-problem mapping is where everything connects:</h4><p>OOP gives structure → classes, interfaces, and relationships.<br>Domain modeling gives meaning → entities, value objects, and business rules.<br>Application layers give direction → use cases and system behavior.</p><h4>When applied correctly, this approach leads to software that is:</h4><p>Expressive: the code speaks the same language as the business (ubiquitous language).<br>Robust: invariants protect the system from invalid states.<br>Flexible: changes in requirements can be handled through well-defined abstractions.<br>Maintainable: responsibilities are clear and isolated.</p><h4>The key shift is this:</h4><p>We are not just writing code, we are modeling reality through software.</p><h4>This means:</h4><p>Classes are not just data holders, but guardians of rules.<br>Methods are not just actions, but business behaviors.<br>Design is not accidental, but intentional and aligned with the domain.</p><p>Finally, frameworks like Spring Boot are only tools to expose and run our system (REST APIs, persistence, etc.), but they should never define the core of our design. The domain remains independent, clean, and focused on business logic.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=399d3dfa3f2f" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Quality Assurance]]></title>
            <link>https://medium.com/@barbieri.santiago/quality-assurance-1d895a2110de?source=rss-8a7f528dad2------2</link>
            <guid isPermaLink="false">https://medium.com/p/1d895a2110de</guid>
            <category><![CDATA[quality-assurance]]></category>
            <category><![CDATA[solid-principles]]></category>
            <dc:creator><![CDATA[Santiago]]></dc:creator>
            <pubDate>Sun, 12 Apr 2026 19:41:42 GMT</pubDate>
            <atom:updated>2026-04-12T19:41:42.416Z</atom:updated>
            <content:encoded><![CDATA[<p>Quality Assurance is the process of ensuring software quality throughout its entire lifecycle.</p><p>🟦 Technical Testing<br>Technical testing focuses on validating the internal behavior of the system.<br>It includes automated testing, performance testing, and security testing.</p><p>🟦 Unit Test<br>A Unit Test is a test that validates a minimal unit of code in isolation.<br>In Java, it typically tests a method or a class.<br>It runs without depending on databases, APIs, or other external systems.<br>Tools such as JUnit and Mockito are commonly used.</p><p>🟦 Black Box Testing<br>Black Box Testing validates the system without knowledge of its internal implementation.<br>It focuses on inputs and outputs.<br>It simulates user behavior.<br>It does not require access to the code.</p><p>🟦 White Box Testing<br>White Box Testing validates the system with knowledge of the internal code.<br>It focuses on logic, structures, and flows.<br>It allows covering specific code paths and scenarios.</p><p>🟦 Objectives of Black Box and White Box Testing<br>The objective of Black Box Testing is to validate functionality from the user’s perspective.<br>The objective of White Box Testing is to ensure correct internal implementation.</p><p>Black Box Testing does not require knowledge of the code, while White Box Testing does.<br>Black Box focuses on external behavior, while White Box focuses on internal logic.<br>Black Box is closer to the user, while White Box is closer to the developer.</p><p>🟦 Integration Test<br>An Integration Test is a test that validates the interaction between multiple components of the system.<br>Unlike Unit Testing, it does not test a single isolated unit, but rather how components work together.</p><p>🟦 Differences Between Integration Test and Unit Test<br>A Unit Test validates an isolated unit, while an Integration Test validates multiple components working together.<br>A Unit Test uses mocks to isolate dependencies, while an Integration Test uses real implementations.<br>A Unit Test detects errors in individual logic, while an Integration Test detects errors in interactions between modules.</p><p>🟦 SoapUI<br>SoapUI is a tool for testing web services.<br>It allows testing both SOAP and REST APIs.<br>It is useful for functional and integration testing.</p><p>🟦 Postman<br>Postman is a tool for testing REST APIs.<br>It allows sending HTTP requests and analyzing responses.</p><p>🟦 JMeter<br>Apache JMeter is a performance testing tool.<br>It allows simulating multiple concurrent users.<br>It is used for load and stress testing.<br>It helps detect system bottlenecks.</p><p>🟦 Mocking<br>Mocking is a technique that simulates the behavior of external dependencies.<br>It allows isolating the unit being tested.</p><p>🟦 Stubbing<br>Stubbing consists of defining predictable responses for specific methods.<br>It is used to control the behavior of dependencies.<br>It does not validate interactions; it only returns predefined data.</p><p>🟦 Spying<br>Spying allows observing the real behavior of an object.<br>It is useful when you want to monitor without fully replacing the object.</p><p>🟦 Difference Between Mocking, Spying and Stubbing<br>Mocking fully simulates an object and allows verifying interactions.<br>Stubbing only defines responses without validating method calls.<br>Spying uses real objects but allows observing their behavior.</p><p>Mocking is used for full isolation,<br>stubbing for controlling data,<br>and spying for monitoring execution.</p>
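<p>To see the three techniques side by side, here is a minimal Mockito sketch (the class name MockStubSpyDemo and the List example are illustrative only, not from any specific project):</p><pre>import static org.mockito.Mockito.*;<br>import java.util.ArrayList;<br>import java.util.List;<br><br>class MockStubSpyDemo {<br>  public static void main(String[] args) {<br>    // Mocking: a fully simulated object whose interactions can be verified.<br>    List&lt;String&gt; mock = mock(List.class);<br>    // Stubbing: define a predictable response for a specific call.<br>    when(mock.get(0)).thenReturn(&quot;stubbed&quot;);<br>    System.out.println(mock.get(0)); // prints &quot;stubbed&quot;<br>    verify(mock).get(0);             // the interaction is verified<br><br>    // Spying: wrap a real object; real behavior runs, but calls are observable.<br>    List&lt;String&gt; spy = spy(new ArrayList&lt;&gt;());<br>    spy.add(&quot;real&quot;);                 // the real add() executes<br>    System.out.println(spy.size());  // prints 1<br>    verify(spy).add(&quot;real&quot;);         // we can still check the call happened<br>  }<br>}</pre>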
<p>🟦 TDD (Test Driven Development)<br>TDD is a development practice where tests are written before the actual code.<br>The TDD cycle consists of three steps: Red, Green, and Refactor.<br>Red means writing a failing test.<br>Green means writing the minimum code required to pass the test.<br>Refactor means improving the code without breaking the tests.</p><p>🟦 BDD (Behavior Driven Development)<br>BDD is an evolution of TDD focused on system behavior.<br>It centers on how the user interacts with the system.<br>It uses natural language to define scenarios.<br>It facilitates communication between business and technology.<br>It uses structures such as Given, When, Then.<br>The main objective is to ensure the system meets the expected business behavior.</p><p>🟦 ATDD (Acceptance Test Driven Development)<br>ATDD is a practice where acceptance tests are defined before development.<br>It focuses on validating requirements from the beginning.</p><p>🟦 Difference Between TDD, BDD and ATDD<br>TDD focuses on code and small units.<br>BDD focuses on system behavior.<br>ATDD focuses on business acceptance criteria.<br>TDD is technical, while BDD and ATDD connect more closely with the business.</p><p>🟦 Load Testing<br>Load Testing is a type of performance testing that evaluates how the system behaves under an expected user load.<br>It helps verify whether the system can handle normal usage without degradation.<br>Metrics such as response time, throughput, and resource usage are measured.<br>It is key for validating system stability before production.</p><p>🟦 Stability Testing<br>Stability Testing evaluates whether the system can continue operating correctly over an extended period of time.<br>It is performed under a constant load.<br>It helps detect memory leaks or progressive degradation.<br>It is important for systems that must remain continuously available.</p><p>🟦 Stress Testing<br>Stress Testing evaluates the system beyond its normal limits.<br>The load is increased until the system fails.<br>It helps identify the breaking point.<br>It also helps understand how the system recovers from failures.</p><p>🟦 Ad Hoc Testing<br>Ad Hoc Testing is an unstructured type of testing.<br>It does not follow predefined test cases.<br>It relies on the tester’s experience and intuition.<br>It is useful for discovering unexpected defects.</p><p>🟦 Concepts and Differences Between Load, Stress and Stability Testing<br>Load Testing evaluates behavior under normal load conditions.<br>Stress Testing evaluates behavior under extreme conditions.<br>Stability Testing evaluates behavior over a long period of time.</p><p>🟦 Clean Code<br>Clean Code is the practice of writing code that is clear, simple, and easy to maintain.</p><p>🟦 Clean Architecture<br>Clean Architecture aims to organize the system in a modular and decoupled way.<br>It is based on separation of responsibilities.<br>It promotes low coupling and high cohesion.</p><p>🟦 Relation Between Clean Code and Architecture<br>Clean Code applies at the code level.<br>Architecture applies at the system level.</p><p>🟦 Things to Avoid<br>Avoid long and complex methods.<br>Avoid unclear naming.<br>Avoid code duplication.</p><p>🟦 Writing Non-Testable Code<br>Non-testable code has rigid, hard-coded dependencies.<br>It does not allow components to be isolated for testing.</p>
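<p>As an illustration, here is a minimal sketch contrasting a rigid dependency with constructor injection (all class and interface names here are hypothetical, chosen only for this example):</p><pre>// Non-testable: the dependency is created inside the method,<br>// so a unit test cannot replace it with a mock or stub.<br>class RigidReportService {<br>  String build() { return new SqlDatabase().load(); }<br>}<br><br>// Testable: the dependency hides behind an abstraction and is injected,<br>// so a test can pass any Database implementation (real, mock, or stub).<br>interface Database { String load(); }<br>class SqlDatabase implements Database { public String load() { return &quot;rows&quot;; } }<br><br>class ReportService {<br>  private final Database db;<br>  ReportService(Database db) { this.db = db; } // constructor injection<br>  String build() { return db.load(); }<br>}</pre><p>This also previews the Dependency Inversion Principle described below: depending on the Database abstraction is exactly what makes isolation possible.</p>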
<p>🟦 Code Smell<br>A Code Smell is an indication that something is wrong in the code.<br>It is not a bug, but it signals a design problem.<br>Examples include long methods or duplicated code.</p><p>🟦 SOLID Principles<br>SOLID is a set of design principles in object-oriented programming.<br>It improves maintainability and scalability.<br>It reduces coupling.</p><p>🟦 Single Responsibility Principle<br>A class should have only one responsibility, that is, a single reason to change.</p><p>🟦 Open/Closed Principle<br>Code should be open for extension but closed for modification.<br>It allows adding functionality without changing existing code.</p><p>🟦 Liskov Substitution Principle<br>Subclasses should be substitutable for their parent classes.<br>They must not break the behavior callers expect.</p><p>🟦 Interface Segregation Principle<br>Prefer small, client-specific interfaces.<br>This avoids large interfaces that force clients to depend on methods they do not use.</p><p>🟦 Dependency Inversion Principle<br>Dependencies should rely on abstractions, not concrete implementations.<br>This facilitates testing and decoupling.</p>]]></content:encoded>
        </item>
    </channel>
</rss>