ETL: Connecting end-user transactions to business insights
There are two kinds of database design. One accomplishes the requirements of an end-user application; the other provides modeled records for data engineers. The first is designed for OLTP (Online Transactional Processing) systems, used to persist operational data and characterized by a highly normalized schema, many tables, and a large volume of short online transactions (INSERT, UPDATE, DELETE). The second is designed for OLAP (Online Analytical Processing) systems, used to persist consolidated data from external operational sources and characterized by a denormalized schema with fewer tables (mostly using the star model for multi-dimensional expression queries), a small volume of transactions, and better performance for complex queries over huge volumes of data, including historical records from past loads.

Although OLTP databases are the foundation of any end-user application, they are not an ideal source for most of the queries required in data mining processes. Their normalized schema, required to avoid redundancies (which could lead to data inconsistencies or to long, time-consuming transactions), makes analytical queries complex and expensive due to the many table joins and subqueries they require.
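To make that join cost concrete, here is a minimal sketch, with purely hypothetical table and column names, of the kind of query a normalized schema forces on an analytical question such as "revenue per region per month":

```python
# Hypothetical normalized OLTP schema; names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers   (id INTEGER PRIMARY KEY, name TEXT, address_id INTEGER);
CREATE TABLE addresses   (id INTEGER PRIMARY KEY, city TEXT, region TEXT);
CREATE TABLE orders      (id INTEGER PRIMARY KEY, customer_id INTEGER, created_at TEXT);
CREATE TABLE order_items (id INTEGER PRIMARY KEY, order_id INTEGER, product_id INTEGER,
                          quantity INTEGER, unit_price REAL);
""")

# Three joins just to group revenue by region and month; filtering by product
# category would pull in yet more tables.
REVENUE_BY_REGION_AND_MONTH = """
SELECT a.region,
       strftime('%Y-%m', o.created_at) AS month,
       SUM(oi.quantity * oi.unit_price) AS revenue
FROM orders o
JOIN customers   c  ON c.id = o.customer_id
JOIN addresses   a  ON a.id = c.address_id
JOIN order_items oi ON oi.order_id = o.id
GROUP BY a.region, month;
"""
print(conn.execute(REVENUE_BY_REGION_AND_MONTH).fetchall())  # [] on the empty schema
```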
Databases designed for OLAP systems, on the other hand, don't need to care about data changes and the consistency issues they can cause, which allows a denormalized schema with far fewer relationships. Also, since they commonly serve the sole purpose of data analysis rather than acting as an application's state repository, time-consuming queries can be run as part of the process when needed.
OLAP systems require a special kind of schema that allows client tools to run multi-dimensional queries on them. It is characterized by a main table, called the fact table, in which dimensions such as date and location are kept as foreign keys to their respective dimension tables. This layout, commonly called a star schema, allows the system to handle the data as hypercubes, which can be sliced and diced, drilled down, rolled up, and pivoted. These operations are how data engineers turn raw, apparently meaningless data into something useful.
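For contrast with the normalized example above, here is a minimal star-schema sketch (again with hypothetical names): the analytical question becomes a single level of joins from the fact table to its dimensions, and rolling up from day to year is just a GROUP BY on the date dimension.

```python
# Minimal star-schema sketch: one fact table whose foreign keys point at the
# dimension tables (names are illustrative only).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, day INTEGER, month INTEGER, year INTEGER);
CREATE TABLE dim_location (location_key INTEGER PRIMARY KEY, city TEXT, region TEXT, country TEXT);
CREATE TABLE dim_product  (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales   (date_key INTEGER, location_key INTEGER, product_key INTEGER,
                           quantity INTEGER, revenue REAL);
""")

# A "roll up" from day level to year level, sliced by region: a single level
# of joins from the fact table to its dimensions.
ROLLUP_BY_YEAR_AND_REGION = """
SELECT d.year, l.region, SUM(f.revenue) AS revenue
FROM fact_sales f
JOIN dim_date     d ON d.date_key     = f.date_key
JOIN dim_location l ON l.location_key = f.location_key
GROUP BY d.year, l.region;
"""
print(conn.execute(ROLLUP_BY_YEAR_AND_REGION).fetchall())
```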
With the advent and evolution of data mining processes and algorithms, OLAP database systems have become more necessary, but data must still come from operational sources, which are, and always will be, constrained by performance and consistency requirements. To let both worlds coexist, we need a process that connects them and keeps them in sync. That's what an ETL does.
ETL (Extract, Transform, Load) is the main process that allows engineers to manage data for business intelligence and data mining analysis, using the available operational databases as sources for a consolidated, well-modeled container that can handle the required queries and experiments without side effects for application users.
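As an illustration only, not a prescription of any particular tool, a bare-bones ETL could look like the sketch below. The source query, the fact_sales_staging target table, and the connection handling are all assumptions, and dimension-key lookups are omitted for brevity.

```python
# Bare-bones ETL sketch; source query, target table, and connection handling
# are assumptions, and dimension-key lookups are omitted for brevity.
import sqlite3

def extract(operational: sqlite3.Connection):
    # One join-heavy read on the OLTP side, ideally scheduled off-peak.
    return operational.execute("""
        SELECT o.created_at, a.region, oi.product_id,
               oi.quantity, oi.quantity * oi.unit_price
        FROM orders o
        JOIN customers   c  ON c.id = o.customer_id
        JOIN addresses   a  ON a.id = c.address_id
        JOIN order_items oi ON oi.order_id = o.id
    """).fetchall()

def transform(rows):
    # Derive a surrogate date key and keep only the columns the fact table needs.
    for created_at, region, product_id, quantity, revenue in rows:
        date_key = int(created_at[:10].replace("-", ""))  # '2024-03-01' -> 20240301
        yield (date_key, region, product_id, quantity, revenue)

def load(warehouse: sqlite3.Connection, facts):
    # Hypothetical staging table; a later step would resolve dimension keys.
    warehouse.executemany(
        "INSERT INTO fact_sales_staging VALUES (?, ?, ?, ?, ?)", facts)
    warehouse.commit()

def run(operational: sqlite3.Connection, warehouse: sqlite3.Connection):
    load(warehouse, transform(extract(operational)))
```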

The Data Warehouse is the consolidated database created by the ETL process from one or more data sources. Data Marts are another important part of this architecture: small databases holding subject-focused datasets extracted from the main DW.
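A data mart can be as simple as a subject-focused, pre-aggregated extract of the warehouse. The sketch below reuses the hypothetical star schema from the earlier example; the mart table name is an assumption.

```python
# Sketch of carving a data mart out of the warehouse: a pre-aggregated,
# subject-focused table small enough for a single team's dashboards.
import sqlite3

def build_sales_mart(warehouse: sqlite3.Connection) -> None:
    warehouse.executescript("""
        DROP TABLE IF EXISTS mart_monthly_sales;
        CREATE TABLE mart_monthly_sales AS
        SELECT d.year, d.month, l.region, SUM(f.revenue) AS revenue
        FROM fact_sales f
        JOIN dim_date     d ON d.date_key     = f.date_key
        JOIN dim_location l ON l.location_key = f.location_key
        GROUP BY d.year, d.month, l.region;
    """)
```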
One feature any architect must take into account when selecting an ETL tool is its capacity to extract information from different types of data sources, as well as to load into them. This reduces the need for other integration systems in the analytical layer, and for intermediate ETL flows, making the setup easier to maintain and to extend when a new data source becomes available.
In most cases the need for an ETL appears not at design time but later in the application's lifecycle. Even with newer database architectures, such as document and graph databases, which can offer a hybrid model for both transactional and analytical processing, an ETL can still be a valuable tool for consolidating other sources into a broader analysis. In these cases, the need to sync the operational database with another one that can absorb more time- and resource-consuming workloads is also a common scenario, and it can be handled with the same technology.
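One common way to keep that second database in sync without reloading everything is an incremental ("delta") load driven by a watermark. The sketch below assumes an updated_at column on the source table and an etl_watermark table in the warehouse, both hypothetical.

```python
# Incremental ("delta") sync sketch: extract only rows changed since the last
# successful load. Assumes an updated_at column on the source table and an
# etl_watermark table, keyed by source name, in the warehouse.
import sqlite3

def incremental_extract(operational: sqlite3.Connection, last_loaded_at: str):
    return operational.execute(
        "SELECT id, customer_id, created_at, updated_at FROM orders "
        "WHERE updated_at > ?",
        (last_loaded_at,),
    ).fetchall()

def remember_watermark(warehouse: sqlite3.Connection, new_watermark: str) -> None:
    # The watermark lives in the warehouse, next to the data it describes.
    warehouse.execute(
        "INSERT OR REPLACE INTO etl_watermark (source, loaded_until) "
        "VALUES ('orders', ?)",
        (new_watermark,),
    )
    warehouse.commit()
```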
Although ETL tools are more common in enterprise environments, there are plenty of solutions available for smaller companies. One of the best is the Scriptella project. As a JVM-based project, it uses JDBC as its connection framework and an embedded script engine for the transform/load phases.
For a more complete solution, there are the Pentaho tools. Besides its GUI ETL tool, the architect can count on a BI platform, a report designer, and an OLAP engine (Mondrian) that provides an analytical interface to the operational database through an in-memory multi-dimensional schema.
Much more can be found in the awesome-etl project, which keeps a list of tools grouped by programming language, including some other GUI-based options.
Conclusion
In the end, an ETL is just a tool serving a need. In some cases you can simply use your database's replication feature to make things work: it creates a continuously synced instance of your data where an engineer can run long, complex queries without affecting the application's usability. In other cases you can use the same database as the application, when its design is appropriate for the job. The rule of thumb is: evaluate the effort of managing an ETL, and a data warehouse, and when it isn't worth it, keep it simple. The important thing is to know what the possibilities are, so you can make good decisions when the application's requirements start turning into new problems.
Happy Coding!!