There are many examples of how to program Motif applications, but locating the good practices needed to write complete ones is harder. This series focuses on the small practical features required to write a complete application.
Many applications will require command line arguments. While you can parse those command line arguments manually (as in any regular C program), it is wise to use the mechanisms provided by Xt and Motif for three reasons:
Motif was once the standard UNIX widget toolkit. It was the core UNIX GUI library behind most of the scientific, industrial and mission-critical software of the 1990s.
While its heyday is long past, it is still used in legacy software and can still be used in new projects. It is well documented, extremely robust, stable, lightweight and efficient.
While it now looks like an ancient user interface, Motif was a breakthrough innovation in its time.
Data preparation is commonly estimated to take around 80% of the effort in AI and ML projects. Most of that effort goes into data cleansing, but activities related to data normalisation also take a considerable amount of time, especially until the right approach is found.
As with many other aspects of data science, the outcome in terms of the amount of code is not large: we are dealing with precise, well-written, compact algorithms rather than with large programming tasks.
One of the activities often found in financial data processing is the normalisation…
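As an illustrative sketch only (the article's own normalisation approach is not shown here, and the sample prices are made up), one common form is min-max normalisation, which rescales a numeric series into the [0, 1] range:

```python
def min_max_normalise(values):
    """Rescale a sequence of numbers to the [0, 1] range (min-max normalisation)."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # avoid division by zero on a constant series
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

prices = [101.5, 99.0, 104.0, 99.0]
print(min_max_normalise(prices))     # smallest value maps to 0.0, largest to 1.0
```

Z-score standardisation is the other usual candidate; which one is "right" depends on the downstream model, which is exactly why this step takes time.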
UNIX® was first developed in 1969 by Ken Thompson, with support and ideas from Rudd Canaday, Doug McIlroy, Joe Ossanna and Dennis Ritchie.
The original team convinced Bell Labs to support and fund the project with the promise of delivering a document management system. That was likely just the internal corporate pitch to fund the whole project, which was much more ambitious and general purpose: the final result was a complete operating system which is now one of the key pillars of the information society revolution.
The fact that UNIX® was originally conceived as a document-processing platform led to…
Anyone who has handled timestamps with time zones in time series analysis has faced some headaches. No matter how many times I deal with time series, which in my case invariably involves time zones, I always have to revisit how it works, because it is often counter-intuitive.
Working with time series often involves dealing with different timezones. Even working with just one time zone, you might still want to visualise data in your local time zone, which might change depending on the country of the user.
This situation makes it convenient to use UTC as a reference. Why? Because…
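A minimal sketch of the store-in-UTC, convert-for-display pattern, using Python's standard-library `zoneinfo` (the zone names below are just examples; `zoneinfo` is available from Python 3.9):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# Store and compute everything in UTC...
utc_ts = datetime(2024, 3, 15, 14, 30, tzinfo=timezone.utc)

# ...and convert only at display time, per user locale.
madrid = utc_ts.astimezone(ZoneInfo("Europe/Madrid"))  # UTC+1 in mid-March
tokyo = utc_ts.astimezone(ZoneInfo("Asia/Tokyo"))      # UTC+9, no DST

print(madrid.isoformat())  # 2024-03-15T15:30:00+01:00
print(tokyo.isoformat())   # 2024-03-15T23:30:00+09:00

# All three values refer to the same instant, so comparisons stay sane:
assert utc_ts == madrid == tokyo
```

Keeping the reference in UTC means daylight-saving jumps and per-user localisation become a rendering concern rather than a data concern.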
I tend to feel like a dinosaur when I use C. Few people know C any more; it is no longer mainstream, and its usage is mostly confined to driver, OS and embedded-systems programming.
Is it still worth learning C? I initially thought that knowing C++ would make C irrelevant. But while learning C++, I began to notice that C may still have its place.
Despite their simplicity, CSV files are still ubiquitous. …
Data Science visualisation normally gravitates around displaying individual charts and graphs. Therefore, there is a lot of material covering graphic libraries and frameworks to generate charts.
Most advanced charting and plotting libraries allow interaction with data, but normally this interaction is restricted to the web and covers individual charts.
Sometimes the data needs to be actionable. In the industry I work for, a typical example is found in trading software, where multiple charts display information about a given financial instrument or asset and the trader or analyst needs to interact with the data. Quite often, the interaction with one chart…
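As a purely hypothetical sketch (no real charting library is used here; the class and method names are invented for illustration), the linked-chart idea boils down to an observer pattern, where a selection made on any chart is broadcast to all the others:

```python
class Chart:
    """Toy stand-in for a chart widget; a real one would redraw itself."""
    def __init__(self, name):
        self.name = name
        self.highlighted = None

    def on_selection(self, instrument):
        # In a real UI this callback would re-render the chart.
        self.highlighted = instrument


class SelectionBus:
    """Broadcasts a selection made on one chart to every registered chart."""
    def __init__(self):
        self.charts = []

    def register(self, chart):
        self.charts.append(chart)

    def select(self, instrument):
        for chart in self.charts:
            chart.on_selection(instrument)


bus = SelectionBus()
price_chart, volume_chart = Chart("price"), Chart("volume")
bus.register(price_chart)
bus.register(volume_chart)

bus.select("EURUSD")              # selecting on one chart updates all of them
print(volume_chart.highlighted)   # EURUSD
```

Desktop toolkits and web dashboards implement this in different ways, but the decoupling between the chart that originates the event and the charts that react to it is the common thread.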
The amounts and throughput of data to be analysed in financial markets data analysis can be daunting. This is by no means specific to the financial world, as it happens in many other data analysis fields too. What is fairly unique to this industry is that the data is highly structured (something that does not happen so often in other fields). Huge numbers of small, mostly unrelated data messages are what constitute financial market data: it is easy to end up with hundreds of millions of small messages to be received, stored, decoded, parsed and correlated.
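As an illustrative sketch only (the message layout below is invented; real market-data feeds each define their own binary protocols), decoding one of those small fixed-size messages can look like this with Python's `struct` module:

```python
import struct

# Hypothetical fixed-size trade message: instrument id (uint32),
# price in ticks (int64), quantity (uint32), all little-endian.
TRADE_FORMAT = "<IqI"
TRADE_SIZE = struct.calcsize(TRADE_FORMAT)   # 16 bytes per message

def decode_trade(payload):
    instrument_id, price_ticks, quantity = struct.unpack(TRADE_FORMAT, payload)
    return {
        "instrument_id": instrument_id,
        "price": price_ticks / 100,          # assume 2 implied decimal places
        "quantity": quantity,
    }

# Encode a sample message and decode it back.
raw = struct.pack(TRADE_FORMAT, 42, 101525, 500)
print(decode_trade(raw))   # {'instrument_id': 42, 'price': 1015.25, 'quantity': 500}
```

The fixed, highly structured layout is exactly what makes it feasible to churn through hundreds of millions of such messages efficiently.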
Proprietary trading firms and hedge funds normally run market-neutral portfolios; they do not trade directionally. This is so common that the iconic figure of the long-term value fund manager is much less representative than we tend to think.
By directional trading I mean what small investors, retail traders, professional day traders and long-term value investors usually do: a portfolio of assets (or just a single asset or instrument) is selected and a long or short position is taken, with the expectation that its market value will rise (or fall). …
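The arithmetic behind a directional position is simple enough to sketch (the numbers below are made up): the P&L of a long position is quantity × (exit price − entry price), and a short position flips the sign.

```python
def directional_pnl(quantity, entry_price, exit_price, side="long"):
    """P&L of a simple directional position (ignores fees and financing costs)."""
    pnl = quantity * (exit_price - entry_price)
    return pnl if side == "long" else -pnl

print(directional_pnl(100, 50.0, 55.0, side="long"))    # 500.0: price rose, long wins
print(directional_pnl(100, 50.0, 55.0, side="short"))   # -500.0: price rose, short loses
```

A market-neutral book, by contrast, pairs longs and shorts so that this single-asset exposure largely cancels out.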