TWS-iMetrica: The Automated Intraday Financial Trading Interface Using Adaptive Multivariate Direct Filtering

Figure 1: The TWS-iMetrica automated financial trading platform. Featuring fast performance optimization, analysis, and trading design features unique to iMetrica for building direct real-time filters to generate automated trading signals for nearly any tradeable financial asset. The system was built using Java, C, and the Interactive Brokers IB API in Java.

Introduction

I realize that I’ve been MIA (missing in action, for the non-anglophones) the past three months on this blog, but I assure you there has been good reason for my long absence. Not only have I developed a large slew of optimization, analysis, and statistical tools in iMetrica for constructing high-performance financial trading signals geared towards intraday trading, which I will (slowly) be sharing over the next several months (with some of the secret-sauce recipes kept to myself and my current clients, of course), but I have also built, engineered, tested, and finally put into daily commission the planet’s first automated financial trading platform based completely on the recently developed FT-AMDFA (adaptive multivariate direct filtering approach for financial trading). I introduce to you iMetrica’s little sister, TWS-iMetrica.

Coupled with the original software I developed for hybrid econometrics, time series analysis, signal extraction, and multivariate direct filter engineering called iMetrica, the TWS-iMetrica platform was built to provide an easy-to-use yet powerful, adaptive, versatile, and automated trading machine for intraday financial trading, with a variety of options for building your own day trading strategies using MDFA based on your own financial priorities. Written completely in Java and GNU C, the TWS-iMetrica system currently uses the Interactive Brokers (IB) Trader Workstation (TWS) Java API in order to construct the automated trades, connect to the necessary historical data feeds, and provide a variety of tick data. Thus, in order to run, the system requires an activated IB trading account. However, as I discuss in the conclusion of this article, the software was written in a way to be seamlessly adapted to any other brokerage/trading platform API, as long as the API is available in Java or has Java wrappers available.

The steps for setting up and building an intraday financial trading environment using iMetrica + TWS-iMetrica are easy. There are four of them. No technical analysis indicator garbage is used here, no time domain methodologies, no stochastic calculus. TWS-iMetrica is based completely on the frequency domain approach to building robust real-time multivariate filters that are designed to extract signals from tradable financial assets at any fixed observation frequency (the most commonly used in my trading experience with FT-AMDFA being 5, 15, 30, or 60 minute intervals). What makes this paradigm of financial trading versatile is the ability to construct trading signals based on your own trading priorities, with each filter designed uniquely for a targeted asset to be traded. With that being said, the four main steps using both iMetrica and TWS-iMetrica are as follows:

  1. The first step to building an intraday trading environment is to construct what I call an MDFA portfolio (which I’ll define in a brief moment). This is achieved in the TWS-iMetrica interface that is endowed with a user-friendly portfolio construction panel shown below in Figure 4.
  2. With the desired MDFA portfolio selected, one then connects TWS-iMetrica to IB by simply pressing the Connect button on the interface in order to download the historical data (see Figure 3).
  3. With the historical data saved, the iMetrica software is then used to upload the saved historical data and build the filters for the given portfolio using the MDFA module in iMetrica (see Figure 2). The filters are constructed using a sequence of proprietary MDFA optimization and analysis tools. Within the iMetrica MDFA module, three different types of filters can be built: 1) a trend filter that extracts a fast moving trend, 2) a band-pass filter for extracting local cycles, and 3) a multi-bandpass filter that extracts both a slow moving trend and local cycles simultaneously.
  4. Once the filters are constructed and saved in a file (a .cft file), the TWS-iMetrica system is ready to be used for intraday trading using the newly constructed and optimized filters (see Figure 6).

Figure 2: The iMetrica MDFA module for constructing the trading filters. Features dozens of design, analysis, and optimization components to fit the trading priorities of the user and is used in conjunction with the TWS-iMetrica interface.

In the remaining part of this article, I give an overview of the main features of the TWS-iMetrica software and how easily one can create a high-performing automated trading strategy that fits the needs of the user.

The TWS-iMetrica Interface

The main TWS-iMetrica graphical user interface is composed of several components that allow for constructing a multitude of MDFA intraday trading strategies, depending on one’s trading priorities. Figure 3 shows the layout of the GUI after first being launched. The first component is the top menu, featuring the TWS System menu with some basic TWS connection variables (which, in most cases, are left in their default mode) and the Portfolio menu. To access the main menu for setting up the MDFA trading environment, click Setup MDFA Portfolio under the Portfolio menu. Once this is clicked, a panel is displayed (shown in Figure 4) featuring the required a priori parameters for building the MDFA trading environment, all of which should be filled in before MDFA filter construction and trading take place. The parameters and their possible values are given below Figure 4.

Figure 3 – The TWS-iMetrica interface when first launched and everything blank.

Figure 4 – The Setup MDFA Portfolio panel featuring all the settings necessary to construct the automated trading MDFA environment.

  1. Portfolio – The portfolio is the basis for the MDFA trading platform and consists of two types of assets: 1) the target asset, from which we construct the trading signal, engineer the trades, and build the MDFA filter, and 2) the explanatory assets, which provide the explanatory data for the target asset in the multivariate filter construction. Here, one can select up to four explanatory assets.
  2. Exchange – The exchange on which the assets are traded (according to IB).
  3. Asset Type – Whether the input portfolio is a selection of Stocks or Futures (Currencies and Options soon to be available).
  4. Expiration – If trading Futures, the expiration date of the contract, given as a six-digit number of year followed by month (e.g. 201306 for June 2013).
  5. Shares/Contracts – The number of shares/contracts to trade (this number can also be changed throughout the trading day through the main panel).
  6. Observation frequency – In the MDFA financial trading method, we consider uniformly sampled observations of market data on which to do the trading. The options are 1, 2, 3, 5, 15, 30, and 60 minute data. The default is 5 minutes.
  7. Data – For the intraday observations, this determines the nature of the data being extracted. Valid values include TRADES, MIDPOINT, BID, ASK, and BID_ASK. The default is MIDPOINT.
  8. Historical Data – Selects how many days of historical data are downloaded to compute the initial MDFA filters. The historical data will of course come in the intervals chosen for the observation frequency.

Once all the values have been set for the MDFA portfolio, click the Set and Build button, which will first check that the values entered are valid and, if so, create the necessary data sets for TWS-iMetrica to initialize trading. This all must be done while TWS-iMetrica is connected to IB (but not set in trading mode). If the build was successful, the historical data of the desired target financial asset, up to the most recent observation in regular market trading hours, will be plotted on the graphics canvas. The historical data will be saved to a file named (by default) “lastSeriesData.dat” and the data come in columns, where the first column is the date/time of the observation, the second column is the price of the target asset, and the remaining columns are log-returns of the target and explanatory data. And that’s it, the system is now set up to be used for financial trading. The values entered in the Setup MDFA Portfolio will never have to be set again (unless changes to the MDFA portfolio are needed, of course).
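For readers who want to work with this historical data file outside of iMetrica, here is a minimal sketch in Java of reading such a file. It assumes a plain, comma-delimited text layout with a date/time stamp in the first column, which may differ from the actual encoding TWS-iMetrica uses.

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

public class HistoricalDataReader {

    // Reads rows assumed to be comma-delimited: date/time, target price, then the
    // log-returns of the target and explanatory series. The layout is an assumption.
    public static List<double[]> readSeries(String fileName) throws Exception {
        List<double[]> rows = new ArrayList<double[]>();
        BufferedReader reader = new BufferedReader(new FileReader(fileName));
        String line;
        while ((line = reader.readLine()) != null) {
            String[] tokens = line.split(",");
            if (tokens.length < 2) continue;               // skip malformed or empty lines
            double[] values = new double[tokens.length - 1];
            for (int i = 1; i < tokens.length; i++) {      // tokens[0] is the date/time stamp
                values[i - 1] = Double.parseDouble(tokens[i].trim());
            }
            rows.add(values);
        }
        reader.close();
        return rows;
    }

    public static void main(String[] args) throws Exception {
        List<double[]> data = readSeries("lastSeriesData.dat");
        System.out.println("Loaded " + data.size() + " observations");
    }
}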

Continuing on to the other controls and features of TWS-iMetrica, once the portfolio has been set, one can proceed to change any of the settings in the main trading control panel. All these controls can be used/modified intraday while in automated MDFA trading mode. The left-most side of the main control panel (Figure 5) of the interface includes a set of options for the following features:

Figure 5 – The main control panel for choosing and/or modifying all the options during intraday trading.

  1. In the contracts/shares text field, one enters the number of shares (for stocks) or contracts (for futures) that one will trade throughout the day. This can be adjusted during the day while the automated trading is activated; however, one must be certain that at the end of the day the balance between bought and shorted contracts is zero, otherwise you risk keeping contracts or shares overnight into the next trading day. Typically, this is set at the beginning before automated trading takes place and left alone.
  2. The data input file for loading historical data. The name of this file determines where the historical data associated with the constructed MDFA portfolio will be stored. This historical data will be needed in order to build the MDFA filters. By default this is “lastSeriesData.dat”. Usually this doesn’t need to be modified.
  3. The stop-loss activation and stop-loss slider bar, with which one can turn the stop-loss on/off and set the stop-loss amount. This value determines how/where a stop-loss will be triggered relative to the price being bought/sold at and is completely dependent on the asset being traded.
  4. The interval search, which determines how and when the trades will be made once the selected MDFA signal triggers a buy/sell transaction. If turned off, the transaction (a limit order determined by the bid/ask) is made at the exact time that the buy/sell signal is triggered by the filter. If turned on, the value in the text field next to it gives how often (in seconds) the system looks for a better price at which to make the transaction. This search runs until the next observation for the MDFA filter. For example, if 5 minute return data is being used to do the trading, the search checks at the given interval, over the course of 5 minutes, for a better price at which to make the given transaction. If at the end of the 5 minute period no better price has been found, the transaction is made at the current ask/bid price. This feature has been shown to be quite useful during sideways or highly volatile markets. A minimal sketch of this logic is given just after this list.
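To make item 4 above concrete, here is a minimal, hypothetical sketch of that interval-search logic in Java; the way the bid/ask is polled and the method names are assumptions for illustration, not the actual TWS-iMetrica implementation.

import java.util.function.DoubleSupplier;

public class IntervalSearch {

    // Polls the quote every searchIntervalSec seconds until the next MDFA observation
    // (observationSec seconds away). If a price better than the trigger price appears,
    // the limit order is placed there; otherwise it falls back to the current bid/ask.
    public static double searchForBetterPrice(boolean buy, double triggerPrice,
                                              int searchIntervalSec, int observationSec,
                                              DoubleSupplier quote) throws InterruptedException {
        long deadline = System.currentTimeMillis() + observationSec * 1000L;
        while (System.currentTimeMillis() < deadline) {
            double current = quote.getAsDouble();              // latest ask (if buying) or bid (if selling)
            boolean better = buy ? current < triggerPrice : current > triggerPrice;
            if (better) return current;                        // better price found: transact here
            Thread.sleep(searchIntervalSec * 1000L);
        }
        return quote.getAsDouble();                            // no improvement: transact at the current bid/ask
    }
}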

The middle of the main control panel features the main buttons for connecting to and disconnecting from Interactive Brokers, for initiating the MDFA automated trading environment, as well as convenient buttons used for instantaneous buy/sell triggers that supplement the automated system. It also features an on/off toggle button for activating the trades given in the MDFA automated trading environment. When checked on, transactions according to the automated MDFA environment will proceed and go through to the IB account. If turned off, the real-time market data feeds and historical data will continue to be read into the TWS-iMetrica system and the signals according to the filters will be automatically computed, but no actual transactions/trades into the IB account will be made.

Finally, the right-hand side of the main control panel features the filter uploading and selection boxes. These are the MDFA filters that are constructed using the MDFA module in iMetrica. One convenient and useful feature of TWS-iMetrica is the ability to utilize up to three direct real-time filters in parallel and to switch between the filters at any given moment during market hours. (Such a feature enhances the adaptability of trading with MDFA filters; I’ll discuss this in further detail shortly.) In order to select up to three filters simultaneously, there is a filter selection panel (shown in the bottom right corner of Figure 6 below) displaying three separate file choosers and a radio button corresponding to each filter. Clicking on the filter load button produces a file dialog box from which one selects a filter (a *.cft file produced by iMetrica). Once the filter is loaded successfully, the name of the filter is displayed in the text box to the right, and the radio button to the left is enabled. With multiple filters loaded, to select between any of them, simply click on their respective radio button and the corresponding signal will be plotted on the plot canvas (assuming market data has been loaded into TWS-iMetrica using the market data file upload and/or the system has been connected to the IB TWS for live market data feeds). This is shown in Figure 6.
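As a rough illustration of what happens when a filter is selected, the real-time trading signal is essentially a weighted sum of lagged log-returns of the target and explanatory series using the filter coefficients. The sketch below assumes a simple array layout for the coefficients rather than the actual .cft file structure.

// Minimal sketch of computing the latest value of a multivariate direct filter output.
// coeffs[i][j] is the coefficient of series i at lag j; returns[i][t] holds the
// log-returns of series i up to the most recent observation.
public static double latestSignalValue(double[][] coeffs, double[][] returns) {
    int nSeries = coeffs.length;
    int nLags = coeffs[0].length;
    int t = returns[0].length - 1;              // index of the most recent observation
    double signal = 0.0;
    for (int i = 0; i < nSeries; i++) {
        for (int j = 0; j < nLags && t - j >= 0; j++) {
            signal += coeffs[i][j] * returns[i][t - j];
        }
    }
    return signal;                              // sign changes of this value trigger the buy/sell transactions
}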

Figure 6 – The TWS-iMetrica main trading interface features many control options to design your own automated MDFA trading strategies.

And finally, once the historical data file for the MDFA portfolio has been created, up to three filters have been created for the portfolio and entered in the filter selection boxes, and the system is connected to Interactive Brokers by pressing the Connect button, the market and signal plot panel can then be used for visualizing the different components that one will need for analyzing the market, the signal, and the performance of the automated trading environment. The panel just below the plot canvas features an array of checkboxes and radio buttons. When connected to IB and Start MDFA Trading has been pressed, all the data and plots are updated in real-time automatically at the specific observation frequency selected in the MDFA Portfolio setup. The currently available plots are as follows:

Figure 8 – The plots for the trading interface. Features price, log-return, account cumulative returns, signal, buy/sell lines, and up to two additional auxiliary signals.

  • Price – Plots in real-time the price of the asset being traded, at the specific observation frequency selected for the MDFA portfolio.
  • Log-returns – Plots in real-time the log-returns of the price, which is the data that is being filtered to construct the trading signal.
  • Account – Shows the cumulative returns produced by the currently chosen MDFA filter over the current and historical data period (note that this does not necessarily reflect the actual returns made by the strategy in IB, just the theoretical returns over time if this particular filter had been used).
  • Buy/Sell lines – Shows dashed lines where the MDFA trading signal has produced a buy/sell transaction. The green lines are the buy signals (entering a long position) and the magenta lines are the sell signals (entering a short position).
  • Signal – The plot of the signal in real-time. When new data becomes available, the signal is automatically computed and replotted in real-time. This gives one the ability to closely monitor how the current filter is reacting to the incoming data.
  • Aux Signal 1/2 – (If available) Plots of the other available signals produced by the (up to two) other filters constructed and entered in the system. To make either of these auxiliary signals the main trading signal, simply select the filter associated with the signal using the radio buttons in the filter selection panel.

Along with these plots, to track specific values of any of these plots at any time, select the desired plot in the Track Plot region of the panel bar. Once selected, specific values and their respective times/dates are displayed in the upper left corner of the plot panel by simply placing the mouse cursor over the plot panel. A small tracking ball will then be moved along the specific plot in accordance with movements of the mouse cursor.

With the graphics panel displaying the performance of each filter in real-time, one can seamlessly switch between a band-pass filter and a timely trend (low-pass) filter according to the changing intraday market conditions. To give an example, suppose that during early morning trading hours there is an unusually high amount of volume pushing an uptrend or pulling a downtrend. In such conditions a trend filter is much more appropriate, being able to follow the large variation in log-returns much better than a band-pass filter can. One can see the effects of the trend filter during the morning hours of the market. After automated trading using the trend filter, with the volume diffusing into the noon hour, the band-pass filter can then be applied in order to extract and trade on certain low-frequency cycles in the log-return data. Towards the end of the day, with volume continuously picking up, the trend filter can then be selected again in order to track and trade any trending movement automatically.

I am currently building an automated algorithm to “intelligently” switch between the uploaded filters according to the instantaneous market conditions (with the switching triggered by volume and volatility). For the time being, the user must manually switch between different filters, if such switching is desired at all (in most cases, I prefer to leave one filter running all day; since the process is automated, I prefer to have minimal, if any, interaction with the software while it’s in automated trading mode).

Conclusion

As I mentioned earlier, the main components of TWS-iMetrica were written in a way to be adaptable to other brokerage/trading APIs. The only major condition is that the API either be available in Java, or at least have (possibly third-party) wrappers available in Java. That being said, there are only three main types of general calls that are made automatically to the connected broker: 1) retrieve historical data for any asset(s), at any given time, at the most commonly used observation frequencies (e.g. 1 min, 5 min, 10 min, etc.); 2) subscribe to an automatic feed of bar/tick data to retrieve the latest OHLC and bid/ask data; and 3) place an order (buy/sell) with the broker under any order conditions (limit, stop-loss, market order, etc.) for any given asset.
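In other words, any broker whose API can support something like the following minimal Java interface could in principle be plugged in. This is a sketch with hypothetical method names, not the actual adapter layer used in TWS-iMetrica.

// Hypothetical broker abstraction covering the three general call types described above.
public interface BrokerAdapter {

    // 1) Retrieve historical bars for an asset at a given observation frequency.
    double[][] getHistoricalBars(String symbol, String exchange, int barSizeSeconds, int numDays);

    // 2) Subscribe to a live bar/tick feed delivering the latest OHLC and bid/ask data.
    void subscribeMarketData(String symbol, String exchange, MarketDataListener listener);

    // 3) Place a buy/sell order with the broker under the given order conditions.
    void placeOrder(String symbol, boolean buy, int quantity, String orderType, double limitPrice);

    interface MarketDataListener {
        void onBar(double open, double high, double low, double close, double bid, double ask);
    }
}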

If you are interested in having TWS-iMetrica built for your particular brokerage/trading platform (other than IB, of course) and the above conditions for the API are met, I am more than happy to be hired at a fixed compensation; simply get in contact with me. If you are interested in seeing how well the automated system has performed thus far, in future collaboration, or in becoming a client in order to use the TWS-iMetrica platform, feel free to contact me as well.

Happy extracting!

High-Frequency Financial Trading with Multivariate Direct Filtering I: FOREX and Futures

Animation 1: Click to see animation of the Japanese Yen filter in action on 164 hourly out-of-sample observations.

I recently acquired over 300 GB of financial data that includes tick data for over 7000 financial assets traded on multiple markets for the past 5 years, up until February 1st 2013. This USB drive packed with nearly every detail of world financial markets, coupled with iMetrica, gave me an opportunity to explore, in any fashion I desired, the ability of multivariate direct filtering to produce high-performance financial trading signals at nearly any frequency. Let me begin this article by saying that I am more than ecstatic with the results, as I hope you will be too after reading this article. In this first article in a series on high-frequency trading with MDFA and iMetrica that I plan to write, I provide some initial experiments with building and extracting financial trading signals for high-frequency intraday observations on foreign exchange (FOREX) data; by high-frequency in the context of this article, I mean higher frequencies than the daily log-returns I’ve been working with in my previous articles. In this first part of the high-frequency series, I begin by exploring hourly, 30 minute, and 15 minute log-returns, and test different strategies, mostly using low-pass and the recently introduced multi-bandpass (MBP) filters, to deduce the best approach to the problem of building successful trading signals in higher frequency data.

In my previous articles, I was working uniquely with daily log-return data over time spans from a year to a year and a half. This enabled the in-sample period for computing the filter coefficients for the signal extraction to include all the most recent annual phases and seasons of markets, from holiday effects to the August-to-September transition period that is regularly highly influential on stock market prices and commodities as trading volume increases significantly. One immediate question raised in migrating to higher-frequency intraday data is what kind of in-sample/out-of-sample time spans should be used to compute the filter in-sample, and for how long do we then apply the filter out-of-sample to produce the trades? Another question raised with intraday data is how do we account for the close-to-open variation in price? Certainly, after close, the after-hours bids and asks will force a jump into the next trading day. How do we deal with this jump in an optimal manner? As the observation frequency gets higher, say from one hour to 30 minutes, this close-to-open jump/fall will most likely be larger relative to the typical bar-to-bar variation. I will start by saying that, as you will see in the results of this article, with a clever choice of the extractor \Gamma and explanatory series, MDFA can handle these jumps beautifully (both aesthetically and financially). In fact, I would go so far as to say that MDFA does a superb job in predicting the overnight variation.

One advantage of building trading signals at higher intraday frequencies is that the signals produce trading strategies that are immediately actionable; namely, one can act upon a signal to enter a long or short position immediately when it occurs. In building trading signals for daily log-returns, this is not the case since the observations are not actionable points: the log difference of today’s closing price with yesterday’s closing price is produced after hours and is thus not actionable during open market hours, only on the next trading day. Thus trading on intraday observations can lead to better efficiency in trading.

In this first installment in my series on high-frequency financial trading using multivariate direct filtering in iMetrica, I consider building trading signals on hourly returns of foreign exchange currencies. I’ve received a few requests after my recent articles on the Frequency Effect to see iMetrica and MDFA in action on the FOREX sector. So to satisfy those curiosities, I give a series of (financially) satisfying and exciting results in combining MDFA and the FOREX. I won’t give away all my secrets for building these signals (as that would of course wipe out my competitive advantage), but I will give some of the parameters and strategies used so any courageously curious reader may try them at home (or the office). In the conclusion, I give a series of even more tricks and hacks. The results below speak for themselves. So without further ado, let the games begin.

Japanese Yen

Frequency: One hour returns
30 day out-of-sample ROI: 12 percent
Trade success ratio: 92 percent

Yen Filter Parameters: \lambda = 9.2, \alpha = 13.2, \omega_0 = \pi/5
Regularization: smooth = .918, decay = .139, decay2 = .79, cross = 0

In the first experiment, I consider hourly log-returns of an ETF index that mimics the Japanese Yen, called FXY. As one of the explanatory series, I consider the hourly log-returns of the price of GOLD, which is traded on NASDAQ. The out-of-sample results of the trading signal built using a low-pass filter and the parameters above are shown in Figure 1. The in-sample trading signal (left of the cyan line) was built using 400 hourly observations of the Yen during US market hours dating back to 1 October 2012. The filter was then applied to the out-of-sample data for 180 hours, roughly 30 trading days, up until Friday, 1 February 2013.

Figure 1: Out-of-sample results for the Japanese Yen. The in-sample trading signal was built using 400 hourly observations of the Yen during US market hours dating back to October 1st, 2012. The out-of-sample portion past the cyan line is on 180 hourly observations, about 30 trading days.

The beauty of this filter is that it yields a trading signal exhibiting all the characteristics that one should strive for in building a robust and successful trading filter.

  1. Consistency: The in-sample portion of the filter performs exactly as it does out-of-sample (after cyan line) in both trade success ratio and systematic trading performance. 
  2. Drawdowns: One small drawdown out-of-sample for a loss of only .8 percent (nearly the cost of the transaction).
  3. Detects the cycles as it should: Although the filter is not able to pinpoint with perfect accuracy every local small upturn during the descent of the Yen against the dollar, it does detect them nonetheless and knows when to sell at their peaks (the magenta lines).
  4. Self-correction: What I love about a robust filter is that it tends to self-correct very quickly to minimize the loss from an erroneous trade. Notice how it did this in the second series of buy-sell transactions during the only loss out-of-sample. The filter detected momentum but quickly sold right before the ensuing downfall. My intuition is that only frequency-based methods such as MDFA are able to achieve this consistently. This is the sign of a skillfully smart filter.

The coefficients for this Yen filter are shown below. Notice the smoothness of the coefficients from applying the heavy smooth regularization, and the strong decay at the very end. This is exactly the type of smooth/decay combination that one should desire. There is some obvious correlation between the first and second explanatory series in the first 30 lags or so as well. The third explanatory series seems not to provide much support until the middle lags.

Figure 2: Coefficients of the Yen filter. Here we use three different explanatory series to extract the trading signal.

One of the first things that I always recommend doing when first attempting to build a trading signal is to take a glance at the periodogram. Figure 3 shows the periodogram of the log-return data of the Japanese Yen over 580 hours. Compare this with the periodogram of the same asset using log-returns of daily data over 580 days, shown in Figure 4. Notice the much larger prominent spectral peaks at the lower frequencies in the daily log-return data. These prominent spectral peaks render multibandpass filters much more advantageous to use, as we can take advantage of them by placing a band-pass filter directly over them to extract that particular frequency (see my article on multibandpass filters). However, in the hourly data, we don’t see any obvious spectral peaks to consider, thus I chose a low-pass filter and set the cutoff frequency at \pi/5, a standard choice and a good place to begin.
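For readers who would like to reproduce this kind of diagnostic outside iMetrica, the periodogram of a log-return series can be computed directly from its discrete Fourier transform. The short Java sketch below is a generic textbook implementation, not iMetrica’s own routine.

// Computes the periodogram I(omega_k) = (1/N) |sum_t x_t exp(-i omega_k t)|^2
// at the Fourier frequencies omega_k = 2*pi*k/N, for k = 0,...,N/2.
public static double[] periodogram(double[] x) {
    int n = x.length;
    double[] pgram = new double[n / 2 + 1];
    for (int k = 0; k <= n / 2; k++) {
        double omega = 2.0 * Math.PI * k / n;
        double re = 0.0, im = 0.0;
        for (int t = 0; t < n; t++) {
            re += x[t] * Math.cos(omega * t);
            im -= x[t] * Math.sin(omega * t);
        }
        pgram[k] = (re * re + im * im) / n;   // inspect for prominent spectral peaks before choosing the extractor
    }
    return pgram;
}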

Figure 3: Periodogram of hourly log-returns of the Japanese Yen over 580 hours.

Figure 4: Periodogram of Japanese Yen using 580 daily log-return observations. Many more spectral peaks are present in the lower frequencies.

Japanese Yen

Frequency: 15 minute returns
7 day out-of-sample ROI: 5 percent
Trade success ratio: 82 percent

Yen Filter Parameters: \lambda = 3.7, \alpha = 13, \omega_0 = \pi/9
Regularization: smooth = .90, decay = .11, decay2 = .09, cross = 0

In the next trading experiment, I consider the Japanese Yen again, only this time I look at trading on even higher-frequency log-return data than before, namely on 15 minute log-returns of the Yen from the opening bell to market close. This presents slightly different challenges than before, as the close-to-open jumps are much larger, but these larger jumps do not necessarily pose problems for MDFA. In fact, I look to exploit these jumps and profit by predicting their direction. For this higher frequency experiment, I considered 350 15-minute in-sample observations to build and optimize the trading signal, and then applied it over the span of 200 15-minute out-of-sample observations. This produced the results shown in Figure 5 below. Out of 17 total trades out-of-sample, there were only 3 small losses, each less than a .5 percent drop, and thus 14 gains during the 200 15-minute out-of-sample time period. The beauty of this filter is its impeccable ability to predict the close-to-open jump in the price of the Yen. Over the nearly 7 day trading span, it was able to correctly deduce whether to buy or short-sell before market close on every single trading day change. In the figure below, the four largest close-to-open variations in the Yen price are marked with a “D”, and you can clearly see how well the signal was able to correctly deduce a short-sell before market close. This is also consistent with the in-sample performance, where you can notice the buys and/or short-sells at the largest close-to-open jumps (notice the large gain in the in-sample period right before the out-of-sample period begins, when the Yen jumped over 1 percent overnight). This performance is most likely aided by the explanatory time series I used for helping predict the close-to-open variation in the price of the Yen. In this example, I only used two explanatory series (the price of the Yen, and another closely related to the Yen).

Figure 5: Out-of-sample performance of the Japanese Yen filter on 15 minute log-return data.

We look at the filter transfer functions to see which frequencies are being privileged in the construction of the filter. Notice that some noise leaks out past the frequency cutoff at \pi/9, but this is typically normal and a non-issue. I had to balance both timeliness and smoothness in this filter using the customization parameters \lambda and \alpha. Not much at frequency 0 is emphasized, with more emphasis stemming from the large spectral peak found right at \pi/9.

Figure 6: The filter transfer functions.

British Pound

Frequency: 30 minute returns
14 day out-of-sample ROI: 4 percent
Trade success ratio: 76 percent

British Pound Filter Parameters: \lambda = 5, \alpha = 15, \omega_0 = \pi/9
Regularization: smooth = .109, decay = .165, decay2 = .19, cross = 0

In this example we move the frequency of the data to 30 minute returns and attempt to build a robust trading signal for a derivative of the British Pound (BP) at this higher frequency. Instead of using the cash value of the BP, I use 30 minute returns of the BP futures contract expiring in March (BPH3). Although I don’t have access to tick data from the FOREX, I do have tick data from GLOBEX for the past 5 years. Thus the futures series won’t be an exact replication of the cash price series of the BP, but it should be quite close due to very low interest rates.

The results of the out-of-sample performance of the BP futures filter are shown in Figure 7. I constructed the filter using an initial in-sample size of 390 30-minute returns dating back to 1 December 2012. After pinpointing a frequency cutoff in the frequency domain for the \Gamma that yielded decent trading results in-sample, I then proceeded to optimize the filter in-sample on smoothness and regularization to achieve similar out-of-sample performance. Applying the resulting filter out-of-sample on 168 30-minute log-return observations of the BP futures series along with 3 explanatory series, I get the results shown below. There were 13 trades made and 10 of them were successful. Notice that the filter does an exquisite job of triggering trades near local optima associated with the frequencies inside the cutoff of the filter.

Figure 7: The out-of-sample results of the British Pound using 30-minute return data.

In looking at the coefficients of the filter for each series in the extraction, we can clearly see the effects of the regularization: the smoothness of the coefficients and the fast decay at the very end. Notice that I never really apply any cross regularization to stress the latitudinal likeness between the 3 explanatory series, as I feel this would detract from the predictive advantages brought by the explanatory series that I used.

Figure 8: The coefficients for the 3 explanatory series of the BP futures.

Euro

Frequency: 30 min returns
30 day out-of-sample ROI: 4 percent
Trade success ratio: 71 percent

Euro Filter Parameters: \lambda = 0, \alpha = 6.4, \omega_0 = \pi/9
Regularization: smooth = .85, decay = .27, decay2 = .12, cross = .001

Continuing with the 30 minute frequency of log-returns, in this example I build a trading signal for the Euro futures contract with expiration on 18 March 2013 (UROH3 on the GLOBEX). My in-sample period, the same as in my previous experiment, is from 1 December 2012 to 4 January 2013 on 30 minute returns using three explanatory time series. In this example, after inspecting the periodogram, I decided upon a low-pass filter with a frequency cutoff of \pi/9. After optimizing the customization and applying the filter to one month of 30 minute frequency return data out-of-sample (the month of January 2013, after the cyan line), we see the performance is akin to the performance in-sample, exactly what one strives for. This is due primarily to the heavy regularization of the filter coefficients involved. Only four very small losses of less than .02 percent are suffered during the out-of-sample span that includes 10 successful trades, with the losses only due to the transaction costs. Without transaction costs, there is only one loss suffered, at the very beginning of the out-of-sample period.

Figure 9: Out-of-sample performance on the 30-min log-returns of Euro futures contract UROH3.

As in the first example using hourly returns, this filter again exhibits the desired characteristics of a robust and high-performing financial trading filter. Notice that the out-of-sample performance behaves akin to the in-sample performance, where large upswings and downswings are pinpointed with high accuracy. In fact, these are the periods where the filter performs best. No need for taking advantage of a multibandpass filter here; all the profitable trading frequencies are found at less than \pi/9. Just as with the previous two experiments with the Yen and the British Pound, notice that the filter cleanly predicts the close-to-open variation (jump or drop) in the futures value and buys or sells as needed. This can be seen from many of the large jumps in the out-of-sample period (after the cyan line).

One reason why these trading signals perform so well is their power in approximating the symmetric filter. In comparing the trading signal (green) with a high-order approximation of the symmetric filter (gray line) with transfer function \Gamma shown in Figure 10, we see that the trading signal does an outstanding job of approximating the symmetric filter uniformly. Even at the latest observation (the right-most point), the asymmetric filter hones in on the symmetric signal (gray line) with near perfection. Most importantly, the signal crosses zero almost exactly where required. This is exactly what you want when building a high-performing trading signal.

Figure 10: Plot of approximation of the real-time trading signal for UROH3 with a high order approximation of the symmetric filter transfer function.

In looking at the periodogram of the log-return data and the output trading signal differences (colored in blue), we see that the majority of the frequencies were accounted for, as expected when comparing the signal with the symmetric signal. Only an inconsequential amount of noise leakage past the frequency cutoff of \pi/9 is found. Notice that the larger trading frequencies, the more prominent spectral peaks, are located just after \pi/6. These could be taken into account with a smart multibandpass filter in order to produce even more trades, but I wanted to keep things simple for my first trials with high-frequency foreign exchange data. I’m quite content with the results that I’ve achieved so far.

Figure 11: Comparing the periodogram of the signal with the log-return data.

Conclusion

I must admit, at first I was a bit skeptical of the effectiveness that MDFA would have in building any sort of successful trading signal for FOREX/GLOBEX high-frequency data. I always considered the FOREX market rather ‘efficient’ due to the fact that it receives one of the highest trading volumes in the world. Most strategies that supposedly work well on high-frequency FOREX seem to use some form of technical analysis or charting (techniques I’m not particularly fond of), most of which are purely time-domain based. The direct filter approach is a completely different beast, utilizing a transformation into the frequency domain and a ‘bending and warping’ of the metric space for the filter coefficients to extract a signal within the noise that is the log-return data of financial assets. For MDFA to be very effective at building timely trading signals, the log-returns of the asset need to diverge from white noise a bit, giving room for pinpointing intrinsically important cycles in the data. However, after weeks of experimenting, I have discovered that building financial trading signals using MDFA and iMetrica on FOREX data is as rewarding as any other.

As my confidence has now been bolstered and amplified even more after my experience with building financial trading signals with MDFA and iMetrica for high-frequency data on foreign exchange log-returns at nearly any frequency, I’d be willing to engage in a friendly competition with anyone out there who is certain that they can build better trading strategies using time-domain based methods such as technical analysis or any other statistical arbitrage technique. I strongly believe these frequency-based methods are the way to go, and the new wave in financial trading. But it takes experience and a good eye for the frequency domain and periodograms to get used to. I haven’t seen many trading benchmarks that utilize other types of strategies, but I’m willing to bet that they are not as consistent as these results using this large an out-of-sample to in-sample ratio (the ratios in these experiments were between .50 and .80). If anyone would like to take me up on my offer for a friendly competition (or knows anyone who would), feel free to contact me.

After working with a multitude of different financial time series and building many different types of filters, I have come to the point where I can almost eyeball many of the filter parameter choices, including the most important ones, the extractor \Gamma along with the regularization parameters, without resorting to time-consuming, and many times inconsistent, optimization routines. Thanks to iMetrica, transitioning from visualizing the periodogram to the transfer functions and the filter coefficients, and back to the time domain to compare with the approximate symmetric filter in order to gauge parameter choices, is an easy task, and an imperative one if one wants to build successful trading signals using MDFA.

Here are some overall tips and tricks to build your own high performance trading signals on high-frequency data at home:

  • Pay close attention to the periodogram. This is your best friend in choosing the extractor \Gamma. The best performing signals are not the ones that trade often, but the ones that trade on the most important frequencies found in the data. Not all frequencies are created equal. This is true when building either low-pass or multibandpass filters.
  • When tweaking customization, always begin with \alpha, the parameter for smoothness. \lambda for timeliness should be the last resort. In fact, this parameter will most likely be next to useless due to the fact that the log-return of financial data is stationary. You probably won’t ever need it.
  • You don’t need many explanatory series. Like most things in life, quality is superior to quantity. Using the log-return data of the asset you’re trading along with one, maybe two, explanatory series that somewhat correlate with the financial asset you’re trading is sufficient. Any more than that is overkill, probably leading to over-fitting (even the power of regularization at your fingertips won’t help you).

In my next article, I will continue with even more high-frequency trading strategies with the MDFA and iMetrica where I will engage in the sector of Funds and ETFs. If any curious reader would like even more advice/hints/comments on how to build these trading signals on high-frequency data for the FOREX (or the coefficients built in these examples), feel free to get in contact with me via email. I’ll be happy to help.

Happy extracting!

Realizing the Future with iMetrica and HEAVY Models

In this article we steer away from multivariate direct filtering and signal extraction in financial trading and briefly indulge ourselves in the world of analyzing high-frequency financial data, an always hot topic with the ever-increasing availability of tick data in computationally convenient formats. Not only has high-frequency intraday data been the basis of higher frequency risk monitoring and forecasting, but it also provides access to building ‘smarter’ volatility prediction models using so-called realized measures of intraday volatility. These realized measures have been shown in numerous studies over the past 5 years or so to provide a considerably more robust indicator of daily volatility. While daily returns capture only close-to-close volatility, leaving much to be said about the actual volatility of the asset witnessed during the day, realized measures of volatility using higher frequency data, such as second or minute data, provide a much clearer picture of the open-to-close variation in trading.

In this article, I briefly describe a new type of volatility model that takes these realized measures of volatility movement into account, called High frEquency bAsed VolatilitY (HEAVY) models, developed and pioneered by Shephard and Sheppard (2009). These models take as input both close-to-close daily returns r_t as well as daily realized measures to yield better forecasting dynamics. The models have been shown to be endowed with the ability not only to track momentum in volatility, but also to adjust for mean reversion effects as well as adjust quickly to structural breaks in the level of the volatility process. As the authors state in their original paper, the focus of these models is on predictive properties, rather than on non-parametric measurement of volatility. Furthermore, HEAVY models are much easier and more robust to estimate than single-source equations (GARCH, stochastic volatility), as they bring two sources of volatility information to bear in identifying a longer-term component of volatility.

The goal of this article is three-fold. Firstly, I briefly review these HEAVY models and give some numerical examples of the model in action using a GNU C library and Java package called heavy_model that I developed last year for the iMetrica software. The heavy_model package is available for download (either via this link or by e-mailing me) and features many options that are not available in the MATLAB code provided by Sheppard (bootstrapping methods, Bayesian estimation, track reparameterization, among others). I will then demonstrate the seamless ability to model volatility with these High frEquency bAsed VolatilitY models using iMetrica, where I also provide code for computing realized measures of volatility in Java with the help of an R package called highfrequency (Boudt, Cornelissen, and Payseur 2012).

HEAVY Model Definition

Let’s denote the daily returns as r_1, r_2, \ldots, r_T, where T is the total number of days in the sample we are working with. In the HEAVY model, we supplement the information in the daily returns with a so-called realized measure of intraday volatility based on higher frequency data, such as second, minute or hourly data. The measures are called daily realized measures and we will denote them as RM_1, RM_2, \ldots, RM_T for the total number of days in the sample. We can think of these daily realized measures as an average of variance autocorrelations during a single day. They are supposed to provide a better snapshot of the ‘true’ volatility for a specific day t. Although there are numerous ways of computing a realized measure, the easiest is the realized variance computed as RM_t = \sum_j (X_{t+t_{j,t}} - X_{t+t_{j-1,t}})^2, where t_{j,t} are the normalized times of trades on day t. Other methods for providing realized measures include kernel-based methods, which we will discuss later in this article (see for example http://papers.ssrn.com/sol3/papers.cfm?abstract_id=927483).
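As a quick illustration of the realized variance above, here is a minimal Java sketch for a single day, given that day’s intraday log-prices sampled at some fixed interval; this is a generic computation, not the code of the highfrequency R package or of iMetrica.

// Realized variance for one day: RM_t = sum_j (X_{t_j} - X_{t_{j-1}})^2,
// where logPrices are the intraday log-prices sampled at, say, 1- or 5-minute intervals.
public static double realizedVariance(double[] logPrices) {
    double rm = 0.0;
    for (int j = 1; j < logPrices.length; j++) {
        double diff = logPrices[j] - logPrices[j - 1];
        rm += diff * diff;
    }
    return rm;
}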

Once the realized measures have been computed for T days, the HEAVY model is given by:

Var(r_t | \mathcal{F}_{t-1}^{HF}) = h_t = \omega_1 + \alpha RM_{t-1} + \beta h_{t-1} + \lambda r^2_{t-1}

E(RM_t | \mathcal{F}_{t-1}^{HF}) = \mu_t = \omega_2+ \alpha_R RM_{t-1} + \beta_R \mu_{t-1},

where the stability constraints are \alpha, \omega_1 \geq 0, \beta \in [0,1], and \omega_2, \alpha_R \geq 0, with \lambda + \beta \in [0,1] and \beta_R + \alpha_R \in [0,1]. Here, \mathcal{F}_{t-1}^{HF} denotes the high-frequency information from the previous day t-1. The first equation models the close-to-close conditional variance and is akin to a GARCH-type model, whereas the second equation models the conditional expectation of the open-to-close variation.

With the formulation above, one can easily see that slight variations to the model are perfectly plausible. For example, one could consider additional lags in either the realized measure RM_t (akin to adding additional moving average parameters) or the conditional mean/variance variable (akin to adding autoregression parameters). One could also leave out the dependence on the squared returns r^2_{t-1} by setting \lambda to zero, which is what the original authors recommended. A third variation is adding yet another equation to the pack that models a realized measure that takes into account negative and positive momentum, to yield possibly better forecasts as it tracks both losses and gains in the model. In this case, one would add the third component by introducing a new equation for a realized semivariance to parametrically model statistical leverage effects, where falls in asset prices are associated with increases in future volatility. With realized semivariance computed for the T days as RMS_1, \ldots, RMS_T, the third equation becomes

E(RMS_t | \mathcal{F}_{t-1}^{HF}) = \phi_t = \omega_3 + \alpha_{RS} RMS_{t-1} + \beta_{RS} \phi_{t-1}

where \alpha_{RS} + \beta_{RS} < 1 and both are positive.
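To make the recursions concrete, here is a small Java sketch that filters h_t and \mu_t forward through the two HEAVY equations for given parameter values (with the \lambda term on the lagged squared return included). It is an illustration of the model definition, not the heavy_model library itself.

// Filters the HEAVY recursions through the data:
//   h_t  = w1 + alpha  * RM_{t-1} + beta  * h_{t-1}  + lambda * r_{t-1}^2
//   mu_t = w2 + alphaR * RM_{t-1} + betaR * mu_{t-1}
// r and rm must have the same length T; h[0] and mu[0] are initialized at sample means.
public static double[][] filterHeavy(double[] r, double[] rm,
                                     double w1, double w2, double alpha, double alphaR,
                                     double lambda, double beta, double betaR) {
    int T = r.length;
    double[] h = new double[T];
    double[] mu = new double[T];
    double meanSqR = 0.0, meanRM = 0.0;
    for (int t = 0; t < T; t++) { meanSqR += r[t] * r[t]; meanRM += rm[t]; }
    h[0] = meanSqR / T;
    mu[0] = meanRM / T;
    for (int t = 1; t < T; t++) {
        h[t]  = w1 + alpha  * rm[t - 1] + beta  * h[t - 1]  + lambda * r[t - 1] * r[t - 1];
        mu[t] = w2 + alphaR * rm[t - 1] + betaR * mu[t - 1];
    }
    return new double[][] { h, mu };   // filtered conditional variances and conditional expectations
}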

HEAVY modeling in C and Java

To incorporate these HEAVY models into iMetrica, I began by writing a GNU C library providing a fast and efficient framework for both quasi-likelihood evaluation and a posteriori analysis of the models. The structure of estimating the models follows the original MATLAB code provided by Sheppard very closely. However, in the C library I’ve added a few more useful tools for forecasting and distribution analysis. The Java code is essentially a wrapper for the C heavy_model library that provides a much cleaner approach to modeling and to analyzing the HEAVY output, such as the parameters and forecasts. While there are many ways to declare, implement, and analyze HEAVY models using the C/Java toolkit I provide, the most basic steps involved are as follows.

heavyModel heavy = new heavyModel();                                    // declare the HEAVY model object
heavy.setForecastDimensions(n_forecasts, n_steps);                      // number of forecast samples and forecast steps
heavy.setParameterValues(w1, w2, alpha, alpha_R, lambda, beta, beta_R); // initial parameter values for the optimization
heavy.setTrackReparameter(0);                                           // optional reparameterization toggle (0 = off, 1 = on)
heavy.setData(n_obs, n_series, series);                                 // column-wise data: returns r_t, then realized measures RM_t
heavy.estimateHeavyModel();                                             // estimate the model by quasi-maximum likelihood

The first line declares a HEAVY model in Java, while the second line sets the number of forecast samples to compute and how many forecast steps to take. Forecasted values are provided for both the return variable r_t (using a bootstrapping methodology) and the h_t, \mu_t variables. In the next line, the parameter values for the HEAVY model are initialized. These are the initial points that are utilized in the quasi-maximum likelihood optimization routine and can be set to any values that satisfy the model constraints. Here, w1 = \omega_1 and w2 = \omega_2.

The fourth line is completely optional and is used for toggling (0 = off, 1 = on) a reparameterization of the HEAVY model so that the intercepts of both equations in the HEAVY model are explicitly related to the unconditional mean of the squared returns r^2 and the realized measures RM_t. The reparameterization has the advantage that it eliminates the estimation of \omega_1, \omega_2 and instead uses the unconditional means, leaving two fewer degrees of freedom in the optimization. See page 12 of the Shephard and Sheppard 2009 paper for a detailed explanation of the reparameterization. After setting the initial values, the data is set for the model by inputting the total number of observations T, the number of series (normally set to 2), and the data in column-wise format (namely a double array of length n_obs x n_series, where the first column is the return data r_t and the second column is the daily realized measure data RM_t). Finally, with the data set and the parameters initialized, we estimate the model in the sixth line. Once the model is finished estimating (this should take a few seconds, depending on the number of observations), the heavyModel Java object stores the parameter values, forecasts, model residuals, likelihood values, and more. For example, one can print out the estimated model parameters and plot the forecasts of h_t using the following:


heavy.printModelParameters();
heavy.plotForecasts();
Output:
w_1 = 0.063 w_2 = 0.053
beta = 0.855 beta_R = 0.566
alpha = 0.024 alpha_R = 0.375
lambda = 0.087

Figure 1 shows the plot of the filtered h_t, \mu_t values of AAPL for 300 trading days from June 2011 to June 2012, with the final 20 points being the forecasted values. Notice that the multistep-ahead forecast shows momentum, which is one of the attractive characteristics of HEAVY models, as mentioned in the original paper by Shephard and Sheppard.

Figure 1: Plots of the filtered returns and realized measures with 20 step forecasts for Verizon for 300 trading days.

We can also easily plot the estimated joint distribution function F_{\zeta, \eta} by simply taking the filtered h_t, \mu_t and computing the devolatilized values \zeta_t = r_t / \sqrt{h_t}, \eta_t = (RM_t/\mu_t)^{1/2}, which are the innovations of the model for t = 2, \ldots, T.
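A quick sketch of computing these devolatilized values from the filtered series, following the definitions above (illustrative only):

// Devolatilized innovations: zeta_t = r_t / sqrt(h_t) and eta_t = sqrt(RM_t / mu_t), for t = 2,...,T.
public static double[][] devolatilize(double[] r, double[] rm, double[] h, double[] mu) {
    int T = r.length;
    double[] zeta = new double[T - 1];
    double[] eta = new double[T - 1];
    for (int t = 1; t < T; t++) {
        zeta[t - 1] = r[t] / Math.sqrt(h[t]);
        eta[t - 1] = Math.sqrt(rm[t] / mu[t]);
    }
    return new double[][] { zeta, eta };   // zeta should be roughly a unit-variance martingale difference; eta should have unit mean
}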

Figure 2 below shows the empirical distribution of F_{\zeta, \eta} for 600 days (nearly two years of daily observations of AAPL). The \zeta_t sequence should be roughly a martingale difference sequence with unit variance, and the \eta_t sequence should have unit conditional mean and of course be uncorrelated. The empirical results validate the theoretical values.

Figure 2: Scatter plot of the empirical distribution of devolatilized values for h and mu.

In order to compile and run the heavy_model library and the accompanying Java wrapper, one must first be sure to meet the requirements for installation. The programs were extensively tested on a 64-bit Linux machine running Ubuntu 12.04. The heavy_model library written in C uses the GNU Scientific Library (GSL) for the matrix-vector routines, along with a statistical package in GNU C called apophenia (Klemens, 2012) for the optimization routine. I’ve also included a wrapper for the GSL library called multimin.c, which enables using the optimization routines from the GSL library, but these were not heavily tested. The first version (version 00) of the heavy_model library and Java wrapper can be downloaded at sourceforge.net/projects/highfrequency. As a precautionary warning, I must confess that none of the files are heavily commented, as this is still a project in progress. Improvements in code, efficiency, and documentation will be continuously coming.

After downloading the .tar.gz package, first ensure that GSL and Apophenia are properly installed and that the libraries are on the appropriate path for your GNU C compiler. Second, to compile the .c code, copy the makefile.test file to Makefile and then type make. To compile the heavyModel library and utilize the Java heavyModel wrapper (recommended), copy makefile.lib to Makefile, then type make. After it constructs libheavy.so, compile the heavyModel.java file by typing javac heavyModel.java. Note that the Java files were compiled successfully using the Oracle Java 7 SDK. If you have any questions about this or any of the C or Java files, feel free to contact me. All the files were written by me (except for the optional multimin.c/h files for the optimization), and some of the subroutines (such as the HEAVY model simulation) are based on the MATLAB code by Sheppard. Even though I fully tested and reproduced the results found in other experiments exploring HEAVY models, there still could be bugs in the code. I have not fully tested every aspect (especially the Bayesian estimation components, an ongoing effort), and if anyone would like to add, edit, test, or comment on any of the routines involved in either the C or Java code, I’d be more than happy to welcome it.

HEAVY Modeling in iMetrica

The Java wrapper to the gnu-c heavy_model library was installed in the iMetrica software package and can be used for GUI-style modeling of high-frequency volatility. The HEAVY modeling environment is a feature of the BayesCronos module in iMetrica, which also features other stochastic models for capturing and forecasting volatility such as (E)GARCH, stochastic volatility, multivariate stochastic factor modeling, and ARIMA modeling, all using either standard (Q)MLE model estimation or a Bayesian estimation interface (with histograms showing the MCMC results of the parameter chains).

Modeling volatility with HEAVY models is done by first uploading the data into the BayesCronos module (shown in Figure 3) through the use of either the BayesCronos Menu (featured on the top panel) or by using the Data Control Panel (see my previous article on Data Control).

Figure 3: BayesCronos interface in iMetrica for HEAVY modeling.

In the BayesCronos control panel shown above, we estimate a HEAVY model for the uploaded data (600 observations of r_t, RM_t) that were simulated from a model with omega_1 = 0.05, omega_2 = 0.10, beta = 0.8, beta_R = 0.3, alpha = 0.02, alpha_R = 0.3 (the simulation was done in the Data Control Module).
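
For those who prefer code to screenshots, here is a bare-bones sketch of the HEAVY filtering recursions as I read them from Shephard and Sheppard; which simulated parameter feeds which equation, and the crude initialization, are my own assumptions here:

// sketch of the two HEAVY recursions; the parameter-to-equation assignment is my assumption
public static void heavyFilter(double[] rm, double[] h, double[] mu, double omega1,
      double omega2, double alpha, double alphaR, double beta, double betaR)
{
    int T = rm.length;
    h[0]  = rm[0];    // crude initialization at the first realized measure
    mu[0] = rm[0];
    for(int t = 1; t < T; t++)
    {
        h[t]  = omega1 + alpha*rm[t-1]  + beta*h[t-1];     // conditional variance of r_t
        mu[t] = omega2 + alphaR*rm[t-1] + betaR*mu[t-1];   // conditional mean of RM_t
    }
}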

The model type is selected in the panel under the Model combobox. The number of forecasting steps and forecasting samples (for the r_t variable) are selected in the Forecasting panel. Once those values are set, the model estimates are computed by pressing the “MLE” button in the lower left corner. After the computing is done, all the available plots for analyzing the HEAVY model can be shown by simply clicking the appropriate plotting checkboxes directly below the plotting canvas. This includes up to 5 forecasts, the original data, the filtered h_t, \mu_t values, the residuals/empirical distributions of the returns and realized measures, and the pointwise likelihood evaluations for each observation. To see the estimated parameter values, simply click the “Parameter Values” button in the “Model and Parameters” panel and a pop-up control panel will appear showing the estimated values for all the parameters.

Realized Measures in iMetrica

Figure 4: Computing Realized measures in iMetrica using a convenient realized measure control panel.

Importing and computing realized volatility measures in iMetrica is accomplished by using the control panel shown in Figure 4. With access to high frequency data, one simply types the ticker symbol in the “Choose Instrument” box, sets the starting and ending dates in the standard CCYY-MM-DD format, and then selects the kernel used for assembling the intraday measurements. The Time Scale sets the frequency of the data (seconds, minutes, or hours) and the period scrollbar sets the alignment of the data. The Lags combo box determines the bandwidth of the kernel measuring the volatility. Once all the options have been set, clicking on the “Compute Realized Volatility” button will then produce three data sets for the time period between the start date and end date: 1) the daily log-returns of the asset r_1, \ldots, r_T, 2) the log-price of the asset, and 3) the realized volatility measure RM_1, \ldots, RM_T. Once the Java-R highfrequency routine has finished computing the realized measures, the data sets are automatically available in the Data Control Module of iMetrica. From here, one can annualize the realized measures using the weight adjustments in the Data Control Module (see Figure 5). Once content with the weighting, the data can then be exported to the MDFA module or the BayesCronos module for estimating and forecasting the volatility of GOOG using HEAVY models.

Figure 5: The log-return data (blue) and the (annualized) realized measure data using 5 minute returns (pink) for Google from 1-1-2011 to 6-19-2012.

The realized measure uploading in iMetrica utilizes a fantastic R package for studying and working with high frequency financial data called highfrequency (Boudt, Cornelissen, and Payseur 2012). To handle the analysis of high frequency financial data in Java, I began by writing a Java wrapper to the R functions of the highfrequency package to enable the GUI interaction shown above and download the data into Java and then iMetrica. The Java environment uses a library called RCaller that opens a live R kernel in the Java runtime environment, from which I can call R routines and load the data directly into Java. The initializing sequence looks like this.


caller.getRCode().addRCode("require (Runiversal)");
caller.getRCode().addRCode("require (FinancialInstrument)");
caller.getRCode().addRCode("require (highfrequency)");
caller.getRCode().addRCode("loadInstruments('/HighFreqDataDirectoryHere/Market/instruments.rda')");
caller.getRCode().addRCode("setSymbolLookup.FI('/HighFreqDataDirectoryHere/Market/sec',use_identifier='X.RIC',extension='RData')");

Here, I’m declaring the R packages that I will be using (first three lines) and then declaring where my high frequency financial data symbol lookup directory is on my computer (next two lines). This then enables me to extract high frequency tick data directly into Java. After loading in the desired instrument ticker symbol names, I then proceed to extract the daily log-returns for the given time frame and compute the realized measures of each asset using the rKernelCov function in the highfrequency R package. This looks something like

for(i = 0; i < n_assets; i++)
{
    // restrict each instrument's tick data to market hours only (09:30-16:00 New York time)
    String mark = instrum[i] + "<-" + instrum[i] + "['T09:30/T16:00',]";
    caller.getRCode().addRCode(mark);

    // build the rKernelCov call with the user-defined kernel, lags, alignment, and period
    String rv = "rv"+i+"<-rKernelCov("+instrum[i]+"$Trade.Price,kernel.type ="+kernels[kern]+", kernel.param="+lags+",kernel.dofadj = FALSE, align.by ="+frequency[freq]+", align.period="+period+", cts=TRUE, makeReturns=TRUE)";
    caller.getRCode().addRCode(rv);

    // name the result and store its coredata in an R list that Java will read back
    caller.getRCode().addRCode("names(rv"+i+")<-'rv"+i+"'");
    rvs[i] = "rv_list"+i;
    caller.getRCode().addRCode("rv_list"+i+"<-lapply(as.list(rv"+i+"), coredata)");
}

In the first line, I'm looping through all the asset symbols (I create Java strings to load into the RCaller as commands). The second line effectively retrieves the data during market hours only (America/New_York time), and the next creates a string to call the rKernelCov function in R, passing it all the user-defined parameters as strings. Finally, in the last two lines, I extract the results and put them into an R list from which the Java runtime environment will read.
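
To close the loop, the realized measures then have to be pulled back into the Java side. A minimal sketch of that last step might look like the following; the RCaller methods runAndReturnResult and getParser().getAsDoubleArray are the ones I lean on, but method names can differ between RCaller versions, so treat the exact calls as an assumption:

// sketch only: evaluate the accumulated R code and read the first realized measure into Java
caller.runAndReturnResult(rvs[0]);
double[] realized0 = caller.getParser().getAsDoubleArray("rv0");
// realized0 can now be handed off to the Data Control Module of iMetrica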

Conclusion

In this article I discussed a recently introduced high frequency based volatility model by Shephard and Sheppard and gave an introduction to three different high-performance tools beyond MATLAB and R that I've developed for analyzing these new stochastic models. The heavyModel c/Java package that I made available for download gives a workable start for experimenting, in a fast and efficient framework, with the benefit of using high frequency financial data, and most notably realized measures of volatility, to produce better forecasts. The package will be continuously updated with improvements in documentation, bug fixes, and overall presentation. Finally, the R package highfrequency embedded in Java and then utilized in iMetrica gives a full GUI experience for stochastic modeling of high frequency financial data that is both easy to use and fast.

Happy Extracting and Volatilitizing!

Building a Multi-Bandpass Financial Portfolio

Animation 1: The changing periodogram for different in-sample sizes and selecting an appropriate band-pass component to the multi-bandpass filter.

In my previous article, the third installment of the Frequency Effect trilogy, I introduced the multi-bandpass (MBP) filter design as a practical device for the extraction of signals in financial data that can be used for trading in multiple types of market environments. As depicted through various examples using daily log-returns of Google (GOOG) as my trading vehicle, the MBP demonstrated a promising ability to combine a lowpass filter, which includes a local bias and slow-moving trend, with access to higher trading frequencies for systematic trading during sideways and volatile market trajectories. I identified four different types of market environments and showed through three different examples how one can attempt to pinpoint and trade optimally in these different environments.

After reading a well-written and informative critique of my latest article, I became motivated to continue along on the MBP bandwagon by extending the exploration of engineering robust trading signals using the new design. In Marc's (the reviewer's) words regarding the initial results of this latest design in MDFA signal extraction for financial trading: “I tend to believe that some of the results are not necessarily systematic and that some of the results – Chris’ preference – does not match my own priority. I understand that comparisons across various designs of the triptic may require a fixed empirical framework (Google/Apple on a fixed time span).  But this restricted setting does not allow for more general inference (on other assets and time spans). And some of the critical trades are (at least in my perspective) close to luck.”

As my empirical framework was fixed, in that I applied the designed filters to only one asset throughout the study and to a fixed time span of a year's worth of in-sample data applied to 90 days out-of-sample, results showing the MBP framework applied to other assets and time frames would have made my presentation of this new design more convincing. Taking this relevant issue of a limited empirical framework into account, I am extending my previous article many steps further by presenting the creation of a collection of financial trading signals based entirely on the MBP filter. The purpose of this article is to further solidify the potential of MBP filters and extend applications of the new design to constructing signals for various types of financial assets and in-sample/out-of-sample time frames. To do this I will create a portfolio of assets comprised of a group of well known companies coupled with two commodity ETFs (exchange traded funds) and apply the MBP filter strategy to each of the assets using various out-of-sample time horizons. Consequently, this will generate a portfolio of trading signals that I can track over the next several months.

Portfolio selection

In choosing the assets for my portfolio, I arranged a group of companies/commodities whose products or services I use on a consistent basis (as arbitrary as any other portfolio selection method, right?). To this end, I chose Verizon (VZ) (service provider for my iPhone 5), Microsoft (MSFT) (even though I mostly use Linux for my computing needs), Toyota (TM) (I drive a Camry), Coffee (JO) (my morning espresso keeps the wheels turning), and Gold (GLD) (who doesn't like gold, a great hedge against any currency). For each of these assets, I built a trading signal using various in-sample time periods beginning in the summer of 2011 and ending toward the end of summer 2012, to ensure all seasonal market effects were included. The out-of-sample time period in which I test the performance of the filter for each asset ranges anywhere from 90 days to 125 days out-of-sample. I tried to keep the selection of in-sample and out-of-sample points as arbitrary as possible.

Portfolio Performance

And so here we go. The performance of the portfolio.

Coffee (NYSEARCA:JO)

  • Regularization: smooth = .22, decay = .22, decay2 = .02, cross = 0
  • MBP = [0, .2], [.44,.55]
  • Out-of-sample performance: 32 percent ROI in 110 days

In order to work with commodities in this portfolio, the easiest way is through the use of ETFs that are traded in open markets just as any other asset. I chose the Dow Jones-UBS Coffee Subindex JO, which is intended to reflect the returns that are potentially available through an unleveraged investment in one futures contract on the commodity of coffee as well as the rate of interest that could be earned on cash collateral invested in specified Treasury Bills. To create the MBP filter for the JO index, I used JO and USO (a US Oil ETF) as the explanatory series from 5-5-2011 (just a random date I picked from mid-2011, Cinco de Mayo) until 1-13-2013 and set the initial low-pass portion for the trend component of the MBP filter to [0, .17]. After a significant amount of regularization was applied, I added a bandpass portion to the filter by initializing an interval at [.4, .5]. This corresponded to the principal spectral peak in the periodogram, which was located just below \pi/6 for the coffee fund. After setting the number of out-of-sample observations to 110, I then proceeded to optimize the regularization parameters in-sample while ensuring that the transfer functions of the filter were no greater than 1 at any point in the frequency domain. The result of the filter is plotted below in Figure 1, with the transfer functions of the filters plotted below it. The resulting trading signal from the MBP filter is in green and the out-of-sample portion is after the cyan line, with the cumulative return on investment (ROI) percentage in blue-pink and the daily price of JO, the coffee fund, in gray.
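
To make the mechanics of the green signal and the blue-pink ROI line concrete, here is a bare-bones sketch (my own simplification, not the iMetrica internals) of how a finished set of MDFA coefficients turns the target and explanatory log-returns into a trading signal and a cumulative return, going long when the signal is positive and short when it is negative:

// x[0] = target log-returns (JO), x[1] = explanatory log-returns (USO),
// b[m][j] = filter coefficient of series m at lag j
public static double tradeOnSignal(double[][] x, double[][] b)
{
    int L = b[0].length;                  // filter length
    int T = x[0].length;
    double roi = 0.0;
    double prevSignal = 0.0;
    for(int t = L; t < T; t++)
    {
        double signal = 0.0;
        for(int j = 0; j < L; j++)
        {
            signal += b[0][j]*x[0][t-j] + b[1][j]*x[1][t-j];
        }
        // yesterday's position (sign of yesterday's signal) earns today's target return
        roi += Math.signum(prevSignal) * x[0][t];
        prevSignal = signal;
    }
    return roi;                           // cumulative log-return of the strategy
}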

Figure 1: The MBP filter for JO applied 110 Out-of-sample points (after cyan line).

Figure 2: Transfer function for the JO and USO MBP filters.

Notice the out-of-sample portion of 110 observations behaving akin to the in-sample portion before it, with a .97 rank coefficient of the cumulative ROI resulting from the trades. The ROI in the out-of-sample portion was 32 percent total and suffered only 4 small losses out of 18 trades. The concurrent transfer functions of the MBP filter clearly indicate where the principal spectral peak for JO (blue-ish line) is directly under the bandpass portion of the filter. Notice the signal produced no trades during the steepest descent and rise in the price of coffee, while pinpointing precisely at the right moment the major turning point (right after the in-sample period). This is exactly what you would like the MBP signal to achieve.

Gold (SPDR Gold Trust, NYSEARCA:GLD)

As one of the more difficult assets for forming a well-performing signal both in-sample and out-of-sample using the MBP filter, the GLD (NYSEARCA:GLD) ETF proved to be quite cumbersome, not only in locating an optimal bandpass portion of the MBP, but also in finding a relevant explanatory series for GLD. In the following formulation, I settled upon using a US dollar index given by the PowerShares ETF UUP (NYSEARCA:UUP), as it ended up giving me a very linear performance that is consistent both in-sample and out-of-sample. The parameterization for this filter is given as follows:

  • Regularization: smooth = .22, decay = .22, decay2 = .02, cross = 0
  • MBP = [0, .2], [.44,.55]
  • Out-of-sample performance: 11 percent ROI in 102 days

Figure 3 : Out-of-sample results of the MBP applied to the GLD ETF for 102 observations

Figure 4 : The Transfer Functions for the GLD and DIG filter.

Figure 5: Coefficients for the GLD and DIG filters. Each are of length 76.

The smoothness and decay in the coefficients are quite noticeable, along with a slight lag correlation along the middle of the coefficients between lags 10 and 38. This trio of characteristics in the above three plots is exactly what one strives for in building financial trading signals: 1) the smoothness and decay of the coefficients, 2) the transfer functions of the filter not exceeding 1 in the low and band pass, and 3) linear performance both in-sample and out-of-sample of the trading signal.

Verizon (NYSE:VZ)

  • Regularization: smooth = .22, decay = 0, decay2 = 0, cross = .24
  • MBP = [0, .17], [.58,.68]
  • Out-of-sample performance: 44 percent ROI in 124 days trading

The experience of engineering a trading signal for Verizon was one of the longest and more difficult out of the 5 assets in this portfolio. Strangely, it was a very difficult asset to work with. Nevertheless, I was determined to find something that worked. To begin, I ended up using AAPL as my explanatory series (which isn't a far-fetched idea I would imagine; after all, I utilize Verizon as my carrier service for my iPhone 5). After playing around with the regularization parameters in-sample, I chose a 124 day out-of-sample horizon for Verizon on which to apply the filter and test the performance. Surprisingly, the cross regularization seemed to produce very good results both in-sample and out-of-sample. This was the only asset in the portfolio that required a significant amount of cross regularization, with the parameter touching the vicinity of .24. Another surprise was how high the timeliness parameter \lambda had to be (40) in order to produce good in-sample and out-of-sample trading results, by far the highest amount of the 5 assets in this study. The amount of smoothing from the weighting function $W(\omega; \alpha)$ was also relatively high, reaching a value of 20.

The out-of-sample performance is shown in Figure 6. Notice how dampened the values of the trading signal are in this example, where the local bias during the long upswings is present, but not visible due to the size of the plot. The out-of-sample performance (after the cyan line) seems to be superior to that of the in-sample portion. This is most likely due to the fact that the majority of the frequencies that we were interested in, near \pi/6, failed to become prominent in the data until the out-of-sample portion (there were around 120 trading days not shown in the plot as I only keep a maximum of 250 plotted on the canvas).  With 124 out-of-sample observations, the signal produced a performance of 44 percent ROI. The filter seems to cleanly and consistently pick out local turning points, although not always at their optimal point, but the performance is quite linear, which is exactly what you strive for.

Figure 6: The out-of-sample performance on 124 observations from 7-2012 to 1-13-2013.

Figure 7: Coefficients of lag up to 76 of the Verizon-Apple filter.

In the coefficients for the VZ and AAPL data shown in Figure 7, one can clearly see the distinguishing effects of the cross regularization along with the smooth regularization. Note that no decay regularization was needed in this example, with the resulting number of effective degrees of freedom in the construction of this filter being 48.2, an important number to consider when applying regularization to filter coefficients (the filter length was 76).

Microsoft (NASDAQ:MSFT) 

  • Regularization: smooth = .42, decay = .24, decay2 = .15, cross = 0
  • MBP = [0, .2], [.59,.72]
  • Out-of-sample performance: 31 percent ROI in 90 days trading

In the Microsoft data I used a time span of a year and three months for my in-sample period and a 90 day out-of-sample period from August through 1-13-2013. My explanatory series was GOOG (the search engine Bing and Google seem to have quite the competition going on, so why not), which seemed to correlate rather cleanly with the share price of MSFT. The first step in obtaining a bandpass after setting my lowpass filter to [0, .2] was to locate the principal spectral peak (shown in the periodogram figure below). I then adjusted the width until I had near monotone performance in-sample. Once the customization and regularization parameters were found, I applied the MSFT/GOOG filter to the 90 day out-of-sample period and the result is shown below. Notice that the effect of the local bias and slow moving trends from the lowpass filter is seen in the output trading signal (green) and helps in identifying the long down swings found in the share price. During the long down swings, there are no trades due to the local bias from frequency zero.

Figure 8: Microsoft trading signal for 90 out-of-sample observations. The ROI out-of-sample is 31 percent.

Figure 9: Aggregate periodogram of MSFT and Google showing the principal spectral peak directly inside the bandpass.

Figure 10: The coefficients for the MSFT and GOOG series up to lag 76.

With a healthy amount of regularization applied to the coefficient space, we can clearly see the smoothness and decay towards the end of the coefficient lags. The cross regularization parameter provided no improvement to either in-sample or out-of-sample performance and was left set to 0.

Despite the superb performance of the signal out-of-sample, with a 31 percent ROI in 90 days in a period which saw the share price descend by 10 percent, and relatively smooth, decaying coefficients with consistent performance both in and out-of-sample, I still feel like I could improve on these results with a better explanatory series than GOOG. That is one area of this methodology with which I struggle, namely finding “good” explanatory series to better fortify the in-sample metric space and produce even more anticipation in the signals. At this point it's a game of trial and error. I suppose I should find a good market economist to direct these questions to.

Toyota (NYSE:TM)

  • Regularization: smooth = .90, decay = .14, decay2 = .72, cross = 0
  • MBP = [0, .21], [.49,.67]
  • Out-of-sample performance: 21 percent ROI in 85 days trading

For the Toyota series, I figured my first explanatory series to test things with would be an asset pertaining to the price of oil. So I decided to dig up some research and found that DIG (NYSEARCA:DIG), a ProShares ETF, provides direct exposure to the global price of oil and gas (in fact it is leveraged so it corresponds to twice the daily performance of the Dow Jones U.S. Oil & Gas Index). The out-of-sample performance, with heavy regularization in both smooth and decay, seems quite consistent with the in-sample performance. The signal shows signs of patience during volatile upswings, which is a sign that the local bias and slow moving trend extraction is quietly at work. Otherwise, the gains are consistent with just a few very small losses. At the end of the out-of-sample portion, namely the past several weeks since Black Friday (November 23rd), notice the quick climb in the stock price of Toyota. The signal is easily able to deduce this fast climb and is now showing signs of a slowdown from the recent rise (the signal is approaching the zero crossing, that's how I know). I love what you do for me, Toyota! (If you were living in the US in the 1990s, you'll understand what I'm referring to.)

Figure 11: Out-of-sample performance of the Toyota trading signal on 85 trading days.

Figure 12: Coefficients for the TM and DIG log-return series.

Figure 13: The transfer functions for the TM and DIG filter coefficients.

The coefficients for the TM and DIG series depicted in Figure 12 show the heavy amount of smooth and decay (and decay2) regularization, a trio of parameters that was not easy to pinpoint at first without significant leakage above one in the filter transfer functions (shown in Figure 13). One can see that two major spectral peaks are present under the lowpass portion and another large one in the bandpass portion that accounts for the more frequent trades.

Conclusion

With these trading signals constructed for these five assets, I imagine I have a small but somewhat diverse portfolio, ranging from tech and auto to two popular commodities. I'll be tracking the performance of these trading signals combined as a portfolio over the next few months and will continuously give updates. As the in-sample periods for the construction of these filters ended around the end of last summer and were already applied to out-of-sample periods ranging from 90 to 124 days (roughly one half to one third of the original in-sample period), and with the significant amount of regularization applied, I am quite optimistic that the out-of-sample performance will continue to hold up over the next few months, but of course one can never be too sure of anything when it comes to market behavior. In the worst-case scenario, I can always look into digging through my dynamic adaptive filtering and signal extraction toolkit.

Some general comments as I conclude this article. What I truly enjoy about these trading signals constructed for this portfolio experiment (and robust trading signals in general per my other articles on financial trading) is that when any losses out-of-sample or even in-sample occur, they tend to be extremely small relative to the average size of the gains. That is the sign of a truly robust signal I suppose; that not only does it perform consistently both in-sample and out-of-sample, but also that when losses do arrive, they are quite small. One characteristic that I noticed in all robust and high performing trading signals that I tend to stick with is that no matter what type of extraction definition you are targeting (lowpass, bandpass, or MBP), when an erroneous trade is executed (leading to a loss), the signal will quickly correct itself to minimize the loss. This is why the losses in robust signals tend to be small (look at any of the 5 trading signals produced for the portfolio in this article).  Of course, all these good trading signal characteristics are in addition to the filter characteristics (smooth, slightly decaying coefficients with minimal effective degrees of freedom, transfer functions less than or equal to one everywhere, etc.)

Overall, although I'm quite inspired and optimistic about these results, there is still slight room for improvement in building these MBP filters, especially for low volatility sideways markets (for example, the one occurring in the Toyota stock price in the middle of the plot in Figure 11). In general, this is a difficult type of stock price movement for any type of signal to have success in. With low volatility and no trending movements, the log-returns are basically white noise – there is no pertinent information to extract. The markets are currently efficient and there is nothing you can do about it. Only good luck will win (in that case you're as well off building a signal based on a coin flip). Typically the best you can do in these types of areas is prevent trading altogether with some sort of threshold on the signal, which is an idea I've had in my mind recently but haven't implemented, or make sure any losses are small, which is exactly what my signal achieved in Figure 11 (and which is what any robust signal should do in the first place).

Lastly, if you have a particular financial asset for which you would like to build a trading signal (similar to the examples shown above), I will be happy to take a stab at it using iMetrica (and/or give you pointers in the right direction if you would prefer to pursue the endeavor yourself). Just send me what asset you would like to trade on, and I’ll build the filter and send you the coefficients along with the parameters used. Offer holds for a limited time only!

Happy extracting.

The Frequency Effect Part III: Revelations of Multi-Bandpass Filters and Signal Extraction for Financial Trading

Animation of the out-of-sample performance of one of the multibandpass filters built in this article for the daily returns of the price of Google. The resulting trading signal was extracted and yielded a trading performance near 39 percent ROI during an 80 day out-of-sample period on trading shares of Google.

To conclude the trilogy on this recent voyage through variations on frequency domain configurations and optimizations in financial trading using MDFA and iMetrica, I venture into the world of what I call multi-bandpass filters, which I recently implemented in iMetrica. The motivation for this latest endeavor in highlighting the fundamental importance of the spectral frequency domain in financial trading applications was to gain better control over extracting signals and engineering different trading strategies through many different types of market movement in financial assets. There are typically four basic types of movement a price pattern will take during its fractalesque voyage throughout the duration that an asset is traded on a financial market. These patterns/trajectories include

  1. steady up-trends in share price
  2. low volatility sideways patterns (close to white noise)
  3. highly volatile sideways patterns (usually cyclical)
  4. long downswings/trends in share price.

Using MDFA for signal extraction in financial time series, one typically indicates an a priori trading strategy through the design of the extractor, namely the target function \Gamma(\omega) (see my previous two articles on The Frequency Effect). Designating a lowpass or bandpass filter in the frequency domain will give an indication of what kind of patterns the extracted trading signal will trade on. Traditionally one can set a lowpass with the goal of extracting trends (with the proper amount of timeliness prioritized in the parameterization), or one can opt for a bandpass to extract smaller cyclical events for more systematic trading during volatile periods. But now suppose we could have the best of both worlds at the same time; namely, be profitable in both steady climbs and long tumbles, while at the same time systematically hacking our way through rough sideways volatile territory, making trades at specific frequencies embedded in the share price actions not found in long trends. The answer is through the construction of multi-bandpass filters. Their construction is relatively simple, but as I will demonstrate in this article with many examples, they are a bit more difficult to pinpoint optimally (but it can be done, and the results are beautiful… both aesthetically and financially).

With the multi-bandpass defined as two separate bands given by A := 1_{[\omega_0, \omega_1]} and B := 1_{[\omega_2, \omega_3]}, with 0 \leq \omega_0 and \omega_1 < \omega_2, and zero everywhere else, it is easy to see that the motivation here is to seek a detection of both lower frequencies and low-mid frequencies in the data concurrently. With now up to four cutoff frequencies to choose from, this adds yet another few wrinkles to the degrees of freedom in parameterizing the MDFA setup. If choosing and optimizing one cutoff frequency for a simple low-pass filter in addition to customization and regularization parameters wasn't enough, now imagine extracting signals with the addition of up to three more cutoff frequencies. Despite these additional degrees of freedom in frequency interval selection, I will later give a couple of useful hacks that I've found helpful to get one started down the right path toward successful extraction.

With this multi-bandpass definition of \Gamma comes the responsibility to ensure that the customization of smoothness and timeliness is adjusted for the additional passband. The smoothing function W(\omega; \alpha) for \alpha \geq 0 that acts on the periodogram (or discrete Fourier transforms in multivariate mode) is now defined piecewise according to the different intervals [0,\omega_0], [\omega_1, \omega_2], and [\omega_3, \pi]. For example, \alpha = 20 gives a piecewise quadratic weighting function (an example is shown in Figure 1) and for \alpha = 10, the weighting function is piecewise linear. In practice, the piecewise power function smooths and gets rid of unwanted frequencies in the stop bands much better than a piecewise constant function. With these preliminaries defined, we now move on to the first steps in building and applying multi-bandpass filters.
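
As a stripped-down illustration of these two ingredients (my own simplification, not the iMetrica code; in particular, the exact functional form of the weighting used in MDFA differs in its details), the multi-bandpass target and a piecewise power weighting can be sketched as:

// multi-bandpass target: 1 on [w0,w1] and [w2,w3], 0 everywhere else
public static double gamma(double w, double w0, double w1, double w2, double w3)
{
    return ((w >= w0 && w <= w1) || (w >= w2 && w <= w3)) ? 1.0 : 0.0;
}

// illustrative piecewise power weighting: flat in the pass bands, growing like a power
// of the distance to the nearest cutoff in the stop bands (alpha = 10 linear, alpha = 20 quadratic)
public static double weight(double w, double w0, double w1, double w2, double w3, double alpha)
{
    if(gamma(w, w0, w1, w2, w3) == 1.0) return 1.0;
    double p = alpha/10.0;
    double dist;
    if(w < w0)       dist = w0 - w;                       // below the lowpass band
    else if(w < w2)  dist = Math.min(w - w1, w2 - w);     // between the two bands
    else             dist = w - w3;                       // above the upper band
    return 1.0 + Math.pow(dist, p);
}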

Figure 1: Plot of the piecewise smoothing function for alpha = 15 on a multi-bandpass filter.

To motivate this newly customized approach to building financial trading signals, I begin with a simple example where I build a trading signal for the daily share price of Google. We begin with a simple lowpass filter defined by \Gamma(\omega) = 1 if \omega \in [0,.17], and 0 otherwise. This formulation, as it includes the zero frequency, should provide a local bias as well as extract very slow moving trends. The trick with these filters for building consistent trading performance is to ensure a proper grip on the timeliness characteristics of the filter in a very low and narrow passband. Regularization and smoothness using the weighting function shouldn't be too much of a problem or priority, as typically only a small fraction of the available degrees of freedom in the frequency domain is being utilized, so there is not much concern for overfitting as long as you're not using too long a filter. In my example, I maxed out the timeliness \lambda parameter and set the \lambda_{smooth} regularization parameter to .3. Fortunately, no optimization of any parameter was needed in this example, as the performance was spiffy enough nearly right after gauging the timeliness parameter \lambda. Figure 2 shows the resulting extracted trend trading signal in both the in-sample portion (left of the cyan colored line) and applied to 80 out-of-sample points (right of the cyan line, the most recent 80 daily returns of Google, namely 9-29-12 through today, 1-10-13). The blue-pink line shows the progression of the trading account, in return-on-investment percentage. The out-of-sample gains on the trades made were 22 percent ROI during the 80 day period.

Figure 2: The in-sample and out-of-sample gains made by constructing a low-pass filter employing a very high timeliness parameter and small amount of regularization in smoothness. The out-of-sample gains are nearly 30 percent and no losses on any trades.

Although not perfect, the trading signal produces a monotonic performance both in-sample and out-of-sample, which is exactly what you strive for when building these trend signals for trading. The performance out-of-sample is also highly consistent (in regards to trading frequency and no losses on any trades) with the in-sample performance. With only 4 trades being made, they were done at very interesting points in the trajectory of the Google share price. Firstly, notice that the local bias in the largest upswing is accounted for due to the inclusion of frequency zero in the low pass filter. This (positive) local bias continues out-of-sample until, interestingly enough, two days before one of the largest losses in the share price of Google over the past couple of years. A slightly earlier exit out of this long position (optimally at the peak a few days before the downturn) would have been more strategic; perhaps further tweaking of various parameters would have achieved this, but I'm happy with it for now. The long position resumes a few days after the dust settles from the major loss, and the local bias in the signal helps once again (after trade 2). The next few weeks see shorter downtrending cyclical effects, and the signal fortunately turns positive right before another major turning point for an upswing in the share price. Finally, the third transaction ends the long position at another peak (3), perfect timing. The fourth transaction (no loss or gain) was quickly activated after the signal saw another upturn, and thus is now in the long position (hint: Google trending upward). Figure 3 shows the transfer functions \hat{\Gamma} for both sets of explanatory log-return data and Figure 4 depicts the coefficients for the filter. Notice that in the coefficients plot, much more weight is being assigned to past values of the log-return data, with extremes (min and max values) at around lags 15 and 30 for the GOOG coefficients (blue-ish line). The coefficients are also quite smooth due to the slight amount of smooth regularization imposed.

Figure 3: Transfer functions for the concurrent trend filter applied to GOOG.

Figure 4: The filter coefficients for the log-return data.

Now suppose we wish to extract a trading signal that performs like a trend signal during long sweeping upswings or downswings, and at the same time shares the property that it extracts smaller cyclical swings during a sideways or highly volatile period. This type of signal would be endowed with the advantage that we could engage in a long position during upswings, trade systematically during sideways and volatile times, and by the same token avoid aggressive, long-winded downturns in the price. Financial trading can't get more optimistic than that, right? Here is where the magic of the multi-bandpass comes in. I give my general "how-to" guidelines in the following paragraphs as a step-by-step approach. As a forewarning, these signals are not easy to build, but with some clever optimization and patience they can be done.

In this new formulation, I envision not only being able to extract a local bias embedded in the log-return data but also gain information on other important frequencies to trade on while in sideways markets. To do this, I set up the lowpass filter as I did earlier on [0,\omega_0]. The choice of \omega_0 is highly dependent on the data and should be located through a priori investigations (as I did above, without the additional bandpass).

Animation 2: Example of constructing a multi-bandpass filter using the Target Filter control panel in iMetrica. Initially, a low-pass filter is set, then the additional bandpass is added by clicking the "Multi-Pass" checkbox. The location is then moved to the desired place using the scrollbars. The new filters are computed automatically if "Auto" is checked on (lower left corner).

Before setting any parameterization regarding customization, regularization, or filter constraints, I perform a quick scan of the periodogram (the averaged periodogram if in multivariate mode) to locate what I call the principal trading frequencies in the data. In the averaged periodogram, these frequencies are located at the largest spectral peaks, with the most useful ones for our purposes of financial trading typically found before \pi/4. The largest of these peaks will be defined from here on out as the principal spectral peak (PSP). Figure 5 shows an example of an averaged periodogram of the log-returns of GOOG and AAPL with the PSP indicated. You might note that there exists a much larger spectral peak located at 7\pi/12, but no need to worry about that one (unless you really enjoy transaction costs). I locate this PSP as a starting point for where I want my signal to trade.
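
For the programmatically inclined, here is a sketch of the idea (not the iMetrica implementation) of scanning the averaged periodogram of the log-return series below \pi/4 for its largest ordinate:

// sketch: average the periodograms of the log-return series and return the frequency of
// the largest spectral peak below pi/4 (demeaning of the series is omitted for brevity)
public static double principalSpectralPeak(double[][] x)
{
    int T = x[0].length;
    int K = T/2;
    double peakFreq = 0.0, peakVal = 0.0;
    for(int k = 1; k <= K; k++)
    {
        double w = Math.PI*k/K;
        if(w > Math.PI/4.0) break;
        double avg = 0.0;
        for(double[] series : x)
        {
            double re = 0.0, im = 0.0;
            for(int t = 0; t < T; t++)
            {
                re += series[t]*Math.cos(w*t);
                im -= series[t]*Math.sin(w*t);
            }
            avg += (re*re + im*im)/T;      // periodogram ordinate at frequency w
        }
        avg /= x.length;
        if(avg > peakVal) { peakVal = avg; peakFreq = w; }
    }
    return peakFreq;
}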

Figure 5: Principal spectral peak in the log-return data of GOOG and AAPL.

In the next step, I place a bandpass of width around .15 so that the PSP is dead-centered in the bandpass. Fortunately, with iMetrica this is a seamlessly simple task, with just the use of a scrollbar to slide the positioning of this bandpass (and also adjust the lowpass) to where I desire. Animation 2 above (click on it to see the animation) shows this process of setting a multi-passband in the MDFA Target Filter control panel. Notice that as I move the controls for the location of the bandpass, the filter is automatically recomputed and I can see the changes in the frequency response functions \hat{\Gamma} instantaneously.

With the bandpass set along with the lowpass, we can now view how the in-sample performance behaves at the initial configuration. Slightly tweaking the location of the bandpass might be necessary (the width not so much; in my experience between .15 and .20 is sufficient). The next step in this approach is not only to adjust the location of the bandpass while keeping the PSP somewhat centered, but also to add the effects of regularization to the filter. With this additional bandpass, the filter has a tendency to succumb to overfitting if one is not careful enough.

In my first filter construction attempt, I placed my bandpass at [.49,.65] with the PSP directly under it. I then optimized the regularization controls in-sample (a feature I haven’t discussed yet) and slightly tweaked the timeliness parameter (ended up setting it to 3) and my result (drumroll…)  is shown in Figure 6.

Figure 6: The trading performance and signal for the initial attempt at building a multi-bandpass filter.

Not bad for a first attempt. I was actually surprised at how few trades there were out-of-sample. Although there are no losses during the 80 days out-of-sample (after the cyan line), and the signal is sort of what I had in mind a priori, the trades are minimal and yield no trading action during the period right after the large loss in Google, when the market was going sideways and highly volatile. Notice that the trend signal gained from the lowpass filter indeed did its job by providing the local bias during the large upswing and then selling directly at the peak (first magenta dotted line after the cyan line). There are small transactions (gains) directly after this point, but still not enough during the sideways market after the drop. I needed to find a way to tweak the parameters and/or cutoffs to include higher frequencies in the transactions.

In my second attempt, I kept the regularization parameters as they were but this time increased the bandpass to the interval [.51, .68], with the PSP still underneath the bandpass, but now catching on to a few more higher frequencies than before. I also slightly increased the length of the filter to see if that had any effect. After optimizing on the timeliness parameter \lambda in-sample, I get a much improved signal. Figure 7 shows this second attempt.

Figure 7: The trading performance and signal for the second attempt at constructing a multi-bandpass filter. This one included a few more higher frequencies.

Upon inspection, this signal behaves more consistently with what I had in mind. Notice that directly out-of-sample, during the long upswing, the signal (barely) shows signs of the local bias, but fortunately enough not to make any trades. However, in this signal, we see that the filter is much too late in detecting the huge loss posted by Google, and instead sells immediately after (still a profit however). Then during the volatile sideways market, we see more of what we were wishing for: timely trades that earn the signal a quick 9 percent in the span of a couple of weeks. Then the local bias kicks in again and we see not another trade posted during this short upswing, taking advantage of the local trend. This signal earned a near 22 percent ROI during the 80 day out-of-sample trading period, though not as good as the previous signal at 32 percent ROI.

Now my priority was to find another tweak that I could perform to change the trading structure even more. I'd like it to be even more sensitive to quick downturns, but at the same time keep intact the sideways trading from the signal in Figure 7. My immediate intuition was to turn on the i2 filter constraint and optimize the time-shift, similar to what I did in my previous article, part deux of the Frequency Effect. I also lessened the amount of smoothing from my weighting function W(\omega; \alpha), turned off any amount of decay regularization that I had, and voilà, my final result is in Figure 8.

Figure 8: Third attempt at building a multiband pass filter. Here, I turn on i2 filter constraint and optimize the time shift.

While the consistency of the in-sample performance with the out-of-sample performance is somewhat less than in my previous attempts, out-of-sample the signal performs nearly exactly how I envisioned. There are only two small losses of less than 1 percent each, and the timeliness in choosing when to sell at the tip of the peak in the share price of Google couldn't have been better. There is systematic trading governed by the added multi-bandpass filter during the sideways period and the slight upswing toward the end. Some of the trades are made later than what would be optimal (the green lines enter a long position, the magenta lines sell and enter a short position), but for the most part, they are quite consistent. It's also very quick in pinpointing its own erroneous trades (namely, no huge losses in-sample or out-of-sample). There you have it, a near monotonic performance out-of-sample with 39 percent ROI.

In examining the coefficients of this filter in Figure 9, we see characteristics of a trend filter, as the coefficients largely weight the middle lags much more than the initial or end lags (note that no decay regularization was added to this filter, only smoothness). At the same time, however, the coefficients also weight the most recent log-return observations, unlike the trend filter from Figure 4, in order to extract signals for the more volatile areas. The undulating patterns also assist in obtaining good performance in the cyclical regions.

Figure 9: The coefficients of the final filter depicting characteristics of both a trend and bandpass filter, as expected.

Finally, the frequency response functions of the concurrent filters show the effect of including the PSP in the bandpass (Figure 10). Notice that the largest peak in the bandpass function is found directly at the frequency of the PSP, ahh the PSP. I need to study this frequency with more examples to get a clearer picture of what it means. In the meantime, this is the strategy that I would propose. If you have any questions about any of this, feel free to email me. Until next time, happy extracting!

Figure 10: The frequency response functions of the multi-bandpass filter.

The Frequency Effect Part Deux: Shifting Time at Frequency Zero For Better Trading Performance

Animation 1: The out-of-sample performance over 60 trading days of a signal built using an optimized time-shift criterion. With 5 trades and 4 successful, the ROI is nearly 40 percent over 3 months.

What is an optimized time-shift? Is it important to use when building successful financial trading signals? While the theoretical aspects of frequency zero and the vanishing time-shift can be discussed in a very formal and mathematical manner, I hope to answer these questions in a more simple (and applicable) way in this article. To do this, I will give an informative and illustrated real world example in this unforeseen continuation of my previous article on the frequency effect a few days ago. I discovered something quite interesting after I got an e-mail from Herr Doktor Marc (Wildi) that nudged me even further into my circus of investigations in carving out optimal frequency intervals for financial trading (see his blog for the exact email and response). So I thought about it, and soon after I sent my response to Marc, I began to question a few things even further at 3am while sipping on some Asian raspberry white tea (my sleeping patterns lately have been as erratic as fiscal cliff negotiations), and came up with an idea. Firstly, there has to be a way to include information about the zero frequency (this wasn't included in my previous article on optimal frequency selection). Secondly, if I'm seeing promising results using a narrow band-pass approach after optimizing the location and distance, is there any way to still incorporate the zero frequency and maybe improve results even more with this additional frequency information?

Frequency zero is an important frequency in the world of nonstationary time series and model-based time series methodologies, as it deals with the topics of unit roots, integrated processes, and (for multivariate data) cointegration. Fortunately for you (and me), I don't need to delve further into this mess of a topic that is cointegration, since typically the type of data we want to deal with in financial trading (log-returns) is closer to being stationary (namely close to being white noise, ehem, again, close, but not quite). Nonetheless, a typical sequence of log-return data over time is never zero-mean, and it is full of interesting turning points at certain frequency bands. In essence, we'd somehow like to take advantage of that and perhaps better locate local turning points intrinsic to the optimal trading frequency range we are dealing with.

The perfect way to do this is through the use of the time-shift value of the filter. The time-shift is defined by the derivative of the frequency response (or transfer) function at zero. Suppose we have an optimal bandpass set at (\omega_0, \omega_1) \subset [0,\pi] where \omega_0 > 0. We can introduce a constraint on the filter coefficients so as to impose a vanishing time-shift at frequency zero. As Wildi says on page 24 of the Elements paper: “A vanishing time-shift is highly desirable because turning-points in the filtered series are concomitant with turning-points in the original data.” In fact, we can take this a step further and even impose an arbitrary time-shift with the value s at frequency zero, where s is any real number. In this case, the derivative of the frequency response function (transfer function) \hat{\Gamma}(\omega) at zero is s. As explained on page 25 of Elements, this is implemented as \frac{d}{d \omega} \big|_{\omega=0} \sum_{j=0}^{L-1} b_j \exp(-i j \omega) = s, which implies b_1 + 2b_2 + \cdots + (L-1) b_{L-1} = s.
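
Spelling out the step from the derivative to the linear constraint (my own shorthand derivation, under the usual i1 normalization \hat{\Gamma}(0) = \sum_j b_j = 1): near frequency zero, \hat{\Gamma}(\omega) = \sum_{j=0}^{L-1} b_j e^{-i j \omega} \approx 1 - i \omega \sum_{j=0}^{L-1} j b_j \approx \exp(-i \omega \sum_{j=0}^{L-1} j b_j), so the phase of \hat{\Gamma} grows like \omega \sum_j j b_j and the time-shift at frequency zero is \sum_j j b_j = b_1 + 2 b_2 + \cdots + (L-1) b_{L-1}. Imposing the value s therefore gives exactly the linear constraint above.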

This constraint can be integrated into the MDFA formulation, but it then of course adds another parameter to an already full flight of parameters. Furthermore, the search for the optimal s with respect to a given financial trading criterion is tricky and takes some hefty computational assistance by a robust (highly nonlinear) optimization routine, but it can be done. In iMetrica I've implemented a time-shift turning point optimizer, something that works well so far for my taste buds, but it carries a large burden of computational time.

To illustrate this methodology in a real financial trading application, I return to the same example I used in my previous article, namely using daily log-returns of GOOG and AAPL from 6-3-2011 to 12-31-2012 to build a trading signal. This time, to freshen things up a bit, I'm going to target and trade shares of Apple Inc. instead of Google. Before I begin, I will swiftly go through the basic steps of building trading signals. If you're already familiar, feel free to skip down two paragraphs.

As I've mentioned in the past, fundamentally the most important step in building a successful and robust trading signal is choosing an appropriate preliminary in-sample metric space in which the filter coefficients for the signal are computed. This preliminary in-sample metric space represents by far the most critical aspect of building a successful trading signal and is built using the following ingredients:

  • The target and explanatory series (i.e. minute, hourly, daily log-returns of financial assets)
  • The time span of in-sample observations (i.e. 6 hours, 20 days, 168 days, 3 years, etc.)

Choosing the appropriate preliminary in-sample metric space is beyond the scope of this article, but it will certainly be discussed in a future article. Once this in-sample metric space has been chosen, one can then proceed by choosing the optimal extractor (the frequency bandpass interval) for the metric space. While selecting the optimal extractor, one must begin warping and bending the preliminary metric space through the use of the various customization and regularization tools (see my previous Frequency Effect article, as well as Marc's Elements paper, for an in-depth look at the mathematics of regularization and customization). These are the principal steps.

Now let's look at an example. In the .gif animation at the top of this article, I featured a signal that I built using this time-shift optimizer and a frequency bandpass extractor heavily centered around the frequency \pi/12, which is not a very frequent trading frequency, but it has its benefits, as we'll see. The preliminary metric space was constructed from an in-sample period using the daily log-returns of GOOG and AAPL, with AAPL as my target, from 6-4-2011 to 9-25-2012, nearly 16 months of data. The in-sample period thus includes many important news events from Apple Inc., such as the announcement of the iPad mini, the iPhone 4S and 5, and the sad passing of Steve Jobs. I then proceeded to bend the preliminary metric space with a heavy dosage of regularization, but only a tablespoon of customization¹. Finally, I set the time-shift constraint and applied my optimization routine in iMetrica to find the value s that yields the best possible turning-point detector for the in-sample metric space. The result is shown in Figure 1 below in the slide-show. The in-sample signal from the last 12 months or so (no out-of-sample yet applied) is plotted in green, and since I have future data available (more than 60 trading days' worth from 9-25 to present), I can also approximate the target symmetric filter (the theoretically optimal target signal) in order to compare things (a quite useful option available with the click of a button in iMetrica, I might add). I do this so I can have a good barometer of over-fitting and concurrent filter robustness at the most recent in-sample observation. In Figure 1 in the slide-show below, the trading signal is in green, the AAPL log-return data in red, and the approximated target signal in gray (recall that if you can approximate this target signal arbitrarily well, you win, big).

Notice that at the very endpoint (the most challenging point to achieve greatness) of the signal in Figure 1, the filter does a very fine job at getting extremely close. In fact, since the theoretical target signal is only a Fourier approximation of order 60, my concurrent signal that I built might even be closer to the ‘true value’, who knows. Achieving exact replication of the target signal (gray) elsewhere is a little less critical in my experience. All that really matters is that it is close in moving above and below zero to the targeted intention (the symmetric filter) and close at the most recent in-sample observation. Figure 2 above shows the signal without the time-shift constraint and optimization. You might be inclined to say that there is no real big difference. In fact, the signal with no time-shift constraint looks even better. It’s hard to make such a conclusion in-sample, but now here is where things get interesting.

We apply the filter to the out-of-sample data, namely the 60 trading days. Figure 3 shows the out-of-sample performance over these past 60 trading days, roughly October, November, and December (12-31-2012 was the latest trading day), of the signal without the time-shift constraint. Compare that to Figure 4, which depicts the performance with the constraint and optimization. It is hard to tell a difference, but let’s look closer at the vertical lines. These lines can be easily plotted in iMetrica using the plot button below the canvas named Buy Indicators. The green line represents where a long position begins (we buy shares) and a short position is exited. The magenta line represents where the shares are sold and a short position is entered. These lines, in other words, are the turning-point detection lines: they determine where one buys/sells (enters into a long/short position). Compare the two figures in the out-of-sample portion after the light cyan line (indicated in Figure 4 but not Figure 3, sorry).
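
To make the buy/sell rule concrete, here is a minimal Java sketch of how such turning-point lines can be derived from a trading signal: a zero-crossing from below marks a green (buy) line and a crossing from above marks a magenta (sell) line. The class and method names are purely illustrative and not part of TWS-iMetrica.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of the turning-point (buy/sell) line logic described above:
// a long position is entered when the trading signal crosses zero from below,
// and a short position when it crosses zero from above. Purely illustrative.
public final class TurningPoints {

    public enum Action { BUY, SELL }

    public record Marker(int index, Action action) {}

    public static List<Marker> detect(double[] signal) {
        List<Marker> markers = new ArrayList<>();
        for (int t = 1; t < signal.length; t++) {
            if (signal[t - 1] <= 0 && signal[t] > 0) {
                markers.add(new Marker(t, Action.BUY));   // green line: go long / exit short
            } else if (signal[t - 1] >= 0 && signal[t] < 0) {
                markers.add(new Marker(t, Action.SELL));  // magenta line: go short / exit long
            }
        }
        return markers;
    }
}
```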

Figure 3: Out-of-sample performance of the signal built without the time-shift constraint. The out-of-sample period begins at the light cyan line shown in Figure 4 below.

Figure 4: Out-of-sample performance of the signal built with the time-shift constraint and optimized for turning-point detection. The out-of-sample period begins at the light cyan line.

Notice how the optimized time-shift constraint in the trading signal in Figure 4 pinpoints almost perfectly where the turning points are (specifically at points 3, 4, and 5). The local minimum turning point was detected exactly at 3, and nearly exactly at 4 and 5. The only loss out of the 5 trades occurred at 2, but this was more the fault of the long, unexpected fall in the share price of Apple in October. Fortunately we were able to make up for those losses (and then some) at the next trade, entered exactly at the moment a big turning point came (3). Compare this to the signal without the optimized time-shift constraint (Figure 3), where the second and third turning points are detected a bit too late and too early, respectively. And remember, this performance is all out-of-sample: no adjustments to the filter have been made, nothing adaptive. To see even more clearly how the two signals compare, here are the gains and losses of the 5 actual trades performed out-of-sample (all numbers are percentage gains and losses in the trading account governed only by the signal; a positive number is a gain, a negative number a loss):

                          Without Time-Shift Optimization      With Time-Shift Optimization

Trade 1:                       29.1 -> 38.7 =  9.6                  14.1 -> 22.3 =  8.2
Trade 2:                       38.7 -> 32.0 = -6.7                  22.3 -> 17.1 = -5.2
Trade 3:                       32.0 -> 40.7 =  8.7                  17.1 -> 30.5 = 13.4
Trade 4:                       40.7 -> 48.2 =  7.5                  30.5 -> 41.2 = 10.7
Trade 5:                       48.2 -> 60.2 = 12.0                  41.2 -> 53.2 = 12.0

The optimized time-shift signal is clearly better, with an ROI of nearly 40 percent in 3 months of trading. Compare this to roughly 30 percent ROI in the non-constrained signal. I’ll take the optimized time-shift constrained signal any day. I can sleep at night with this type of trading signal. Notice that this trading was applied over a period in which Apple Inc. lost nearly 20 percent of its share price.

Another nice aspect of the trading frequency interval I used is that trading costs aren’t much of an issue, since only 10 transactions (2 transactions per trade) were made over the span of 3 months; I nonetheless set the cost to .01 percent per transaction.
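
For reference, here is a hedged Java sketch of how such account percentages could be tallied from the log-returns, the trading signal, and a per-transaction cost of .01 percent; the names are illustrative rather than the actual iMetrica accounting code, and a position reversal is counted as two transactions.

```java
// A hedged sketch of how the account percentages in the table above might be
// tallied: hold a +1/-1 position given by the sign of the trading signal,
// accumulate the asset's log-returns with that sign, and charge a per-transaction
// cost (e.g. 0.0001 for .01 percent) each time a share transaction occurs.
public final class AccountCurve {

    public static double[] cumulativeGainPercent(double[] logReturns, double[] signal,
                                                 double costPerTransaction) {
        double[] wealth = new double[logReturns.length];
        double cum = 0.0;
        int position = 0; // -1 short, 0 flat, +1 long
        for (int t = 1; t < logReturns.length; t++) {
            int newPosition = signal[t - 1] > 0 ? 1 : -1;                    // act on the previous signal value
            if (newPosition != position) {
                cum -= costPerTransaction * Math.abs(newPosition - position); // 1 or 2 transactions
                position = newPosition;
            }
            cum += position * logReturns[t];                                  // log-return earned by the position
            wealth[t] = 100.0 * cum;                                          // expressed in percent
        }
        return wealth;
    }
}
```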

To dig a bit deeper into plausible reasons why the optimization of the time-shift constraint matters (if only a little bit), let’s take a look at the plots of the coefficients of each respective filter. Figure 5 depicts the filter coefficients with the optimized time-shift constraint, and Figure 6 shows the coefficients without it. Notice how, in the filter for the AAPL log-return data (blue-tinted line), the optimized filter puts much more weight on the latest observation while modifying the others only slowly. In the non-optimized time-shift filter, the most recent observation carries much less importance and, in fact, a larger lag is weighted more heavily. For timely turning-point detection, this is (probably) not a good thing. Another interesting observation is that the optimized time-shift filter completely disregards the latest observation in the log-return data of GOOG (purplish line) when determining the turning points. Maybe a “better” financial asset could be used for trading AAPL? Hmmm… well, in any case I’m quite ecstatic with these results so far. I just need to hack my way into writing a better time-shift optimization routine; it’s a bit slow at this point. Until next time, happy extracting. And feel free to contact me with any questions.
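
For readers wondering what the plotted coefficients actually do, here is a minimal Java sketch (illustrative names, not the iMetrica internals) of how a set of concurrent multivariate filter coefficients, such as those shown in Figures 5 and 6, produces the real-time trading signal.

```java
// A minimal sketch of what the plotted coefficients are used for: the concurrent
// (one-sided) multivariate filter output y_t = sum_i sum_{j=0..L-1} b[i][j] * x_i(t-j),
// where i runs over the explanatory series (here the AAPL and GOOG log-returns).
public final class ConcurrentFilter {

    /**
     * @param b filter coefficients, b[i][j] = weight of series i at lag j
     * @param x data, x[i][t] = observation of series i at time t
     * @return the real-time trading signal, defined from t = L-1 onward
     */
    public static double[] apply(double[][] b, double[][] x) {
        int L = b[0].length;
        int n = x[0].length;
        double[] y = new double[n];
        for (int t = L - 1; t < n; t++) {
            double s = 0.0;
            for (int i = 0; i < b.length; i++) {
                for (int j = 0; j < L; j++) {
                    s += b[i][j] * x[i][t - j];
                }
            }
            y[t] = s;
        }
        return y;
    }
}
```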

Figure 5: The filter coefficients with time-shift optimization.

Figure 6: The filter coefficients without the time-shift optimization.

¹ I won’t disclose quite yet how I found these optimal parameters and frequency interval or reveal what they are as I need to keep some sort of competitive advantage as I presently look for consulting opportunities 😉 .

Hierarchy of Financial Trading Parameters

Figure 1: A trading signal produced in iMetrica for the daily price index of GOOG (Google) using the log-returns of GOOG and AAPL (Apple) as the explanatory data. The blue-pink line represents the account wealth over time, with an 89 percent return on investment in 16 months’ time (GOOG recorded a 23 percent return during this period). The green line represents the trading signal built with the MDFA module using the hierarchy of parameters described in this article. The gray line is the log price of GOOG from June 6, 2011 to November 16, 2012.

In any computational method for constructing binary buy/sell signals for trading financial assets, a plethora of parameters is invariably involved and must be taken into consideration when computing and testing the signals in-sample for their effectiveness and performance. As traders and trading institutions typically rely on different financial priorities for navigating their positions, such as risk/reward priorities, minimizing trading costs/trading frequency, or maximizing return on investment, a robust set of parameters for adjusting and meeting the criteria of any of these financial aims is needed. The parameters must make clear how and why adjusting them will steer the trading signal toward the goals in mind. It is my strong belief that any computational paradigm that fails to do so should not be considered a candidate for a transparent, robust, and complete method for trading financial assets.

In this article, we give an in-depth look at the hierarchy of financial trading parameters involved in building financial trading signals using the powerful and versatile real-time multivariate direct filtering approach (MDFA, Wildi 2006, 2008, 2012), the principal method used in the financial trading interface of iMetrica. Our aim is to clearly identify the characteristics of each parameter involved in constructing trading signals using the MDFA module in iMetrica, as well as what effects (if any) the parameter has on building trading signals and on their performance.

With the many different parameters at one’s disposal for computing a signal for virtually any type of financial data and any financial priority profile, there naturally exists a hierarchy associated with these parameters, all of which have well-defined mathematical definitions and properties. We propose a categorization of these parameters into three levels according to the clarity of their effect on building robust trading signals. Below are the four main control panels used in the MDFA module for the Financial Trading Interface (shown in Figure 1). They will be referenced throughout the remainder of this article.

Figure 2: The interface for controlling many of the parameters involved in MDFA. Adjusting any of these parameters will automatically compute the new filter and signal output with the new set of parameters and plot the results on the MDFA module plotting canvases.

Figure 3: The main interface for building the target symmetric filter that is used for computing the real-time (nonsymmetric) filter and output signal. Many of the desired risk/reward properties are controlled in this interface. One can control every aspect of the target filter as well as spectral densities used to compute the optimal filter in the frequency domain.

Figure 4: The main interface for constructing Zero-Pole Combination filters, the original paradigm for real-time direct filtering. Here, one can control all the parameters involved in ZPC filtering, visualize the frequency domain characteristics of the filter, and inject the filter into the I-MDFA filter to create “hybrid” filters.

Figure 5: The basic trading regulation parameters currently offered in the Financial Trading Interface. This panel is accessed through the Financial Trading menu at the top of the software. Here, we have direct control over setting the trading frequency, the trading costs per transaction, and the risk-free rate for computing the Sharpe Ratio, all controlled by simply sliding the bars to the desired level. One can also set the option to short sell during the trading period (provided that one is able to do so with the type of financial asset being traded).

The Primary Parameters:

  • Trading Frequency. As the name implies, the trading frequency governs how often buy/sell signals occur during the span of the trading horizon. Whether the data is sampled by the minute, hour, or day, the trading frequency regulates when trades are signaled and is also a key parameter when considering trading costs. The parameter that controls the trading frequency is defined by the cutoff frequency in the target filter of the MDFA and is regulated in either the Target Filter Design interface (see Figure 3) or, if one is not accustomed to building target filters in MDFA, a simpler parameter given in the Trading Parameter panel (see Figure 5). In Figure 3, the pass-band and stop-band properties are controlled by any one of the sliding scrollbars. The design of the target filter is plotted in the Filter Design canvas (not shown).
  • Timeliness of signal. The timeliness of the signal controls the quality of the phase characteristics in the real-time filter that computes the trading signal. Namely, it controls how well turning points (momentum changes) are detected in the financial data while minimizing the phase error in the filter. Bad timeliness properties lead to a large delay in detecting up/downswings in momentum; good timeliness properties lead to anticipated detection of momentum in real-time. However, timeliness must be balanced with smoothness, as too much timeliness leads to the addition of unwanted noise in the trading signal and hence to unnecessary trades. The timeliness of the filter is governed by the \lambda parameter that controls the phase error in the MDFA optimization. This is done using the sliding scrollbar marked \lambda in the Real-Time Filter Design interface in Figure 2. One can also control the timeliness property for ZPC filters using the \lambda scrollbar in the ZPC Filter Design panel (Figure 4).
  • Smoothness of signal. The smoothness of the signal is related to how well the filter has suppressed the unwanted frequency information in the financial data, resulting in a smoother trading signal that corresponds more directly to the targeted signal and trading frequency. A signal that has been subjected to too much smoothing, however, will lose any important timeliness advantages, resulting in delayed trades or no trades at all. The smoothness of the filter can be adjusted through the \alpha parameter that controls the error in the stop-band between the targeted filter and the computed concurrent filter. The smoothness parameter is found on the Real-Time Filter Design interface in the sliding scrollbar marked W(\omega) (see Figure 2) and in the sliding scrollbar marked \alpha in the ZPC Filter Design panel (see Figure 4).
  • Quantization of information. In this sense, the quantization of information relates to how much past information is used to construct the trading signal. In MDFA, it is controlled by the length of the filter L and is found on the Real-Time Filter Design interface (see Figure 2). In theory, as the filter length L gets larger, more past information from the financial time series is used, resulting in a better approximation of the targeted filter. However, as the saying goes, there’s no such thing as a free lunch: increasing the filter length adds more degrees of freedom, which then leads to the age-old problem of over-fitting. The result: increased nonsense at the most recent observation of the signal and chaos out-of-sample. Fortunately, we can relieve the problem of over-fitting by using regularization (see Secondary Parameters). The length of the filter is controlled in the sliding scrollbar marked Order-L in the Real-Time Filter Design panel (Figure 2).

As you might have suspected, there exists a so-called “uncertainty principle” regarding the timeliness and smoothness of the signal. Namely, one cannot achieve a perfectly timely signal (zero phase error in the filter) while at the same time remaining certain that the timely signal estimate is free of unwanted “noise” (perfectly filtered data in the stop-band of the filter). The greater the timeliness (better phase error), the lesser the smoothness (suppression of unwanted high-frequency noise). A happy combination of these two parameters is always desired, and thankfully there exists in iMetrica an interface to optimize these two parameters to achieve a perfect balance given one’s financial trading priorities. There has been much to say on this real-time direct filter “uncertainty” principle, and the interested reader can find the gory mathematical details in an original paper by its inventor, my good friend and colleague Professor Marc Wildi, here.
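
To make the trade-off concrete, here is a schematic Java sketch of how the timeliness parameter \lambda and a stop-band (smoothness) emphasis can enter a frequency-domain fit between the target filter and the concurrent filter. This is a sketch of the general idea under my own simplifying assumptions, not the exact MDFA criterion, and the variable names are illustrative only.

```java
// A schematic (not the exact MDFA criterion) of how the timeliness parameter
// lambda and a smoothness weight alpha can enter the frequency-domain fit: the
// mismatch between the target Gamma and the concurrent filter is weighted by the
// spectral estimate, with the phase part emphasized by lambda in the pass-band
// and the spectral weight inflated by alpha in the stop-band.
public final class CustomizedLossSketch {

    public static double loss(double[] targetGamma,      // target amplitude, e.g. 0 or 1 for an ideal bandpass
                              double[] ampHat,           // amplitude of the concurrent filter
                              double[] phaseHat,         // phase of the concurrent filter
                              double[] spectralWeight,   // e.g. the periodogram I(omega_k)
                              double lambda,             // timeliness emphasis
                              double alpha) {            // smoothness (stop-band) emphasis
        int K = targetGamma.length - 1;                  // grid omega_k = k*pi/K, k = 0..K
        double total = 0.0;
        for (int k = 0; k <= K; k++) {
            double omega = k * Math.PI / K;
            boolean passBand = targetGamma[k] > 0.5;
            double amplitudeError = targetGamma[k] - ampHat[k];
            // phase error only matters where the target lets signal through
            double phaseTerm = 4.0 * targetGamma[k] * ampHat[k]
                    * Math.pow(Math.sin(phaseHat[k] / 2.0), 2.0);
            double w = spectralWeight[k]
                    * (passBand ? 1.0 : Math.pow(1.0 + omega, alpha)); // alpha inflates stop-band errors
            total += w * (amplitudeError * amplitudeError
                    + (passBand ? (1.0 + lambda) : 1.0) * phaseTerm);  // lambda emphasizes timeliness
        }
        return total;
    }
}
```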

The Secondary Parameters 

Regularization of filters is the act of projecting the filter space into a lower-dimensional space, reducing the effective number of degrees of freedom. Recently introduced by Wildi in 2012 (see the Elements paper), regularization has three different components to adjust according to the preferences of the signal extraction problem at hand and the data. The regularization parameters are classified as secondary parameters and are found in the Additional Filter Ingredients section in the lower portion of the Real-Time Filter Design interface (Figure 2). The regularization parameters are described below, followed by a schematic sketch of the three penalties.

  • Regularization: smoothness. Not to be confused with the smoothness parameter found in the primary list of parameters, this regularization technique serves to project the filter coefficients of the trading signal into an approximation space satisfying a smoothness requirement, namely that the finite differences of the coefficients up to a certain order are kept relatively small. The effect is that the coefficients appear smoother as the smoothness parameter increases. Furthermore, as the approximation space becomes more “regularized” in this sense, the effective degrees of freedom decrease and the chances of over-fitting decrease as well. The direct consequences of applying this type of regularization on the signal output are typically quite subtle and depend on how much smoothness is being applied to the coefficients. Personally, I usually begin with this parameter for my regularization needs, to decrease the number of effective degrees of freedom and improve out-of-sample performance.
  • Regularization: decay. Employing the decay parameters ensures that the coefficients of the filter decay to zero at a certain rate as the lag of the filter increases. In effect, it is another form of information quantization, as the trading signal will tend to lessen the importance of past information as the decay increases. This rate is governed by two decay parameters: the first adjusts the strength of the decay and the second adjusts how fast the coefficients decay to zero; the higher the values, the faster the coefficients decrease to zero. Usually, just a slight touch on the strength of the decay followed by an adjustment of the speed is the order in which to proceed with these parameters. As with the smoothness regularization, the number of effective degrees of freedom will (in most cases) decrease as the decay increases, which is a good thing (in most cases).
  • Regularization: cross correlation. Used only for building trading signals with multivariate data, this regularization effect groups the latitudinal (cross-sectional) structure of the multivariate time series more closely, resulting in an estimate of the target filter that weights the target data’s frequency information more heavily. As the cross regularization parameter increases, the filter coefficients for each time series tend to converge towards each other. It should typically be used as a last resort for controlling over-fitting, and only if the financial time series data are on the same scale and all highly correlated.
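
As promised above, here is a schematic Java sketch of the three regularization ideas, written as simple quadratic penalties on the filter coefficients b[i][j] of series i at lag j. These are illustrative forms only, not the exact regularization matrices used in the MDFA module.

```java
// A hedged sketch of the three regularization terms described above, written as
// simple quadratic penalties on the filter coefficients b[i][j] (series i, lag j).
public final class RegularizationSketch {

    /** Smoothness: penalize second differences of the coefficients across lags. */
    public static double smoothness(double[][] b) {
        double p = 0.0;
        for (double[] bi : b) {
            for (int j = 2; j < bi.length; j++) {
                double d2 = bi[j] - 2.0 * bi[j - 1] + bi[j - 2];
                p += d2 * d2;
            }
        }
        return p;
    }

    /** Decay: penalize coefficients at large lags, with the weight growing in the lag. */
    public static double decay(double[][] b, double strength, double speed) {
        double p = 0.0;
        for (double[] bi : b) {
            for (int j = 0; j < bi.length; j++) {
                double w = strength * Math.pow(1.0 + speed, j); // weight grows with the lag j
                p += w * bi[j] * bi[j];
            }
        }
        return p;
    }

    /** Cross-section: penalize disagreement of each series' coefficients with the cross-sectional mean. */
    public static double cross(double[][] b) {
        double p = 0.0;
        int L = b[0].length;
        for (int j = 0; j < L; j++) {
            double mean = 0.0;
            for (double[] bi : b) mean += bi[j];
            mean /= b.length;
            for (double[] bi : b) p += (bi[j] - mean) * (bi[j] - mean);
        }
        return p;
    }
}
```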

The Tertiary Parameters

  • Phase-delay customization. The phase delay of the filter at frequency zero, defined by the instantaneous rate of change of the filter’s phase at frequency zero, carries important information related to the timeliness of the filter. One can ensure that the phase delay of the filter at frequency zero is exactly zero by adding constraints to the filter coefficients at computation time; this is done by clicking the i2 option in the Real-Time Filter Design interface. To go further, one can even set the phase delay to a fixed value other than zero using the i2 scrollbar in the Additional Filter Ingredients box. Setting this value (between -20 and 20 in the scrollbar) ensures that the phase delay of the filter at frequency zero behaves as anticipated. Its use and benefit are still under investigation. In any case, one can seamlessly test how this constraint affects the trading signal output in one’s own trading strategies by visualizing its performance in-sample using the Financial Trading canvas.
  • Differencing weight. This option, found in the Real-Time Filter Design interface as the checkbox labeled “d” (Figure 2), multiplies the frequency information (periodogram or discrete Fourier transform (DFT)) of the financial data by the weighting function f(\omega) = 1/(1 - \exp(i \omega)), \omega \in (0,\pi), which is the reciprocal of the differencing operator in the frequency domain. Since the Financial Trading platform in iMetrica strictly uses log-return financial time series to build trading signals, applying this weighting function is in a sense a frequency-based “de-differencing” of the differenced data. In many cases, using the differencing weight provides better timeliness properties for the filter and thus the trading signal (a small sketch of this weighting follows below).
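
As a small illustration of the differencing weight mentioned in the last item, the Java sketch below multiplies a periodogram on the usual discrete frequency grid by the squared modulus of the de-differencing weight, which works out to 1/(4 sin^2(\omega/2)). This is my own schematic of how such a weight would be applied, not the iMetrica implementation; the names are illustrative.

```java
// A minimal sketch of the differencing weight described above: multiply the
// spectral estimate at each frequency omega_k in (0, pi] by the squared modulus
// |1/(1 - exp(i*omega_k))|^2 = 1/(4*sin^2(omega_k/2)).
public final class DifferencingWeight {

    public static double[] applyToPeriodogram(double[] periodogram) {
        int K = periodogram.length - 1;           // grid omega_k = k*pi/K, k = 0..K
        double[] weighted = new double[K + 1];
        weighted[0] = 0.0;                        // the weight is unbounded at omega = 0; excluded here
        for (int k = 1; k <= K; k++) {
            double omega = k * Math.PI / K;
            double s = Math.sin(omega / 2.0);
            weighted[k] = periodogram[k] / (4.0 * s * s);
        }
        return weighted;
    }
}
```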

In addition to these three levels of parameters used in building real-time trading signals, there is a collection of more exotic “parameterization” strategies in the iMetrica MDFA module for fine-tuning and boosting trading performance. However, these strategies require more time to develop, a bit of experimentation, and a keen eye for filtering. We will provide more information and tutorials about these advanced filtering techniques for constructing effective trading signals in iMetrica in future articles on this blog. For now, we just summarize their main ideas.

Advanced Filtering Parameters

  • Hybrid filtering. In hybrid filtering, the goal is to additionally filter a target signal by injecting it with another filter of a different type, constructed using the same data but a different paradigm or set of parameters. One method of hybrid filtering readily available in the MDFA module entails constructing Zero-Pole Combination filters using the ZPC Filter Design interface (Figure 4) and injecting the filter into the filter constructed in the Real-Time Filter Design interface (Figure 2) (see Wildi ZPC for more information). The combination (or hybrid) filter can then be accessed using one of the check box buttons in the filter interface, adjusted using all the various levels of parameters above, and used in the financial trading interface. The effect of this hybrid construction is essentially to improve either the smoothness or timeliness of a computed trading signal, while at the same time not succumbing to the nasty side-effects of over-fitting.
  • Forecasting and Smoothing signals. Smoothing a signal, as its name implies, involves obtaining a smoother estimate of the signal at a point in the past. Since the real-time estimate of a past signal value can use more recent observations, the estimate becomes more symmetrical as both past and future values relative to that point are used. For example, if today is after market hours on Friday, we can obtain a better estimate of the targeted signal for Wednesday since we have information from Thursday and Friday. In the opposite manner, forecasting involves projecting a signal into the future; however, since the estimate becomes even more “anti-symmetric”, it becomes more polluted with noise. How these smoothed and forecasted signals can be used for constructing buy/sell trading signals in real-time is still purely experimental, and with iMetrica, building and testing strategies that improve trading performance using smoothed or forecasted signals (or both) is available. To produce either a smoothed or forecasted signal, there is a Lag scrollbar available in the Real-Time Filter Design interface under Additional Filter Ingredients. Setting the lag value k in the scrollbar to any integer between -10 and 10 automatically computes the signal with that lag applied: for negative lag values k, the method produces a |k|-step-ahead forecast estimate of the signal, and for positive values it produces a smoothed signal with a delay of k observations.
  • Customized spectral weighting functions. In the spirit of customizing a trading signal to fit one’s priorities in financial trading, one also has the option of customizing the spectral density estimate of the data generating process to any design one wishes. In the computation of the real-time filter, the periodogram (or the DFTs in the multivariate case) is used as the default estimate of the spectral density weighting function. This spectral density weighting function in theory is supposed to serve as the spectrum of the underlying data generating process (DGP). However, since we have no real idea about the underlying DGP of the price movement of publicly traded financial assets (other than that it is supposedly pretty darn close to a random walk according to the Efficient Market Hypothesis), the periodogram is the closest thing to an unbiased estimate a mortal human can get, and it is the default option in the MDFA module of iMetrica. Customization of this weighting function is nonetheless possible through the Target Filter Design interface: not only can one design the target filter for the approximation of the concurrent filter, but the spectral density weighting function of the DGP can also be customized using some of the options readily available in the interface. We will discuss these features in a soon-to-come discussion and tutorial on advanced real-time filtering methods.
  • Adaptive filtering. As perhaps the most advanced feature of the MDFA module, adaptive filtering is an elegant way to build smarter filters based on previous filter realizations. With the goal being to improve certain properties of the output signal at each iteration without compensating with over-fitting, the adaptive process is of course highly nonlinear. In short, adaptive MDFA filtering is an iterative process in which one begins with a desired filter, computes the output signal, and then uses the output signal as explanatory data in the next filtering round (a schematic of this loop is sketched just after this list). At each iteration step, one has the freedom to change any properties of the filter one desires, whether it be customization, regularization, adding negative lags, adding filter coefficient constraints, applying a ZPC filter, or even changing the pass-band in the target filter. The hope is to improve certain properties of the filter at each stage of the iterative process. An in-depth look at adaptive filtering and how to easily produce an adaptive filter using iMetrica is soon to come later this week.
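
As a rough illustration of that iterative loop, here is a schematic Java sketch in which each round’s output signal is appended to the explanatory data for the next round. The computeSignal argument is a placeholder standing in for one MDFA estimation round; none of these names belong to the iMetrica API.

```java
import java.util.function.BiFunction;

// A schematic of the adaptive filtering loop described above: at each iteration
// the current output signal is appended to the explanatory data and a new
// filter/signal is computed from the augmented data set.
public final class AdaptiveLoopSketch {

    /**
     * @param data          explanatory series, data[i][t]
     * @param computeSignal stand-in for one MDFA estimation round: takes the current
     *                      explanatory data and an iteration index, returns the output signal
     * @param rounds        number of adaptive iterations
     */
    public static double[] run(double[][] data,
                               BiFunction<double[][], Integer, double[]> computeSignal,
                               int rounds) {
        double[][] current = data;
        double[] signal = null;
        for (int r = 0; r < rounds; r++) {
            signal = computeSignal.apply(current, r);        // filter settings may change per round
            double[][] augmented = new double[current.length + 1][];
            System.arraycopy(current, 0, augmented, 0, current.length);
            augmented[current.length] = signal;              // feed the output back as a new explanatory series
            current = augmented;
        }
        return signal;
    }
}
```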

Model comparison with data sweeps

A useful exercise in modeling economic time series is to perform a “sliding window” analysis of the data, computing models on subsets of the data and testing the robustness of signal extractions, forecasts, and parameter variance relative to a growing subset of the data. For instance, for a time series of length 300, one could estimate a model on a shorter subset of the data, say the first 200 observations, and then increase the number of observations, re-estimate, and see how the model parameter values change as the data subset grows. One can also see how the signal extractions and forecasts change with additional data. Ideally, if the model is specified correctly for the data, there should be very small variance in the estimated parameters as more data is added to the time series; this signifies the stability of the model selection. Normally, such an exercise would be tedious to carry out with X-13ARIMA-SEATS, or with other software such as MATLAB or R, as scripts or spec files would have to be written for each individual re-estimation and then re-plotted. In the uSimX13 module of iMetrica, however, this task has been rendered an easy one with the addition of a sliding windows tool. In this blog entry, we describe this so-called “sliding windows” process and show just how fast and seamless it is to perform model-choice robustness checks and comparisons in iMetrica.

We begin by describing the sliding span/window tool in the iMetrica-uSimX13 module. Once time series data has been loaded into the uSimX13 module, either from the uSimX13 main menu or imported from the Data Control module, the uSimX13 computation engine must first be turned on from the uSimX13 menu. Then, to access the sliding windows interface, simply click the “Sliding Span/Window Activate” check box in the main uSimX13 menu (see Figure 1).

Figure 1. Main drop down menu for the uSimX13 module, showing the “Sliding Span/Window Activate” check box.

Once clicked, the entire plotting canvas will turn to a dark shade of blue, which indicates the windowed region in which model estimation occurs. To control the sliding window, place the mouse cursor along one of the edges of the canvas and slowly glide the mouse with the left-mouse button held down either left or right, depending on which edge of the plot canvas you are on. Moving to the left or right with the left mouse button held down, the windowed area will shrink or expand. The model parameters are estimated instantaneously as the window adjusts and in effect, all the available model statistics, diagnostics, signals, and forecasts are computed as well. For example, as the window expands or shrinks, the trend, seasonally adjusted data, and 24-step ahead forecasts can be plotted and viewed in real-time as the window changes (see Figure 2). One can also slide the window to the left or right by placing the mouse anywhere inside the blue-windowed region, holding down the left mouse button and moving along the time domain. This way, the window length will remain fixed, but the window center will move along different subsets of the data. This can be useful for seeing how model parameters can change within regions of data that exhibit regime changes, namely a sequence in the series that suddenly changes in seasonal or cyclical structure after a certain time observation. The data can now be modeled in both sections before and after the regime change occurs in order to compare the estimated parameter values.

Figure 2. The window sliding across different subsets of the data. The signal extractions, forecasts, and model parameters are recomputed automatically as the window changes. Forecast comparisons with the real data as the window span moves are now trivial. Here, the plot in cyan represents the original time series data in-sample and the 24-step forecast out-of-sample, and the light green plot is the time series data adjusted for outliers, as indicated in the model box. One can select the plots using the “series components” plot box. The data in gray represents the time series data not used in the model estimation.

Data Sweep

With the ability to seamlessly capture partitions of the data and model within the given partition using the sliding window, a natural extension of this mouse-on-canvas utility is to employ it in comparing different models of the time series data. We call this method of model comparison time series data sweeping (or simply data sweeping). It involves selecting an initial window of data from the first observation to the n-th observation, where n is some number much less than the total number of observations N in the data set (say, one third of the total). The data sweep then computes the sliding window with n as the final observation all the way up to N, in increments of one (see Figure 3). At each addition to the length of the window, the forecast is computed for up to 24 steps ahead. Of course, since the true time series data is known in the out-of-sample region of computation, we can compute the forecast error for up to h \leq 24 steps ahead and sum up these errors as n increases to N. We can do this data sweep for several models, computing the aggregate forecast errors over time. The idea is that the best model for the data will ideally have the smallest forecast error, and thus comparing this forecast error across several models identifies the model with the best overall forecasting ability.
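
In schematic terms, the sweep is an expanding-window loop. The Java sketch below accumulates the squared forecast errors in exactly the spirit just described, with a placeholder forecasting function standing in for the model estimation; the names are illustrative only and not the uSimX13 API.

```java
import java.util.function.BiFunction;

// A schematic of the data sweep described above: starting from an initial window of
// n observations, re-estimate the model on each expanding window, forecast h steps
// ahead, and accumulate the squared forecast errors against the known data.
public final class DataSweepSketch {

    /**
     * @param series   the full time series, length N
     * @param n        initial window length (e.g. 60)
     * @param h        forecast horizon used for the error (1..24)
     * @param forecast stand-in for model estimation + forecasting: takes the in-sample
     *                 window and the horizon, returns the h forecasts
     */
    public static double sweep(double[] series, int n, int h,
                               BiFunction<double[], Integer, double[]> forecast) {
        double totalError = 0.0;
        for (int end = n; end < series.length; end++) {
            double[] window = java.util.Arrays.copyOfRange(series, 0, end);
            double[] fc = forecast.apply(window, h);
            for (int step = 1; step <= h && end + step - 1 < series.length; step++) {
                double err = series[end + step - 1] - fc[step - 1];
                totalError += err * err;
            }
        }
        return totalError; // compare this aggregate error across candidate models
    }
}
```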

To access the data sweep, simply go to the main uSimX13 menu, shown in Figure 1, and click “Sweep Time Series Control Panel”. This brings up the main interface for the data sweep (shown in Figures 4-6). To begin the sweep, first select the model and regressors with which to model the data inside the model selection panel of the main uSimX13 interface. Then choose the observation at which you’d like to start the data sweep (observation n = 60 is the default). Lastly, select how many forecast steps you’d like to use in computing the forecast error (1-24). Once content with the settings, click the “Compute time series sweep” button and watch as the window span increases from n to N, recomputing parameters, signals, and forecasts at each step (see the slideshow at the top of this post). Once the sweep is complete, the parameter statistics, the Ljung-Box mean value at two different lags, and the total forecast error are displayed in the control panel. To compare this with another model, save the results of the sweep by clicking “save parameters” in the uSimX13 menu, then choose another model and recompute (using the same settings as the previous sweep, of course).

To give an example of this process, we begin by simulating a time series data set of length N = 300 from a SARIMA model of dimension (0,1,2)(0,1,1)_{12}, namely a seasonal auto-regressive integrated moving-average process with two non-seasonal moving-average parameters and one seasonal moving-average parameter. The data sweep is performed on the simulated data with a forecast error horizon of length 23 using three different SARIMA models: (a) (0,1,1)(0,1,1)_{12}, (b) (1,1,0)(0,1,1)_{12}, and (c) (0,1,2)(0,1,1)_{12}, the true model. See Figures 4-6 below for the data sweep results and the estimated parameter mean and standard deviation, the average Ljung-Box statistics at lags 12 and 0, and the forecast errors for each model. Notice that the forecast error for the true model (c) (Figure 6) is the lowest, followed by model (b) (Figure 5) and then model (a) (Figure 4), which is exactly what we would want.

Figure 4. Model (a) and the parameter statistics, forecast error, and data sweep controls.

Figure 5. Model (b) and the parameter statistics, forecast error, and data sweep controls.

Figure 6. Model (c) (the true model) and the parameter statistics, forecast error, and data sweep controls.

iMetrica and Hybridometrics: Introduction

The high-frequency Financial Trading interface of iMetrica. Easily construct in-sample trading strategies with an array of optimizers unique to iMetrica and then employ the strategies out-of-sample to test and fine-tune the trading performance.

This blog serves as an introduction and tutorial to Hybridometrics using iMetrica. Hybridometrics is a term used to express the analysis, modeling, signal extraction, and forecasting of univariate and multivariate financial and economic time series data using a combination of model-based and non-model-based methodologies. Ideal combinations of computational paradigms and methodologies used in hybridometrics include, but are not limited to, traditional stochastic models such as (S)ARIMA models, GARCH models, and multivariate stochastic volatility models combined with empirical mode decomposition techniques and the multivariate direct filter approach (MDFA). The goal of hybridometric modeling is to obtain signal extractions and forecasts, ranging from official or government use all the way to building high-frequency financial trading strategies, that perform better than model-based or non-model-based methods alone. In other words, hybridometrics seeks to combine the advantages of different paradigms to outperform traditional approaches to time series modeling. The iMetrica software package offers the most versatile and computationally efficient portal to this newly proposed time series modeling paradigm, all while remaining surprisingly easy to use.

The iMetrica software package is a unique system of econometric and financial trading tools that focuses on speed, user interaction, visualization tools, and point-and-click simplicity in building models for time series data of all types. Written entirely in GNU C and Fortran with a rich interactive interface written in Java, the iMetrica software offers an abundance of econometric tools for signal extraction and forecasting in multivariate time series that are both easily accessible with the click of a mouse button and fast with results computed and plotted instantaneously without the need for creating output data files or calling exterior plotting devices.

One powerful feature unique to the iMetrica software is the innate capability of easily combining both model-based and non-model-based methodologies for designing data forecasts, signal extraction filters, or high-frequency financial trading strategies. Furthermore, the strategies can be computed and tested both in-sample and out-of-sample using an easy-to-use built-in data partitioner that effectively partitions the data into an in-sample storage, where models and filters are computed, and an out-of-sample storage, where new data is applied to the in-sample strategy to test for robustness, over-fitting, and many other desired properties. This gives the user complete liberty in creating a fast and efficient test-bed for implementing signal extractions, forecasting regimes, or financial trading strategies.
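
As a rough illustration of the partitioning idea (this is my own minimal sketch, not the actual iMetrica partitioner), the split simply reserves the first portion of the data for estimating models and filters and the remainder for out-of-sample evaluation of the frozen strategy.

```java
import java.util.Arrays;

// A minimal sketch of the in-sample/out-of-sample partitioning idea: estimate on
// the first inSampleLength observations only, then evaluate the frozen result on
// the remaining observations. Names are illustrative only.
public final class DataPartition {

    public record Split(double[] inSample, double[] outOfSample) {}

    public static Split partition(double[] series, int inSampleLength) {
        return new Split(
                Arrays.copyOfRange(series, 0, inSampleLength),
                Arrays.copyOfRange(series, inSampleLength, series.length));
    }

    public static void main(String[] args) {
        double[] data = new double[300];                 // e.g. 300 log-returns
        Split split = partition(data, 200);              // fit on 200, test on 100
        System.out.println("in-sample: " + split.inSample().length
                + ", out-of-sample: " + split.outOfSample().length);
    }
}
```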

The iMetrica software environment includes five interacting time series analysis modules for building hybrid forecasts, signal extractions, and trading strategies.

  • uSimX13 – A computational environment for univariate seasonal auto-regressive integrated moving-average (SARIMA) modeling and simulation using X-13ARIMA-SEATS. Features an interactive approach to modeling seasonal economic time series with SARIMA models and automatic outlier detection, trading day, and holiday regressor effects. Also includes a suite of model comparison tools using both modern signal extraction diagnostics and goodness-of-fit diagnostics.
  • BayesCronos – An interactive time series module for signal extraction and forecasting of multivariate economic and financial time series, focusing on Bayesian computation and simulation. This module includes a multitude of models, including ARIMA, GARCH, EGARCH, Stochastic Volatility, Multivariate Factor Stochastic Volatility, Dynamic Factor, and Multivariate High-Frequency-Based Volatility (HEAVY), with more models continuously being added. For most of the featured models, one can compute the Bayesian and/or the quasi-maximum-likelihood estimated model fits using either a Metropolis-Hastings Markov chain Monte Carlo approach (Bayesian) or a QMLE formulation for computing the model parameter estimates. Using a convenient model selection panel interface, complete access to the model type, model parameter dimensions, and prior distribution parameters is seamlessly available. In the case of Bayesian estimation, one has complete control over the prior distributions of the model parameters, and the module offers interactive visualization of the Markov chain Monte Carlo parameter samples. For each model, up to 10 sample 36-step-ahead forecasts can be produced and visualized instantaneously along with other important model features such as model residuals, computed volatility, forecasted volatility, factor models, and more. The results can then be easily exported to other modules in iMetrica for additional filtering and/or modeling.
  • MDFA – An interactive interface to the most comprehensive multivariate real-time direct filter analysis and computation environment in the world. Build real-time filters using both I-MDFA and Zero-Pole Combination (ZPC) filter constructions. The module includes interactive access to timeliness, smoothing, and accuracy controls for filter customization along with parameters for filter regularization to control overfitting. More advanced features include an interface for building adaptive filters, and many controls for filter optimization, customization, data forecasting, and target filter construction.
  • State Space Modeling – A module for building observed-component ARIMA and regression models for univariate economic time series. Similar to the uSimX13 module, the State Space Modeling environment focuses on modeling and forecasting economic time series data, but with much more generality than SARIMA models. An aggregation of observed stochastic components in the form of ARIMA models is stipulated for the time series data (for example trend + seasonal + irregular), and then regression components to model outliers, holiday, and trading day effects are added to the stochastic components, giving ultimate flexibility in model building. The module uses regCMPNT, a suite of Fortran code written at the US Census Bureau, for the maximum likelihood and Kalman filter computational routines.
  • EMD – The EMD module offers a time-frequency decomposition environment for the analysis of time series data. The module offers both the original empirical mode decomposition technique of Huang et al. using cubic splines, and an adaptive approach using reproducing kernels and direct filtering. This empirical decomposition technique decomposes nonlinear and nonstationary time series into amplitude-modulated and frequency-modulated (AM-FM) components and then computes the intrinsic phase and instantaneous frequency from the FM components. All plots of the components as well as the time-frequency heat maps are generated instantaneously.

Along with these modules, there is also a data control module that handles all aspects of time series data input and export. Within this main data control hub, one can import multivariate time series data from a multitude of file formats, as well as download financial time series data directly from Yahoo! Finance or another source such as Reuters for higher-frequency financial data. Once the data is loaded, it can be normalized, scaled, demeaned, and/or log-transformed with simple slider and button controls, with the effects plotted on the graphic canvas instantaneously.

Another great feature of the iMetrica software is the ability to learn more about time series modeling through the use of data simulators. The data control module includes an array of data simulation panels for simulating data from a multitude of both univariate and multivariate time series models. With control over the number of observations, the random seed for the innovation process, the innovation process distribution, and the model parameters, simulated data can be constructed for any type of economic or financial time series imaginable. The different types of models include (S)ARIMA models, GARCH models, correlated cycle models, trend models, multivariate factor stochastic volatility models, and HEAVY models. By simulating data and toggling the parameters, one can instantly visualize the effect of each parameter on the simulated data. The data can then be exported to any of the modules for practicing and honing one’s skills in hybrid modeling, signal extraction, and forecasting.

Keep visiting this blog frequently for continuous updates, tutorials, and proposals in the field of econometrics, signal extraction, forecasting, and high-frequency financial trading using hybridometrics and iMetrica.