Indian Journal of Economics and Business, ISSN 0972-5784, December 2010

The behaviour of stock prices in the Nigerian capital market: a Markovian analysis.


Eriki, Peter O. ; Idolor, Eseoghene J.


Abstract

This study presents a method of Markovian analysis of changes in stock prices over time. It examines eight stocks randomly selected from the banking sector of the Nigerian Stock Exchange for the period January 4th, 2005 to June 30th, 2008. Given a time series of prices, a Markov chain is defined by letting one state represent a rise in price, another a fall in price, and a third stability in price. The assumption was that the transition probabilities of the Markov Chain were equal to one another irrespective of prior years. This definition of the set of states allows both the magnitude and the direction of change to be incorporated in the analysis. Standard statistical tests for homogeneity and order of the chain are applied. In addition, the hypothesis of stationarity and dependence in vector process Markov Chain models is tested. Empirical results for the individual process and vector process Markov Chains confirm heterogeneity in the chains. They also suggest that price movements seem to be described by a first- or higher-order nonstationary Markov Chain. Because of the heterogeneity of the individual and collective vector processes, it is recommended that a three-state (rise, drop and stable) vector Markov Chain be used to describe the dynamics of daily price behaviour in the Nigerian Stock Exchange, especially if the aim is to describe the random nature of prices in the market rather than to predict them.

Keywords: Markov Chains, Markov Processes, Stock Behaviour, Stock Price Transition, Random Walk.

I. INTRODUCTION

Numerous empirical studies have appeared in recent years concerning the behaviour of stock market prices (see, for example Obodos, 2007; Mcqueen and Thorley, 1991; Hamilton, 1989; Cecchetti, Lam and Mark, 1990; Turner, Startz and Nelson, 1989; Samuelson, 1988; Gregory and Sampson, 1987; Ryan, 1973; Fielitz and Bhargava, 1973; and Fielitz, 1969, 1971 to cite only a few).

While a few writers believe that certain price trends and patterns exist which enable the investor to make better predictions of the expected values of future stock market price changes, the majority of these studies conclude that past price data alone cannot form the basis for predicting the expected values of price movements in the stock market.

The purpose of this study is to reinvestigate in terms of a simple Markov Chain the question of dependency among the price movements of common stocks. The procedure employed here is to consider the behaviour of changes in the prices of securities, both for the market as a whole (on the basis of our sample) in terms of a vector Markov Chain, and for a single stock in terms of its particular Markov Chain.

Markov theory is relevant to the analysis of stock prices in two ways: first as a useful tool for making probabilistic statements about future stock price levels, and secondly as an extension of the random walk hypothesis. In this role, it constitutes an alternative to the more traditional regression forecasting techniques, to which it is in some respects superior in the analysis of stock price behaviour. Markov theory is concerned with the transition of a system from one state to another. In the case of a sequence of observations on stock prices, the states of the system may be thought of as the set of all possible prices that might be observed for a given stock. Since the number of states so defined is virtually infinite, it is sometimes convenient to group prices into price ranges, or price classes. That security prices may be interpreted as a Markov process means certain theorems relating to the theory of Markov processes may be brought to bear, enabling us to answer certain questions concerning the future price level of a given stock (Ryan, 1973).

In this study, the model considered is that of a first-order Markov Chain. The particular Markov Chain studied here has a finite number of states and a finite number of points at which observations are made. In the analysis, use is made of standard methods, as developed by Anderson and Goodman (1957) (and applied by Bhargava, 1962; and Fielitz and Bhargava, 1973), for drawing statistical inferences when Markov Chains are applied to time series.

Research Objectives

The main objective of this study is to investigate, in terms of a simple Markov Chain, the question of dependency among the price movements of common stocks in the Nigerian capital market. Specifically, the study aims to:

(i) ascertain the predictive ability of Markov Chains in stock price analysis; and

(ii) determine whether stock prices follow the random walk hypothesis in the Nigerian stock market.

II. MARKOV PROCESSES

The occurrence of a future state in a Markov process depends on the immediately preceding state and only on it.

If $t_0 < t_1 < \cdots < t_n$ $(n = 0, 1, 2, \ldots)$ represent points in time, the family of random variables $\{\xi_{t_n}\}$ is a Markov process if it possesses the following Markovian property:

$$P\{\xi_{t_n} = X_n \mid \xi_{t_{n-1}} = X_{n-1}, \ldots, \xi_{t_0} = X_0\} = P\{\xi_{t_n} = X_n \mid \xi_{t_{n-1}} = X_{n-1}\} \qquad (1)$$

for all possible values of $\xi_{t_0}, \xi_{t_1}, \ldots, \xi_{t_n}$.

The probability $P_{X_{n-1},X_n} = P\{\xi_{t_n} = X_n \mid \xi_{t_{n-1}} = X_{n-1}\}$ is called the transition probability. It represents the conditional probability of the system being in $X_n$ at $t_n$, given that it was in $X_{n-1}$ at $t_{n-1}$ (with $X$ representing the states and $t$ the time). This probability is also referred to as the one-step transition probability because it describes the system between $t_{n-1}$ and $t_n$ (Taha, 2001). An $m$-step transition probability is thus defined by

$$P_{X_n,X_{n+m}} = P\{\xi_{t_{n+m}} = X_{n+m} \mid \xi_{t_n} = X_n\} \qquad (2)$$
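By the standard Chapman-Kolmogorov relation, the $m$-step transition probabilities are the entries of the $m$-th power of the one-step transition matrix. A minimal Python sketch of this computation (the matrix values below are hypothetical, not taken from the study):

```python
# Sketch: m-step transition probabilities via matrix powers
# (Chapman-Kolmogorov). The transition matrix below is hypothetical.

def mat_mul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def m_step(p, m):
    """Return P^m, the m-step transition probability matrix."""
    result = p
    for _ in range(m - 1):
        result = mat_mul(result, p)
    return result

# Hypothetical one-step matrix over two states
P = [[0.9, 0.1],
     [0.5, 0.5]]

P2 = m_step(P, 2)   # two-step transition probabilities
# Each row of P^m is still a probability distribution
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P2)
```

Repeated multiplication is enough here; for large $m$ one would normally use an eigendecomposition or repeated squaring instead.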

Markov Chains

Markov chains are a special class of mathematical technique often applicable to decision problems, named after the Russian mathematician Andrei Markov, who developed the method. They are a useful tool for examining and forecasting the frequency with which customers remain loyal to one brand or switch to others, since it is generally assumed that customers do not shift from one brand to another at random, but instead choose brands in the future in a way that reflects their choices in the past. Other applications of Markov Chain analysis include models in manpower planning, models for assessing the behaviour of stock prices, and models for estimating bad debts or managing credit (Agbadudu, 1996).

A Markov Chain is a series of states of a system that has the Markov property. At each time step the system may have changed from the state it was in the moment before, or it may have stayed in the same state. These changes of state are called transitions. If a sequence of states has the Markov property, then every future state is conditionally independent of every prior state given the current state (Obodos, 2007).

A Markov Chain is a sequence of events or experiments in which the probability of occurrence of an event depends upon the immediately preceding event. It is also referred to as a first-order Markov Chain process, a first-order Markov process, or simply a Markov Chain.

For a finite Markov Chain, we assume that the sequence of experiments (or events) has the following properties:

1. The outcome of each experiment is one of a finite number of possible outcomes $a_1, a_2, \ldots, a_n$.

2. The probability of outcome $a_j$ on any given experiment is not necessarily independent of the outcomes of previous experiments, but depends at most upon the outcome $a_i$ of the immediately preceding experiment.

3. There are given numbers $P_{ij}$ which represent the probability of outcome $a_j$ on any given experiment, given that outcome $a_i$ occurred on the preceding experiment. That is, the probability of moving from state $i$ to state $j$ in one step, or in one movement, or in one experiment, is $P_{ij}$.

The outcomes $a_1, a_2, \ldots, a_n$ are called states and the numbers $P_{ij}$ are called transition probabilities. The number of experiments, or number of movements, is sometimes referred to as the number of steps. At times the probability distribution of the initial state is given, but this may not be necessary when determining the steady-state equilibrium (Agbadudu, 1996). The numbers $P_{ij}$, which represent the probability of moving from state $a_i$ to state $a_j$ in one step, can be put in the form of a matrix called the transition matrix. This matrix for a general finite Markov Chain process with states $a_1, a_2, \ldots, a_n$ is given by:

$$P = \begin{pmatrix} P_{11} & P_{12} & \cdots & P_{1n} \\ P_{21} & P_{22} & \cdots & P_{2n} \\ \vdots & \vdots & & \vdots \\ P_{n1} & P_{n2} & \cdots & P_{nn} \end{pmatrix} \qquad (3)$$

Here the sum of the elements of each row of the matrix $P$ is 1, because the elements in each row represent the probabilities of all possible transitions (or movements) when the process is in a given state. Therefore, for each state $a_i$, $i = 1, 2, \ldots, n$, the transition probabilities satisfy:

$$\sum_{j=1}^{n} P_{ij} = 1 \qquad (4)$$

Let $E_1, E_2, \ldots, E_j$ $(j = 0, 1, 2, \ldots)$ represent the exhaustive and mutually exclusive outcomes (states) of a system at any time. Initially, at time $t_0$, the system may be in any of these states. Let $a_j^{(0)}$ $(j = 0, 1, 2, \ldots)$ be the absolute probability that the system is in state $E_j$ at $t_0$, and assume further that the system is Markovian. The transition probability is defined as:

$$P_{ij} = P\{\xi_{t_n} = j \mid \xi_{t_{n-1}} = i\} \qquad (5)$$

This is the one-step probability of going from state $i$ at $t_{n-1}$ to state $j$ at $t_n$, assuming that these probabilities are stationary over time. The transition probabilities from state $E_i$ to state $E_j$ can be more conveniently arranged in matrix form as follows:

$$P = \begin{pmatrix} P_{00} & P_{01} & P_{02} & \cdots \\ P_{10} & P_{11} & P_{12} & \cdots \\ P_{20} & P_{21} & P_{22} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \qquad (6)$$

The matrix $P$ is called a homogeneous transition, or stochastic, matrix because all the transition probabilities $P_{ij}$ are fixed and independent of time. The probabilities $P_{ij}$ must satisfy the conditions:

$$\sum_{j} P_{ij} = 1, \qquad P_{ij} \geq 0 \text{ for all } i \text{ and } j \qquad (7)$$

indicating that all row probabilities must add up to one, while every entry must be non-negative. The Markov Chain is now defined: a transition matrix $P$ together with the initial probabilities $\{a_j^{(0)}\}$ associated with the states $E_j$ completely defines a Markov Chain (Taha, 2001). It is also common to think of a Markov Chain as describing the transitional behaviour of a system over equal intervals. Situations exist where the length of the interval depends on the characteristics of the system and hence may not be equal. This case is referred to as an imbedded Markov Chain.
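The two conditions in (7) are straightforward to check mechanically. A small Python sketch, with hypothetical numbers, of validating an initial probability vector and a transition matrix:

```python
# Sketch: validating a transition matrix against the conditions in (7):
# non-negative entries and unit row sums. All values are hypothetical.

def is_stochastic(p, tol=1e-9):
    """Check that every entry is >= 0 and every row sums to 1."""
    for row in p:
        if any(x < 0 for x in row):
            return False
        if abs(sum(row) - 1.0) > tol:
            return False
    return True

# Hypothetical chain: initial probabilities {a_j^(0)} plus matrix P
a0 = [0.2, 0.3, 0.5]
P = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.4, 0.5]]

assert is_stochastic(P)
assert abs(sum(a0) - 1.0) < 1e-9
```

A tolerance is used for the row-sum check because empirically estimated probabilities rarely sum to exactly 1.0 in floating point.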

III. THE EMPIRICAL LITERATURE

A typical stock market observer is faced with the problem of predicting the future behaviour either of the market or of a particular stock. By utilizing Markov chain models, the behaviour both of a population of stocks, and of individual stocks over a period of time can be analysed and observed for the explicit purpose of learning how to predict future price behaviour wholly on the basis of past price information (Fielitz, 1969). There are two ways of looking at the problem. One can study the individual process Markov chain model, or one can consider the vector process Markov chain. The individual process Markov chain allows one to study the change behaviour of each individual stock while the vector process Markov chain considers not only the individual processes describing particular stocks, but also the process that characterizes the stock market as a whole. In the vector process Markov chain model, the processes for each component stock are themselves considered as Markov chains (Fielitz, 1971).

The set of states for the individual process Markov chain is defined by first testing the stationarity of the individual sample records, and then applying the stable Paretian distribution as the form of the distribution of the random variable in the process. Previous research has shown that the stable Paretian distribution generally has an infinite variance, and thus the mean absolute deviation (which is not infinite) is used as a measure of dispersion rather than the variance. Together with the mean value, these parameters are employed to define the set of states for a three-state Markov chain. Once the set of states is defined, it is possible to obtain the empirical initial and transition probabilities, and the standard statistical tests for independence and stationarity in Markov chains are immediately applicable. The same method for defining the states used in connection with the individual process is also applicable to the vector process Markov chain model (Fielitz, 1969).

Once the states are defined, an empirical representation for the vector process and each individual process can be considered by forming a series of matrices of transition observations. Tests for stationarity and independence are immediately applicable, as is a test for homogeneity in the case of the vector process (Fielitz and Bhargava, 1973).

Business and Other Related Applications of Markov Chains

Markov analysis is basically a probabilistic technique which does not provide a recommended decision. Instead, it provides probabilistic information about a decision situation that can aid the decision maker; as such, it is more of a descriptive technique that results in probabilistic information (Taylor, 1996). Markov analysis is specifically applicable to systems that exhibit probabilistic movement from one state (or condition) to another over time. For example, it can be used to determine the probability that a machine will be running one day and broken down the next, or that a customer will change brands of products from one month to another, typically known as the brand-switching problem. This is one area in which it has found popular application, essentially a marketing application that focuses on the loyalty of customers to a particular product brand, store, or supplier. Other applications are in the field of finance, where attempts have been made to predict stock returns and prices, as well as to test the random walk hypothesis and other aspects of the efficient market hypothesis under a different set of assumptions than are traditionally needed. For example, the Markov tests do not require annual returns to be normally distributed, although they do require the Markov Chain to be stationary; Markov chain stationarity is defined as constant transition probabilities over time. However, one cost of modeling returns with Markov chains is the information that is lost when continuous-valued returns are divided into discrete states (Mcqueen and Thorley, 1991).

Niederhoffer and Osborne (1966) use Markov Chains to show some non-random behaviour in transaction-to-transaction ticker prices resulting from investors' tendency to place buy and sell orders at integers (23), halves (23 1/2), quarters and odd eighths, in descending order of preference. Dryden (1969) applies Markov Chains to U.K. (United Kingdom) stocks which, at the time, were quoted as rising, falling, or remaining unchanged. Fielitz (1969), Fielitz and Bhargava (1973) and Fielitz (1975) show that individual stocks tend to follow a first-order, or higher-order, Markov Chain for daily returns; however, the process is not stationary, nor are the chains homogeneous. Samuelson (1988) uses a first-order Markov Chain to explore the implications of mean-regressing equity returns.

A two-state Markov Chain is used by Turner, Startz, and Nelson (1989) to model changes in the variance of stock returns and Cecchetti, Lam and Mark (1990) show that if economic driving variables follow a Markov Chain process, then the negative serial correlation found in long horizons can be consistent with an equilibrium model of asset pricing. Markov Chains have also been used to model other asset markets, for example, Gregory and Sampson (1987), Hamilton (1989), and Engle and Hamilton (1990).

Mcqueen and Thorley (1991) used a Markov Chain model to test the random walk hypothesis of stock prices. Given a time series of returns, they defined a Markov Chain by letting one state represent high returns and the other represent low returns. The random walk hypothesis restricted the transition probabilities of the Markov Chain to be equal irrespective of the prior years. The results showed that annual real returns exhibited significant non-random-walk behaviour, in the sense that low (high) returns tended to follow runs of high (low) returns over the period under consideration.

IV. MODEL, DATA AND METHODOLOGY

A model is a theoretical construct that represents processes through a set of variables and a set of logical and quantitative relationships between them. As in other fields, models are simplified frameworks designed to illuminate complex processes. The goal is that the isolated and simplified relationship have some predictive power that can be tested using appropriate statistical tools. Ignoring the fact that a ceteris paribus assumption is being made is another frequent failure when a model is applied; at a minimum, an attempt must be made to identify the various factors being held equal and take them into account (Abosede, 2008).

For this study, a three-state Markov Chain model, with states described as rise (r), drop (d) and stable (s), is used to capture the three basic possible price movements of a stock. With this we can derive the probability of the stock price rising, dropping or remaining stable and, on the basis of these probabilities, attempt to predict the future price direction of a stock, with the sum of the probabilities equaling one. This three-state system is set as the initial probability vector ($U_0$), which gives the probability of the system being in a particular state.

Furthermore, given the previous state (price) of a stock, whether rise (r), drop (d) or stable (s), transition to a new state of rise, drop or stable is also possible. A rise in price may lead to another rise (rr), a drop (rd) or stable prices (rs); a drop may lead to a rise (dr), another drop (dd) or stable prices (ds); and a stable price situation may lead to a rise (sr), a drop (sd) or continued stability (ss). Markov Chains are often described by a directed graph whose edges are labeled by the probabilities of moving from one state to another. The directed graph for our model of stock price transitions is shown in figure 1.

[FIGURE 1 OMITTED]

From the three-state system shown in figure 1, a transition can occur from any one state to either of the others, the states being s (stable), r (rise) and d (drop). Therefore, for any transition, the probability of moving to the next state is given as $P_i$, and the sum of the probabilities must equal 1:

$$\sum_{i} P_i = 1 \qquad (8)$$

If we assume that the system was previously in a particular state $x$, transition from $x$ to a new state is possible provided $x$ is non-absorbing. A state $i$ is called absorbing if it is impossible to leave it; that is, state $i$ is absorbing if and only if $P_{ii} = 1$ and $P_{ij} = 0$ for $i \neq j$. Therefore, given an initial probability vector $U_0$, we can compute the probability of the system being in the next state once we have derived the transition matrix:

$$U_1 = U_0 P \quad \text{(where } U_0 \text{ is a row vector and } P \text{ is the transition matrix)} \qquad (9)$$

$$U_2 = U_1 P \qquad (10)$$

$$U_3 = U_2 P \qquad (11)$$

$$U_n = U_{n-1} P \qquad (12)$$
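The recursion above can be sketched directly in Python: each step multiplies the current row vector by the transition matrix. The rise/drop/stable numbers used here are hypothetical, not the study's estimates:

```python
# Sketch: propagating the state probability vector U_n = U_{n-1} P
# for a three-state (rise, drop, stable) model. Numbers are hypothetical.

def step(u, p):
    """One application of U_n = U_{n-1} P (row vector times matrix)."""
    n = len(p)
    return [sum(u[i] * p[i][j] for i in range(n)) for j in range(n)]

U0 = [0.4, 0.35, 0.25]        # hypothetical P(rise), P(drop), P(stable)
P = [[0.5, 0.3, 0.2],         # rows: previous state r, d, s
     [0.4, 0.4, 0.2],
     [0.3, 0.3, 0.4]]

u = U0
for n in range(1, 4):         # compute U_1, U_2, U_3 in turn
    u = step(u, P)
    assert abs(sum(u) - 1.0) < 1e-9   # probabilities stay normalized
```

Equivalently, $U_n = U_0 P^n$, so the whole trajectory is determined by the initial vector and powers of the transition matrix.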

The various probabilities for this occurrence can be put in a matrix P, which is called the transition matrix and shows the probability of the system moving from state to state. It gives the probability of transiting from rise to rise, rise to drop, rise to stable and so on. The probabilities can be derived using the estimation procedures below:

$$r = \frac{\sum P_r}{\sum P_r + \sum P_d + \sum P_s} \qquad (13)$$

$$d = \frac{\sum P_d}{\sum P_r + \sum P_d + \sum P_s} \qquad (14)$$

$$s = \frac{\sum P_s}{\sum P_r + \sum P_d + \sum P_s} \qquad (15)$$

$$rr = \frac{\sum P_{rr}}{\sum P_{rr} + \sum P_{rd} + \sum P_{rs}} \qquad (16)$$

$$rd = \frac{\sum P_{rd}}{\sum P_{rr} + \sum P_{rd} + \sum P_{rs}} \qquad (17)$$

$$rs = \frac{\sum P_{rs}}{\sum P_{rr} + \sum P_{rd} + \sum P_{rs}} \qquad (18)$$

$$dr = \frac{\sum P_{dr}}{\sum P_{dr} + \sum P_{dd} + \sum P_{ds}} \qquad (19)$$

$$dd = \frac{\sum P_{dd}}{\sum P_{dr} + \sum P_{dd} + \sum P_{ds}} \qquad (20)$$

$$ds = \frac{\sum P_{ds}}{\sum P_{dr} + \sum P_{dd} + \sum P_{ds}} \qquad (21)$$

$$sr = \frac{\sum P_{sr}}{\sum P_{sr} + \sum P_{sd} + \sum P_{ss}} \qquad (22)$$

$$sd = \frac{\sum P_{sd}}{\sum P_{sr} + \sum P_{sd} + \sum P_{ss}} \qquad (23)$$

$$ss = \frac{\sum P_{ss}}{\sum P_{sr} + \sum P_{sd} + \sum P_{ss}} \qquad (24)$$
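Equations (13)-(24) amount to relative-frequency estimates: each conditional probability is a transition count divided by the total count of transitions out of that state. A Python sketch of this counting procedure (the price series is hypothetical, and the helper names `to_states` and `transition_probs` are our own, not the authors'):

```python
# Sketch of the relative-frequency estimators in (13)-(24): classify each
# daily price change as rise (r), drop (d) or stable (s), then divide the
# transition counts by the row totals. The price series is hypothetical.

def to_states(prices):
    """Map consecutive price changes to 'r', 'd' or 's'."""
    out = []
    for prev, cur in zip(prices, prices[1:]):
        out.append('r' if cur > prev else 'd' if cur < prev else 's')
    return out

def transition_probs(states):
    """Estimate P(next state | current state) by relative frequency."""
    counts = {a: {b: 0 for b in 'rds'} for a in 'rds'}
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    probs = {}
    for a in 'rds':
        total = sum(counts[a].values())
        probs[a] = {b: (counts[a][b] / total if total else 0.0)
                    for b in 'rds'}
    return probs

prices = [10.0, 10.5, 10.5, 10.2, 10.4, 10.4, 10.4, 10.9]  # hypothetical
states = to_states(prices)
P_hat = transition_probs(states)
# Each estimated row sums to 1 (or 0 if that state never occurred)
for a in 'rds':
    row_sum = sum(P_hat[a].values())
    assert row_sum == 0.0 or abs(row_sum - 1.0) < 1e-9
```

The unconditional estimates in (13)-(15) follow the same pattern, dividing each state's frequency by the total number of observed states.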

The Markov Chain Model

In this section, the possible price movements of a stock are modeled as a three-state (rise, drop, stable) Markov Chain, with the sum of the probabilities equaling one. The three states are captured in an initial probability vector which gives the probability of the stock price being in a particular state. The transition matrix, which gives the probability of the system transiting from state to state, is also given. To compute the probability of the system (stock price) being in the next state, we use the laws of matrix multiplication to derive the product of the initial probability vector and the transition matrix. On the basis of the results derived, an attempt is made to test the various hypotheses and to predict the possible future price direction of the stocks selected for the study. The vectors are given as follows:

$$U_0 = [U_r \;\; U_d \;\; U_s] = [P_r \;\; P_d \;\; P_s] \qquad (25)$$

Also,

$$P = \begin{pmatrix} P_{rr} & P_{rd} & P_{rs} \\ P_{dr} & P_{dd} & P_{ds} \\ P_{sr} & P_{sd} & P_{ss} \end{pmatrix} \qquad (26)$$

$U_0$ = Initial probability vector

$P$ = Transition probability matrix

$U_r = P_r$ = Probability of the stock price rising

$U_d = P_d$ = Probability of the stock price dropping

$U_s = P_s$ = Probability of the stock price remaining stable

$P_{rr}$ = Probability of the stock price rising after a previous rise

$P_{rd}$ = Probability of the stock price dropping after a previous rise

$P_{rs}$ = Probability of the stock price remaining stable after a previous rise

$P_{dr}$ = Probability of the stock price rising after a previous drop

$P_{dd}$ = Probability of the stock price dropping after a previous drop

$P_{ds}$ = Probability of the stock price remaining stable after a previous drop

$P_{sr}$ = Probability of the stock price rising after a previous stable state

$P_{sd}$ = Probability of the stock price dropping after a previous stable state

$P_{ss}$ = Probability of the stock price remaining stable after a previous stable state.

Estimation and Testing Procedure

For our estimation and testing, we borrow from the fields of mathematics and binary operations. We deal basically with zeros and ones, as commonly used in binary combination and binary mathematics: a 1 (one) represents the actual occurrence of an event, while a 0 (zero) represents non-occurrence. This approach is adopted for the more than eight hundred daily stock prices, after which a frequency count is taken. Using simple probability and statistical methods common to die- and coin-tossing problems, a set of formulae is derived for the estimation of the probabilities of the various states. This procedure is in line with the methods adopted by Anderson and Goodman (1957), Fielitz and Bhargava (1972), and more recently by Obodos (2005) and Idolor (2009).

Research Hypotheses

The hypotheses to be tested will provide answers to the research questions and assist in dealing with issues raised in the research problem and objectives. They are stated in the null form as follows:

$H_{01}$: The transition probabilities for the vector Markov Chain are homogeneous.

$H_{02}$: The transition probabilities for the vector Markov Chain are stationary.

$H_{03}$: The observations at successive points in time are independent, against the alternative hypothesis that the observations are from a first- or higher-order Markov Chain.

These hypotheses serve as the link between theory, speculation and fact; the confirmation or otherwise of these propositions is the subject of the following sections.

Test of Homogeneity for the Vector Process

If the vector Markov process is homogeneous, then $\{X_t, t = 1, 2, \ldots, T\}$ reduces to an individual process, and the $\{X_{st}, t = 1, 2, \ldots, T\}$, $s = 1, 2, \ldots, S$, can be considered as Markov Chains with the same parameter values of the transition probabilities. If $\{X_t\}$ is not homogeneous, then the component processes must be studied separately as individual processes for each $s = 1, 2, \ldots, S$ in order to make specific statements about changes in the natural logarithms of prices for the $S$ different stocks. For simplicity, in some past empirical work with Markov Chains, researchers have investigated collective processes while implicitly assuming the homogeneity of the vector process (Fielitz and Bhargava, 1973).

To determine whether or not the vector Markov process $\{X_t, t = 1, 2, \ldots, T\}$ is homogeneous, a simple test can be devised using some of the methods given in Chakravarti, et al. (1967). The total time interval is divided into $C$ equal subintervals and, for each fixed $i, j$, a frequency matrix with elements $f_{sc}$ is formed, where $f_{sc}$ equals the number of transitions of stock $s$ from state $i$ to state $j$ during the $c$th time interval, for $s = 1, 2, \ldots, S$ and $c = 1, 2, \ldots, C$. We then compute the statistics:

$$U_{ij}^2 = \sum \frac{(F_{\text{observed}} - F_{\text{calculated}})^2}{F_{\text{calculated}}} \qquad (27)$$

$U_{ij}^2$ = Modified chi-square

$F_{\text{observed}}$ = Observed frequencies

$F_{\text{calculated}}$ = Calculated (expected) frequencies

Under the hypothesis of homogeneity, each statistic $U_{ij}^2$ has an asymptotic chi-square distribution with $(C-1)(S-1)$ degrees of freedom. If the calculated $U_{ij}^2$ is greater than the tabulated value, reject the null hypothesis; otherwise accept it (Fielitz and Bhargava, 1973).
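The statistic in (27) is the familiar chi-square form: a sum of (observed - expected)² / expected over a frequency table. A Python sketch, where, as a working assumption, the expected frequencies are derived from the table margins (the transition counts themselves are hypothetical):

```python
# Sketch of a chi-square statistic of the form in (27). Expected counts
# are computed from the table margins, an assumption on our part; the
# frequency counts below are hypothetical.

def chi_square_stat(table):
    """Return (U^2, degrees of freedom) for an R x C frequency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    grand = sum(rows)
    u2 = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = rows[i] * cols[j] / grand   # expected under homogeneity
            u2 += (obs - exp) ** 2 / exp
    df = (len(rows) - 1) * (len(cols) - 1)    # e.g. (S-1)(C-1) in the text
    return u2, df

# Hypothetical i-to-j transition counts: rows = stocks, cols = subintervals
f = [[30, 25, 20],
     [28, 30, 22]]
u2, df = chi_square_stat(f)
# Reject homogeneity if u2 exceeds the tabulated chi-square critical
# value for df degrees of freedom at the chosen significance level.
```

The same helper applies to the stationarity and order tests in (28) and (30), only the construction of the frequency table and the degrees-of-freedom formula change.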

In the empirical analysis the homogeneity test is applied first to the collective or vector process, and then to the individual processes.

Test for Stationarity of the Process

For testing the hypothesis of stationarity in a first-order Markov chain, the null hypothesis is $H_0$: $P_{ij}(t) = P_{ij}$ for all $i, j = 1, 2, \ldots, V$; $t = 1, 2, \ldots, T$. The chi-square test of stationarity in contingency tables consists of calculating, for each row $i$ of the transition matrix, the sum

$$U_i^2 = \sum \frac{(F_{\text{observed}} - F_{\text{calculated}})^2}{F_{\text{calculated}}} \qquad (28)$$

$U_i^2$ = Modified chi-square

$F_{\text{observed}}$ = Observed frequencies

$F_{\text{calculated}}$ = Calculated (expected) frequencies

where $f_{ij}(t)$ denotes the observed number of transitions from state $i$ at time $t-1$ to state $j$ at time $t$.

The assumption is made that the $\sum_j f_{ij}(t)$ are non-random for $i, j = 1, 2, \ldots, V$; $t = 1, 2, \ldots, T$. Under the null hypothesis, each $U_i^2$ has an asymptotic chi-square distribution with $(V-1)(T-1)$ degrees of freedom. Also, the $U_i^2$ for $i = 1, 2, \ldots, V$ are asymptotically independent, so that the sum

$$U^2 = \sum_{i} U_i^2 \qquad (29)$$

has an asymptotic chi-square distribution with $V(V-1)(T-1)$ degrees of freedom (Fielitz and Bhargava, 1973).

In the empirical analysis that follows, the stationarity test is applied to the collective or vector process, where the transition matrices reflect aggregated transitions across all securities.

Test for the Order of the Chain

For testing the null hypothesis that the Markov process is independent in time against the alternative hypothesis that it is dependent, i.e. first-order, the following statistic is computed:

$$U^2 = \sum \frac{(F_{\text{observed}} - F_{\text{calculated}})^2}{F_{\text{calculated}}} \qquad (30)$$

$U^2$ = Modified chi-square

$F_{\text{observed}}$ = Observed frequencies

$F_{\text{calculated}}$ = Calculated (expected) frequencies

The statistic $U^2$ has an asymptotic chi-square distribution with $(V-1)^2$ degrees of freedom. If the calculated $U^2$ is greater than the critical value $U_{1-\alpha}^2\big((V-1)^2\big)$, reject the null hypothesis; otherwise accept it (Fielitz and Bhargava, 1973).

In the empirical analysis that follows, the test for the order of the chain is applied to the collective or vector process, where the transition matrices reflect aggregated transitions across all securities.

Data Gathering

As at January 2005, the population of Nigerian banks stood at eighty-nine (89). A simple random selection, by balloting, of eight (8) banks listed on the Nigerian Stock Exchange was undertaken. The daily stock prices of the eight randomly selected banks over the period 4th January 2005 to 30th June 2008 served as the data source. The data, gathered from the official website of The Nigerian Stock Exchange and Cashcraft Asset Management Limited, showed the price movements of the randomly selected banks for the period under investigation. The only restriction made in selecting the stocks was that price data must have been available for the entire period covered; i.e., the bank must have been in existence and quoted on the Nigerian Stock Exchange since 4th January 2005. The banking sector of the stock exchange was chosen because it represented the most vibrant and actively traded sector of the exchange.

Furthermore, many of the banks that were quoted prior to 31st December 2005 have either merged with other banks or had their licenses revoked as a result of their inability to meet the minimum N25 billion capital base regulatory requirement set by the Central Bank of Nigeria (CBN), which reduced the number of banks in the country from 89 to 25. As a result, many new mega banks emerged on the Nigerian bourse after the consolidation programme. This has naturally reduced our ability to obtain data spanning a longer time period for a larger number of banks, which would be more desirable for a study of this nature; it does not, however, reduce the flavour and value of the findings. Finally, for the period under study, all data utilized were secondary in nature and were derived from secondary sources. The eight randomly selected banks used for this study are:

(1) Access Bank Plc

(2) Afribank Nigeria Plc

(3) Eco Bank Nigeria Plc

(4) First Bank of Nigeria Plc

(5) First City Monument Bank Plc (FCMB)

(6) Intercontinental Bank Plc

(7) Union Bank of Nigeria Plc

(8) Wema Bank Plc.

V. EMPIRICAL FINDINGS

The key issue in this study is to evaluate the predictive ability of Markov Chains in stock price analysis. To this end, the study uses a Markov Chain model to test the random walk hypothesis of stock prices. This is undertaken by conducting various tests aimed at ascertaining whether the Markov Chains are homogeneous, stationary, and independent in time. Modified chi-square (U²) test statistics were computed at the 5% significance level for the vector and individual process Markov Chains, in line with the methods developed and adopted by Anderson and Goodman (1957), Chakravarti et al. (1967) and Fielitz and Bhargava (1973).
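The three-state construction the paper relies on can be sketched in code. The following is a minimal illustration (not the authors' implementation), using a short hypothetical price series, of how daily closing prices map onto the rise (r), drop (d) and stable (s) states, and how transition counts and a row-stochastic transition probability matrix are tallied:

```python
def to_states(prices):
    """Convert a price series into a sequence of 'r'/'d'/'s' states."""
    states = []
    for prev, curr in zip(prices, prices[1:]):
        if curr > prev:
            states.append('r')   # rise
        elif curr < prev:
            states.append('d')   # drop
        else:
            states.append('s')   # stable
    return states

def transition_matrix(states, labels=('r', 'd', 's')):
    """Tally transition counts and estimate a row-stochastic matrix."""
    idx = {lab: k for k, lab in enumerate(labels)}
    counts = [[0] * len(labels) for _ in labels]
    for a, b in zip(states, states[1:]):
        counts[idx[a]][idx[b]] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 0.0 for c in row])
    return counts, matrix

# Hypothetical closing prices, for illustration only.
prices = [10.0, 10.5, 10.5, 10.2, 10.4, 10.4, 10.4, 10.9]
states = to_states(prices)            # ['r', 's', 'd', 'r', 's', 's', 'r']
counts, P = transition_matrix(states)
```

In the paper's notation, entry (i, j) of `P` estimates the probability of moving from state i to state j in one day; the homogeneity tests ask whether these entries can be taken as common across stocks.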

Two levels of analysis were undertaken: the first in respect of the individual bank stocks, the second for the collective bank stocks. Tables 1-11 present our research findings.

The high incidence of statistically significant observations in Tables 1-8 suggests that the hypothesis of homogeneity cannot be accepted. Interestingly, for the entire stock market, the probabilities of remaining in the same state from day to day, or of experiencing a large gain or loss, seem to vary from stock to stock, causing the nonhomogeneity.

Aggregate Analysis

In furtherance of the micro-evaluation of the individual stocks of the chosen banks, an aggregate analysis was also carried out to test the already stated hypotheses. The results from hypotheses 2 and 3 are of great importance here as we will only be considering the aggregate results derived from them. The empirical results are presented in tables 9-11.

Table 9 shows that the daily vector process for the collective stocks cannot be assumed to be homogeneous. For the three state process, the results suggest that individual stocks do not have identical probabilities for holding price level, or for making substantial price movements. Moreover, each stock seems to have the same probabilities for modest upward or downward movement, indicating at least some conformity of price behaviour.

The magnitudes of the U² values for the significant cases (shown in Table 10) are such that the significance probabilities are very small at the 5% level. The results indicate that the chains are non-stationary. Similarly, the magnitudes of the U² values for the significant cases (shown in Table 11) are such that the significance probabilities are very small. We therefore reject the null hypothesis and hold that the chains are of a first or higher order.
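The logic of the stationarity test can be illustrated, under simplifying assumptions, with the Anderson and Goodman (1957) likelihood-ratio form: transition counts from T subperiods are compared with the pooled counts, and -2 ln(lambda) is asymptotically chi-square with (T-1)m(m-1) degrees of freedom for an m-state chain. The sketch below uses hypothetical count matrices and shows the test's structure, not the exact U² computation of the paper:

```python
import math

def lr_stationarity(count_matrices):
    """-2 ln(lambda) comparing per-period transition matrices to the pooled one."""
    m = len(count_matrices[0])
    pooled = [[sum(cm[i][j] for cm in count_matrices) for j in range(m)]
              for i in range(m)]
    pooled_p = [[pooled[i][j] / sum(pooled[i]) if sum(pooled[i]) else 0.0
                 for j in range(m)] for i in range(m)]
    stat = 0.0
    for cm in count_matrices:
        for i in range(m):
            row_total = sum(cm[i])
            if row_total == 0:
                continue
            for j in range(m):
                nij = cm[i][j]
                if nij > 0 and pooled_p[i][j] > 0:
                    pij = nij / row_total          # subperiod estimate
                    stat += 2.0 * nij * math.log(pij / pooled_p[i][j])
    dof = (len(count_matrices) - 1) * m * (m - 1)
    return stat, dof

# Two hypothetical subperiods whose transition behaviour differs sharply,
# which should yield a large statistic (evidence of non-stationarity).
periods = [[[8, 2, 0], [1, 9, 0], [0, 0, 0]],
           [[2, 8, 0], [9, 1, 0], [0, 0, 0]]]
stat, dof = lr_stationarity(periods)
```

A large value of `stat` relative to the chi-square critical value at `dof` degrees of freedom leads to rejecting stationarity, mirroring the decisions reported in Table 10.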

VI. RECOMMENDATIONS

The non-stationary behaviour of the Markov Chains in describing both the vector and individual processes defined from daily closing price changes is noteworthy in that any dependence found is constantly changing in time. Indeed, the non-stationary condition may account for the fact that, to date, efforts to formulate models to predict stock-price movements on the basis of past daily and weekly price data alone have generally been unsuccessful (Fielitz and Bhargava, 1973).

Thus far in the development of the mathematical theory of Markov Chains, little is known regarding the empirical analysis of non-stationary models (those with non-stationary transition probabilities). This class of chains is so general that in most cases they are of little predictive value. Even the two-state chain is extremely complicated to analyse, and widely different types of behaviour are possible, depending on the nature of the transition probabilities. Thus, finding some specific manner in which the transition probabilities change is necessary before a detailed study becomes possible. However, the possibility exists that the Markov formulation of the individual process model developed here can be used for predictive purposes if the non-stationarity present in the transition probabilities can be identified and corrected. Efforts along this line, say, by regression analysis, seem to be fruitful areas for further research. In this light, we can only adopt the position that, at best, Markov Chains (for now) only help to enrich our understanding of stock price behaviour (as far as the random walk hypothesis is concerned), even if the ultimate goal of prediction proves difficult and elusive.

Furthermore, each of the statistics tested has an asymptotic chi-square distribution, meaning that the accuracy of the approximation improves as the sample size grows. To this end, it is suggested that future work take a census of all quoted stocks, or alternatively of all the stocks quoted in the banking sector of the Nigerian Stock Exchange, with the aim of determining whether the results of this study hold true for all quoted stocks.

In addition, the study's results seem to indicate frequent transitions from state to state (shown by the continuous changes in the probability vectors). This may be indicative of robustness in the Nigerian capital market occasioned by frequent trading, which causes constant fluctuations in the prices of stocks and is good for the continuous growth of the market. To this end, it is recommended that the relevant regulatory agencies of government, such as the Central Bank of Nigeria (CBN) and the Securities and Exchange Commission (SEC), strengthen and streamline the regulatory framework and improve their supervisory capacity, in order to ensure that the stock market remains vibrant. To achieve this goal, concerted efforts must be made to reduce sharp practices among stockbrokers, to instill discipline and good corporate governance among market participants, and to detect and prosecute any fraud capable of undermining the integrity of the stock market.

Finally, the behaviour of stock prices in Nigeria, as investigated and discussed in this study, may serve as a very useful springboard for other less developed countries (LDCs), or as added experience for others. This is more so as there is a need to confirm or refute many of the findings on stock price behaviour. Empirical work has unearthed some stylized facts in this very controversial area of finance, but this evidence is largely based on stocks quoted on American and European bourses, and it is not at all clear how these facts relate to different theoretical models, economic conditions and markets. Without testing the robustness of these findings outside the environment in which they were uncovered, it is hard to determine whether these empirical regularities are merely spurious correlations, let alone whether they support one theory or another. It is our sincere desire that similar work be done in LDCs (Nigeria inclusive) as an important attempt to start filling this gap in our knowledge.

VII. CONCLUSION

The study examined the stock prices of eight randomly selected banks that are quoted on the floor of the Nigerian bourse. This was with the aim of predicting the behaviour and future price direction of the selected stocks, wholly on the basis of past price information. The results showed that prices could not be predicted on the basis of the computed probabilities; and tended to agree with the already established opinion in the empirical literature that stock prices are random. One possible explanation for this occurrence is that different companies are affected at different times by new information that could produce significant differences in the runs and in the large reversal patterns among daily stock prices. For example, some companies might experience price runs as a result of favourable (unfavourable) earning reports, dividend policies, and industry news, while at the same time other companies would not be similarly affected by this information and their daily price change behaviour would then be different. On the other hand, some companies may experience large reversal patterns because of the uncertainty relative to new information, while at the same time other companies would not be similarly affected. Moreover, because new information becomes available at various times, heterogeneous behaviour among stocks is further compounded. While the price behaviour of some groups might be affected by today's news, tomorrow's news could conceivably affect a different group of stocks.

The non-stationary behaviour of the Markov Chains in describing both the vector and individual processes defined from daily closing price changes is noteworthy in that any dependence found is constantly changing in time. Indeed, the non-stationary condition may account for the fact that to date efforts to formulate models to predict stock-price movements on the basis of past daily and weekly price data alone have generally been unsuccessful.

Thus far in the development of the mathematical theory of Markov Chains, little is known regarding the empirical analysis of non-stationary models (those with non-stationary transition probabilities). This class of chains is so general that in most cases they are of little predictive value. Even the two-state chain is extremely complicated to analyse, and widely different types of behaviour are possible, depending on the nature of the transition probabilities. Thus, finding some specific manner in which the transition probabilities change is necessary before a detailed study becomes possible. However, the possibility exists that the Markov formulation of the individual process model developed here can be used for predictive purposes if the non-stationarity present in the transition probabilities can be identified and corrected. Efforts along this line, say, by regression analysis, seem to be fruitful areas for further research. In this light, we can only adopt the position that, at best, Markov Chains (for now) only help to enrich our understanding of stock price behaviour (as far as the random walk hypothesis is concerned), even if the ultimate goal of prediction proves difficult and elusive.

We conclude with a consideration of the predictive capabilities of a Markov process representation of price changes when the conditions of stationarity and homogeneity in the vector process are satisfied. In a stationary Markov process, tomorrow's expected price change, given today's price change, can be estimated. However, not much can be said about expected price changes more than one or two steps away from a starting point. After several steps, the memory of the starting point is lost; all that remains is the steady-state transition matrix and the characteristic vector, which gives the probability of being in a particular state independent of the starting state. The use of Markov Chains in portfolio analysis is a virtually unexplored field and a very promising one. The results and methods presented here are rudimentary and could form the basis for further research. Much work needs to be done on such refinements as a Bayesian-type updating of the transition probability matrix (TPM) and on refinements of the model in order for it to have more operational validity.
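The "characteristic vector" referred to above can be sketched as follows: for a stationary chain, the steady-state distribution is approximated here by power iteration, i.e., repeated multiplication of the state-probability row vector by the transition probability matrix. The three-state rise/drop/stable matrix below is hypothetical, for illustration only, and is not estimated from the paper's data:

```python
def steady_state(P, iterations=500):
    """Approximate the stationary distribution pi satisfying pi = pi * P."""
    n = len(P)
    pi = [1.0 / n] * n                     # start from a uniform guess
    for _ in range(iterations):
        # one step of pi <- pi * P (row vector times row-stochastic matrix)
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical rise (r), drop (d), stable (s) transition matrix.
P = [[0.50, 0.30, 0.20],
     [0.30, 0.50, 0.20],
     [0.25, 0.25, 0.50]]
pi = steady_state(P)
```

Once convergence is reached, `pi[j]` gives the long-run probability of observing state j on any given day, independent of the starting state, which is exactly the sense in which the memory of the starting point is lost.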

References

Abosede, A. J. (2008), "Portfolio Diversification and Performance in Nigerian Commercial Banks". Unpublished Doctoral Dissertation, University of Benin, Benin City, Nigeria.

Agbadudu, A. B. (1996), Elementary Operations Research, Benin City: A.B. Mudiaga Limited.

Anderson, T. W. and Goodman, L. A. (1957), "Statistical Inference about Markov Chains". Annals of Mathematical Statistics, Vol. 28, 89-110.

Bhargava, T. N. (1962), "A Stochastic Model for Time Changes in a Binary Dyadic Relation with Application to Group Dynamics". Ph.D. Dissertation, Michigan State University, United States of America.

Cecchetti, S. G., Lam, P. and Mark, N. C. (1990), "Mean Reversion in Equilibrium Asset Prices". The American Economic Review, Vol. 80, 398-415.

Chakravarti, et al. (1967), Handbook of Methods of Applied Statistics, Vols. I and II. New York: Wiley.

Dryden, M. (1969), "Share Price Movements: A Markovian Approach". Journal of Finance, Vol. 24, 49-60.

Engel, C. and Hamilton, J. D. (1990), "Long Swings in the Exchange Rate: Are they in the Data and do the Markets Know it?". The American Economic Review, Vol. 80, 689-713.

Fielitz, B. D. (1969), "On the Behaviour of Stock Price Relatives as a Random Process with an Application to New York Stock Exchange Prices". Unpublished Doctoral Dissertation, Kent State University, United States of America.

Fielitz, B. D. (1971), "Stationarity of Random Data: Some Implications for the Distribution of Stock Price Changes". Journal of Financial and Quantitative Analysis, Vol. 6, 1025-1034.

Fielitz, B. D. (1975), "On the Stationarity of Transition Probability Matrices of Common Stocks". Journal of Financial and Quantitative Analysis, Vol. 10, 327-339.

Fielitz, B. D. and Bhargava, T. N. (1973), "The Behaviour of Stock-Price Relatives--A Markovian Analysis". Operations Research, Vol. 21, No. 6, 1183-1199.

Gregory, A. W. and Sampson, M. (1987), "Testing the Independence of Forecast Errors in the Forward Foreign Exchange Market Using Markov Chains: A Cross Country Comparison". International Journal of Forecasting, Vol. 3, 97-113.

Hamilton, J. D. (1989), "A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle". Econometrica, Vol. 57, 357-384.

Idolor, E. J. (2009), "The Behaviour of Stock Prices in the Nigerian Capital Market: A Markovian Analysis". Unpublished M.Sc. Thesis, University of Benin, Nigeria.

McQueen, G. and Thorley, S. (1991), "Are Stock Returns Predictable? A Test Using Markov Chains". The Journal of Finance, Vol. 46, No. 1, 239-263.

Niederhoffer, V. and Osborne, M. F. M. (1966), "Market Making and Reversal on the Stock Exchange". Journal of the American Statistical Association, Vol. 61, 897-916.

Obodos, E. (2007), "Predicting Stock Market Prices in Nigeria: A Preliminary Investigation". Unpublished MBA Thesis, University of Benin, Benin City, Nigeria.

Ryan, T. M. (1973), "Security Prices as Markov Processes". Journal of Financial and Quantitative Analysis, Vol. 8, No. 1, 17-36.

Samuelson, P. (1988), "Longrun Risk Tolerance when Equity Returns are Mean Regressing: Pseudoparadoxes and Vindication of Businessman's Risk". Presented at the Colloquium of James Tobin, Yale University, United States of America.

Taha, H. A. (2001), Operations Research. New Delhi: Prentice Hall of India.

Taylor, B. W. (1996), Introduction to Management Science. New York: McGraw-Hill.

Turner, C. M., Startz, R. and Nelson, C. R. (1989), "A Markov Model of Heteroskedasticity, Risk and Learning in the Stock Market". Journal of Financial Economics, Vol. 25, 3-22.

PETER O. ERIKI AND ESEOGHENE J. IDOLOR

University of Benin, Nigeria

* This article is based on Eseoghene Joseph Idolor's M.Sc. Thesis supervised by Peter Omohezuan Eriki (Ph.D)
Table 1
Tests of Homogeneity in Vector--Process Markov Chain Models
for Individual Stock (Access Bank Plc)

State ij   U²(ij)             Decision

rr (1,1)             250.43   Significant *
rd (1,2)             228.46   Significant *
rs (1,3)              41.73   Not significant
dr (2,1)             232.85   Significant *
dd (2,2)             202.09   Significant *
ds (2,3)              54.91   Not significant
sr (3,1)              41.73   Not significant
sd (3,2)              57.10   Not significant
ss (3,3)             762.28   Significant *

* Statistically significant at the 95% confidence level

Table 2
Tests for Homogeneity in Vector Process Markov Chain
Models (Afribank Nigeria Plc)

State ij   U²(ij)             Decision

rr (1,1)             158.16   Significant *
rd (1,2)              90.05   Significant *
rs (1,3)              26.35   Not significant
dr (2,1)              96.65   Significant *
dd (2,2)              94.45   Significant *
ds (2,3)              19.76   Not significant
sr (3,1)              21.96   Not significant
sd (3,2)              26.35   Not significant
ss (3,3)            1337.84   Significant *

* Statistically significant at the 95% confidence level

Table 3
Test for Homogeneity in Vector Process Markov Chain Models
(Eco Bank Nigeria Plc)

State ij   U²(ij)             Decision

rr (1,1)             276.78   Significant *
rd (1,2)             149.38   Significant *
rs (1,3)              28.54   Not significant
dr (2,1)             149.38   Significant *
dd (2,2)             257.01   Significant *
ds (2,3)              26.35   Not significant
sr (3,1)              28.54   Not significant
sd (3,2)              28.54   Not significant
ss (3,3)             926.65   Significant *

* Statistically significant at the 95% confidence level

Table 4
Tests of Homogeneity in Vector-Process Markov Chain Models
(First Bank Of Nigeria Plc)

State ij   U²(ij)            Decision

rr (1,1)            421.77  Significant *
rd (1,2)            289.90  Significant *
rs (1,3)             30.74  Not significant
dr (2,1)            285.57  Significant *
dd (2,2)            386.63  Significant *
ds (2,3)             30.74  Not significant
sr (3,1)             30.74  Not significant
sd (3,2)             30.74  Not significant
ss (3,3)            360.26  Significant *

* Statistically significant at the 95% confidence level

Table 5
Tests of Homogeneity in Vector-Process Markov Chain Models
(First City Monument Bank Plc)

State ij   U²(ij)             Decision

rr (1,1)             226.26   Significant *
rd (1,2)             169.14   Significant *
rs (1,3)              61.49   Not significant
dr (2,1)             180.13   Significant *
dd (2,2)             142.78   Significant *
ds (2,3)              39.50   Not significant
sr (3,1)              46.13   Not significant
sd (3,2)              76.88   Significant *
ss (3,3)             896.29   Significant *

* Statistically significant at the 95% confidence level

Table 6
Tests of Homogeneity in Vector-Process Markov Chain Models
(Intercontinental Bank Plc)

State ij   U²(ij)             Decision

rr (1,1)             322.91   Significant *
rd (1,2)             385.57   Significant *
rs (1,3)              79.08   Significant *
dr (2,1)             232.85   Significant *
dd (2,2)             202.09   Significant *
ds (2,3)              54.91   Not significant
sr (3,1)              41.73   Not significant
sd (3,2)              57.10   Not Significant
ss (3,3)             762.28   Significant *

* Statistically significant at the 95% confidence level

Table 7
Tests of Homogeneity in Vector-Process Markov Chain Models
(Union Bank of Nigeria Plc)

State ij   U²(ij)             Decision

rr (1,1)             262.46   Significant *
rd (1,2)             289.96   Significant *
rs (1,3)              37.34   Not Significant
dr (2,1)             303.15   Significant *
dd (2,2)             322.91   Significant *
ds (2,3)              21.96   Not significant
sr (3,1)              28.96   Not significant
sd (3,2)              30.74   Not Significant
ss (3,3)             375.64   Significant *

* Statistically significant at the 95% confidence level

Table 8
Tests of Homogeneity in Vector-Process Markov Chain Models
(Wema Bank Plc)

State (ij)   U²(ij)             Decision

rr (1,1)               250.43   Significant *
rd (1,2)               131.79   Significant *
rs (1,3)                15.36   Not Significant
dr (2,1)               127.40   Significant *
dd (2,2)               184.52   Significant *
ds (2,3)                26.35   Not significant
sr (3,1)                19.76   Not significant
sd (3,2)                24.15   Not Significant
ss (3,3)              1067.64   Significant *

* Statistically significant at the 95% confidence level

Table 9
Tests of Homogeneity in Vector-Process Markov-Chain
Models for the Collective Stocks

State (ij)   U²(ij)             Decision

rr (1,1)              2269.28   Significant *
rd (1,2)              1634.41   Significant *
rs (1,3)               320.72   Significant *
dr (2,1)              1676.14   Significant *
dd (2,2)              1862.87   Significant *
ds (2,3)               300.95   Significant *
sr (3,1)               283.38   Significant *
sd (3,2)               336.11   Significant *
ss (3,3)              6166.42   Significant *

* Statistically significant at the 95% confidence level

Table 10
Tests for Stationarity in Vector Process Markov Chain models
(Changes in Daily and Weekly Closing Prices)

Lag (in days)   U²          Decision (@ 5% level)

1                 9167.78   Non stationary
10               12739.74   Non stationary

Table 11
Test for the Order of the Chain in Vector Process Markov-Chain Models
(Changes in Daily and Weekly Closing Prices)

Lag (in days)   U²          Decision (@ 5% level)

1                9170.54   1st or higher order
10              21057.88   1st or higher order