Brownian motion can be treated as a limit of random walks, and a random walk can be simulated by tossing a fair coin. Suppose there is an ideal coin-tossing game in which each player wins or loses a constant amount with probability 0.5 at each throw. By the Law of Large Numbers, the proportion of wins approaches 0.5 as the number of games increases. The cumulative result, however, is likely to fluctuate, changing sign from positive to negative and back. How often does the path of such a random walk cross the x-axis, i.e. what fraction of time is spent on the “positive” side vs. the “negative” side?

Taking into account the Law of Large Numbers and the obvious symmetry of the game, between any two consecutive draws it is equally likely to win or lose. With increasing duration of the walk more draws will occur, which seems to imply that the fraction of time spent on the “positive” side is 0.5.

But it may come as a complete surprise that the path crosses the x-axis rarely. Moreover, as more games are played, the frequency of crossings decreases. This is perhaps the most counterintuitive aspect of Brownian motion.

However, there are principles that govern such random walks: the arc-sine laws. They assume that the amount that can be won equals the amount that can be lost and that this amount is constant. The arc-sine laws also assume a 50% chance of winning and a 50% chance of losing at each toss, i.e. the game has a mathematical expectation of zero.

**The first arc-sine law** states that in N games the probability of the fraction of time spent in the winning zone tends to:

as k and (N − k) approach infinity, where k is the number of games spent on the positive side.

After applying some mathematical rules, the cumulative probability that the fraction of time spent on the positive side is less than X (with 0 < X < 1) is

By simulation it can be seen that surprisingly long series of positive or negative results can occur. This may explain why some traders appear to lose money consistently while others appear to win consistently, purely by chance.
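The simulation can be sketched in a few lines of Python. The first arc-sine law's limiting CDF, (2/π)·arcsin(√x), is compared with the empirical fraction of time a 1,000-step walk spends above zero; counting a step as “positive” only when the running sum is strictly positive is a simplifying assumption of this sketch:

```python
import math
import random

def time_positive_fraction(n_steps, rng):
    """Simulate a +/-1 random walk and return the fraction of steps
    spent on the positive side (running sum strictly above zero)."""
    pos, positive = 0, 0
    for _ in range(n_steps):
        pos += 1 if rng.random() < 0.5 else -1
        if pos > 0:
            positive += 1
    return positive / n_steps

def arcsine_cdf(x):
    """First arc-sine law: limiting P(fraction of time positive <= x)."""
    return 2.0 / math.pi * math.asin(math.sqrt(x))

rng = random.Random(42)
fractions = [time_positive_fraction(1000, rng) for _ in range(2000)]

# Empirical probability that a walk spends at most 20% of its time positive
empirical = sum(f <= 0.2 for f in fractions) / len(fractions)
print(f"empirical P(fraction <= 0.2) = {empirical:.3f}")
print(f"arcsine CDF at 0.2           = {arcsine_cdf(0.2):.3f}")
```

Long excursions on one side dominate: walks spending over 90% of their time positive turn out to be far more common than walks splitting their time roughly evenly.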

**The second arc-sine law** states that the maximum point of the walk will most likely occur near the endpoints and least likely near the center. Using the same formula as above, the probabilities of being in positive territory for k out of N = 10 tosses are:
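For a finite game the distribution can be written in closed form. A sketch using the discrete arc-sine distribution P(k) = C(2k, k)·C(2n−2k, n−k) / 4ⁿ for n pairs of steps; treating 10 tosses as n = 5 pairs is an assumption of this sketch:

```python
from math import comb

def discrete_arcsine(n):
    """Discrete arc-sine law for a fair +/-1 walk of 2n steps:
    P(2k steps on the positive side) = C(2k,k) * C(2n-2k, n-k) / 4**n."""
    return [comb(2 * k, k) * comb(2 * (n - k), n - k) / 4**n
            for k in range(n + 1)]

# 10 tosses -> n = 5 pairs of steps; 2k steps spent on the positive side
probs = discrete_arcsine(5)
for k, p in enumerate(probs):
    print(f"P({2*k:2d} of 10 steps positive) = {p:.4f}")
```

The distribution is U-shaped: k = 0 and k = n are the most likely outcomes, matching the statement that the extremes are most probable and the center least probable.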

CVA is the value of the expected loss from a counterparty default and can be formulated as follows:

CVA – the adjustment to the price of a derivative to account for counterparty credit risk. It is a price, not a risk measure.

PD – probability of default, i.e. how likely the counterparty is to default

LGD – loss given default after recovery

EAD – exposure at default

Unilateral CVA – only the counterparty can default.

As banks themselves have become risky, counterparty risk must be analyzed from a bilateral perspective. Bilateral CVA is the adjustment that reflects the credit risks faced by both counterparties.

Bilateral CVA is the sum of the asset and liability CVA. Liability CVA is also known as the debit valuation adjustment (DVA). The DVA reduces the value of a derivative liability.

EAD can be estimated from EPE/ENE profiles.

EPE – Expected positive exposure

ENE – Expected negative exposure

The EPE and ENE profiles are central to the calculation of CVA and DVA. Simple CVA and DVA approximation formulas can be written as:

CVA = Present Value of (PD1 * EPE * LGD)

DVA = Present Value of (PD2 * ENE * LGD)

As can be seen, the mechanics of calculating CVA and DVA are almost identical but incorporate different PDs: the counterparty's PD for CVA and the bank's own PD for DVA.
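A minimal sketch of the approximation formulas above; all numbers below (discount factors, exposures, PDs) are illustrative placeholders, not figures from any real trade:

```python
def credit_adjustment(pd_per_bucket, exposures, lgd, discount_factors):
    """Sum over time buckets of DF * PD * exposure * LGD.
    Pass EPE with the counterparty's PDs to get CVA,
    ENE with the bank's own PDs to get DVA."""
    return sum(df * pd * e * lgd
               for df, pd, e in zip(discount_factors, pd_per_bucket, exposures))

# Hypothetical quarterly buckets (placeholder values)
dfs = [0.999, 0.997, 0.995, 0.993]        # discount factors
epe = [0.0, 0.0, 150_000.0, 250_000.0]    # expected positive exposure
ene = [900_000.0, 250_000.0, 0.0, 0.0]    # expected negative exposure (abs)
pd_cpty = [0.002, 0.002, 0.002, 0.002]    # counterparty unconditional PDs
pd_own = [0.001, 0.001, 0.001, 0.001]     # bank's own unconditional PDs

cva = credit_adjustment(pd_cpty, epe, lgd=0.6, discount_factors=dfs)
dva = credit_adjustment(pd_own, ene, lgd=0.6, discount_factors=dfs)
print(f"CVA = {cva:,.2f}, DVA = {dva:,.2f}")
```

Discounting each bucket's expected loss and summing mirrors the "Present Value of (PD * exposure * LGD)" form of the two formulas above.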

If there is a derivative deal between a Bank and a Corporate, the CVA of the Bank is the DVA of the Corporate and vice versa.

**Working example.** Suppose there is a 1-year plain vanilla interest rate swap with a $1 billion notional. Bank A pays a fixed rate of 0.6% and receives 3-month LIBOR from Counterparty_1. The day count convention is 30/360.

Bank A valued the swap before CVA and DVA at –$875 262.

Bank A is expected to pay $874 453 in the first period and $249 532 in the second period, and to receive $248 723 in the fourth period. Therefore, Counterparty_1 is exposed to Bank A’s credit risk in the first two periods, while Bank A is exposed to a default of Counterparty_1 in the fourth period.

CVA/DVA calculation is divided into the following steps:

**Step №1.** Calculate the present value (PV) of the EAD for each 3-month period.

**Step №2.** Calculate PD and LGD. For simplicity, PD and LGD are taken as theoretical values.

More details about PD calculation can be found in the article Probabilities of Default.

In this example, it is assumed that forward rates will realize as expected as of 01.01.2016. Calculating the PV of the EAD for each period consists of determining the mark-to-market of the swap. It is a simple procedure; more details can be found in the CVADVA file.

The end exposure as of 31.03.2016 is as follows:

The end exposure as of 30.06.2016 is as follows:

The end exposure as of 30.09.2016 is as follows:

Taking into account PD, LGD and EAD (for the calculation of EAD in the field «Bucket start», see the **Excel file**), the final PV of the expected loss (EL) can be calculated for each time period. The total sum represents a CVA/DVA adjustment of $5 691.

The Bucket_1 amount means that Counterparty_1 is exposed to Bank A’s credit risk; therefore, the PD and LGD of Bank A are taken into account. The amounts in the first and second buckets represent the DVA amount.

The Bucket_3 amount means that Bank A is exposed to Counterparty_1’s credit risk; that’s why the PD and LGD of Counterparty_1 are taken into account. The amounts in the third and fourth buckets represent the CVA amount.

Probability of default is a financial term describing the likelihood of a default over a particular time horizon. Measuring the probability of default for a credit exposure over a given horizon is often the first step in credit risk modeling and pricing. For example, CVA is the process through which counterparty credit risk is priced and hedged. For CVA pricing, it is necessary to divide the remaining life of a derivative instrument into a number of time buckets and to calculate the unconditional probability of a counterparty default for each time bucket.

Term structure of credit spreads for the counterparty can be observed in the market. Such spreads can be used to estimate unconditional probabilities for a specific time bucket.

Widely used probabilities of default are:

Cumulative probability

Unconditional probability

Conditional probability

**The cumulative probabilities** show the default chance through time.

Suppose that CDSs on SuperBank were trading at 50, 70, 80 and 100 basis points for 6, 12, 18 and 24 months respectively.

Using these quotes, cumulative PDs can be approximated as follows:

For the first bucket, the cumulative PD = 1 – exp(-0.5% * 0.5 / 60%) = 0.42%.

For example, SuperBank has a 0.42% chance of defaulting within 6 months, a 1.16% chance within one year, a 1.98% chance within 18 months and a 3.28% chance within 2 years.
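The bootstrap above can be sketched as follows, assuming the credit-triangle approximation PD(t) = 1 − exp(−spread·t / LGD) with LGD = 60%, as in the formula for the first bucket:

```python
from math import exp

def cumulative_pd(spread, t, lgd=0.6):
    """Approximate cumulative default probability from a CDS spread:
    PD(t) = 1 - exp(-spread * t / LGD)."""
    return 1.0 - exp(-spread * t / lgd)

# SuperBank CDS quotes: 50, 70, 80, 100 bp at 6, 12, 18, 24 months
quotes = [(0.0050, 0.5), (0.0070, 1.0), (0.0080, 1.5), (0.0100, 2.0)]
cum_pds = [cumulative_pd(s, t) for s, t in quotes]
for (s, t), pd in zip(quotes, cum_pds):
    print(f"{t:>4} years: cumulative PD = {pd:.2%}")
```

Rounded to two decimal places these reproduce the 0.42%, 1.16%, 1.98% and 3.28% figures quoted above.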

**The unconditional probabilities** are the probabilities of default in a given bucket as viewed from time zero. The unconditional probability of a bond defaulting during bucket t is equal to the cumulative probability of default in bucket t minus the cumulative probability of default in bucket t−1.

From the picture above, the unconditional probability of default in the second bucket is 1.16% – 0.42% = 0.74%.

**The conditional probability** is the probability of default in a given bucket conditional on no prior defaults.

This probability is equal to the unconditional probability of default in time t divided by the probability of survival at the beginning of the period.

The probability of survival is 100% minus the cumulative probability. For example, the probability that SuperBank will survive until the end of the third bucket is 98.02% (100% minus its cumulative probability of 1.98%).

The probability that SuperBank will default during the fourth bucket, conditional on no prior defaults, is 1.32% (1.3% / 98.02%).
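The unconditional and conditional probabilities can be derived from the cumulative ones in a few lines; the cumulative PDs are re-bootstrapped here from the SuperBank quotes, with LGD = 60% assumed:

```python
from math import exp

# Cumulative PDs bootstrapped from the SuperBank CDS quotes above
spreads_times = [(0.0050, 0.5), (0.0070, 1.0), (0.0080, 1.5), (0.0100, 2.0)]
cum = [1.0 - exp(-s * t / 0.6) for s, t in spreads_times]

# Unconditional PD in bucket t: cumulative(t) - cumulative(t-1)
uncond = [cum[0]] + [cum[i] - cum[i - 1] for i in range(1, len(cum))]

# Conditional PD in bucket t: unconditional(t) / survival at start of bucket t
surv = [1.0] + [1.0 - c for c in cum[:-1]]
cond = [u / s for u, s in zip(uncond, surv)]

for i in range(len(cum)):
    print(f"bucket {i + 1}: uncond = {uncond[i]:.2%}, cond = {cond[i]:.2%}")
```

The second-bucket unconditional PD comes out at 0.74% and the fourth-bucket conditional PD at 1.32%, matching the figures above.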

The beggar-thy-neighbor term describes economic policies that aim to enrich one country at the expense of other countries.

Conventionally, the term is used in relation to international trade policies such as import quotas and tariffs, competition over corporation tax rates, and currency depreciation. The welfare gain in the country imposing the beggar-thy-neighbor policy is offset by the welfare loss in the countries affected by the policy.

**1. Trade barriers**

The effect from import quotas and tariffs is temporary and generally leads to retaliation.

In order to prevent such beggar-thy-neighbor trade policies, the WTO serves as a forum for countries to thrash out their differences on trade issues. Moreover, tariffs lead to a net economic welfare loss: consumers pay higher prices while only a small sector benefits.

Suppressing wages to subsidize exports could be beggar-thy-neighbor if the sole purpose is to increase a country’s competitiveness in the international markets.

No wonder, such policies are not popular.

**2. Corporation tax rates**

Tax competition is an effort to draw investment away from countries with higher corporation tax.

Although tax rates are lower, the idea is to make up tax revenue by attracting more firms. Of course, lower corporation tax doesn’t increase global economic welfare; it just diverts investment from high-tax countries to low-tax ones.

G20 corporate income tax rates as of 2014 are shown below:

**3. Currency depreciation **

Also known as a competitive devaluation: a country gains from a depreciation of its own currency through higher exports.

The choice of monetary policy of one country can strongly affect its neighbors for the worse by making the other country’s goods more expensive. Therefore, countries can engage in currency wars through what is known as a beggar-thy-neighbor policy.

In July 1944, the system of fixed exchange rates known as the Bretton Woods system was established. The system meant that competitive devaluation was not an option. In addition, the International Monetary Fund and the World Bank were instituted.

The Bretton Woods system was created to prevent the return of the beggar-thy-neighbor competitive currency devaluations of the 1930s.

The main feature of Bretton Woods was a system of fixed and adjustable exchange rates, managed by the IMF, and backed by the US dollar.

In 1971, US president Richard Nixon announced that the dollar would no longer be convertible to gold. By 1973 the Bretton Woods system of fixed exchange rates had been abandoned.

Since the collapse of the Bretton Woods system, many countries have intervened heavily in the foreign exchange markets. Currency wars began…

All of these policies were created to beggar-thy-neighbor/prosper-thy-self. Sometimes, the country imposing such policies is itself adversely affected, the main impact being a deterioration of its terms of trade, i.e. beggar-thy-self/prosper-thy-neighbor.

Long-term funding sources

Enterprise Value represents the value of the company’s operating assets, but there is much confusion about what to include.

Take Shareholders’ Equity, add Debt and subtract Non-operating assets. But what is the point of adding and subtracting different items?

The Enterprise Value calculation is always somewhat subjective. Nevertheless, there are 3 rules to take into account when moving from Shareholders’ Equity to Enterprise Value:

1. Extra costs for an acquirer of the company

2. Long-term funding source

3. Non-operating assets


Balance Sheet may look like this:

Cash…..$100

Short-term investments…..$10

Receivables…..$50

Inventories…..$5

Property, Plant & Equipment…..$1 000

Equity and investments…..$500

Goodwill…..$200

Intangible assets…..$80

Short-term debt…..$10

Accounts payable…..$20

Taxes payable…..$30

Long-term debt…..$500

Deferred Tax Liabilities…..$100

Other long-term Liabilities…..$50

Shareholders’ Equity

Common stock…..$800

Additional Paid-In Capital…..$100

Retained Earnings…..$100

Treasury stocks…..$135

**Total Shareholders’ Equity: $1 135**

Minority interest…..$100

**Total Equity: $1 235**

**Total Liabilities and Equity: $1 945**


Shares from Convertible securities, Warrants, Rights, Options and other claims.

Let’s say, Diluted Equity Value is $1 000.

2. Taking into account the balance sheet, it is time to define which items to add to or subtract from the Diluted Equity Value.

2.1 Less: -$100, Cash is a non-operating asset

2.2 Less: -$10, if Short-term investments are liquid

2.3 Plus: $50, Receivables are operating assets

2.4 Plus: $5, Inventories are operating assets

2.5 Plus: $1 000, Property, Plant & Equipment are operating assets

2.6 Less: -$500, Equity and investments, they are non-operating assets

2.7 Plus: $200, Goodwill reflects acquired companies and is an operating asset

2.8 Plus: $80, Intangible assets are operating assets in the same way as Goodwill

2.9 Plus: $10, Short-term debt is a funding source and needs to be repaid in an acquisition

2.10 Less: -$20, Accounts payable – not a long-term funding source

2.11 Less: -$30, Taxes payable – not a long-term funding source

2.12 Plus: $500, Long-term debt as a funding source

2.13 Less: -$100, Deferred Tax Liabilities – a temporary source

2.14 Plus: $50, Other long-term Liabilities – depends on what is inside. Should be investigated

2.15 Shareholders’ Equity: Shouldn’t be included because Market Value (Capitalization) is already taken into account

2.16 Plus: $100, Minority interest – represents a long-term funding source

Finally, EV = $1 000 – $100 – $10 + $50 + $5 + $1 000 – $500 + $200 + $80 + $10 – $20 – $30 + $500 – $100 + $50 + $100 = $2 235
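The build-up can be sketched as a short script. The Diluted Equity Value of $1 000 follows the walk-through, and each adjustment amount follows the balance-sheet figures above:

```python
# Sketch of the Enterprise Value build-up, starting from a Diluted Equity
# Value of $1,000 and applying each add/subtract rule from the walk-through.

diluted_equity_value = 1_000

# item: (amount, sign) -- +1 adds to EV, -1 subtracts from EV
adjustments = {
    "Cash (non-operating)":            (100, -1),
    "Short-term investments (liquid)": (10, -1),
    "Receivables":                     (50, +1),
    "Inventories":                     (5, +1),
    "Property, Plant & Equipment":     (1_000, +1),
    "Equity and investments":          (500, -1),
    "Goodwill":                        (200, +1),
    "Intangible assets":               (80, +1),
    "Short-term debt":                 (10, +1),
    "Accounts payable":                (20, -1),
    "Taxes payable":                   (30, -1),
    "Long-term debt":                  (500, +1),
    "Deferred Tax Liabilities":        (100, -1),
    "Other long-term Liabilities":     (50, +1),
    "Minority interest":               (100, +1),
}

ev = diluted_equity_value + sum(a * s for a, s in adjustments.values())
print(f"Enterprise Value = ${ev:,}")
```

Keeping each item with an explicit sign makes it easy to audit which rule (acquirer cost, long-term funding source, non-operating asset) drove each adjustment.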

Here, the EV is more than twice the Market Value. Definitely, I would buy such shares!

Let’s say you earned $1 one week, $10 the following week and $100 the third week.

As it follows from the previous article, Measures of Central Tendency, the mean μ for this distribution is $37 per week. The mean reveals the center of the distribution, but it doesn’t reveal the variability in the distribution and doesn’t provide any information about how the earnings are spread apart.

In order to summarize a data set accurately and efficiently, it isn’t enough to use only measures of central tendency. To quantify how representative a single value is of the distribution, and the degree to which the scores differ from one another, measures of variability were created.

Let’s consider a set of data:

1. {1,1,1}

2. {1,1,2}

3. {1,2,3,4,5,100}

**The Range** is the difference between the maximum and minimum values in a distribution. For the third data set, the maximum amount by which scores in the dataset differ from one another is 99, but this value does not appear to be representative of the typical difference among scores in the third distribution.

Therefore, the Range doesn’t make use of all the scores in the distribution: it only focuses on the difference between the maximum and minimum values and fails to take into account any measure of central tendency.

1. {5,5,5} with μ=5

2. {6,7,1,6,5} with μ=5

**Mean deviations** are the differences between each score and the mean (x – μ); their magnitude can be used to quantify how good the mean is.

1. (x – μ)={5-5, 5-5, 5-5} = {0,0,0}

2. (x – μ)={6-5, 7-5, 1-5, 6-5, 5-5} = {1,2,-4,1,0}

The maximum difference in the second distribution is 4 points, while the typical difference is about 1 point.

It is helpful to summarize the deviations in a single value. Taking into account that the sum of positive and negative mean deviations will always equal zero, two possible solutions are available:

1. To sum absolute values. For the second distribution: Σ(|x – μ|)= 1+2+4+1+0 = 8

2. To square each mean deviation and then sum the resulting values. For the second distribution: Σ(x – μ)² = 1+4+16+1+0 = 22

Squaring is often preferred over the absolute-value method.

Nevertheless, 22 is only the sum of squared deviations; it reflects neither the spread of the scores in the distribution nor the accuracy of the mean.

**Variance** is the sum of squared deviations divided by the number of scores in the distribution and is symbolized by the Greek letter sigma raised to an exponent of 2 (σ²).

From the previous example it follows:

Variance = 22/5 = 4.4 which indicates that the average squared difference between a score and the mean is 4.4 units.

Variance has some limitations as well. A set of data {4,6,8} with μ=6 and variance=2.66 shows that the variance can be greater than the maximum deviation of 2 in the distribution. Therefore, it is better to transform the variance so that it measures variability in original units rather than squared units.

**Standard deviation** is the square root of the variance; it measures variability in original units without resulting in an inflated value and is symbolized by sigma (σ).

The standard deviation for the above example is the square root of 2.66, which is 1.63.

**Median deviation.**

Consider the dataset {5,6,7} with a median value of 6.

Deviations from the median are called **median deviations**, so that it results in {5-6, 6-6, 7-6} = {-1,0,1}.

A single value that describes the accuracy of the median is the median of the median deviations, so that {-1,0,1} has the median of zero.

The median of median deviations will always result in a value of zero. To overcome this, the absolute values are used.

Taking absolute values gives {1,0,1}, which sorted is {0,1,1}, so the median of the absolute median deviations is 1.

The median of the absolute median deviations is called the median absolute deviation or MAD.
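The measures above can be computed in a short sketch using Python's statistics module; the variance here is the population variance, dividing by the number of scores as in the article:

```python
import statistics

def variance_pop(data):
    """Population variance: mean squared deviation from the mean."""
    mu = statistics.mean(data)
    return sum((x - mu) ** 2 for x in data) / len(data)

def mad(data):
    """Median absolute deviation: median of |x - median(data)|."""
    m = statistics.median(data)
    return statistics.median([abs(x - m) for x in data])

print(variance_pop([6, 7, 1, 6, 5]))      # sum of squared deviations 22 over 5 scores -> 4.4
print(variance_pop([4, 6, 8]) ** 0.5)     # standard deviation of {4,6,8}, about 1.63
print(mad([5, 6, 7]))                     # median absolute deviation of {5,6,7} -> 1
```

The standard library also provides `statistics.pvariance` and `statistics.pstdev` for the same population measures.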


The term “central tendency” determines a single value that best represents a distribution of data.

A single value is needed to distinguish among multiple values when describing data.

The mode, median and mean are the most common measures of central tendency.

The mode, median and mean are all valid measures of central tendency, but under different conditions some measures become more appropriate to use than others.

Mode is defined as the value that occurs most frequently in the data.

Let’s consider a set of data:

1. {5,5,5,5,5,5,5,5}

2. {5,5,5,5,5,5,5,6}

3. {4,5,5,5,5,5,5,10}

A value of 5 is the most frequent and appears to be the most representative for all the distributions above.

Let’s consider another set of data:

1. {5,5,5,5,5,5,5,100}

2. {4,4,4,4,5,5,5,5}

3. {1.1,1.2,1.3,1.4,1.5,1.6}

The most representative value in the first data set is 5. Problems arise when applying the mode to the remaining data sets.

In the second data set, there are two modes: the values of 4 and 5. This is a multimodal data set. It is not clear which value to use.

The third data set has continuous data with no single score whose value has a frequency greater than one.

The mode is difficult to use when dealing with continuous data because a single value is rarely repeated in the data set.

As it follows, a data set may contain multiple modes or continuous data, and in such cases it is not appropriate to use the mode as a measure of central tendency.

There is an alternative solution to eliminate the shortcomings of the mode.

The median separates upper and lower halves of a distribution.

Let’s consider a set of data:

1. {1,10,100}

2. {1,10,20,100}

3. {dog, dog, cat}

4. {1.1,1.2,1.3,1.4,1.5,1.6}

The median of the first distribution is 10. Therefore, the value of the median is unaffected by the actual distance between numbers.

The second distribution has a median of 15. A single score doesn’t separate the distribution in half; instead, the median is directly in between the two middle scores.

The scores in the third distribution do not contain any inherent order or direction of difference, and thus the median is an inappropriate measure of central tendency for nominal data.

The last distribution contains continuous data, but the scores are ordered according to their magnitude. The two middle scores are 1.3 and 1.4, so the value of the median is 1.35.

As it turns out, the median always results in a single value, but the value of the median is unaffected by the actual distance (magnitude) between numbers.

The mode and the median reflect frequency and rank respectively but magnitude isn’t taken into account.

Unlike the median or the mode, the sum of a distribution is sensitive to magnitude, but on its own the sum doesn’t appear to be representative. Therefore, both the number of scores in a distribution and their actual values should be taken into account.

The mean is the sum of all the scores in the distribution divided by the number of scores in the distribution, and is symbolized by the Greek letter mu (μ). Mean is just another name for average.

Let’s consider a set of data:

1. {1,10,100}

2. {1,1,2,100}

3. {dog, dog, cat}

4. {1.1,1.2,1.3,1.4,1.5,1.6}

The mean for the first distribution is 37.

The second distribution has a mean of 26. This value isn’t really representative because 100 is an outlier, and the mean is pulled in the direction of an outlier. This distorts the representativeness of the mean.

The mean cannot be used to represent the third distribution, but it can adequately represent the continuous data in the fourth distribution.
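The three measures can be computed directly with Python's statistics module; a sketch over data sets like those above:

```python
import statistics

data_sets = {
    "skewed":     [1, 10, 100],
    "outlier":    [1, 1, 2, 100],
    "continuous": [1.1, 1.2, 1.3, 1.4, 1.5, 1.6],
    "bimodal":    [4, 4, 4, 4, 5, 5, 5, 5],
}

print(statistics.mean(data_sets["skewed"]))        # 37
print(statistics.mean(data_sets["outlier"]))       # 26, pulled toward the outlier
print(statistics.median(data_sets["continuous"]))  # midpoint of the two middle scores
print(statistics.multimode(data_sets["bimodal"]))  # [4, 5]: both modes of a multimodal set
```

`statistics.multimode` (Python 3.8+) returns every mode, which makes the multimodal case explicit instead of raising an error or silently picking one value.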

Philip Morris International (NYSE:PM)

The company makes seven of the world’s top 15 tobacco brands, laying claim to more than 15% of the international cigarette market outside the US.

The company’s brands by sales volume are Marlboro (the world’s #1-selling cigarette, accounting for about a third of PMI’s total shipment volume), L&M, Bond Street, Philip Morris, Chesterfield, and Parliament.

Philip Morris has quite a lot of attractive market share positions in emerging economies.

PMI has determined that it qualifies as an “80/20 company” for U.S. tax purposes; an “80/20 company” is a U.S. company, 80% of whose gross income for a specified period is generated from active businesses outside the US.

The business is in four segments:

• European Union

• Eastern Europe, Middle East & Africa (“EEMA”)

• Asia

• Latin America & Canada


The uptrend in the company’s margin ratios came to a halt in 2014.

Nevertheless, underlying business fundamentals continue to be strong, revenue growth is still promising.

**Profitability is Stable:**

As the company sells exclusively outside of the United States, the most prevalent risk factor for the last several years has been currencies.

PM has struggled as a strong dollar and increasing regulation have hindered growth. Currency issues have improved since the beginning of 2016 and should have a positive impact on future earnings.

**Financial health is Stable-:**

The company appears to have taken on more debt.

Valuation: $78.52

Rating (Out of 10): 6.55

Growth: High+

Profitability: Stable

Financial health: Stable-

Stock Price: $101.78

Market Cap: $158.07b

Preferred: No

On February 5, 2015, the company announced that there are no additional plans for share repurchases.

**History**

**1987:** Philip Morris International Inc. is incorporated as an operating company of Philip Morris Companies.

**2008:** March 28, Philip Morris International was spun off from Altria Group

**Disclosure:** Billion Trader has no positions in any stocks mentioned, the valuation is based on the own model developed and tested by Billion Trader. Billion Trader is not responsible for any loss arising from information provided.

Wal-Mart Stores (NYSE:WMT)

**Strategy:** to lead on price, differentiate on access, be competitive on assortment and deliver a great experience.

Operations comprise three reportable segments: Walmart U.S., Walmart International and Sam’s Club:

• Walmart U.S. generated approximately 62% of net sales in 2016

• Walmart International consists of operations in 27 countries outside of the U.S

• Sam’s Club consists of membership-only warehouse clubs and operates in 48 states in the U.S. and Puerto Rico. Sam’s Club accounted for approximately 12% of fiscal 2016 net sales.


The company provided a strategic framework intended to strengthen its U.S. and e-commerce businesses.

The new trend of buying online and picking up in-store will be Wal-Mart’s great competitive advantage over rival Amazon. Online services will allow Wal-Mart to achieve significant cost savings, but currently WMT remains stuck in a very difficult turnaround resulting in extremely thin operating and net margins.

The company faces a lack of growth drivers both domestically and internationally; as a result, returns at Wal-Mart have recently been poor.

**Profitability is Stable-:**

Wal-Mart has a solid track record of dividend payments and dividend growth, raising its dividend year after year. The major point here is that a stable yield is better than a yield that is increasing merely because of falling share prices.

Nevertheless, investments in e-commerce require a lot of money.

Online and digital initiatives are expected to total approximately $1.1 billion in fiscal year 2017. Therefore, fiscal year 2017 will represent the heaviest investment period.

**Financial health is Stable+:**

It seems like the company has a disciplined approach to managing its financial resources and portfolio, i.e. the financial position is strong despite the currently weak operating margins. At the same time, free cash flow looks excellent and has grown materially.

Valuation: $64.71

Rating (Out of 10): 5.05

Growth: Low+

Profitability: Stable-

Financial health: Stable+

Stock Price: $73.14

Market Cap: $227.30b

Preferred: No

**History**

**1962:** July 2, Sam Walton opened the first Walmart store in Rogers, Ark.

**1969:** October 31, Wal-Mart Stores was incorporated in Delaware

**1970:** October 1, Walmart offered 300,000 shares of its common stock to the public at a price of $16.50 per share

**1972:** August 25, Walmart shares began trading on the New York Stock Exchange under the symbol WMT.

**1983:** The first Sam’s Club opened in Midwest City, Okla.

**1988:** The first Walmart Supercenter opened in Washington, Mo.

**2000:** Walmart.com was founded


Amazon.com, Inc. (NASDAQ:AMZN)

To justify its equity valuation and the credit ratings from S&P and Moody’s, Amazon must begin turning its majestic revenues into operating earnings.

Amazon’s operating, net and free cash flow margins are very small.

Net income profits are hard to find, i.e. profits are volatile and change sign. On average, the company is hardly profitable for shareholders.

The stock seems to be suitable only for speculative short-term purposes, not for long-term investments.

Capital expenditures have risen drastically compared to 2011.

How does the company finance its expenditures? Amazon has turned to the bond markets to finance new initiatives.

The next time Amazon needs to return to the bond market, borrowing costs will be higher because of the company’s fundamentals.

Valuation: $27.67

Rating (Out of 10): 2.00

Growth: Low-

Profitability: Low-

Financial health: Low+

Stock Price: $728.10

Market Cap: $348.37b

Preferred: No

There really is no one right way to build a fundamental valuation model for Amazon.

**History**

**1994:** July 5, Amazon was incorporated (as Cadabra) in the state of Washington.

**1995:** July 16, Amazon opened the virtual doors of Amazon.com’s online store

**1996:** Amazon was reincorporated in Delaware

**1997:** May 15, Amazon.com completed its initial public offering, the IPO price was $18.00

**2000:** June 19, Amazon’s logotype began featuring a curved arrow representing that the company carries every product from A to Z
