# Monthly Archives: December 2015

## Why does the Black-Scholes formula for pricing options use the risk-free rate?

BS constructs a perfectly hedged portfolio. "Perfectly hedged" means "risk-free." A risk-free portfolio grows at the risk-free rate.
A rule of thumb is to use the US Treasury rate or LIBOR as a proxy, but a more accurate method is to use the rate implied by an OIS (overnight indexed swap). Overnight rates eliminate almost all counterparty, political, and other risks.
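To make the role of the rate concrete, here is a minimal sketch of the Black-Scholes call price; note that the risk-free rate r enters both the drift term in d1 and the discounting of the strike. The function names are my own, not from any particular library.

```cpp
#include <cmath>

// Standard normal CDF, via the complementary error function.
double norm_cdf(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

// Black-Scholes price of a European call option.
// S: spot, K: strike, r: risk-free rate, sigma: volatility, T: years to expiry.
double bs_call(double S, double K, double r, double sigma, double T) {
    double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T)
                / (sigma * std::sqrt(T));
    double d2 = d1 - sigma * std::sqrt(T);
    // r appears twice: in d1 (drift of the hedged portfolio)
    // and in exp(-r*T) (discounting the strike).
    return S * norm_cdf(d1) - K * std::exp(-r * T) * norm_cdf(d2);
}
```

With S = K = 100, r = 5%, sigma = 20%, T = 1, this gives the textbook value of about 10.45.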


Filed under Life

## When should I use classes in C++?

A simple answer that's more in line with what C++ is about is "when there is an invariant."

A class invariant is a statement (a boolean function, if you will) about the contents of an object, which is always true at the end of every function that operates on the object. (If it cannot be satisfied, the function does not return; it throws an exception instead.)

For example, a typical std::vector with an empty allocator has three members: a start pointer, an end pointer, and an end-of-capacity pointer. At the end of any constructor, as well as at the end of any member or non-member function that works on a vector, it's always true that capacity >= end >= begin, among other things.

It would be possible to use an array of three pointers instead, but it would be wrong because arrays (and other aggregate types) do not enforce invariants. They can be created with any three pointer values.

After this you can talk about encapsulation, inheritance, behavior vs. data, the SOLID design rules, etc., but the core idea of classes in C++ is enforcement of invariants. That's why it differs from all other languages with regard to virtual dispatch during construction.
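A toy version of the idea might look like this. The class below is a hypothetical simplified buffer (my own illustration, not std::vector's actual implementation): its three pointers can only be manipulated through member functions, so begin <= end <= capacity holds at the end of every operation, which a bare array of three pointers could never guarantee.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical toy buffer: the invariant begin_ <= end_ <= cap_ is
// established by every constructor and preserved by every member function.
class Buffer {
    char* begin_;
    char* end_;
    char* cap_;
public:
    explicit Buffer(std::size_t n)
        : begin_(new char[n]), end_(begin_), cap_(begin_ + n) {}
    ~Buffer() { delete[] begin_; }
    Buffer(const Buffer&) = delete;             // keep the sketch simple
    Buffer& operator=(const Buffer&) = delete;

    void push_back(char c) {
        assert(end_ < cap_);   // in this sketch the caller respects capacity
        *end_++ = c;           // invariant still holds afterwards
    }
    std::size_t size() const { return end_ - begin_; }
    std::size_t capacity() const { return cap_ - begin_; }
    bool invariant() const { return begin_ <= end_ && end_ <= cap_; }
};

// Quick demonstration that operations preserve the invariant.
bool demo_ok() {
    Buffer b(4);
    b.push_back('a');
    b.push_back('b');
    return b.invariant() && b.size() == 2 && b.capacity() == 4;
}
```

Contrast with `char* raw[3];`, which compiles happily holding any three unrelated pointers.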

## Java has automatic garbage collection. Why wasn't a similar feature added to C++ in revised versions of the language?

It depends on what you mean by "garbage collector."
There are two major classifications:
1. Those that are deterministic (you know when the resources are freed), and those that aren't.
2. Those that compact memory addresses by moving objects in memory, and those that don't.
Consider the stack first: it allocates memory and automatically frees it. Coupled with destructors, ANY resource is easily managed, not just memory. C++ relies heavily on this; try it in Java or C# and you're stuck with finally.
Java's GC means memory only: for locks, file handles, etc., you're on your own.
For heap memory, C++ has long had reference counting available. Perl and Python (and Lisp as well, I believe) use reference-counted GC. You just have to tell C++ which objects are reference counted and which live on the stack. Additionally, there are other methods, such as scoped_ptr.
I review a lot of C++ code, and I can tell you a huge no-no is when I see a programmer use "delete." A raw delete should be a rare sight, and when I see it used liberally, it's almost guaranteed there's a leak somewhere.
So C++ has long had GC capability; it's just not the same approach as Java's.
The other classification is GC that compacts. There is already some magic in the OS and TLB around virtual addresses; a compacting GC is another layer on top of that.
C and C++ have the notion of allocation "arenas," meaning I can place different objects in memory and have them keep their relative positions. Furthermore, I can specify in great detail exactly how an object is laid out. I can't stress enough how important this is. The vast majority of the Standard Library wouldn't work if data in memory could be moved. And no one wants it moved; that would confuse the optimizer.

GC normally means non-deterministic algorithms that sweep through names to see which ones cannot be reached, reordering object placement along the way. But it can also mean deterministic routines, such as stacks and reference counts, that preserve location. In that sense, C++ has long had GC capability.
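The deterministic, reference-counted flavor described above can be shown in a few lines with std::shared_ptr. The Tracker type and counter are my own scaffolding for the demonstration: the destructor fires at the exact line where the last owner leaves scope, with the object never moving in memory.

```cpp
#include <memory>

// A small type that records when its destructor runs, so we can observe
// that cleanup is deterministic (no sweep, no pause).
struct Tracker {
    int* destroyed;                      // counter owned by the caller
    explicit Tracker(int* d) : destroyed(d) {}
    ~Tracker() { ++*destroyed; }
};

int count_destructions() {
    int destroyed = 0;
    {
        auto p = std::make_shared<Tracker>(&destroyed);  // ref count 1
        auto q = p;                                      // ref count 2
        // 'destroyed' is still 0 here: owners exist, object is alive.
    }   // count drops to 0 on this exact line; ~Tracker runs immediately
    return destroyed;
}
```

The same mechanism manages locks and file handles through destructors, which is the part a memory-only collector cannot give you.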


## My teacher says that whenever we want to add new functionality to a class we should make a sub-class and implement there. Is this good pr…

This is just a bad example of reuse.
Reuse in the OO paradigm (and LinkedList&lt;T&gt; is not OO code) would involve knowing the invariants of the base class, and reuse by inheritance would mean that you can strengthen that invariant. For instance, a square is-a rectangle is-a trapezoid: all have four straight sides, with ever-increasing constraints placed on them.
A printable LinkedList may violate some invariant in an esoteric system, like "must work on an FPGA" where there may not be a display. But I agree with the others that this is a poor example, and ostream_iterators are what's needed here.
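The ostream_iterator alternative looks like this: printing becomes an algorithm applied through an output iterator, so no subclass is needed and the same code works for any container and any stream. The render helper is mine, used here only to capture the output.

```cpp
#include <algorithm>
#include <iterator>
#include <list>
#include <sstream>
#include <string>

// Printing as algorithm + output iterator, instead of a PrintableLinkedList
// subclass: the list type stays untouched.
std::string render(const std::list<int>& xs) {
    std::ostringstream out;
    std::copy(xs.begin(), xs.end(), std::ostream_iterator<int>(out, " "));
    return out.str();
}
```

Swapping the ostringstream for std::cout prints directly to the terminal with the same one-liner.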


## In C/C++ why does the main function have return type int, instead of double or string?

Because the result value of main() is passed to the exit() system call on UNIX systems.

The exit() system call takes a single integer argument, which is stored in the PID info record in the kernel and can then be retrieved by the parent process when it calls one of the wait() family of system calls.

This method of passing a return code through the PID info has been around since the earliest days of UNIX, which was written in C.
In UNIX, processes typically return a success/failure code in this space so that the calling process can act on it. Making the return code from main() the same value is just a simple shortcut.
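The round trip described above can be sketched with fork() and waitpid(). This is POSIX-only and is a minimal illustration, not production error handling: the parent reads the integer the child handed to exit() straight out of the kernel's record for that PID.

```cpp
#include <sys/wait.h>
#include <unistd.h>

// POSIX sketch: child exits with an integer, parent retrieves it via waitpid.
int run_child_and_get_status() {
    pid_t pid = fork();
    if (pid == 0) _exit(42);            // child: the value passed to exit()
    int status = 0;
    waitpid(pid, &status, 0);           // parent: blocks until child exits
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Note that only the low byte or so of the value survives the trip, which is another reason the interface is a small integer rather than a double or a string.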


## What is a list of data structures that a competitive programmer must know?

The answer is different for different skill levels, so I will try to categorize.

Beginner: Linked List, Stack, Queue, Binary Search Tree.

Intermediate: Heap, Priority Queue, Huffman Tree, Union Find, Tries, Hash Table, Tree Map.

Proficient: Segment Tree, Binary Indexed Tree, Suffix Array, Sparse Table, Lowest Common Ancestor, Range Tree.

Expert: Suffix Automaton, Suffix Tree, Heavy-Light Decomposition, Treap, Aho-Corasick, K-d tree, Link-Cut Tree, Splay Tree, Palindromic Tree, Rope,  Dancing Links, Radix Tree, Dynamic Suffix Array.

I have seen all of the listed data structures being used in various programming contests.

Many of them are provided in language libraries, but it is very important to understand their dynamics. Otherwise, understanding the related higher-level structures will be difficult, if possible at all.

One may find some higher-level data structures easier than lower ones (it happened to me).
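As a taste of the intermediate tier, here is a minimal sketch of union-find (disjoint set union) with path compression, one of the structures listed above that rarely ships in a standard library and is worth implementing by hand at least once.

```cpp
#include <numeric>
#include <vector>

// Minimal disjoint set union (union-find) with path compression.
struct DSU {
    std::vector<int> parent;
    explicit DSU(int n) : parent(n) {
        std::iota(parent.begin(), parent.end(), 0);  // each node is its own set
    }
    int find(int x) {
        // Path compression: point every visited node straight at the root.
        return parent[x] == x ? x : parent[x] = find(parent[x]);
    }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};
```

Typical contest use: `DSU d(n);` then `d.unite(u, v);` per edge, with `d.find(u) == d.find(v)` answering connectivity queries.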


## Why can’t the main() function have a return type of double or String?

Because the operating system's program loader only accepts an int as a return value.
This allows shell scripts some form of control: continue on to the next program if the previous one returned a zero. If it were a float, this would break a considerable number of existing scripts that do exact comparisons. If a string, then not only would the format of the strings themselves have to be coordinated (not all strings are zero-terminated), but likewise the encoding of the string (UTF-16BE, ASCII, etc.) and the actual values.
Basically, the only convention used is zero for success and a nonzero value for an error (with some programs reserving particular values for warnings). Anything more complicated and a LOT of things break or become impossible to maintain.
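The shell's view of those integers can be observed from C++ via std::system(). On POSIX systems the return value is a wait-status, and the helper below (my own wrapper) extracts the small integer the shell script would compare against zero.

```cpp
#include <cstdlib>
#include <sys/wait.h>

// POSIX sketch: run a shell command and return the exit code a script
// would see in $?. WEXITSTATUS extracts the 0-255 status byte.
int shell_exit_code(const char* cmd) {
    int status = std::system(cmd);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

A script's `cmd && next` runs `next` only when this value is exactly 0, which is the whole contract main()'s int return type exists to serve.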