GSoC in Shogun – week 3

During this week, I focused on removing the global random variable. Basically, I removed the global random generator (sg_rand) and introduced a member variable (CRandom* m_rng) in SGObject which can then be used everywhere.

Benchmark for standard random and CRandom

The original idea Wiking and I had was to replace CRandom with the C++11 random framework. So we ran some benchmark experiments on the performance of the standard random generator versus CRandom, and the results were very interesting: as wiking said, there is a crazy difference between the gcc and clang implementations (the left one is the result obtained from gcc and the right one from clang).

[Screen Shot 2017-06-27 at 10.17.46 PM]

[Screen Shot 2017-06-27 at 10.18.04 PM]

From the results you can see that gcc has a better random implementation than clang, and that CRandom beats the C++11 random generator on clang (\o/)
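For reference, the benchmarks were along the lines of the following sketch (not the exact code we ran; the same timing loop would be repeated with CRandom's generation call in place of the std::mt19937_64 draw):

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <random>

int main()
{
    const std::size_t N = 100000000;
    std::mt19937_64 gen(12345);

    volatile std::uint64_t sink = 0; // keep the loop from being optimized away
    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < N; ++i)
        sink = sink + gen();
    auto end = std::chrono::steady_clock::now();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::printf("std::mt19937_64: %lld ms for %zu draws\n", static_cast<long long>(ms), N);
    return 0;
}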

Remove sg_rand

Even though we know that the C++11 random framework could improve our random number performance, if we want to build it into Shogun we need to start the refactoring in small steps, because random is used so widely. In order to avoid breaking unit tests, we first need to get rid of sg_rand and use a member variable in SGObject instead. I haven't finished this job yet: even though it only needs a couple of lines to make things work, over 100 files are involved, so I must test everything carefully first. Hopefully I can open a PR within this week; the progress on this issue can be found in the get_rid_of_sg_rand branch.
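To make the direction concrete, here is a minimal sketch of the pattern (hypothetical names, and a standard <random> engine standing in for CRandom, purely for illustration): every object draws from a generator it owns via the base class instead of touching a global one.

#include <random>

// hypothetical sketch, not the actual Shogun classes: the base class owns the
// generator, so there is no global random state any more
class SGObjectSketch
{
public:
    std::mt19937_64 m_rng; // stands in for the planned CRandom* m_rng member
};

class SomeAlgorithm : public SGObjectSketch
{
public:
    double sample_uniform()
    {
        // previously this would have gone through the global sg_rand
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        return dist(m_rng);
    }
};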

Some other issues

In the last week, I did some GMM refactoring using SGVector and the linalg library, and fixed the broken GMM notebook. While working on GMM and Gaussian, I found that their maximum-likelihood computations are actually duplicated, and as Heiko commented on the PR, we are going to further redesign GMM and CMixtureModel. Another issue was to add a Gaussian checkerboard fixture and use it to generate binary-label and multiclass-label data for unit tests. I created that PR a couple of weeks ago, back when I was still struggling with the DynamicArray issue. Most of what I did was to copy and paste the data generator function from SVMOcas_unittest and make it global. It contains loops like:


// split the columns of the fixture: even columns become training indices and
// odd columns become test indices; j only advances after the odd column, so
// train_idx and test_idx are filled in lockstep
for (index_t i = 0, j = 0; i < data.num_cols; ++i)
{
    if (i % 2 == 0)
        train_idx[j] = i;
    else
        test_idx[j++] = i;
}

which is very hard to read and maintain. Actually, I also spent a lot of time figuring out what exactly it is supposed to do. Therefore, I will refactor it ASAP; a possible cleaner version is sketched below.
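For reference, the refactored version will probably look something like this sketch (plain std::vector and hypothetical names so the snippet stays self-contained; the real code would use SGVector and index_t):

#include <cstddef>
#include <vector>

// Sketch: even columns go to the training set, odd columns to the test set.
// Indexing by i / 2 makes the intent explicit and drops the shared counter j.
void split_even_odd(std::size_t num_cols,
                    std::vector<std::size_t>& train_idx,
                    std::vector<std::size_t>& test_idx)
{
    train_idx.assign((num_cols + 1) / 2, 0);
    test_idx.assign(num_cols / 2, 0);
    for (std::size_t i = 0; i < num_cols; ++i)
    {
        if (i % 2 == 0)
            train_idx[i / 2] = i;
        else
            test_idx[i / 2] = i;
    }
}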

PRs in progress:
https://github.com/shogun-toolbox/shogun/pull/3859
https://github.com/shogun-toolbox/shogun/pull/3876

PRs merged:
https://github.com/shogun-toolbox/shogun/pull/3866
https://github.com/shogun-toolbox/shogun/pull/3857
https://github.com/shogun-toolbox/shogun/pull/3852
https://github.com/shogun-toolbox/shogun/pull/3812 (needs refactoring)


GSoC in Shogun – week 1 & week 2

First of all, my major job is to use std::vector instead of DynArray inside CDynamicArray. Due to my carelessness and lack of diligence, the work described in this blog shouldn't have taken so long to deliver. I did run into some issues during these two weeks and got stuck in a bit of a dilemma, but it shouldn't have taken two weeks anyway.

Stage 1 (try to use std::vector instead of DynArray directly)

std::vector is the first choice as an alternative to DynArray: it's dynamic, check; it has almost the same interface as DynArray, check; and the most beautiful thing is that I don't need to manually manage the memory when using it. (How naive I was :/)

Ok, after I replaced all the DynArray usages with std::vector directly, the compiler complained:

error: invalid initialization of non-const reference of type ‘bool&’ from an rvalue of type ‘bool’

After doing some research, I found this answer (https://stackoverflow.com/a/7376997). I quote part of it:

The Standard Library defines a specialization of the vector template for bool. The description of this specialization indicates that the implementation should pack the elements so that every bool only uses one bit of memory.

Looks like I should find another way.
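To make the issue concrete, here is a tiny standalone example (nothing to do with Shogun) showing that vector<bool>'s operator[] does not hand out a bool&:

#include <iostream>
#include <type_traits>
#include <vector>

int main()
{
    std::vector<int> vi(4);
    std::vector<bool> vb(4);

    // for ordinary element types operator[] yields a real reference ...
    static_assert(std::is_same<decltype(vi[0]), int&>::value,
                  "vector<int>::operator[] returns int&");
    // ... but vector<bool> is bit-packed and returns a proxy object, so you
    // cannot bind a bool& (or take a bool*) to one of its elements
    static_assert(!std::is_same<decltype(vb[0]), bool&>::value,
                  "vector<bool>::operator[] does not return bool&");

    std::cout << "vector<bool> hands out proxies, not bool references\n";
    return 0;
}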

Stage 2 (maybe std::deque is a better choice?)

So I need a container that is dynamic and supports random access, so that it can return a reference to bool. Ok, I guess std::deque is what we want. The "only" difference is that we can't get direct access to an underlying array; to hand out a raw pointer we would have to do something like:



auto it = m_array.begin();
return &(*it); // only gives the address of the first element; for a deque this is not the start of one contiguous buffer

But, as wiking pointed out, the elements of a deque are not contiguous in memory, so the serialization of this object would be broken as a result.

Stage 3 (wait! Maybe const_reference and const_pointer work for vector)

After talking with iglesiasg on IRC, I found that the std::vector<bool> specialization defines std::vector<bool>::reference as a publicly-accessible nested class. std::vector<bool>::reference proxies the behaviour of a reference to a single bit in std::vector<bool>, while for non-bool element types vector can return plain references as usual. How sweet! However, it turned out to be impossible to get a plain pointer out of vector<bool> via const_pointer either.

Stage 4 (oh man, just use template specialization)

If we can't make vector work for bool, why not stop using it there? So we got:


template <class T> class CDynamicArray : public CSGObject
{};

template <> class CDynamicArray<bool> : public CSGObject
{};

Watch out! The template declaration should be on one line, otherwise our class_list.cpp.py doesn't know how to handle it [see github comment for more][and here]. Alright, do you think it's good to go now? NO! We still got plenty of errors:

1 – unit-DynamicObjectArray (SEGFAULT)
8 – unit-SGObject (OTHER_FAULT)
12 – unit-GaussianProcessClassification (SEGFAULT)
73 – unit-LineReaderTest (Failed)
81 – unit-CommUlongStringKernel (SEGFAULT)
222 – unit-LogPlusOne (SEGFAULT)
223 – unit-MultipleProcessors (SEGFAULT)
226 – unit-RescaleFeatures (SEGFAULT)
265 – unit-SerializationAscii (OTHER_FAULT)
266 – unit-SerializationHDF5 (OTHER_FAULT)
267 – unit-SerializationJSON (OTHER_FAULT)
268 – unit-SerializationXML (OTHER_FAULT)
343 – libshogun-evaluation_cross_validation_multiclass_mkl (OTHER_FAULT)

Where is the problem?

The first problem is DynamicArray::shuffle(). I shuffled things inside like



std::shuffle(m_array.begin(), m_array.end(), rng); // rng: the random engine std::shuffle requires as its third argument

and it shuffles all the elements in the vector rather than only the elements that are actually in use. For example, if we have a vector of size 10 but only use 5 elements of it, it looks like {1,2,3,4,5,0,0,0,0,0}, and std::shuffle() then produces something like {0,1,2,0,0,3,0,4,0,5}. Actually we don't want any zeros in there.
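A minimal sketch of the fix, with illustrative names rather than the final Shogun code: shuffle only the prefix of the buffer that is actually in use.

#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// shuffle only the first num_elements entries, i.e. the part of the backing
// vector that holds real data; the unused tail stays untouched
void shuffle_used_elements(std::vector<int>& array, std::size_t num_elements,
                           std::mt19937_64& rng)
{
    std::shuffle(array.begin(),
                 array.begin() + static_cast<std::ptrdiff_t>(num_elements),
                 rng);
}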
The second problem is DynamicArray::find_element(). Again, I used

std::find(m_array.begin(), m_array.end(), e)
inside, and again it failed :/

For example, if we have a vector of size 10 and only use 5 elements of it, it may look like {1,2,3,4,5,0,0,0,0,0}. I bet you have already noticed it: the search will always succeed if we ask whether the array contains zero. To fix it:

inline int32_t find_element(bool e)
{
    int32_t index = -1;
    int32_t num = get_num_elements();
    for (int32_t i = 0; i < num; i++)
    {
        if (m_array[i] == e)
        {
            index = i;
            break;
        }
    }
    return index;
}

As a conclusion:

These errors and bugs were actually not so hard to find. But I trusted the STL too much and hadn't figured out exactly what I needed. After I hit a bunch of segmentation faults, the first thing that came to my mind was "it's a serious problem, I should ask my mentor". If I had had more patience and printed out all the variables step by step, things would have been fixed very quickly and wouldn't have wasted so much of my mentor's time (sincerely sorry to wiking). And I should write unit tests before I start my work, so we can catch problems at the very beginning.

Relative entropy and mutual information

Consider some unknown distribution p(x), and suppose that we have modelled it using an approximating distribution q(x). If we use q(x) to construct a coding scheme for the purpose of transmitting values of x to a receiver, then the average additional amount of information required to specify the value of x as a result of using q(x) instead of the true distribution p(x) is given by

KL(p||q) = -\int p(x)\ln q(x)\,dx - \left(-\int p(x)\ln p(x)\,dx\right) = -\int p(x)\ln\frac{q(x)}{p(x)}\,dx

which is known as the relative entropy, or Kullback-Leibler (KL) divergence. For a discrete variable it can also be written as

KL(p||q) = \sum_{x \in X} p(x)\log\frac{p(x)}{q(x)}

We can draw a few conclusions here:

1: The value of KL(p||q) is zero if p(x) and q(x) are exactly the same function.
2: The larger the difference between p(x) and q(x), the bigger the relative entropy becomes; it decreases as the difference gets smaller.
3: If p(x) and q(x) are distribution functions, the relative entropy can be used to measure the difference between them.

One thing to point out is that the relative entropy is not a symmetric quantity, that is to say KL(p||q) \neq KL(q||p).
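As a quick numerical illustration of the discrete formula and of this asymmetry, here is a small self-contained toy program (nothing to do with Shogun):

#include <cmath>
#include <cstdio>
#include <vector>

// discrete KL divergence between two distributions over the same support;
// terms with p[i] == 0 contribute nothing by convention
double kl_divergence(const std::vector<double>& p, const std::vector<double>& q)
{
    double kl = 0.0;
    for (std::size_t i = 0; i < p.size(); ++i)
        if (p[i] > 0.0)
            kl += p[i] * std::log(p[i] / q[i]);
    return kl;
}

int main()
{
    std::vector<double> p = {0.5, 0.5};
    std::vector<double> q = {0.9, 0.1};
    // the two directions give different values: KL is not symmetric
    std::printf("KL(p||q) = %f\n", kl_divergence(p, q));
    std::printf("KL(q||p) = %f\n", kl_divergence(q, p));
    return 0;
}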

Now consider the joint distribution between two sets of variables x and y given by p(x,y). If the sets of variables are independent, then their joint distribution factorizes into the product of their marginals, p(x,y) = p(x)p(y). If the variables are not independent, we can gain some idea of whether they are 'close' to being independent by considering the KL divergence between the joint distribution and the product of the marginals:

I[x,y] = KL(p(x,y)||p(x)p(y)) = \sum_{x \in X, y \in Y} p(x,y)\log\frac{p(x,y)}{p(x)p(y)}

which is called the mutual information. Equivalently, I(X;Y) = H(X) - H(X|Y).

Start from the Information Theory

It has been a long time since the last update; part of the reason is that I needed to work on my postgraduate paper, but, well, I'm a lazy man anyway 😛 Recently I have been reading Pattern Recognition and Machine Learning and The Beauty of Mathematics, which made me want to mark some things down to help me understand them better 🙂

The first thing I want to talk about is information theory. If you need to predict whether an event is likely or not, the most straightforward way is to use historical data to estimate its probability distribution p(x) over the values x. So if we now want to evaluate the information content of x, we should find a quantity h(x) that is a monotonic function of the probability p(x) and that expresses the information content. The way to find h(x) is this: take two events x and y that are unrelated to each other; then the information gained from observing both of them should be the sum of the information gained from each of them separately, so h(x,y) = h(x) + h(y), while p(x,y) = p(x)p(y). From this we get h(x) = -\log_{2} p(x), and you can see that h(x) is actually measured in bits.

Now, suppose that a sender wishes to transmit the value of a random variable to a receiver. The average amount of information that they transmit in the process is obtained by taking the expectation of h(x) with respect to the distribution p(x), and is given by H[x] = -\sum_{x} p(x)\log_{2} p(x). This important quantity is called the entropy of the random variable x.

Consider a random variable x having 8 possible states, each of which is equally likely. In order to communicate the value of x to a receiver, we would need to transmit a message of length 3 bits, and indeed the entropy is H[x] = -8 \times \frac{1}{8}\log_{2}\frac{1}{8} = 3 bits. Furthermore, if the 8 possible states {a, b, c, d, e, f, g, h} are not equally likely, we can do better on average by giving the more probable states shorter code words, e.g. strings like 0, 10, 110, 1110, 111100, …
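To finish that example (this is, if I remember it correctly, the non-uniform distribution PRML uses, so treat the numbers as illustrative): with state probabilities (1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64) and the code words 0, 10, 110, 1110, 111100, 111101, 111110, 111111, the entropy and the average code length coincide:

H[x] = -\frac{1}{2}\log_{2}\frac{1}{2} - \frac{1}{4}\log_{2}\frac{1}{4} - \frac{1}{8}\log_{2}\frac{1}{8} - \frac{1}{16}\log_{2}\frac{1}{16} - \frac{4}{64}\log_{2}\frac{1}{64} = 2 \text{ bits}

\text{average code length} = \frac{1}{2}\cdot 1 + \frac{1}{4}\cdot 2 + \frac{1}{8}\cdot 3 + \frac{1}{16}\cdot 4 + \frac{4}{64}\cdot 6 = 2 \text{ bits}

so a shorter code than 3 bits per symbol is possible, and the entropy tells us exactly how short the average message can get.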

So now we have the idea of entropy; let's look at the other kind, differential entropy. When we discretize a continuous variable x into bins of width \Delta, the entropy picks up a term -\ln\Delta, which diverges in the limit \Delta \rightarrow 0. For a density defined over multiple continuous variables, denoted collectively by the vector x, the differential entropy is given by H[x] = -\int p(x)\ln p(x)\,dx, and the divergent discrete term reflects the fact that specifying a continuous variable very precisely requires a large number of bits.

Suppose we have a joint distribution p(x,y) from which we draw pairs of values of x and y. If a value of x is already known, then the additional information needed to specify the corresponding value of y is -\ln p(y|x). Thus the average additional information needed to specify y can be written as H[y|x] = -\sum_{x,y} p(x,y)\ln p(y|x), which is called the conditional entropy of y given x. It is easily seen, using the product rule, that the conditional entropy satisfies the relation H[x,y] = H[y|x] + H[x].
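Since the text appeals to the product rule, here is the short derivation spelled out:

H[x,y] = -\sum_{x,y} p(x,y)\ln p(x,y) = -\sum_{x,y} p(x,y)\ln\big(p(y|x)\,p(x)\big) = -\sum_{x,y} p(x,y)\ln p(y|x) - \sum_{x,y} p(x,y)\ln p(x) = H[y|x] + H[x]

where the last step uses \sum_{y} p(x,y) = p(x).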

SETA rewrite – use SQLAlchemy instead of hardcoded SQL

Since we have already made SETA support TaskCluster, the next thing we should do is make the code more robust and readable. That is, we should add more tests and documentation for it. The first step, before we start writing tests, is to stop using hardcoded SQL in SETA. For example:

sql = """insert into testjobs (slave, result,
duration, platform, buildtype, testtype,
bugid, branch, revision, date,
failure_classification, failures)
values ('%s', '%s', %s,
'%s', '%s', '%s', '%s', '%s',
'%s', '%s', %s, '%s')""" % \
(slave, result,
duration, platform, buildtype, testtype,
bugid, branch, revision, date, failure_classification, ','.join(failures))

This makes it hard to write tests and caused some bugs when we need to switch databases between environments (because we have a master MySQL branch and a Heroku PostgreSQL branch). Therefore, the solution I found is to use SQLAlchemy, which is a powerful and useful database toolkit for Python. You can find more information about it on its official website.

Alright, to use SQLAlchemy in SETA, we have to make it connect to the database, which means creating an engine for it:

engine = create_engine('mysql+mysqldb://root:root@localhost/ouija2', echo=False)

The standard calling form is to send the URL as the first positional argument, usually a string that indicates database dialect and connection arguments. The string form of the URL is dialect[+driver]://user:password@host/dbname[?key=value..], where dialect is a database name such as mysql, oracle, postgresql, etc., and driver the name of a DBAPI, such as psycopg2, pyodbc, cx_oracle, etc. Alternatively, the URL can be an instance of URL.

And we need to create some database tables for our new database. Classes mapped using the Declarative system are defined in terms of a base class which maintains a catalog of classes and tables relative to that base – this is known as the declarative base class. We can then use its MetaData to collect all the models we define and bind them to our database.


class Seta(MetaBase):
    __tablename__ = 'seta'

    id = Column(Integer, primary_key=True)
    jobtype = Column(String(256), nullable=False)
    date = Column(DateTime, nullable=False, index=True)

    def __init__(self, jobtype, date):
        self.jobtype = jobtype
        self.date = date

We're now ready to start talking to the database. The ORM's "handle" to the database is the Session. When we first set up the application, at the same level as our create_engine() statement, we define a Session class which will serve as a factory for new Session objects, and we use it to query and operate on the database.
In config.py:


Session = sessionmaker(engine)

session = Session()

 

In failure.py (which is the file where we need these database operations):


session.query(Seta).filter(Seta.date == date).delete(synchronize_session='fetch')

That's it! You can read more details in this PR. The next step for the SETA rewrite is to write more tests for it!

The first part of the SETA rewriting job has been finished

With the end of the GSoC midterm evaluation, I finally remembered the blog and the weekly update requirement. I was busy working on making SETA use the runnable api (https://github.com/mozilla/ouija/pull/93), so I had left the blog aside for a while. However, with most of the SETA rewriting job finished now, I think it's a good time to pick this up and write something.

The first thing I would like to mention here, of course, is that we use the runnable api instead of the uniquejobs table now! Thank you Kalpesh and Armenzg! I couldn't have made this happen without your help! 🙂

The uniquejobs table in SETA holds all the jobtypes available on buildbot (we were unable to recognise TaskCluster jobs before we used the runnable api). The jobtypes are stored in the table as three-element lists like ['build platform', 'build type', 'test type']. Here is a piece of data from the table:

[Screen Shot 2016-07-14 at 12.33.06 PM]

But it was manually populated via the '/data/createjobtype' endpoint before SETA used it, which made it a little hard-coded and caused some bugs (e.g. this PR fixed the linux64 display issue). Therefore, we needed the runnable jobs api to replace it. The runnable api returns both buildbot jobs and TaskCluster jobs as JSON, so we can simply extract the corresponding pieces of data and assemble them into the job information we want. Here is the PR. And SETA can show more job data than before now:

[Screen Shot 2016-07-02 at 4.51.32 PM]

The next step is to move on to the task decision tree and make it work with SETA data. I will write up a description of the task decision tree based on my own understanding.

Sorry again for the late blog update. I will try my best to update it once a week.

How SETA works

Even though the work period had been under way for a while, I was still confused about how SETA works and how we find those high value jobs (such a shame :( ). So Jmaher (my mentor) decided to have a voice meeting with me and helped me walk through all these ideas. I'm very grateful that Joel made it happen! The whole meeting was fun and I learned a ton of things from it! In case I forget these ideas in the future, I would like to write them down and sort them out here 🙂

The core idea of SETA is to find the high value jobs and discard the low value ones. But how do we determine whether a job is a necessary one (a high value job) or not so important? The answer is regressions. Even though there may be a lot of failures on a push, we only care about the failures fixed by a commit. So the "baseline" here becomes: **we only need jobs that can catch a regression**.

However, we still have no idea how SETA does that. Actually, SETA builds a list of all root-cause failures in a hash table, with the revision as the key and all the failed jobs as an array (under the weighted_by_jobtype mode), like:
# we have 18 failures in total
{
  'rev1': [job1, job2, job3, job4, job5],
  'rev2': [job1, job2, job3, job4, job5, job6, job7],
  'rev3': [job4, job5, job6],
  ...
  'rev18': [job1, job3, job5]
}

Then, for each job type in the failure list, we temporarily remove it and check whether we can still detect all the failures, like:

# remove job1
total_failures = 18
temp_failures = {
  'rev1': [job2, job3, job4, job5],
  'rev2': [job2, job3, job4, job5, job6, job7],
  'rev3': [job5, job6],
  ...
  'rev18': [job3, job5]
}

# remove job2
total_failures = 18
temp_failures = {
  'rev1': [job3, job4, job5],
  'rev2': [job3, job4, job5, job6, job7],
  'rev3': [job5, job6],
  ...
  'rev18': [job3, job5]
}

# remove job3
total_failures = 18
temp_failures = {
  'rev1': [job4, job5],
  'rev2': [job4, job5, job6, job7],
  'rev3': [job5, job6],
  ...
  'rev18': [job5]
}

# remove job5
**total_failures = 17**
temp_failures = {
  'rev1': [job4],
  'rev2': [job4, job6, job7],
  'rev3': [job6],
  ...
  **'rev18': []**
}
Oh! Now there is no job left in the 'rev18' list and total_failures has become 17. This is the "we can't detect all failures" moment, and we call job5 a necessary job, or high value job. Then we add job5 to the master_root_cause job list and do it again until the whole active_job list has been iterated.