
Tag Archives: data

Mozart and Harper Lee

I could really use some music to honor my wife on the anniversary of her death…

In 1791 Mozart died while working on a beautiful piece of music, his Requiem Mass in D minor. I love much of Mozart’s work, but I think this is probably my favorite (perhaps a tie with The Marriage of Figaro). Yet there has always been discussion about how much Mozart himself completed and how much his friend and copyist (possibly student), Franz Xaver Süssmayr, wrote as he completed the manuscript for delivery to Count Franz von Walsegg in 1792.

What is relevant here is that it does not really matter to me who wrote it. It’s attributed to Mozart, so I assume that he did the majority of the work in at least shaping it and providing hints as to how it would develop.

Similar accusations have been raised about Harper Lee’s authorship (or lack thereof) of To Kill a Mockingbird, and now Go Set a Watchman. Ms. Lee was good friends with another iconic writer, Truman Capote. The two were childhood friends, and she worked with him for some time as a research assistant on his opus, In Cold Blood.

To answer this, Maciej Eder and Jan Rybicki applied a stylometric data analysis to compare writing styles. With only a single novel it is difficult to say much about the authorship of Mockingbird; with the release of a second novel, however, a data analysis technique known as ‘cluster analysis’ becomes more meaningful. Using a number of analyses, the two data miners assert that Ms. Lee’s voice is her own, distinct from Capote’s. One of these analyses is presented below (taken from the Computational Stylistics Group website), examining most-frequent-word usage by Lee and a number of other Southern authors.

[Figure: consensus tree clustering Lee’s two novels with works by other Southern authors, from the Computational Stylistics Group]

We see that both of Lee’s books cluster together (as do other authors’), and that her own style appears to more closely resemble that of the authors she professed were influential to her rather than that of her friend, Capote.
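If you’re curious what that kind of analysis actually involves, here is a minimal sketch of the idea in Python. This is not Eder and Rybicki’s actual pipeline (their group develops the R package ‘stylo’, and the filenames below are hypothetical): each text is reduced to a vector of its most-frequent-word frequencies, and the vectors are clustered hierarchically.

```python
# A minimal sketch of most-frequent-word (MFW) stylometry.
# Filenames are hypothetical placeholders.
from collections import Counter
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

files = {"Lee_Mockingbird": "mockingbird.txt",
         "Lee_Watchman": "watchman.txt",
         "Capote_InColdBlood": "in_cold_blood.txt"}

# Tokenize each text and count its words
tokens = {name: open(path).read().lower().split() for name, path in files.items()}
counts = {name: Counter(ws) for name, ws in tokens.items()}

# Use the 100 most frequent words of the combined corpus as features
corpus = Counter(w for ws in tokens.values() for w in ws)
mfw = [w for w, _ in corpus.most_common(100)]

# Each text becomes a vector of relative MFW frequencies
vectors = np.array([[counts[name][w] / len(tokens[name]) for w in mfw]
                    for name in files])

# Hierarchical clustering: texts by the same hand should join early
dendrogram(linkage(vectors, method="ward"), labels=list(files))
plt.show()
```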

What is most important to me though is how I feel about the text. At this point I am nearing the end, but have not gone far enough that I can say definitively what my conclusions are. I admit that it took some time to get into the novel – the first chapter or so didn’t feel right to me – but most of the book has developed well in my opinion. I think what will make or break this book in terms of real importance to me is where things go with respect to the central question of race that it deals with.

Regardless of that conclusion, I have greatly enjoyed this book (as I did Mockingbird) for its ability to transport the reader into the mind and body of the protagonist, Scout, taking us on a journey through time – twice! Once to Scout’s childhood, and again to her adulthood, still many years past now, just after World War II.

These books and Mozart’s Requiem Mass, they are what they are. And I intend to enjoy them by that standard.

The Requiem would be no less a masterpiece if it had been written by Donald Duck. And Go Set a Watchman is what it is regardless of who wrote it or who wanted it published. The fact is, it’s out there and the whole world is devouring it this week. I say, discuss the politics of the book all you want – it’s all quite interesting too – but judge it on its own merits, irrespective of all these other questions.

That said… are you reading it? What do you think? I’ll probably be finished by the time anyone gets around to reading this, so answer as thoroughly as you like. Let’s consider this a SPOILER ALERT for anything beyond this point – don’t read the comments if you have not finished the text. (OK, with all that lead-up, I need some comments…)

 
Posted on July 17, 2015 in Uncategorized

Flow Rate

I received an extra-credit essay from one of my students, based on a question from the textbook, that I had to do a little modeling to understand. The question was about patients with atherosclerosis and could be explained using Poiseuille’s Law, which describes the relationship between the flow rate, pressure, radius, vessel length and viscosity of a liquid flowing through a vessel.

Basically, it is presented as:

Flow Rate = (ΔP × π × r⁴) / (8 × η × L)

where ΔP is the change in pressure, r is the radius, η is the viscosity, and L is the length of the vessel.

The question asks why symptoms of myocardial ischemia do not usually occur until ~75% of a vessel has been occluded.

The easy answer is that this is the cutoff beyond which the blood supply can no longer deliver enough oxygen for the heart’s metabolism. However, this can be visualized qualitatively simply by graphing the equation. To do this, I made up a quick spreadsheet, plugged in ‘1’ for all the variables, and solved for the flow rate. From there, I simply plugged fractions into the radius variable.
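The same exercise works in a few lines of code instead of a spreadsheet. A sketch, with everything pinned at 1 in arbitrary units, exactly as above:

```python
# Poiseuille's law: Q = (dP * pi * r^4) / (8 * eta * L)
# With dP, eta and L all set to 1, flow depends only on r^4.
import math

dP = eta = L = 1.0  # arbitrary units, as in the spreadsheet

def flow_rate(r):
    return (dP * math.pi * r**4) / (8 * eta * L)

baseline = flow_rate(1.0)
for occlusion in (0.0, 0.25, 0.50, 0.75):
    r = 1.0 - occlusion
    print(f"{occlusion:.0%} occluded: flow = {flow_rate(r) / baseline:.2%} of baseline")
# 75% occlusion leaves (0.25)^4, i.e. about 0.4%, of the original flow
```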

Here’s the raw data:

[Screenshot: spreadsheet of flow rate calculated for decreasing radius]

A radius of 1.00 − 0.75 = 0.25 (i.e. a 75% blockage) is the number from the question. Here’s the analysis:

[Screenshot: graph of flow rate against radius]

Note how the Flow Rate has dropped to essentially ZERO when the radius is occluded 75%.

There may be more to this, but I think that just looking at this analysis of the equation answers a lot.

ps – I just spent a hell of a lot of time and effort messing around in the terminal of my Mac changing the screen-capture file type, only to realize that my Mac wasn’t the problem at all. I simply was not using the largest image size available in WordPress and then tried to scale up my image after it was inserted. Don’t do this – you lose all of your image quality.

 
Posted on May 8, 2015 in Uncategorized

Epidemiology: Should farmers try to do more work near noon?

The CDC has a wealth of classroom information (case studies, discussion material) regarding epidemiology. No surprise there. It’s what they do.

In my Microbiology class we’re starting a unit on epidemiology that students are working on in their free time either alone or in groups. We will talk about the project as questions come up, but mostly, I wanted people to have an opportunity to think freely – i.e. without me forcing my own ideas on them.

In my Ecology (population genetics, etc) class, we just spent some time last week discussing how data is just data, and in the absence of a reason to mistrust it, it probably makes sense to assume that the data is correct. However, this leaves the interpretation of the data up for much debate. ‘How so?’ I was asked. ‘Because people run experiments with certain ideas in mind that they would like to support or undermine. There can be many ways to misinterpret data.’

With this in mind, I ask you…

Should farmers try doing more work near noon?

Data suggest that this is the safest time of day. Yet, anecdotally, fewer farmers are putting time in the field at this hour than at any other hour of the day (8am–8pm). What’s going on?

[Screenshot: chart of farm-injury data by hour of day]
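One hint at where the discussion can go: a raw injury count is only the numerator of a rate. A toy sketch with made-up numbers (emphatically not the CDC’s data) shows how the hour with the fewest injuries can still be the most dangerous per hour actually worked:

```python
# Hypothetical (injuries, worker-hours) by time of day -- NOT the CDC's data.
hours = {"8am": (40, 1000), "noon": (10, 200), "4pm": (35, 900)}

for hour, (injuries, worker_hours) in hours.items():
    rate = injuries / worker_hours
    print(f"{hour:>4}: {injuries} injuries, {rate:.3f} per worker-hour")
# noon has the fewest injuries here, but the highest rate per hour worked
```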

 
Posted on April 21, 2014 in Uncategorized

Numbers – and Other Obfuscations (corrected)

I’m afraid this is going to be something of an incomplete post.

Yesterday, on the local NPR station, several Kansas State Legislators were questioned about plans for this new year. One question that did elicit some discussion concerned the State Education Budget. Here in Kansas, there is some debate about the budget (not unlike the rest of the country). One specific item was how the state was budgeting for education. Currently, there is a dispute where schools are challenging the legislators in the courts over insufficient budgeting.

This is a big mess because now the Judicial branch is hearing a challenge about financial matters that are clearly within the jurisdiction of the Legislative branch – but this is not what interested me.

My interest was piqued when the legislators were asked whether they felt that the school budget was sufficient to meet educational standards. One Kansas legislator (I don’t know who, because information regarding this show is not yet available online) responded that Kansas is among the top 4 states in funding education. He then quickly added that this was based on the percentage of the total state budget going to education.

In graduate school, a statement like this would bring a conversation to a halt. ‘What does that mean?’, ‘Is percentage of state budget an appropriate way to gauge spending?’, ‘what does it translate to in absolute numbers?’, ‘What is the best way to measure education spending?’, ‘How can we compare this to other states / countries?’, ‘Are these comparisons important? i.e. does spending correlate to results?’

I could go on for some time on just this question.

The only thing to do is to go find the real answers, which might lead into muddy water. What numbers should we even look for? I think the best place to start is to consider what the standard is for comparison between schools in terms of budget. The most common and apparently sensible answer to this is to look at spending per pupil. This should normalize other variables fairly well – who cares what the actual state budget is? I don’t even care what the educational budget is. If we were going to ask what dollar amount is sufficient to feed a school full of students, we would want to know whether we are feeding 1000 kids or just 10. Admittedly, there are economies of scale, but at least this gets us somewhere.

I got my numbers from the National Center for Education Statistics. Note that the data I found were from 2010 – a little ways back, but necessary if we want to make any comparisons. Using this site’s data, we can immediately see that Kansas ranks 26th of the 50 states in spending per pupil. This is not exactly top 4, but that’s not to say the legislator was lying – just using numbers to his advantage.

Does it matter how much we spend on education? To answer that, we have to look for some data on school performance, grouped by state, from 2010. Luckily, there is a Pew-funded study that does just this, giving us a fairly simple numeric grade from 1–100 (100 being best). I’m not sure this is the best way to do it, but for a back-of-the-envelope calculation it’ll suffice.

Using these data together, I grouped the states in order of spending per pupil and graphed that against the state’s grade provided by the Pew report. This gave a pretty predictable looking graph.

[Graph: state performance grade plotted against spending per pupil]

I had hoped to see something definitive, but despite the trend, the line these data make gives an r-squared value of 0.2. This is the number that tells us how well the line accounts for the data and, by extension, how predictive it is likely to be.

This raises the question of whether we should have confidence in the line. In this case, r² = 0.2 means the line explains only about 20% of the variance in the data. We would like this number to be as close as possible to 1.0, which would indicate that the line accounts for essentially all of the variation and that we can be confident in its predictive power.
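If you want to reproduce this kind of fit yourself, here is a sketch using SciPy. The numbers below are placeholders, not the actual NCES and Pew values:

```python
# Fit a line to (spending, grade) pairs and report r^2 -- placeholder data.
import numpy as np
from scipy.stats import linregress

spending_per_pupil = np.array([8500, 9200, 10100, 11800, 15000])  # hypothetical
state_grade = np.array([62, 70, 65, 78, 72])                      # hypothetical

fit = linregress(spending_per_pupil, state_grade)
print(f"slope = {fit.slope:.4f}, r^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3f}")
# r^2 is the share of the variance in grades that the line explains
```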

Here, I have to admit that I am not a dyed in the wool numbers guy. I wish I was, but my faculty with math is weakening with each passing year since my undergraduate studies. I’m going to have to do some investigating into what we can interpret from these data. The trend is clear, the significance of this trend is not.

(Note – this post was corrected, I initially posted it with an incorrect conclusion due to late night foggy thinking)

 
Posted on January 7, 2014 in Uncategorized

5 Beautiful Infographics

I’ve been listening to a new podcast about big data, data mining and data visualization while running, and it got me thinking about the way that data is presented.

In the lab, beautiful data means clarity and precision of results with the assumption that the observer can do the work to understand with a minimum of assistance. Here’s some cell proliferation data using CFSE (a dye that stains cells and is diluted every time a cell divides):

[Figure: CFSE cell proliferation data, published in Nature by Dawkins et al., 2007]

Outside of the laboratory, by contrast, data is best presented in a way that clearly expresses the message with the least possible explicit explanation. I collected five infographics from the web that I thought accomplished this goal best (and presented data I was at least partially interested in).

A beautiful visualization of population density across the United States done by Time magazine:

[Infographic: population density across the United States, Time]

The legend serves only to translate scale into actual numbers, but the meaning is clear enough without actually needing it at all.

Here, Forbes shows which source of media predominates in each state:

[Infographic: predominant media source by state, Forbes]

This media infographic best illustrates that sometimes the information used to create an infographic is close to worthless but can still make a compelling presentation. As such, it probably represents the best argument against these presentations: “Is this information really worth knowing?” – or – “Is this really information at all?”

 

 

Mmmm Coffee. I certainly do love coffee…

[Infographic: coffee]

 

Some data that’s a little more serious: The good work of vaccines is invisible. It’s very hard to wake up and look out on the world and think, I sure am glad so many people are not getting sick from vaccine-preventable illness. Here’s a way to actually see that:

 

[Infographic: the impact of vaccines – again, from Forbes]

Lastly, how does your level of education correlate with salary and your chance of being unemployed? These are numbers that perhaps every parent should consider when talking to their child about their educational goals.

[Infographic: salary and unemployment rate by level of education]

I can’t say that I’m receiving that benefit currently, but perhaps I should consider it motivating.

 
Posted on November 17, 2013 in Uncategorized

Experimental Flaws – Uncontrolled Variables

I had an interesting text message from my cousin today. He was asking, ‘What is meant when a study is deemed to be flawed due to uncontrolled variables? i.e. what does it really mean to have uncontrolled variables?’

It’s an excellent question – and one that is well addressed in a book I recently recommended here called How to Lie With Statistics.

I gave him the following answer:

‘A simple example might be someone looking back through historical data and seeing that the number of cancer cases (of all kinds) has been on the rise over the past twenty years. In terms of absolute numbers, this is true. Some people use this to raise the alarm that we have to get more aggressive in our fight against cancer because it has become a leading killer. Perhaps that’s not a bad idea either, but if someone were to look more closely at the details, they would quickly see that these absolute numbers aren’t the right data on which to base this conclusion. There are uncontrolled variables.

Here’s some real data:

The unaltered, or crude, cancer death rate per 100,000 US population for 1970 was 162.8. Multiplying this rate by that year’s US population of 203,302,031 and dividing by 100,000 gives the total cancer deaths for the year: 330,972. Dividing this by the number of days in a year gives the average number of Americans who died of cancer each day in 1970: 907.

Twenty years later, total cancer deaths for 1990 were 505,322 out of a population of 248,709,873. The cancer death rate per 100,000 population had risen to 203.2, and the daily cancer death rate to 1,384.

(http://www.gilbertling.org/lp2.htm – original data: The 1970 cancer death rate was taken from p. 208 of the Universal Almanac, John W. Wright, Ed., Andrews and McMeel, Kansas City and New York. The estimated 1996 cancer deaths figure was taken from Table 2 in “Cancer Statistics” by S.L. Parker et al., CA: A Cancer Journal for Clinicians, Vol. 65, pp. 5-27, 1996. The 1970 US population was taken from the World Almanac and Book of Facts, 1993, p. 367; the estimated 1996 population was from the 1997 edition of the World Almanac and Book of Facts, p. 382. The 1997 total cancer death figure was obtained from S.H. Landis et al., CA: A Cancer Journal for Clinicians, Vol. 48, pp. 6-30, 1998, Table 2. The US population for 1997 was obtained from the official statistics of the US Census Bureau released on Dec. 24, 1997.)
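The crude-rate arithmetic above is easy to check in a couple of lines:

```python
# Crude cancer death rate: deaths = rate/100k * population; daily = total/365
def crude_stats(rate_per_100k, population):
    total = rate_per_100k * population / 100_000
    return total, total / 365

for year, rate, pop in [(1970, 162.8, 203_302_031), (1990, 203.2, 248_709_873)]:
    total, daily = crude_stats(rate, pop)
    print(f"{year}: {total:,.0f} cancer deaths (~{daily:.0f} per day)")
```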
However, if this is the limit of the analysis, it’s useless. In 1970 the life expectancy was about 67 years for a white, non-Hispanic male, while in 1990 that number was about 74.
Since cancer is a disease of the aged, it is likely that the increase in cancer is directly linked to the increase in population of the elderly.
What this means is that in order for the study to be meaningful, the authors should look at cancer rates among a more comparable group – perhaps white, non-Hispanic, non-smoking males living in a region that has not undergone drastic demographic changes or excessive immigration/emigration. By taking these additional steps, we reduce the number of differences between our two populations, allowing us to make a ‘more controlled comparison.’
 
Posted on September 20, 2013 in Uncategorized

The Human Genome… genes on chromosomes

I was spending some time on stack exchange’s biology section the other day, when I saw an interesting question that someone had about how genes are arranged on chromosomes.

In answering his question, I picked up a couple of screen shots and links that I thought I should share here.

The query included the following (paraphrased):

How are genes arranged on the chromosome? Are they all in a single direction, and how does the cell ‘know’ which direction they are in?

The best way to approach this question is to take advantage of the amazing amount of resources compiled at the NIH’s National Library of Medicine…

One fun place to start is the Genome Page, which looks like this:

[Screenshot: the NCBI Genome page]

 

Note the 22 numbered chromosomes plus X and Y on the lefthand side of the page. Each chromosome is clickable and will take you to a chromosome page that looks like this:

[Screenshot: map view of H. sapiens chromosome 14]

Genes are listed on the right side of this map with locations of each indicated through a set of nested maps on the left. Each gene is clickable, providing links to the research done supporting these map placements and functions of the gene/protein. You can also easily use this information to jump to the homologous gene found in any of a number of fully sequenced organisms.

Below the map of the chromosome is a legend that indicates additional information and shows how much detail that each of the maps you are observing provides.

[Screenshot: the legend below the chromosome map]

The amount of data is overwhelming, but you can adjust how much detail is shown in order to get the ‘lay of the land’ for a specific chromosome without getting too lost. If you have a gene you want to find, you can also pinpoint it this way and see what other genes are located nearby (and therefore ‘linked’ to your gene).

[Screenshot: the huMMR gene, chromosome 10]

I searched for the Human Macrophage Mannose Receptor (a protein I made antibodies against when I worked for Medarex). This gene is located on chromosome 10, as indicated by the red dots. 212 references provide sequence information about this gene and protein.
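If you would rather not click through the site by hand, the same lookup can be done programmatically through NCBI’s E-utilities. A sketch (the gene’s official symbol is MRC1; the field names follow the JSON output of esearch/esummary, so treat them as an assumption to verify):

```python
# Look up the human macrophage mannose receptor (MRC1) via NCBI E-utilities.
import requests

base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

# esearch: find the Gene ID for human MRC1
r = requests.get(f"{base}/esearch.fcgi",
                 params={"db": "gene", "retmode": "json",
                         "term": "MRC1[Gene Name] AND Homo sapiens[Organism]"})
gene_id = r.json()["esearchresult"]["idlist"][0]

# esummary: name, chromosome, map location, description
r = requests.get(f"{base}/esummary.fcgi",
                 params={"db": "gene", "id": gene_id, "retmode": "json"})
doc = r.json()["result"][gene_id]
print(doc["name"], doc["chromosome"], doc["maplocation"], doc["description"])
```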

If you keep going down the rabbit hole, you can see each of the DNA sequences that were used to identify and locate this gene on the chromosome (I omitted an illustration of this page because it is hard to get anything from it if it is shrunk down or presented piecemeal; however, you can go to this page by following this link).

Finally, you are given the links to the complete coding sequence (cds), which has the actual sequence of the gene and protein as well as notes about how it is put together. In my mind, these are the bread and butter of this site, and probably the oldest reference pages that have provided gene hunters data for several decades now. 

[Screenshot: a complete coding sequence (cds) entry]

Ahh, data I can use!!


[Screenshot: a slice of sequence info]

It’s easy to see this as way too much information to be useful (hence the problem of ‘Big Data’ in Biology), but it’s also extremely cool, and I have to admit that I’ve gotten just as lost in tracing the data on genes using this site as I did walking from topic to topic in the Encyclopedia when I was a kid.

So… to answer the questions posed above, you can use this site to see that genes lie in different directions along the chromosome. The cell doesn’t get ‘confused’ because it doesn’t try to arrange data like we do in volumes of books meant to be read in order. Each gene is regulated, transcribed and translated according to its own local rules, as if ‘unaware’ of all that’s going on around it.

 

 
Posted on September 8, 2013 in Uncategorized

All in a kerfuffle

I’m all bent out of sorts since I decided to write about the green coffee extract paper popularized by Dr. Oz. 

Here’s the problem: in my last post I attempted to unpack the data presented in the article describing a weight loss trial using this supplement. Yet, the closer I examined the data, the more clear it was to me that the data presented in that paper does not support any conclusions.

This does not mean that the supplement is effective or not. It doesn’t even mean that the group is lacking in data that would answer the question. It merely means that the numbers they present and the descriptions of their methods do not allow one to scrutinize the data in a way that supports or refutes their claims.

For anyone interested in a fun discussion of statistics and what they mean, I strongly recommend the classic text, How to Lie with Statistics, by Darrell Huff. It’s a bit out of date, but still a lot of fun to read and educational for those who have not spent much time analyzing figures.

One thing Mr. Huff’s book does well is bring the reader into the discussion of data and how to present it. A lot of his focus is on how advertisers manipulate their graphs and language in order to obfuscate the truth.

I don’t think this coffee extract paper is intentionally obfuscating the truth; rather, I think the confusion comes from an inability of the authors to present their data clearly (even to themselves, perhaps). I’ve worked in a number of labs with a number of scientists in my life, and I can say with conviction that not all scientists are equally able to analyze their data. In fact, I have seen a number of presentations where the presenter clearly did not understand the results of their own experiments. I can say that sometimes I have not understood my own data until presenting it before others allowed us to analyze it together (i.e. I am not exempt from this error).

I would love to have the opportunity to examine the raw data from these experiments to determine if they really do address the question – and whether, once addressed, the question is answered. I’m going to appeal to both the journal and the authors for more clarification on this and will report my findings here. 

 

 
Posted on July 23, 2013 in Uncategorized

Because it was on Dr. Oz, I’m more likely to think it’s a scam

I got something interesting in my inbox the other day – something that I assume came from a friend’s email address getting hacked, although it’s the least offensive (apparent) hack I’ve ever seen (he says, as the viruses circulate around his computer’s RAM).

It was a nearly blank email with a link to a Dr. Oz clip about the weight-loss promoting effects of green coffee extract, which contains high concentrations of chlorogenic acids. These molecules are said to promote weight loss through increasing metabolism.

Being a scientist means being a skeptic. In this case, because I already feel like it must be BS due to its connection with Dr. Oz (an Oprah-elevated proponent of many untested, ‘alternative’ therapies), the challenge for me is to admit the possibility that this stuff may work. So, rather than looking through the data to see if there’s anything to deny the claim, I’m really trying hard to look at the data to see any glimmer of possibility.

Here’s a link to the Dr. Oz article online. The article was published in the January 2012 issue of Diabetes, Metabolic Syndrome and Obesity, and happily the entire article is available free of charge. So let’s look at the data…

The article examines “a 22-week crossover study … conducted to examine the efficacy and safety of a commercial green coffee extract product GCA™ at reducing weight and body mass in 16 overweight adults.” Half of the participants were male and half female – a typical study setup (although I do worry about how data are handled when looking at both sexes together, so let’s pay attention to that).

Dr. Oz’s website indicates that “The subjects (taking the supplement) lost an average of almost 18 pounds – this was 10% of their overall body weight and 4.4% of their overall body fat.” These are pretty hefty claims, but I could use losing 18lbs, so let’s see where this goes.

The study followed those eight men and eight women for 22 weeks. At the start of the study, the average body mass index (BMI) was 28.22 ± 0.91 kg/m². Determine your own BMI here.

Note the BMI categories:

BMI < 18.5 – underweight
18.5–25 – healthy weight
25–30 – overweight
30+ – obese

This puts the study participants at the high end of overweight, but ‘preobese’.
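(BMI itself is just weight over height squared, so it is trivial to check; the weight and height here are made up to land near the study average:

```python
# BMI = weight (kg) / height (m)^2
def bmi(weight_kg, height_m):
    return weight_kg / height_m**2

print(f"{bmi(86, 1.75):.1f}")  # ~28.1: roughly the study's starting average
```
)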

Dosages of the green coffee extract and placebo were as follows:

“This study utilized two dosage levels of GCA, as well as a placebo. The high-dose condition was 350 mg of GCA taken orally three times daily. The low-dose condition was 350 mg of GCA taken orally twice daily. The placebo condition consisted of a 350 mg inert capsule of an inactive substance taken orally three times daily.”

I don’t think I’m the first one to point out that it’s hard to have a double blind trial when the dosages are distinguishable (two times vs three times daily). At least the placebo should be indistinguishable from the high dose.

One early eye-catching piece of data is from Table 1, which summarizes the data of all participants as:

BMI (kg/m²): pre-study 28.22 ± 0.91, post-study 25.25 ± 1.19, change −2.92 ± 0.85**, −10.3%

On average, all subjects lost weight during the study. But this really tells us nothing, because we could see a 10% drop in BMI if the test arm lost 20% and the placebo arm stayed the same, or we could see the same thing if the weight loss occurred across ALL arms of the study.

Perhaps this reporting of data is justified by the next statement: participants all rotated through high dose, low dose and placebo with intervening washout periods. Presumably, this makes the most of a small sampling of people, but I do find it harder to be confident about the data. Then again, I have never been involved in any human trial of this kind.

Here’s the data:

High-dose arm: start BMI (kg/m²) 26.78 ± 1.55 → end 26.03 ± 1.36

Low-dose arm: start BMI (kg/m²) 26.25 ± 1.37 → end 25.66 ± 1.20

Placebo arm: start BMI (kg/m²) 25.66 ± 1.20 → end 26.67 ± 1.72

At first glance this might appear to be pretty good. But let’s graph it out:

[Graph: start vs. end BMI by arm, without error bars]

The data continue to look great.

Now, with error bars:

[Graph: the same data, now with error bars]

Huh. Not so hot anymore.
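For anyone who wants to redraw this, here is a sketch using the table values transcribed above, treating the ± values as the error bars:

```python
# Start vs. end BMI per arm, with the paper's +/- values as error bars.
import matplotlib.pyplot as plt

arms = ["High dose", "Low dose", "Placebo"]
pre,  pre_err  = [26.78, 26.25, 25.66], [1.55, 1.37, 1.20]
post, post_err = [26.03, 25.66, 26.67], [1.36, 1.20, 1.72]

x = range(len(arms))
plt.errorbar(x, pre, yerr=pre_err, fmt="o", label="start")
plt.errorbar([i + 0.2 for i in x], post, yerr=post_err, fmt="s", label="end")
plt.xticks([i + 0.1 for i in x], arms)
plt.ylabel("BMI (kg/m$^2$)")
plt.legend()
plt.show()
```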

Also, I’m not sure how this was done, but they get p values of HD p = 0.002, LD p = 0.003, placebo p = 0.384. These stats would mean that the HD and LD groups show very significant differences, while the placebo group does not. You should be able to see this in the graph with error bars (as an approximation of significance). Unfortunately, I see a whole lot of nothing. But perhaps BMI is not the appropriate way to observe weight change (we are, after all, not seeing specific weight changes, but changes within a group, i.e. diversity).
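To put a rough number on that impression: if the ± values are standard errors with sixteen subjects per arm (my assumption; the paper’s test was presumably paired, using within-subject changes we aren’t shown), an unpaired start-vs-end comparison of the high-dose arm comes nowhere near p = 0.002:

```python
# Unpaired t-test from summary stats, ASSUMING the +/- values are SEMs, n = 16.
from math import sqrt
from scipy.stats import ttest_ind_from_stats

n = 16
sd_pre  = 1.55 * sqrt(n)   # convert SEM back to SD
sd_post = 1.36 * sqrt(n)

t, p = ttest_ind_from_stats(26.78, sd_pre, n, 26.03, sd_post, n)
print(f"t = {t:.2f}, p = {p:.2f}")  # roughly p ~ 0.7, nothing like 0.002
```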

Another way to try to see what’s going on is to take a look at the weight data:

[Graph: weight data by arm]

The data were presented in a number of other ways, but each of these was confusing and didn’t illustrate any clear conclusion (my interpretation). If the individuals’ data were visualized as a scatter plot, this might show us something – or data for each individual’s change while in each group… As it is, we see unclear data with spectacular statistics, but we don’t get to see enough to be convinced of the changes.

Rather than go on and get more and more skeptical, let’s say that although we don’t see a lot here, the data, as reported, would make us want to see a larger study with some revisions: control of diet, exercise monitoring, and a change in the way dosage is administered so as to maintain the ‘blindness’ of the study.

 
Posted on July 22, 2013 in Uncategorized

Project One of my Game Development Class made my head explode

I’m taking an intro Game Development course online (it’s well known that I hate online courses in general), and here I am, on what amounts to day three, struggling.

[Image: Do this stuff. Don’t worry, it’s easy.]

Instructor: “So, you just need to make up this data tree thing with nodes and identifiers and stuff… look, just do it and get back to me. Here’s an outline:

  1. Each node has an ID property, a unique number identifying the node
  2. Each node has a Report() method that will print its ID to the console, and whether or not it is a leaf
  3. When created, the tree has a single node, the root
  4. The tree has a SplitLeafs() method which will cause all leafs to create two children
  5. The tree has a VisitAll() method which will visit every node and call the node's Report() method
  6. The tree has a LeafReport() method which calls Report() on just the leaf nodes
  7. In your main() method in Program.cs/Main.cs, you have the following:
    1. Create a tree
    2. Call VisitAll() on the tree
    3. Call SplitLeafs() on the tree
    4. Call LeafReport() on the tree
    5. Call SplitLeafs() on the tree
    6. Call VisitAll() on the tree”
[Image: OK…]

Me: “Um, OK. I don’t really know what this is, but if you say it’s easy, I’m sure that I can figure it out.”

 

Instructor:”Got it yet?”

[Image: Arghh!!!]

Me: “Arghh… Let’s see: Tree class and Node class… How do I instantiate these things?

The root has no parent, but has children…”

Instructor:”Yeah. You totally have it. Let’s talk about the completed project tomorrow.”

Me:”Oh crap. So, the tree just gets made once, but then it makes the nodes…? Each node will hold some data: let’s keep that simple. Make it an integer.

[Image: What the hell’s a leaf?]

Ughh. Simple isn’t simple enough.” 

[Image: Feeling the heat now?]

Instructor: “A leaf is just a node. It has a parent, but no children.”

 

Me: “And how do we do this SplitLeafs thing?”

[Image: Cripes! Instantiate two children from each leaf… how… to…? The pain!]

 

Instructor: “Look, it’s just a couple of methods within the class. Write up a couple of setters and getters and then one or two other methods to do the work.”

[Image: Look, I don’t mean to tighten the screws or anything, but this needs to be done and uploaded onto the Google+ document space…]

 

 

I’m not kidding. I went from dominating my intro C++ class to being a joke in this next class. I’ll try to deconstruct the problem and post a walkthrough of the general idea if anyone’s interested – a quick sketch follows below.
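In the meantime, here is a minimal sketch of the assignment in Python (the course itself uses C#; the method names follow the outline above):

```python
# A tree whose leaves each split into two children on split_leafs().
class Node:
    _next_id = 0  # class-level counter hands out unique IDs

    def __init__(self):
        self.id = Node._next_id
        Node._next_id += 1
        self.children = []

    def is_leaf(self):
        return not self.children

    def report(self):
        print(f"node {self.id}, leaf: {self.is_leaf()}")


class Tree:
    def __init__(self):
        self.root = Node()  # the tree starts as a single root node

    def visit_all(self, node=None):
        node = node or self.root
        node.report()
        for child in node.children:
            self.visit_all(child)

    def split_leafs(self, node=None):
        node = node or self.root
        if node.is_leaf():
            node.children = [Node(), Node()]  # each leaf grows two children
        else:
            for child in node.children:
                self.split_leafs(child)

    def leaf_report(self, node=None):
        node = node or self.root
        if node.is_leaf():
            node.report()
        else:
            for child in node.children:
                self.leaf_report(child)


# The sequence from the outline
tree = Tree()
tree.visit_all()
tree.split_leafs()
tree.leaf_report()
tree.split_leafs()
tree.visit_all()
```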

[Image: Google+? Wha…]

 
Posted on June 20, 2013 in Uncategorized