Music and Class in India

I was in middle school when I started listening to American music (we called it English music) and watching American TV series (on Star World, for those in the know). Soon, I was listening to a lot of Backstreet Boys, and watching a lot of Friends, Dexter, etc. It was a strange feeling. Although I enjoyed consuming all of that content, I obviously couldn’t relate to much of it. Society in India was very different. People were a lot more discreet about dating than Joey, for instance. In fact, the concept of dating as such did not really exist. People mostly just decided to get into a “relationship” right away. And the word “relationship” too was a new thing! In the generation before ours, such liaisons were called “affairs”, and were essentially looked down upon as dishonorable and irresponsible. If you were having an “affair”, you were probably bad at studies and hence didn’t have much professional hope anyway; you were reneging on your duties towards your family and blowing through their hard-earned money. It was with this cultural mindset that I watched my favorite American TV characters trying to get dates with everyone in sight, laughing loudly with the soundtrack each time. It was surprisingly easy to co-exist in two contradictory worlds.

The same can be said about American music in India. I started with listening to pop, and soon progressed to classic rock like Michael Learns to Rock or the Eagles. Metal was still too weird for me, and I never quite took to it. But it was not just me. Almost everybody in my class listened to English songs, and we took pains to memorize the lyrics so that we would be able to sing along with those songs in class or wherever. A part of it was obviously an intention to signal status… and that’s not exactly the same as wealth, or caste. Class could, in some sense, be built by an exposure or affinity to “the West”. If I were a low-caste person in India with not a lot of wealth, I could still signal class if I knew the lyrics to a lot of American songs and knew what was up in the West. If I knew who Kirk Hammett was, for instance, I was in, my family circumstances notwithstanding. However, this was rare. Most times, caste, class and wealth would be in alliance. If you were born into a high-caste family with wealth, you were likely to be exposed to Western influences, and hence earn “class” as well.

Fine. So we all consumed a lot of American content to signal “class”. So what? Well, a natural outcome of this is that some people wanted to do this professionally. We have a very large number of rock bands in India that are still chasing the kind of fame that American rock bands see all over the world. Our film industry is full of filmmakers exposed mainly to western influences (often educated there), who often base their whole storylines in the “West” if their budgets allow. Although the TV industry has mainly withstood the onslaught of western influences, its shows end up merely being Indian versions of loud telenovelas, often derided by our generation on social media. Essentially, we are invested in producing a lot of “westernized” content in India partly because such content seems nice, and partly because we want to signal “class”. But this never quite catches on. There are no Indian rock bands that regularly rule the music charts. Overly westernized Bollywood movies regularly fail to recoup their investments. Hence, this strategy has repeatedly failed to produce an authentically Indian voice that can resonate with the people.

But what is the authentically Indian voice? Is it the Hindustani or Carnatic music that we sometimes hear when our calls are kept on hold? I don’t think so. I would go so far as to say that most Indians (especially those living in rural India) have never even heard much classical Indian music, if any. Carnatic music was historically cultivated in Tamil Brahmin households, and has strong caste roots. Hindustani music, similarly, was mostly developed in esteemed Muslim or Hindu households, and was not commodified for the plebs until very recently. If you are a lower-caste person working the fields in Madhya Pradesh, chances are you haven’t heard much of either kind of music. It’s like the French saying that caviar is representative of regular French food.

In their search for the authentic music of India, some music outfits have tried fusing Indian classical music with rock, jazz, etc. And by my estimation, they’re musically brilliant! However, mainstream success has still eluded them. So what is the real music of India?

There are two ways of looking at it. If you talk about reach, then the real music of India is mainstream film music. Wherever you travel in India, you are likely to be blasted with Bollywood or regional film music. In my city of Kolkata (erstwhile Calcutta), you will often experience loud Bollywood music blaring all around you. The rich and poor equally enjoy dancing to the latest chartbusters that play on TV and radio stations all day, every day. Hence, the real music of India is a bastardized offspring of Eastern and Western influences, packaged together with lavish sets and dancing film stars.

The other way of looking at it is, of course, status signaling. The reason why most of us started listening to English music was to signal our “class”. However, listening to English music soon became too mainstream, and people needed an alternate way to signal class. Thus, a lot of Indian college-going hipsters began celebrating Anurag Kashyap and his brand of rustic, “authentically Indian” movies like “Gangs of Wasseypur”. I’m not saying that the movie wasn’t good. It was. However, the wave of appreciation that the movie saw was clearly culturally counter-revolutionary. It was a way for the urban elites to tell the masses that they, the rarefied and gentrified, still had their feet planted firmly on the ground, and perhaps understood real India better than the frauds who were trying to appropriate their superior class by listening to English music.

As music evolves in India, people will soon find another way to signal their class. They may start listening to old European music, or perhaps even Bhojpuri music ironically. However, the real music of India will always remain Himesh Reshammiya’s chartbusters, or perhaps Badshah’s “rap”. At least for some time to come.

Status signaling in academia

This is how math grad students talk:

I don’t really understand this very simple concept. What is the essence of this object, and why was it needed at all? I perhaps need to construct ten different examples or think of alternate definitions before I can successfully narrow down what this means.

No. Not really. This is how math grad students (including me) talk:

Oh you’ve heard of cohomology, but what about quantum cohomology? This *insert name* French mathematician has done some amazing work in this regard, and here’s a fairly advanced book that discusses it.

Of course this changes with time. As grad students become more competent in the latter half of their PhDs, conversations like these become more rare. However, there are still a lot of big words thrown around without caution.

Almost all of this can be explained by status signaling. Graduate students work in fairly isolated and non-overlapping research areas, and are hence free from the academic competition that their undergrad experiences entailed. However, there is still an intrinsic human need for them to signal to each other their relative intelligence and their imagined positions in the status hierarchy. And what better way to do this than to lob some polysyllabic words from their research fields, waiting and hoping for their audience to be suitably impressed, an audience that is, in turn, getting ready to lob some long words of its own.

However, status signaling explains much, much more than just grad school conversations in lowly student bars. Robin Hanson claims that it explains almost all of modern human society. The more I think about this article, the more true it rings. The post below is also majorly influenced by the genius of this article by Freddie deBoer.

Status signaling in academia

If you’re a researcher in the United States or any other “First World” country, how would you signal your status as an intelligent and capable scientist? You would try to discover something new, create a new technology, or perhaps prove famous scientists wrong. Note that these run the gamut of almost all the avenues open to researchers in these countries for signaling their superior positions in the status hierarchy. They really do have to create something new.

However, if you are a researcher in the “Developing World”, perhaps in a country like India, things are slightly different. You can of course signal status by creating a brand new technology, or perhaps by inventing a paradigm-shifting theory. However, you can also earn status by being proficient at the newest fields and technologies that were only just created in the “First World”, and that your other colleagues are too slow or stupid to understand. Given below are some conversations that the author has created out of thin air:

Are you aware of Machine Learning? Oh it is so interesting. There was recently a paper in the journal Nature on how it has come to beat humans at Chess and Go. We are using this esoteric kind of unsupervised learning in our lab to harvest data on genes.

You must have heard of Bitcoin. But have you heard of Blockchain? It is the technology that all cryptocurrencies are based on. I have included a module on it to teach my students in the Financial literacy course, and I also regularly lecture corporations on its importance. Cryptocurrencies are the future, and it is a shame that our country doesn’t understand them yet.

Oh you’re interested in learning String Theory? Well the first thing that you have to do is read the latest paper by Edward Witten. Oh you can’t understand the Math in it? Well, keep trying, and one day you will. I believe that the math used in that paper should be taught in elementary school itself.

The last conversation is real. The former Physics grad student (who then quit Physics to completely change fields) was perhaps trying to signal his own intelligence by saying that the latest paper by Witten was easy to read. It is in fact highly advanced, and would perhaps take most Physics or Mathematics researchers many months if not years to understand. Definitely not elementary school material.

Do we see a pattern? If I am a researcher working in India, I don’t really need to create whole new technologies or paradigms. A much easier way is to just import those paradigms from the “West”, become proficient in them before others do (or at least proficient in throwing around the relevant buzzwords), and consequently signal that I’m smarter and a more capable researcher than my colleagues. Of course, other ways of signaling this are writing more papers than my colleagues, having my papers published in better journals, having a higher h-index, etc. Although the best way to signal status is still creating something brand new, the other ways are just so much easier that the law of “least work” all but guarantees that the harder path is rarely taken in the developing world.

I recently read the following comment on a substack article (that I cannot recall):

Chinese and Indian research is basically a paper mill. Let’s get real, nothing of value ever gets produced there.

As an Indian researcher, I felt bad upon reading this. However, this did ring true in significant ways. Although scientific advances do come out of India from time to time, nothing is usually big enough to “hit the headlines”. Of course there is one major exception: the claim by IISc scientists to have achieved superconductivity at room temperature for the first time in history. No one was surprised when this was proven to be a fraudulent claim. Another such claim doing the rounds these days is a Nature article published by an NCBS lab, which also turned out to have fraudulent data. One of the best research labs in the country willfully manipulated data to have their paper published, wasting taxpayers’ money and further reducing trust in Indian research.

This contrasts with my experiences as a student in India. My classmates and colleagues were some of the smartest people I’ve ever met, and I continue to correspond with and learn from them. Why is it that a country with so many intelligent and hardworking people is not capable of creating a good research culture that can contribute something meaningful to the world? I think that it is a case of misplaced incentives.

As researchers, our main incentive is discovering new truths about the world around us. However, an equally important (if not more important) incentive is signaling to others that we are intelligent. And by the law of “least resistance”, we want to find the shortest and easiest path to do so. Being versed in the latest “western” theories and technologies is a much easier path than actually creating something from scratch. Hence, we inevitably choose that path: setting up whole labs devoted to reinforcement learning, creating String Theory research groups in every major university, and so on.

Status signaling in other domains

One form of status signaling that is often on display is between researchers and entrepreneurs. When Elon Musk founded Neuralink, for instance, lots of neuroscience researchers gave interviews in which they said that Musk had not created anything new, and was merely mooching off of research that had been in the public domain for many years. Musk, for his part, emphasized that writing papers that no one reads is the easy part, and that actually engineering products and bringing them to the world is the much harder part, which only he had purportedly done. Hence, researchers and entrepreneurs often engage in status battles.

Another form of status battle takes place between different economic strata. For instance, slightly lower-earning professions (like researchers, bureaucrats, etc.) are often engaged in status battles with higher-earning professions like bankers, tech workers, etc. The “we don’t get paid very much, but we are smarter and do what we love” refrain is often heard from researchers who actively hate their lives under the aegis of university bureaucracies, but want to signal a higher status. Of course, the “how come you are smarter if I am the one earning much more money” retort is then heard in turn from bankers and consultants, who lead an overworked and sometimes miserable existence in order to be able to signal a higher status.


Robin Hanson claims that most of what we do is status signaling. I want to strengthen this claim by saying that almost all that we do is status signaling. We don’t really want to understand the world. We want to be perceived as understanding the world, or at least as curious about it. We want to signal our good looks by recalling stories of people expressing interest in us, our intelligence by talking about reading books and studying in reputed colleges (some take it too far and discuss IQ test scores), our virtues by talking about the disadvantaged and how we have stepped in to help them, etc. Very often, this leads one away from actually trying to understand the world, helping the disadvantaged, etc.

Of course, writing this post itself is an attempt to signal my status. I’m trying to prove that I’ve caught on to other people who indulge in status signaling, and that I myself am above all this. However, it would also be of immense value for me if I’m able to figure out a way to escape taking part in status battles with the people in my life. And if I remain in academia, it would enrich my life to no end if I’m able to pursue my curiosity without indulging in status games with the rest of the researchers in my field. Here’s to hoping.

Disentangling objective functions

I am currently reading the book Feeling Great by Dr. David Burns, and am finding it to be very insightful and helpful. In fact, I would highly recommend it to any person that has chanced upon this fetid corner of the internet. I apologize in advance for the self-help nature of the rest of the post.

In Chapter 3, the author talks about a Harvard student who is depressed because she is unable to get good grades and be the academic superstar that she had always been before this. She has been undergoing a lot of mental trauma for months now, and has finally come to her counselor for help. Now imagine that the counselor gives her two options:

  1. There is a “happiness button” that the student has to press, and then all her sadness will go away instantly, although her grades remain unchanged. Let us suspend disbelief for a moment and imagine that such a button actually exists.
  2. The student does not press the happiness button, and continues living her life in pursuit of better grades and circumstances

Which option do you think the student will choose?

On close reflection, you may soon realize that the student will inevitably choose the second option, and not the first one. Although she does want to be happy, she wants good grades even more than mere happiness. She has made her happiness conditional upon academic success.

In life, we often entangle our happiness with our goals or ambitions. We say “if I become very rich or very successful in my field, I will be happy”. What inevitably happens is that we either don’t reach our desired goal, or when we do reach it, we realize that our goals have now shifted. We now want to be better than the other people who have achieved the same goals. Only then will we be happy.

What is perhaps more tricky to realize is that we need not do that. Happiness has nothing to do with achieving goals. Happiness is perhaps being at peace with ourselves and celebrating the present. This can be achieved by reflecting on the miracle of life and the universe, or perhaps injecting morphine into one’s eyeballs for the slightly more adventurous. However it is achieved, it actively has nothing to do with our goals. Hence, we will do well to disentangle our two aims of being happy and being successful. Both of these aims are valuable and worth pursuing. However, they are not related. Our being happy has nothing to do with being successful.

Humans have many objective functions like wealth, fame, happiness, meaning, quality of relationships, etc. that they want to maximize in their lives. Maximizing any (or all) of these functions will add great value to one’s life. However, these objective functions needn’t have anything to do with one another. I can be happy without wealth, fame, meaning, etc., much like Sisyphus. I can also be wealthy without fame, happiness, meaning, etc. Entangling these functions can potentially take away value from our lives. For instance, if I entangle my happiness with fame and wealth, which means that I decide that I will be happy only when I’m rich and famous, then I lose out on the possibility of being happy if I’m not able to attain my goals of being rich and famous. Hence, keeping these functions separate and disentangled can only be to our benefit.

Of course, one may think that entangling happiness with wealth and fame will make one more motivated to attain wealth and fame. Although this sounds convincing, it is not how things work in practice. We can’t “decide” what will make us happy. It is possible (and entirely common) that even when we attain our goals of wealth and fame, we are unhappy. An analogy is you deciding that you will turn 30 only when England wins the Football World Cup. You can’t really decide how and when you turn 30. Similarly, being happy cannot be arbitrarily entangled with any other objective function of your choosing. It has to be pursued and attained on its own terms, independent of other objective functions.
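The entangled-versus-disentangled point can even be put in toy code. What follows is purely my own sketch, with made-up function names and numbers (nothing from Burns’s book): an “entangled” happiness that pays out only at a wealth goal, versus a “disentangled” one that wealth does not enter into.

```python
# A toy sketch (my own invention, not from Burns's book) of entangled vs.
# disentangled objective functions. All names and numbers are made up.

def happiness_entangled(wealth, goal=1_000_000):
    """Happiness made conditional on a wealth goal: all-or-nothing."""
    return 1.0 if wealth >= goal else 0.0

def happiness_disentangled(contentment):
    """Happiness pursued on its own terms, independent of wealth."""
    return contentment  # a 0-to-1 score that wealth doesn't enter into

# Someone who never reaches the wealth goal:
print(happiness_entangled(wealth=50_000))       # entangled: nothing
print(happiness_disentangled(contentment=0.7))  # disentangled: still available
```

The shape of the first function is the whole point: entangling turns happiness into a step function that stays at zero everywhere short of the goal.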

Thus ends my spiel for the day. If you think that I am slowly drifting away from reviewing scientific papers to writing crappy self-help posts, you’re right on the money.

HIV rebound

The paper that I’m writing about today is “The size of the expressed HIV reservoir predicts timing of viral rebound after treatment interruption” by Li et al. I will quote passages from the paper, and then try to explain what all of those fantastically long words mean.


Therapies to achieve sustained antiretroviral therapy-free HIV remission will require validation in analytic treatment interruption (ATI) trials. Identifying biomarkers that predict time to viral rebound could accelerate the development of such therapeutics.

This is one of a whole host of papers that deal with identifying biomarkers that can aid in the permanent treatment of HIV-positive patients. What does permanent treatment mean? When HIV-positive patients are put on an active treatment regimen, the treatment is often spectacularly successful… until the treatment stops. Then, patients see a violent relapse. However, there are some patients (we’ll call them super-patients) that don’t see a relapse at all. Researchers are now trying to figure out what it is about these patients that helps them not relapse when treatment is stopped, and whether these conditions can be re-created in all patients. Simple.


Cell-associated DNA (CA-DNA) and CA-RNA were quantified in pre-ATI peripheral blood mononuclear cell samples, and residual plasma viremia was measured using the single-copy assay.

What is the single-copy assay? Here is a direct quote from this paper:

This assay uses larger plasma sample volumes (7 ml), improved nucleic acid isolation and purification techniques, and RT-PCR to accurately quantify HIV-1 in plasma samples over a broad dynamic range (1–10^6 copies/ml). The limit of detection down to 1 copy of HIV-1 RNA makes SCA 20–50 times more sensitive than currently approved commercial assays.

Essentially it is a new-and-improved method of measuring the amount of HIV RNA in your blood plasma.

What are the results of this experiment?


Participants who initiated antiretroviral therapy (ART) during acute/early HIV infection and those on a non-nucleoside reverse transcriptase inhibitor-containing regimen had significantly delayed viral rebound. Participants who initiated ART during acute/early infection had lower levels of pre-ATI CA-RNA (acute/early vs. chronic-treated: median <92 vs. 156 HIV-1 RNA copies/10^6 CD4+ cells, P < 0.01). Higher pre-ATI CA-RNA levels were significantly associated with shorter time to viral rebound (<4 vs. 5–8 vs. >8 weeks: median 182 vs. 107 vs. <92 HIV-1 RNA copies/10^6 CD4+ cells, Kruskal–Wallis P < 0.01). The proportion of participants with detectable plasma residual viremia prior to ATI was significantly higher among those with shorter time to viral rebound.

So people who start HIV treatment early have a more successful treatment overall, and it takes a longer time for the disease to rebound even when the treatment is stopped. This largely aligns with common sense and with the disease rebounds seen in conditions like cancer. Patients on a non-nucleoside reverse transcriptase inhibitor-containing regimen also see delayed rebound. Let us unpack that phrase. A nucleoside is a nucleotide, the basic building block of DNA and RNA, minus the phosphate group. Reverse transcriptase is the viral enzyme that constructs complementary DNA sequences from RNA sequences (reverse transcription, because regular transcription constructs RNA from DNA); HIV needs it to copy its RNA genome into DNA that can integrate into the host cell’s genome. The “non-nucleoside” part describes the drug, not the process: a non-nucleoside reverse transcriptase inhibitor blocks this enzyme without being a nucleoside analog, thereby stopping the virus from replicating. Why this particular drug class delays rebound more than others is not obvious to me; perhaps pharmacological properties, such as how long the drug lingers in the body, play a role.

Moreover, higher levels of cell-associated HIV RNA lead to a shorter rebound time after treatment is stopped (ATI). This also makes sense. Treatment should only be stopped when RNA levels have decreased considerably. This is something I also came across in the book “The Emperor of All Maladies” by Siddhartha Mukherjee. Cancer treatment, whether it be chemotherapy or a strict drug regimen, is often stopped when the patient supposedly feels cured for a duration of time. However, the cancer often rebounds very quickly. This tells us that treatments, whether they be for cancer or HIV, should be carried on for much longer than they are today, and the patient feeling “fine” is not a good marker for when the treatment should be stopped.


Higher levels of HIV expression while on Antiretroviral Therapy (ART) are associated with shorter time to rebound after treatment interruption. Quantification of the active HIV reservoir may provide a biomarker of efficacy for therapies that aim to achieve ART-free remission

This is a repetition of the above. Stop treatment only when HIV RNA levels are low. This will increase the time it takes for the disease to rebound. Essentially, disease treatment aligns with common sense. Who knew.

It sure doesn’t feel like predictive processing

Reddit user @Daniel_HMBD kindly re-wrote some parts of my previous essay to make it clearer. I am now posting this corrected version here.

Broad claim: The brain (conscious or unconscious) “explains away” a large part of our surroundings: the exact motion of a tree or a blade of grass as it sways gently in the wind, the exact motion of a human as they walk, etc. If we could force our brain to make predictions about these things as well, we’d develop our scientific acumen and our understanding of the world.

How can I understand the motion of a blade of grass? The most common answer is “observe its motion really closely”. I’ve spent considerable amounts of time staring at blades of grass, trying to process their motion. Here’s the best that I could come up with: the blades are demonstrating a simple pendulum-like motion, in which the wind pulls the blade in one direction and its roots and frame pull it in the opposite direction. Observe that I didn’t end up observing the tiny details of the motion. I was only trying to fit what I saw with what I had learned in my Physics course. This is exactly what our brain does: it doesn’t really try to understand the world around us. It only tries to explain the world around us based on what we know or have learned. It does the least amount of work possible in order to form a coherent picture of the world. Let me try and explain this point further in a series of examples.

When ancient humans saw thunder and lightning in the sky, they “explained away” the phenomena by saying that the Gods were probably angry with us, and that is why they were expressing their anger in the heavens. If there was a good harvest one year, they would think that the Gods were pleased with the animal sacrifices they’d made. If there was drought despite their generous sacrifices, they would think that the Gods were displeased with something that the people were doing (probably the witches, or the jealous enemies of our beloved king). Essentially, they would observe phenomena, and then somehow try to tie it to divine will. All of these deductions were after the fact, and were only attempts at “explaining away” natural phenomena.

When pre-Renaissance humans observed their seemingly flat lands and a circular sun rising and setting everyday, they explained these observations away by saying that the earth was (obviously) flat, and that the sun was revolving around the earth. They then observed other stars and planets moving across the skies, and explained this by saying that the planets and stars were also orbiting us in perfectly circular orbits. When the orbits were found to be erratic, they built even more complicated models of celestial motion on top of existing models in order to accommodate all that they could see in the night skies. They had one assumption that couldn’t be questioned: that the earth was still and not moving. Everything else had to be “explained away”.

When we deal with people who have a great reputation for being helpful and kind, we are unusually accommodating of them. If they’re often late, or sometimes dismissive of us, we take it all in our stride and try to maintain good ties with them. We explain away their imperfect behavior with “they were probably doing something important” and “they probably mean well”. However, when we deal with people who we don’t think very much of, we are quick to judge them. Even when they’re being very nice and courteous to us, we mostly end up thinking “why are they trying so hard to be nice” and resent them even more. We explain away their behavior with “they probably have an ulterior motive”.

Essentially, our brain sticks to what it knows or understands, and tries to interpret everything else in a way that is consistent with these assumptions. Moreover, it is not too concerned with precise and detailed explanations. When it sees thunder in the skies, it thinks “electricity, clouds, lightning rods”, etc. It doesn’t seek to understand why this bolt of lightning took exactly that shape. It is mostly happy with “lightning bolts roughly look and sound like this, all of this roughly fits in with what I learned in school about electricity and lightning, and all is going as expected”. The brain does not seek precision. It is mostly happy with rough fits to prior knowledge.

Note that the brain doesn’t really form predictions that often. It didn’t predict the lightning bolt when it happened. It started explaining away the lightning bolt after it was observed. What our brain essentially does is first observe things around us, and then interpret them in a way that is consistent with prior knowledge. When you observe a tree, your eyes and retina register each fine detail of it. However, when this image is re-presented in the brain, your “the tree probably looks like this” and “the leaves roughly look like this” neurons fire, and you perceive a slightly distorted, incomplete picture of the tree as compared to what your eyes first perceived.

In other words, your brain is constantly deceiving you, giving you a dumbed-down version of reality. What can you do if you want to perceive reality more clearly?

Now we enter the historical speculation part of this essay. Leonardo da Vinci was famously curious about the world around him. He made detailed drawings of birds and dragonflies in flight, of the play between light and shadows in real life, of futuristic planes and helicopters, etc. Although his curiosity was laudable, what was even more impressive was the accuracy of his drawings. Isaac Newton, another curious scientist who made famously accurate observations of the world around him, was unmarried throughout his life and probably schizophrenic. John Nash and Michelangelo are other famous examples.

I want to argue that most neurotypicals observe external phenomena, and only after such observations try to explain these phenomena away. However, great minds generate predictions for everything around them, including swaying blades of grass. When their observations contradict these predictions, they are forced to modify their predictions and hence understanding of the world. Essentially, they are scientists in the true sense of the word. What evidence do I have for these claims? Very weak: n=1. Most of what I do is observe events, concur that this is roughly how they should be, and then move on. Because I can explain away almost anything, I don’t feel a need to modify my beliefs or assumptions. However, when I consciously try to generate predictions about the world around me, I am forced to modify my assumptions and beliefs in short order. I am forced to learn.

Why is it important to first generate predictions, and then compare them with observations? Let us take an example. When I sit on my verandah, I often observe people walking past me. I see them in motion, and after observing them think that that is roughly how I’d expect arms and legs to swing in order to make walking possible. I don’t learn anything new or perceive any finer details of human motion. I just reaffirm my prior belief of “arms and legs must roughly swing like pendulums to make walking possible” with my observations. However, I recently decided to make predictions about how the body would move while walking. When I compared these predictions with what I could observe, I realized that my predictions were way off. Legs are much straighter when we walk, the hips hardly see any vertical motion, and both of these observations were common to everyone that I could see. Hence, it is only when we make prior predictions that we can learn the finer minutiae of the world around us, details that we often ignore when we try to “explain away” observations.

I was on vacation recently, and had a lot of time to myself. I tried to generate predictions about the world around me, and then see how well they correlated with reality. Some things that I learned: on hitting a rock, water waves coalesce again at the back of the rock. Leaves are generally v-shaped, not flat (this probably has something to do with maximizing sunlight collection under varying weather conditions). People barely move their hips in the vertical direction while walking. Variation in color amongst trees is much more common than variation in height (height has to do with the availability of nutrients and sunlight, while color may be the result of random mutations). A surprisingly large number of road signs are about truck lanes (something that car drivers are less likely to notice, of course). Also, blades of grass sway with a much shorter period than I had assumed. Although I don’t remember everything else I learned, I noticed a lot of things that I had never cared to notice before.

Can I use this in Mathematics (for context, I am a graduate student in Mathematics)? In other words, can I try to make predictions about mathematical facts and proofs, and hopefully align my predictions with mathematical reality? I do want to give this a serious shot, and will hopefully write a blog post on it in the future. But what does “giving it a serious shot” entail? I could read the statement of a theorem, think of a proof outline, and then see whether that is the route the argument actually takes. I could also generate predictions about the properties of mathematical objects, and then check whether those objects really have these properties. We’ll see if this leads anywhere.

So forming predictions, which really is just the scientific method, seems to come naturally to people of certain neurological descriptions, who went on to become our foremost scientists. It remains to be seen whether people without these neurological descriptions can deliberately use the same skills to enhance their own understanding of the world, and hopefully make a couple of interesting scientific observations along the way.

Of dead Russian authors and dead-er French kings

Note: I’m in a gradual process of anonymizing this blog. This is just so that I can write more freely, and include observations from my life that cannot be tied to my boring real world grad student existence. We’ll see how that goes.

There’s a theme from Anna Karenina by Tolstoy that has stayed with me for years. Anna is cheating on her husband Alexei with a young army man. Alexei is a reputable senior statesman who has maintained his family’s irreproachable position in society through hard work and intelligence, and is generally respected by the higher echelons of Russian bureaucracy. Hence, his self respect and position in society take a major hit when his wife is found to openly be having an affair with someone else. Seeing as we’re talking about society in 19th century Russia, Alexei is expected to “discipline” his wife and forcibly put the affair to an end, or perhaps divorce her and leave her to fend for herself without money in an unforgiving Russian society.

Instead of all of this, Alexei has a religious awakening, and he suddenly begins to sense the love in all of humanity (perhaps seeing himself as Jesus Christ incarnate). He refuses to discipline his wife or divorce her, and tells her that she can continue living in their house with their children, while having an affair with the young army man at the same time. He protects her dignity and her standard of living, while also going out of his way to ensure that she has a romantic partner of her choosing. This is perhaps as close to God as one can get. This, as one might expect, leads her to loathe him even more, so much so that she cannot even bear to look at him or be in the same house as him.

I was shocked when I read this for the first time. It seemed unfair and bizarre and very real, all at the same time. I couldn’t quite put it all together. Why would she not be grateful to such an accommodating husband? It has taken me a couple of years to understand that Anna did not need a god-like figure to “forgive” her for her mistakes. She just needed someone who would empathize with her, not position himself above her as a superhuman, even if from that height he was only offering kindness and not punishment.

Why am I talking about all of this? Because I face situations like these in my daily life too. If I am nice to a friend, and they don’t reciprocate the way that they “should”, I sometimes remind them that I was nice to them, and that they’re not being fair to me in this social transaction. Nine times out of ten, this sours relations between us. Instead of empathy, I offer them the terms of an implicit social contract that they’re violating. I’ve almost always been this way, and often thought that this was a fair and honorable way to conduct human relationships. Of course I was wrong each time.

However, my life is fairly insignificant in the grand scheme of things. Hence, there is a more important reason why I am writing this post. I have been listening to Mike Duncan’s Revolutions podcast, and am currently at the French Revolution. A short summary: a bunch of French intellectuals thought that the only way to make society better was to kill the royals, and then to guillotine their own leaders. They’d read a lot of books, heard some sophisticated rhetoric, and concluded that they were smarter and better informed than everyone else. Hence, they should put their knowledge to good use, and kill everyone. Of course, Colonialism, Communism, Fascism, and almost every other overarching genocidal movement of the last five hundred years has been the result of a bunch of educated elites reading a ton of books and deciding that this made them smarter than everyone else. They would write thick manuscripts and manifestos on what an “ideal society” should look like, and then decide that anyone who stood in the way of their irreproachable vision was the enemy and deserved to be killed.

Of course, each and every one of these educated, intelligent men was wrong, and together they led to the avoidable deaths of millions. Adopting the neuroscientist Iain McGilchrist’s terminology: observing patterns and constructing theories are the domain of the left hemisphere of the brain, while empathy and connectedness are the domain of the right hemisphere. The French intellectuals were predominantly using their left hemispheres in devising grand plans and writing flowery manifestos on what the future could look like, while rejecting their right hemispheres, and consequently empathy for their fellow citizens. The French king Louis XVI was not an evil tyrant who would not listen to reason. He was an uncharacteristically pliant ruler who followed almost every whim of his citizens. And he was still beheaded on the streets of Paris.

Whenever we think we know what’s best for other people and the world in general, we are almost always wrong. All our grand plans are probably flawed, and will need to be re-worked. Hence, if our plans can only be realized by killing or hurting other people, that’s as good a sign as any that we’ve made a major mistake and need to go back to the drawing board. The only grand plans that have ever worked, say Capitalism, Democracy or public infrastructure, are ones that gave people even more freedom, whether political freedom or freedom of movement.

The best that we can do in this world, apart from giving the people in our lives even more freedom, is to empathize with them. That doesn’t necessarily mean being a Christ-like figure of unconditional love and forgiveness. It just means stepping into their shoes and seeing the world from their perspective, rather than looking down on them from above to pass judgement or dispense forgiveness out of divine grace. This, of course, is a repeat of what Tolstoy said about peasants in Anna Karenina: that we should seek to understand and empathize with them rather than seek to “uplift” them, treating them as animals unfit to fend for themselves.

I will make a greater effort to not write sappy blogposts in the future, doling out generic “love everyone” advice. However, I feel strongly enough about this to put it in writing, if only to laugh at it years later.

The case for falling in line

Picking up bits and pieces from various writers that I admire and producing a relatively inferior narrative.

A lot of Instagram is basically a bunch of people encouraging each other to be “fierce”, to not care what others think of them, to keep doing what they love, to keep being who they are, etc. This is good advice for a lot of people. I have friends who are paranoid about what others might think of them, and who bend over backwards to accommodate others, often at the cost of their own happiness. This advice is probably meant for them. They would truly be happier and more fulfilled if they stopped caring about what others think, and did what they wanted.

This advice, unfortunately, does not reach them. People who frequently consume content and post on social media websites are often not the very accommodating types that I describe above, but those who are extroverted and think that they have important things to say to others. These qualities (traits?) sometimes correlate with narcissism, false self-image, etc. And it is these already-extroverted people, a subset of whom are already convinced of their relative superiority over others, that such advice to “be fierce” and “don’t care what others think” reaches. I know. Because I have been one of them (some would argue that I still am, and they’re probably right). Well here goes my spiel, which is a bastardized version of Scott Alexander’s “Should You Reverse Any Advice You Hear” and Freddie deBoer’s unfortunately titled “Women Do Not Need Lunatic Overconfidence” (my take on this article has nothing to do with women).

If you frequently get such advice on the internet, chances are that you don’t need it. You are already “fierce”, and have a search history comprising things like “how to not care what people think”. Complex machine learning algorithms have picked up on these search patterns, and keep displaying similar content. The internet is not meant to change you. It is designed to keep you in the hole that you’ve dug for yourself.

In my personal history, I have displayed a lot of personality traits that didn’t help in making friends or getting along with people. Ever. For some reason, I decided to try and change myself. This, of course, was not my first reaction; I stuck to “be fierce” and “don’t care what others think” in the beginning, and was probably slated to stick to these notions for life, as I see a lot of people around me doing. But a lot of truly inspirational people, for some weird reason, agreed to hang out with me pretty often, and I noticed that they were objectively far better people than me. So I decided to change myself.

Some changes that I’ve tried to make: speak less and let others take centre stage, don’t pass judgement too quickly, don’t offer my opinion unless I’m explicitly asked for it, don’t try to impose my way of doing things, etc. All of these are different manifestations of the same phenomenon: I learned to shut up. This is bad advice for a lot of people. Some people are very reserved and self-conscious, and perhaps need to be encouraged to speak out and assert themselves much more. However, it was good advice for me, and I am happy that I have tried to make this change.

So what does real, helpful advice look like? Most movies that we watch and books that we read ask us to be who we are, not change ourselves, etc. And when we try to do these things, some of us (like me) come away unhappy and dissatisfied. Hence, perhaps the only useful advice that there can be is “figure out where you want to be in life, and try different things until you get there”. This is so general that it is almost useless. However, it is still better advice than the more specific “never change” and “you are already the best”.

So kids, don’t take advice from the internet. The internet is not your friend. Wait…

Last year in retrospect

I turn (even) older today. Hence, this seems as good an occasion as any to put the last year in retrospect and think about things I could have done better.

Blogging

I decided last summer to start blogging about research papers outside of my field. I would often email these pieces to the authors of the papers I wrote about. Regardless of the merits of my posts, I came away with a picture of researchers as remarkably polite and encouraging people.

Response to blogpost on quantum computing
Response to my CRISPR blogpost
Response to blogpost on Neuromorphic Computing
Response to blogpost on Chlorophyll

What could I have done differently? I could have done a deeper dive into these subject areas, perhaps reading multiple papers to bring out the true essence of the field. I could perhaps also have been more regular about blogging. Regardless, I unilaterally call this exercise a success, as I had a lot of fun doing it and learned a lot.

Effective Altruism

It has now been about three years since I started donating 10% of my income to charity. This has been a difficult transition for me. I was never particularly inclined towards charity before (in school or college), and generally thought that money donated to someone was a net negative. However, after a host of bizarre incidents (reading Gandhi’s autobiography, some personal circumstances that pushed me to re-evaluate my life, etc.), I decided to push myself to try and have a net positive impact on the world.

GiveWell estimates that its recommended charities save one life in a developing country for every $2,300 donated. By that estimate, I might have saved around 3.8 lives in the last three years. Let’s round down to 3. So three more people are alive in the world today because of the money that I donated. As I type this, I feel a staggering impulse to just gawk in disbelief. For someone who has generally struggled with positive self-image, this is surely the most important thing I have ever, ever done. Whatever I do, I will always have this. Let this inconsequential grad student have this moment of joy.
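For the curious, the arithmetic behind that number is just a division. A quick sketch (the total donated below is a hypothetical round figure chosen for illustration, not my actual donations):

```python
# Back-of-the-envelope check of the lives-saved estimate.
# $2,300 per life is the cost-effectiveness figure quoted above;
# the total donated is a hypothetical round number.
COST_PER_LIFE_USD = 2300
total_donated_usd = 8700  # hypothetical: roughly three years of donations

lives_saved = total_donated_usd / COST_PER_LIFE_USD
print(f"Estimated lives saved: {lives_saved:.1f}")  # prints 3.8
```

Any total in that ballpark lands between 3 and 4 lives, which is why rounding down to 3 is the conservative choice.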

Of course the other people involved with Effective Altruism are much more awesome than I am, and I have learned a lot by talking to them. I am also being hosted by CEELAR in the UK to work on Artificial General Intelligence. Although I won’t be able to avail of this opportunity right now because of visa issues, I hope to do so in the near future.

How to learn

I’ve always wanted to understand how one should learn. As any researcher can attest, the dream is to one day be able to pick up any research paper or textbook and understand exactly what is happening in one go. This dream mostly goes unfulfilled: researchers take years to understand their own subfields, and often cannot understand research from unrelated areas. This gets in the way of cross-disciplinary research in academia and industry.

I tried to get better at this last year by reading papers from various fields. A quick feedback loop ensured that I kept correcting my approach. I started out by reading papers and understanding them at an intuitive level. This proved to be effective, but many topics were still beyond my grasp. I then changed my approach to drawing diagrams of various concepts. Although helpful in non-mathematical fields, this didn’t help me much in mathematics, as I wasn’t able to remember theorems and calculations. I then migrated to typing out each line of a textbook and writing detailed analyses. This was again much more helpful than my previous approaches, and often led to new insights; however, I kept forgetting old facts and theorems. I have recently moved to studying concepts by comparing them to concepts and ideas I already know. This was in part inspired by Roam Research, an app built on the claim that the best learning happens when we’re able to place concepts in context. Although I don’t know if this is the best method to learn, it is surely the best method I’ve tried yet. Moreover, according to McGilchrist, this is also how the right hemisphere of the brain processes information, and hence in many ways how humans really learn about their environment.

Social life

I’ve often had various social anxieties, and have found it difficult to make friends. I used to blame this on others, but on deep introspection have found that most of the blame rests with me. Consequently, I have tried to improve myself so that I can contribute more positively to relationships.

One aspect that I have tried to improve upon is empathy. I find it difficult to empathize with people, and this probably has complicated neurological roots. According to Iain McGilchrist, my left brain hemisphere is dominant, which contributes to false self-image, general apathy, etc. I have tried to correct for this by taking oxytocin supplements. Although I’ve been lazy about studying the supplement’s actual effects, I feel that there has been an overall positive effect.

I’ve also tried to contact friends and family more often, tried to be more helpful, and been more assertive with respect to people who are not nice to me. Although working on my social life is a life-long project, I have only recently realized how important it is to my overall happiness, and I do wish to keep chipping away at it.

I’ve also found out a lot about myself by reading research papers from the social sciences, and I’ve blogged about them here and here. I’ve also had very fruitful correspondence with Dr. Laran, the author of one of those papers. Moreover, I recently had the opportunity to listen to the bulk of Eliezer Yudkowsky’s sequences, which have been truly life changing for me. I plan to keep this exercise going in the near future.

Final thoughts

Being at home the whole of last year has been a tremendous learning experience for me. I got the time to read a whole host of things and learn a lot. I talked to fantastic people, and deepened bonds with friends. If you’re still reading this post and have recommendations on what else I should read/write about, please feel free to comment or write to me. Thanks for reading!

Yet another stab at image recognition

Like every other idiot with an internet connection, I am fascinated by machine learning and neural nets. My favorite aspect of AI is image recognition, and I’ve written about it in the past. I am going to try and talk about it in reference to a book I’ve recently been reading.

The book that I’ve been reading is “The Master and His Emissary” by Iain McGilchrist. It is hands down the most amazing work I’ve come across in the recent past, and I plan to write a more detailed review on completing it. However, there is one idea that I want to flesh out below.

The main thesis of the book is that the left and right hemispheres of the brain are largely independent entities that often process the world in conflicting ways. The left hemisphere recognizes objects by “breaking them up into parts and then assembling the whole”, while the right hemisphere “observes the object as a whole”. According to McGilchrist, the left hemisphere is consequently bad at recognizing objects and faces, and mainly deals with routine tasks; the right hemisphere is what we mainly depend on for recognizing things and people in all their three-dimensional glory.

Anyone with even a cursory understanding of how neural networks (something something convolutional neural nets) recognize objects knows that these algorithms mainly resemble the left side of the brain. Image inputs are broken up into small pieces, and the algorithm then works on identifying the object under consideration. Maybe this is why machine image recognition is still bad (much, much worse than human vision, for instance)? How can one program a “right brain” into neural nets?
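To make this concrete, here is a minimal sketch, in plain numpy with a made-up image and filter, of the “left brain” style of processing that convolution embodies: the filter only ever sees small local patches, never the whole object at once.

```python
import numpy as np

# Toy illustration of patch-based ("left brain") processing:
# a single convolutional filter slid over local 3x3 pieces of an image.
# The image values and the filter are made up for illustration.
rng = np.random.default_rng(0)
image = rng.random((8, 8))             # tiny grayscale "image"
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])  # a vertical-edge detector

h, w = image.shape
kh, kw = kernel.shape
out = np.zeros((h - kh + 1, w - kw + 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        patch = image[i:i + kh, j:j + kw]   # one small local piece
        out[i, j] = np.sum(patch * kernel)  # filter response for that piece

print(out.shape)  # (6, 6): one response per local patch
```

Each entry of `out` is the filter’s response to one local piece; at no point does the filter “see” the image as a whole.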

I don’t know the answer to this. However, it now seems clear to me that a lot of our approach to science and programming in general is based on a Reductionist philosophy: if we can break things up into smaller and smaller units, we can then join those fundamental units back together and figure out how the whole edifice works. This approach has been spectacularly successful in the past. However, I feel that it has mostly served to mislead us on certain problems (like image recognition). What might a roadmap to a solution look like?

The left and right hemispheres of the brain perform image recognition like this: the right brain processes the object in its entirety, and notices how it varies in relation to all other objects that it has seen before. For instance, when the right brain looks at you, it notices in what ways you’re different from the people around you, and from the other inanimate things in the background. The left brain then breaks those images up into smaller parts to notice similarities and differences, forms categories for “similar” things, and places all of the observed entities in those categories. For instance, it places all the people in the “humans” category, the trees in the background in the “trees” category, and so on. Hence, the right brain notices fine and subtle features of objects all in one go, while the left brain clubs objects together in a crazy Reductionist daze.
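By way of contrast, a toy sketch of the “right brain” style might compare an image as an undivided whole against everything seen before, rather than dissecting it. Everything here (the stored images, the category names, the similarity measure) is a stand-in chosen purely for illustration, not a claim about how the brain actually computes:

```python
import numpy as np

# Toy "right brain" sketch: recognize a query image by comparing it,
# as a whole, against previously seen images. All images are random
# stand-ins; the category names are invented for illustration.
rng = np.random.default_rng(1)
seen = {name: rng.random((8, 8)) for name in ["person", "tree", "car"]}
query = seen["tree"] + 0.05 * rng.random((8, 8))  # a slightly varied "tree"

def similarity(a, b):
    # cosine similarity between whole images, flattened to vectors
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(seen, key=lambda name: similarity(query, seen[name]))
print(best)  # prints "tree"
```

The point of the contrast is that the whole-image comparison never isolates parts: recognition falls out of how the query relates, globally, to everything seen before.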

How would a neural network do “right brain” things? I’m tempted to say that there may be a lot of parallel computing involved. However, I don’t think that I understand this process well enough, because my reasoning inevitably leads to the conclusion that we should just have a bazillion parameters that we try to fit onto every image that we see. This is clearly wrong. However, it does seem to me that if we’re somehow able to model “right brain” algorithms in neural nets, image recognition may improve substantially. More on this later (when I understand more about what is going on exactly).