On masks and SARS-CoV-2

This comment was initially a response to a YouTube video from Tech Ingredients – a channel I have thoroughly enjoyed in the past for its in-depth dives into the scientific and engineering aspects of various engineering-heavy DIY projects. Unfortunately, I am afraid that the panic around COVID-19 has prevented a lot of people from thinking straight, and I could not but disagree with the section on masks.


Hey there – Engineer turned biomedical scientist here. I absolutely love your videos and have been enjoying them a lot, but I believe that in this specific domain I have enough experience to point out what appears to me to have been overlooked, and which is likely to drastically change your recommendation on masks.

First of all, operating room masks and standard medical masks are extremely different beasts – if anything, the capacity of operating room masks to filter out small particles, close in size to the droplets that transport SARS-CoV-2 over the longest distances, is much closer to that of N95s than to that of standard medical masks:

[Figure: masks filtration efficiency]

Standard medical masks let through about 70% of droplets on the smaller end of the size range that can carry SARS-CoV-2. A decrease in exposure of that magnitude has not been associated with a statistically significant reduction in contagion rates for any respiratory-transmitted disease.

So why are standard medical masks recommended for sick people? The main reason is that in order to get into the air, the viral particles need to be aerosolized by a contaminated person coughing/sneezing/speaking. The mask does not do well at preventing small particles from getting in and out, but it will at least partially prevent aerosolization, especially of the larger droplets – which contain more virus and hence are more dangerous.

Now, that means that if you really want to protect yourself, rather than using a mask, even a surgical one, it's much better to use a full face shield – while useless against aerosolized particles suspended in the air, it will protect you from the largest and most dangerous droplets.

Why do medical people need them?
The reality is that without N95 masks, in immediate contact with patients, the risk of medical personnel getting infected is pretty high even in what are considered "safe" areas – as is the risk of them passing the virus on to colleagues and patients in those "safe" areas. If left to spread, then due to the over-representation of serious cases in the hospital environment, it is not impossible that the virus would evolve towards forms that cause more serious symptoms. Even if we can't fully protect the medical personnel, preventing those of them who are asymptomatic from spreading the virus is critical for everyone (besides, masks are also for patients – if you look at pictures from China, all patients wear them).

Second, why did the WHO not recommend the use of N95 masks to the general public at the beginning of this outbreak, whereas they did so for SARS-CoV in the 2002–2004 outbreak almost as soon as it became known in the West?

Unlike the first SARS-CoV, SARS-CoV-2 does not remain suspended in aerosols for prolonged periods of time: it does not form clouds of aerosolized particles that remain in suspension and can infect someone passing through the cloud hours after the patient who spread it has left. For SARS-CoV-2, the droplets fall to the ground fairly rapidly – within a couple of meters and a couple of minutes (where they can be picked up – hence hand washing and gloves). Because of that, unlike SARS-CoV, SARS-CoV-2 transmission is mostly driven by direct face-to-face contact, with virus-containing droplets landing on the faces of people at close range.

The situation changes in hospitals and ICU wards – with a number of patients constantly aerosolizing, small particles do not have time to fall, and the medical personnel are within a couple of meters of patients due to space constraints. However, even in the current conditions, N95 masks are only used for aerosol-generating procedures, such as patient intubation.

Once again, for most people, a face shield, keeping several meters of distance, and keeping your hands clean and away from your face are the absolute best bang for the buck there is, with everything else having sharply diminishing returns.


PS: since I wrote this post, a number of science journalists have done an excellent job of researching the subject in depth and writing up their findings in an accessible manner:

In addition to that, a Nature study has recently been published indicating that while masks are really good at preventing the formation of large droplets (yay), when it comes to small droplets (the type that can float for a little while), they are not that great for influenza. The good news is that for coronavirus, since few droplets of that size are formed, masks work well at containing any type of viral particle emission: Nature Medicine Study.

(Anti-) Marie Kondo

For Christmas, my GF decided that the best thing to do was to offer me the book about tidying up by Marie Kondo. You know, the inspirational TED speaker who converted from helping people declutter their apartments to helping them clutter them with useless things, such as crystals.

Beyond the pure and brutal irony of selling magic crystals after having gained fame helping people get rid of things they didn't need, my main issue with her approach is that she sees tidying up as a one-off effort rather than a process.

The issue with that is that it creates an insane activation barrier for people who actually want to clean up (aka "I will never start because I need a lot of time to do this and I don't have it"), and given the amount of work, people are likely to run out of steam and lose motivation before it's done.

Instead of that, I am more in favor of the "steel milling approach". If you try to remove all the unwanted steel in a single cut, you will cut too deep and break the machining bit. Instead, you make a multitude of passes, each of which removes some unwanted steel, revealing the piece level by level.

I have quite often been told that I am tidy and that I keep my apartment in good order. So here is my take on the matter (as a bonus, without any attempt to start a sect or to sell you the idea that a clean apartment will change your life and bring the perfect partner into it).

As with many things I've seen work out in the long run, this approach is not motivated by the final result but by developing an ongoing process. Instead of focusing on the goal, the focus should be on the process – the daily and weekly actions that tidy up and maintain the tidiness.


Realistically, for those of us living in insane climates or not having the money to buy the clothing we like, the "does it bring joy" criterion is more of a meme than anything else. Some clothes are not the best, but they are the ones we have right now, and what point is there in spending more money on something that is not guaranteed to be better or more satisfying in the long run?

I suggest instead the MMORPG approach. You have a set or two appropriate for each environment you encounter often – for instance winter, inter-season, summer, running, skiing. You have slots onto which gear goes: shoes, socks; pants, underwear; on-body layer, intermediate layer, second intermediate layer, top layer; gloves, scarf, headwear. Depending on how frequently you wash your stuff and how long you need to go between washes, you need a certain number of items to cover the slots between laundries. And that's basically it. You don't go above that, except for maybe one or two items for safety, and you discard items as soon as you don't need them anymore.
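The slot arithmetic behind this is trivial, but making it explicit helps; here is a toy sketch with made-up slot names and numbers (nothing here is prescriptive):

```python
# Toy sketch of the "MMORPG wardrobe" idea: each environment defines gear
# slots, and your laundry cycle determines how many copies of each slot
# item you need. All slot names and numbers are illustrative.
import math

SLOTS = ["shoes", "socks", "pants", "underwear",
         "base_layer", "mid_layer", "top_layer", "gloves", "scarf"]

def copies_needed(days_between_laundries, worn_days_per_item=1, spares=1):
    """Copies of one slot item needed to cover a laundry cycle, plus spares."""
    return math.ceil(days_between_laundries / worn_days_per_item) + spares

# Washing once a week, changing socks daily, one spare pair:
socks = copies_needed(7, worn_days_per_item=1, spares=1)  # 8
# Pants worn ~3 days each before washing:
pants = copies_needed(7, worn_days_per_item=3, spares=1)  # 4
```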

For leather shoes, it's a good idea to have 2–3 pairs and rotate them to spread the wear. Similarly, for shoes it makes sense to invest in better quality and perform sole repairs to extend their lifespan. It'll be cheaper in the long run, and as a bonus you won't have to break your shoes in over and over again.

Don't fold your outer/intermediate-layer clothes; hang them if you have space. That allows anti-moth products to permeate them better, lets them dry if they are still slightly damp, and makes it easier to see their state when you look at them. The only exception for me is running clothes, because of how thin they are.

Put away off-season clothing (ski gear in summer; shorts in winter) as well as items that you use rarely (bedding for guests), preferably vacuum-compacting them and adding some anti-moth paper (yeah, moths are super annoying to get rid of and can fly in during the warm months). For fancy clothing, it also makes sense to use hanging garment covers, to avoid too much friction on them. Bonus – putting clothes away after laundry takes much less time than folding them would, so it happens faster and with less strain on your brain.

You can totally wash colors and whites together and dry them together, provided you throw in a color-absorbing sheet.

Beware of what you buy, because you will be storing it.

Have separate bins for dirty and clean clothing.


Papers:

This one is easy. Those you need to keep – for instance for return policies, insurance, contracts, or disputes with banks – go into a plastic pocket and then into a binder.

Better even – scan them so that you have an index of things; keep only the ones you truly need in the original.

Leave the ones you need to process out, on a table. Seeing them there will annoy you into processing them ASAP. DO NOT PUT THAT OFF. Better – designate a day of the week when you get them done.

Small stuff:

That one is hard. Decorations should go to the places they are supposed to occupy.

Those rather small items that you need from time to time should be stored in boxes or drawers. Otherwise they will accumulate dust.

Memorabilia from the past should be stored in folders and boxes and be put away as well.

The general principle of storage is the same as in computers.

What determines where something is stored is the time you need to access or put it away, and the frequency of use. The more often you need things, the more easily accessible they should be. For some things, it often doesn't make sense to buy them at all – renting will be cheaper, and the absence of clutter will feel great.

Besides, my biggest beef with Marie Kondo is with the concept of joy. Life is not all about joy. Some of your items don't bring joy, but they are important reminders to yourself, or they are highly functional. For instance, winter anti-slip boots: you need them twice a year, they are ugly AF, but you are happy to have them when you need them.

Overall, there is a balance to be found between order and chaos. Too orderly, and it looks like a prison cell or a hospital. Too chaotic, and it looks like a dumpster. While in Japan the super-clean aesthetic of Sama-Zama is considered a reference, for most Westerners it is too clean and sterile. A balance between order and chaos is to be found and struck, and it is personal to everyone.

Energy and Co

Now that we have covered the basics, let's get to the pseudo-guru stuff: energy. Marie Kondo actually seems to know nothing about the "energy" of items, or about how much it differs between people and cultures.

This is because the energy need not physically exist – our minds treat it as existing and use it to better interface with the unconscious.

For instance, the resting clothes trick is something my grandfather taught me when I was five, so that I would put my clothes in order when going to bed. And I can guarantee that if you imagine yourself in the place of your clothes, resting on a chair or hanging straight is much better than being folded over several times.

Similarly, covering the heads of plush toys or photos in order to avoid their eyes is the stuff of nightmares – reminiscent of the worst horrors of executions of prisoners and civilians. You need to face your decisions. Besides, toys are better off donated to find a new life, and photos are better shredded and carried off by the river of time into the past where they belong.

More problematic to me is the concept of having to leave things behind in order to move forward. As we advance in life, we accumulate memories – some joyful, some bittersweet, some nostalgic, some bringing on reflection. Trying to clear them all just to move forward will get us only to sterile rooms, without past or future, without a personality or wisdom to speak of.

Don't discard the things that forged you or that give you a glimpse into your past self. They define you and make you who you are. They'll help you make sense of your past and guide you forward.

As you age, you will be walking into the valley of the memories of a life well lived: lessons learned and emotions lived – all that made you alive, and all that will allow you to face death, when the time comes, without fear or regret.

You've poured energy and life into crafting these artifacts, sometimes on purpose, sometimes by accident. They are there to give it back to you in times of need – it only makes sense to keep them, and to keep them visible to yourself.

What I’ve learned after recording every single transaction for a year

After a couple of years of watching my finances not faring as well as I expected them to, I finally decided to go beyond the usual "just stop doing X" internet advice and the basic analysis capabilities that the most common accounting platforms provide.

To do so, I did something called analytical accounting. I split every transaction into meaningful categories (Groceries:Staples, Groceries:Dairy, Groceries:House Cleaning, …) and assigned it a tag explaining why the expense took place. Examples of such tags: proc_daily_burn for the most basic groceries, proj_moving_houses for the project of moving between houses, or proc_relationship_X for drinks/dining out with a friend. I also recorded what I paid for someone when I invited them out, as well as when someone invited me, in order to have a more global view of my expenses.
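To make the scheme concrete, here is a minimal sketch of how such a ledger can be sliced, reusing the tag names from above; the transactions, amounts, and category paths are made up for illustration, not my actual accounting data:

```python
# A minimal sketch of the analytical-accounting scheme described above:
# each transaction carries a category path ("Groceries:Dairy") and a tag
# explaining *why* it happened. The transactions and amounts are made up.
from collections import defaultdict

transactions = [
    {"amount": 12.50, "category": "Groceries:Staples", "tag": "proc_daily_burn"},
    {"amount": 4.80,  "category": "Groceries:Dairy",   "tag": "proc_daily_burn"},
    {"amount": 85.00, "category": "Transport:Van",     "tag": "proj_moving_houses"},
    {"amount": 32.00, "category": "Dining:Drinks",     "tag": "proc_relationship_X"},
]

def totals_by(entries, key):
    """Aggregate amounts by any field: the point of the scheme is that the
    same ledger can be sliced by category (what) or by tag (why)."""
    totals = defaultdict(float)
    for entry in entries:
        totals[entry[key]] += entry["amount"]
    return dict(totals)

by_tag = totals_by(transactions, "tag")            # e.g. proc_daily_burn ~ 17.3
by_category = totals_by(transactions, "category")  # the "what" view of the same data
```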

Here is what I’ve learned:

  • Don’t pay for dates/partners if they aren’t paying similarly.

  • A car is expensive AF; the train is pretty much as bad; carsharing is the best.

  • Dining out creeps up really fast, and alcohol – drunk out or at home – builds up to impressive amounts of money really fast.

  • Of all sports, skiing is the most expensive, followed by swimming (mostly the pool and the trainer), whereas running by yourself is shockingly cost-effective, despite the price of shoes.

  • A car is fucking expensive, especially when renting is involved, and trying to offset the cost by car-sharing is pretty hard and often doesn't work at all. That being said, per month, my rental expenses are comparable to the price of just owning a car in Switzerland – before repairs or gas.

  • I should stop going out with friends with whom I spend too much in one sitting and do more of the outings where I spend little for huge amounts of enjoyment. In practice, that means cutting down on outings with friends who drink a lot and eat at expensive restaurants.

  • Similarly, the bang for the buck of hosting a party at home and cooking for it is significantly higher than that of eating out.

  • Fanciness is about the last thing I need in my life at this stage. A dress watch, picking up the bill for friends when going out, offering car rides/help for free – these come naturally, but if the favor is not regularly returned, it doesn't make sense to keep going. Similarly, collecting wines – expensive ones, or too many of them – is costly and brings little joy for the associated price. In every hobby there is a point of diminishing returns, and it makes no sense to go beyond it.

  • My godfathering is an excellent example of massive enjoyment/meaningfulness for a relatively limited price – along with my relationship with my current GF, and the visit to Paris with a friend of mine who loves running and was stopping by on her way from SF.

  • The amount of catching up to do when you move to a new country with basically nothing is massive. I am not yet done, and it has already cost me just shy of 3k.

  • Healthcare in Switzerland is expensive AF, but still a tiny amount compared to the US.

  • I am fortunate to have a housemate to split rent with and to count on if I mess up my accounting.

  • Debt snowballs fast. I ended up paying 3k total that was due to debt to banks: about 700 in interest, 800 in exchange rates, and about 1k in service charges (paying with the wrong card, going over the limit, …), on top of the standard 380 that I was supposed to pay the banks for keeping my accounts open. Basically a 10x multiplier, due to debt.

Now, looking at where to save over the coming year:

  • take the car as little as possible, carsharing whenever possible, especially when going to my parents' city.
  • with less travel for leisure, expenses should drop a bit as well.
  • when traveling, at least maximize memorability for a given price.
  • not going out with friends and my GF as much, not eating and drinking as much as they do, and not picking up the bill will help tremendously.
  • not being fancy with wines and beers is another great way of saving.
  • keep out of debt, above all else.

What I learned about accounting:

  • forcing yourself to write down expenses is hard in the long run; it gets easier once you cut down on the expenses you are unhappy about and start seeing the ledger as a "where do I stand / how much can I afford to spend this month after all the bills" kind of tool.
  • tags are insanely powerful, but they require thinking about what to put in them and some planning ahead/re-tagging. In my case, the scheme ended up being process/project as the main motivation, plus location. Yet remembering the exact spelling of tags is far from trivial. Splitting transactions is awkward too and leads to cross-contamination, given that it's not always possible to put tags on the splits of the main expense.
  • splitting into categories, as well as assigning tags/owners, is hard – you need to know in advance what kind of analysis you will want to do and keep it highly consistent.
  • writing meaningful notes about "whom" you paid is really helpful as well.
  • what you have the impression you've spent on and what you really spent on have little to do with each other. You tend to forget small recurring expenses and instead focus on the big one-off ones, memorable because you gnashed your teeth while paying for them (right after a big resolution, or a high fraction of your available money at the time), whereas a steady flow of smaller or intermediate ones tends to fly under the radar.
  • just as with weight gain, it's hard to see what leads to accruing debt – where a previous poor decision ends and a more recent one starts.

Overall remarks:

Just as with weight loss, it is easy to get the impression that you have a lot of money at your disposal and start spending it, only to realize it flies away way too fast and that you've wasted it all while forgetting about an important recurring or planned charge.

On alcohol specifically: within groceries, there were two major spikes – one for the wine fair, one for my trip to Avignon and Châteauneuf-du-Pape. For dining out, there is a noticeable spike with a childhood friend in January, whom I was seeing for the first time in years, and another one while my ex was visiting from the US for a month. Consumption dropped drastically after the September analysis of where my money was going, and once I was no longer traveling/visiting friends.

Project-wise, the most imposing ones were my ex's visit (4.5k), followed by the Interrail trip across central Europe (2.5k) – which is insane considering the difference in duration, location, and excitement between the two; then furnishing a new apartment (1.9k), with Christmas coming in on its tail (1.4k), a trip to Portugal with the lab (1k), a smartwatch (1k), Paris with the US friend (360), skiing equipment, watches, the pure move to Bassenges (700), fine wines (650), skiing with Fred (650), the move into my friend's apartment right after coming back from the US (580), Avignon to see a high school friend (566), audio equipment (561), and skiing with dad (469). Madrid with my GF was surprisingly cheap (350), about the level of the Strasbourg visit (312 on average), on par with the weekends with Nat in Strasbourg (cherry season + October) and slightly below weekends in Switzerland (~200), with BBQ thermometers clocking in around 500 (BBQ is an expensive hobby after all…).

Relativism is toxic – some expenses are high, but that doesn't mean a fraction of their cost burned on small things is "no biggie".

To fight it, define a base amount (for instance, a meal at my favorite Nepalese restaurant in Baltimore), and every time you make an expense, ask yourself whether you are going to get more pleasure from that expense than from the base amount – whether the expected trade-off is worth it. You'll develop a keen sense of what is worth what to you.
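As a toy illustration (with made-up figures, not my actual reference meal), the heuristic boils down to comparing pleasure-per-unit-of-money against the reference:

```python
# Toy version of the "base amount" heuristic: anchor every discretionary
# expense to a reference purchase you know you enjoy. The costs and the
# "pleasure" figures below are made up for illustration.
BASE_COST = 15.0      # e.g. the favorite restaurant meal
BASE_PLEASURE = 8.0   # subjective enjoyment, on whatever scale you like

def worth_it(cost, expected_pleasure):
    """Is the pleasure-per-unit-of-money at least as good as the reference?"""
    return expected_pleasure / cost >= BASE_PLEASURE / BASE_COST

worth_it(60.0, 9.0)   # fancy bottle of wine: barely more pleasure, 4x the cost
worth_it(10.0, 7.0)   # cooking dinner with friends: cheaper and nearly as fun
```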

Adjusting your sense of what is an acceptable expense – aka overcoming the legacy of being broke and sharply short of money. You need to make peace with not having had more room or more savings, to avoid compensating now for the things you didn't buy for yourself and for the frustration of the lack of money over all those years.

But perhaps most importantly, the devil is in the details. Your memory is imperfect: it does not register the daily groceries you buy almost every day for 5–10 euros as anything significant, and yet they add up to 300 – a plane ticket you would definitely have remembered.

Scale-Free networks nonsense or Science vs Pseudo-Science

(this article's title is a nod to Lior Pachter's vitriolic arc of three articles with similar titles)

Over the last couple of days I was engaged in a debate with Lê from Science4All about what exactly science is, which spun off from his interview with an evolutionary psychologist and my own view of evolutionary psychology, in its current state, as a pseudo-science.

While not necessarily always easy, and at times quite heated, this conversation was quite enlightening and led me to try to lay my thoughts down in writing.

The recent paper showing that scale-free networks are not that widespread in the real world (an argument I first got the gist of from Lior Pachter's blog back in 2015) helped me formalize a little better what I feel a pseudo-science is.

Just like the models and theories within the scientific method itself, being a scientific approach is not something that is defined or proved directly. Instead, similarly to the NIST definition of random numbers through a battery of tests that all need to be passed, a scientific approach is often defined by what it is not, whereas pseudo-science is defined as something that tries to pass itself off as a scientific method but fails one or several of the tests.

Here are some of my rules of thumb for the criteria defining pseudo-science:

The model is significantly more complicated than what the existing data and prior knowledge warrant. This is particularly true for generative models that do not build on deep pre-existing knowledge of the components.

The theory is a transplant from another domain where it worked well, brought over without all the correlated complexity and without justification that the transposition is still valid. Evolutionary psychology, for instance, is a transplant from molecular evolutionary theory.

The success in another domain is advanced as the main argument for the applicability/correctness of the theory in the new domain.

The model claims are non-falsifiable.

The model is not incremental/emergent from a prior model.

There are no closely related, competing models that are considered when the model is applied to make choices.

The cases where the model fails are neither defined nor acknowledged. For evolutionary psychology, the modification of the environment by humans; the same goes for scale-free networks.

Back-tracking on claims without changing the final conclusion. This is different from refining the model, where a change in the model gets propagated to the final conclusion and that conclusion is then re-compared with reality. Sometimes the model is later mended to align with reality again, but at least for a period, it is acknowledged as false.

Support by a cloud of plausible but already-refuted claims, rather than by a couple of strong claims that are currently hard to attack.

The defining feature of pseudo-science, however, especially compared to merely faulty science, is its refusal to accept criticism of or limitations to the theory and to change its predictions accordingly. It always needs to fit the final maxim, no matter the data.
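In the spirit of the NIST analogy, the criteria above can be sketched as a battery of predicates that all must pass, where failing any single one raises the flag. The checks and the example "model" below are crude placeholders, not a serious classifier:

```python
# A toy formalization of the "battery of tests" framing, in the spirit of
# the NIST randomness test suites: an approach only counts as scientific
# if it passes every test; failing any single one flags it.
# The checks and the example model are deliberately crude placeholders.
CHECKS = {
    "falsifiable": lambda m: m["falsifiable"],
    "failure_modes_acknowledged": lambda m: m["failure_modes_defined"],
    "complexity_warranted_by_data": lambda m: not m["overcomplicated"],
}

def failed_checks(model):
    """Names of the tests the model fails."""
    return [name for name, test in CHECKS.items() if not test(model)]

def looks_like_pseudoscience(model):
    # all tests must pass; a single failure is enough to raise the flag
    return len(failed_checks(model)) > 0

# A caricature of the criticisms above, encoded as flags:
suspect_theory = {"falsifiable": False,
                  "failure_modes_defined": False,
                  "overcomplicated": True}
```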

Synergy from boot on Ubuntu

This one seemed quite trivial per the official blog, but the whole pipeline gets a bit more complicated once SSL enters the game. Here is how I made it work with Synergy on Ubuntu 14.04:

  • Configure the server and the client with the GUI application
  • Make sure the SSL server certificate fingerprint is stored in ~/.synergy/SSL/Fingerprints/TrustedServers.txt
  • Run sudo -su myself /usr/bin/synergyc -f --enable-crypto my.server.ip.address
  • After that check everything was working with sudo /usr/bin/synergyc -d DEBUG2 -f --enable-crypto my.server.ip.address
  • Finally, add the line greeter-setup-script=sudo /usr/bin/synergyc --enable-crypto my.server.ip.address to /etc/lightdm/lightdm.conf, under the [SeatDefaults] section

Why shouldn't you do it?

Despite the convenience, there seemed to be a bit of interference with keyboard commands and command interpretation on my side, so since my two computers are side by side and I have a USB switch button from before I got Synergy, I've decided to manually start Synergy every time I log in.

Writing a research paper with ReStructuredText


As part of my life as a Ph.D. student, I have to read a large number of scientific papers, and I have seen a couple of them being written. What struck me is that these papers have a great deal of internal structure (bibliographic references, references to figures, definitions, adaptation to a particular audience, journal, etc.). However, they are written all at once, usually in a Word document, in such a way that all that structure lives, and can only be verified, in the writer's head.

As a programmer, I am used to organizing complex structured text as well – my source code. However, experience has shown me that relying on what is inside my brain to keep the structure of a project together works only for a handful of lines of code. Beyond that, I have no choice but to rely on compilers, linters, static analysis tools, and IDE features to help me deal with the logical structure of my program and to prevent me from planting logical bombs and destroying one aspect of my work while I am focusing on another. An even bigger problem is keeping myself motivated while writing a program over several months and learning the new tools I need to do it efficiently.

From that perspective, writing papers in a Word file is very similar to implementing high-level abstractions in very low-level code. So low-level, in fact, that you are not allowed to import from other files (to automatically declare common abbreviations in your paper), define functions, or import at execution time. Basically, the only check you can rely on is manuscript review by the writers and by the editors/reviewers of the journals where the paper is submitted. Well, unless they get stuck in an infinite loop.


So I decided to give it a try and write my paper in the same way I would write a program: using modules, declarations, import, compilation. And also throwing in some comments, version control and ways to organize revisions.

Why ReStructuredText?

I loved working with it when documenting my Python projects with Sphinx. It allows quite a lot of the operations I was looking for, such as the .. include:: and .. replace:: directives. It is well supported in the PyCharm editor I use, letting me fire up my favorite dark theme and go fullscreen to remain distraction-free. It can also be translated with pandoc both to .docx for my non-techy professor and to LaTeX for my mathy collaborator.

It also lets me type formulas in raw LaTeX notation quite easily, using the .. math:: directive.

How did it go?

Not too bad so far. I had some problems with pandoc's ability to convert my .rst file tree into .docx, especially failures on the .. include:: directive and on citation formatting (link corresponding issues). There were also some issues with rendering .png images in the .docx output (link issue). In the end, I had to translate the .rst files into HTML with the rst2html tool and then convert the HTML to .docx. For now, I am still trying to gauge how much of an advantage this workflow gives me.

After some writing, I've noticed that I am now being saved from dangling references. For instance, at some point I wrote a reference [KimKishnoy2013]_ in my text, and while writing the bibliography I realized the paper had come out in 2012, so I defined it there as .. [KimKishnoy2012]. The rst compilation engine then threw an error: Unknown target name: "kimkishnoy2013". Yay, no more dead references! The same is true for references defined in the bibliography but not used in the text.

Now, a shortcoming of this method of writing is that inter-part transitions do not come naturally. This can be easily mitigated, once the writer's block has been overcome by writing all the parts separately, by opening a compiled HTML or .docx document and editing the elements so that they align properly.

An additional difference from the tools developed for reviewing code is that the information density and consistency of programming languages are closer to mathematical notation than to human-readable text, with all the redundancy the latter needs for proper understanding. A word change or a line change is a big deal in programming; it isn't so important in writing, and all the annotation and diff tools built around that assumption are not very useful.

On the other hand, this is related to the fact that human language is still a low-level representation: git is about as useful for reviewing prose as it is for reviewing binaries, rather than as useful as it is for reviewing source code.

Over time, two significant problems emerged with this approach. First – incorporating revisions. Since the other people in the revision pipeline use MS Word's built-in review tools, in order to address every single revision I have to find the location in the file tree where the revision needs to be made, then correct it. Doing this once is pretty straightforward. Doing it hundreds upon hundreds of times, across tens of revisions by different collaborators, is another thing altogether, and pretty tiresome.

The second problem is related to the first. When the revisions are more major and require re-writing and re-organizing entire parts, I need to edit the parts one by one, then figure out which part's contents go into which new part. Which is a lot of side work for not a lot of added value in the end.

What is still missing?

  • Conditional rendering rules. There are some tags that I would want to see when rendering my document for proofreading and correction (like part names, my comments, reviewer comments), but not in the final "build".

  • Linters. I would really like to run something like the Hemingway app on my text, as well as some kind of Clonedigger to figure out where I tend to repeat myself over and over again, and to make sure I only repeat myself when I want to. In other terms: automated tools for proof-reading the text and figuring out how well it is understood. It seems I am not the only one to have had the idea: Proselint's creators implemented a linter for prose in Python. Something I am quite excited about, even though such tools are still in their infancy because of how liberal natural language is compared to a programming language. We will likely see a lot of improvements in the future, with the developments in NLP and machine learning. Here are a couple of other linter functions I could see being really useful:

    • Check that the first sentence of each paragraph describes what the paragraph will be about.
    • Check that every sentence can be parsed (subject-verb-object-qualifiers).
    • Check that there is no abrupt change in vocabulary between adjacent sentences.
    • Check for digressions and build a narration tree.
  • Words outside common vocabulary that are left undefined. I tend to write for several very different audiences, about topics they are sometimes very knowledgeable about and sometimes not at all. The catch is that, since I am writing about those topics, I am knowledgeable about them, to the point that I sometimes fail to realize that some words might need defining. If I had an app that shows me the rare words I introduce without defining, I could rapidly adapt my paper to a different audience or, as reviewers like to ask, “unpack it a bit”.

  • Chaining with other apps. There are already applications that do a great job of structuring citations and references into the desired format. I need to find a way to pipe the results of the .rst text compilation into them, so that they can adapt the citation framework to be consistent with the publication we are writing for.

  • Skeptic module. I am writing scientific papers. Ideally, every assertion in the introduction should be backed by an appropriate citation, and every paragraph in the methods and results sections should be backed by the data, either in the figures or in the supplementary data.

  • Proper management of mathematical formulas. They tend to get really broken. LaTeX is the answer, but it would be nice if the renderings of formulae could also be translated into HTML or docx, which has its own set of mathematical notations (MS Office has always had to do things differently from open-source standards).

  • A way to write from requirements. In software we have something we refer to as unit tests: pre-defined behaviors we want our code to implement. Accumulating unit tests is a tedious process, but it is critical for building a system and validating that, after a change, our software still behaves the way we expect it to. In writing, we want to transmit a certain set of concepts to our readers, but because proofreading time is so scarce, we regularly fail at that task, especially when we fail to notice that the 100th-plus revision has left a concept referred to in the paper undefined. Revising without such checks is a little like acting as a computer and executing the software in your head: it works well for small pieces, but edge cases, and your knowledge of what the program should do, get in the way.
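The "Clonedigger for prose" idea above can be sketched with a few lines of standard-library Python: a crude n-gram counter that flags word sequences an author repeats. The function name and thresholds are my own illustration, not an existing tool.

```python
from collections import Counter
import re

def repeated_phrases(text, n=3, min_count=2):
    """Find n-word phrases occurring at least min_count times.

    A toy stand-in for a prose clone detector: it flags spots where
    the author reuses the same wording, so they can decide whether
    the repetition is deliberate.
    """
    words = re.findall(r"[a-z']+", text.lower())
    grams = Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return {" ".join(g): c for g, c in grams.items() if c >= min_count}

sample = ("We repeat ourselves over and over again. "
          "Good prose should not repeat ourselves over and over again.")
hits = repeated_phrases(sample)
# flags phrases such as "over and over", which occurs twice
```

A real tool would normalize inflections and skip common idioms, but even this naive version surfaces the habit the author describes.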
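The "unit tests for writing" idea could look something like the sketch below: a check that every concept a paper relies on is still mentioned after heavy revision. The function and the concept list are hypothetical, purely to illustrate the workflow.

```python
def undefined_concepts(text, required_concepts):
    """Return the required concepts the text no longer mentions.

    Analogous to a failing unit test: required_concepts is the
    pre-defined list of terms the paper must still introduce, and
    any hit means a revision has silently dropped a definition.
    """
    lowered = text.lower()
    return [c for c in required_concepts if c.lower() not in lowered]

draft = "We model the interactome as a weighted graph and compute flow."
missing = undefined_concepts(draft, ["interactome", "weighted graph", "null model"])
# missing would report "null model" as no longer defined in the draft
```

A production version would need to distinguish a term being *defined* from merely being *mentioned*, but even substring checks would catch the 100th-revision regressions described above.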

Dependency of a dependency of a dependency

Or why cool projects often fail to get traction

Today I tried to install a project I have been working on for a while onto a new machine. It relies heavily on storing and querying data in a “social network” manner, and hence is not necessarily very well adapted to relational databases. When I started working on it back in early 2013, I was still a fairly inexperienced programmer, so I decided to build it on a new technology: the neo4j graph database. And since I was coding in Python, was fairly familiar with the excellent SQLAlchemy ORM, and was looking for something similar for graph databases, my choice fell on the bulbflow framework by James Thronotn. I complemented it with the JPype native Python-to-Java binding for quick insertion of data. After the first couple of months of developer's bliss, with everything working as expected and being built as fast as humanly possible, I realized that things were not going to be as fun as I had initially expected.

  • Python 3 was not compatible with the JPype library I was using to rapidly insert data into neo4j from Python. In addition, JPype was quickly dropping out of support and was in general too hard to set up, so I had to drop it.
  • The bulbflow framework in reality relied on the Gremlin/Groovy Tinkerpop stack implemented in the neo4j database, worked over a REST interface, and had no support for batching. Despite several promises from its creator and maintainer, batching never came to life, and I found myself involved in a re-implementation along those principles. Unfortunately, back then I had neither enough programming experience to develop a library nor enough time to do it. I instead settled for a slow insertion cycle (more than compensated for by the time gained on retrieval).
  • A year later, neo4j released version 2.0 and dropped the Gremlin/Groovy stack my code relied on. They did, however, have the generosity to keep the historical 1.9 maintenance branch going, so, given that I had already poured something like three months of full-time work into configuring and debugging my code against that architecture, I decided to stick with 1.9 and maintain it.
  • Yesterday (two and a half years after the start of development, by which time I had poured the equivalent of six more months of full-time work into the project), I realized that the only version of neo4j 1.9 still available for download to common mortals, those who do not know how to use Maven to assemble the project from the GitHub repository, crashes with a “PermGen: java heap out of memory” exception. Realistically, given that I am one of the few people still using the 1.9.9 community edition branch, and one of the even fewer likely to run into this problem, I don't expect the developers to dig through all the details, find where the error occurs, and correct it. So at this point my best bet is to put a neo4j 1.9.6 onto GitHub and link to it from my project, hoping that the neo4j developers will show enough understanding not to pull it down.

All in all, the experience isn’t that terrible, but one thing is for sure: next time I build a project that I can see myself maintaining in a year’s time and installing on several machines, I will think twice before using a relatively “new” technology, even if it is promising and offers a 10x performance gain. Simply because I won’t know how it will break and change over the coming five years, or what kind of effort maintaining my project’s dependencies will require.

Usability of fitness trackers: lots of improvement in sight

Fitness trackers and other wearable tech are gaining more and more momentum, but because of the ostrich cognitive bias they are absolutely not reaching the populations that would benefit most from them. And, as per usual, ReadWriteWeb is pretty good at pointing this out in simple language.

To sum up, current fitness tracking has several shortcomings for the population it should target:

  • It is pretty expensive. A fitness band that does just step tracking can cost somewhere between $50 and $150. If you want something more comprehensive, such as one of Garmin’s multisport watches, you are looking at somewhere in the $300-$500 range. Hardly an impulse purchase for someone who is making under $30k a year and has kids to feed on that. Yet that is the group at highest risk of obesity and cardiovascular disease.
  • They generate a LOT of data that is hard to interpret unless you have some background as a trained athlete. Knowing your Vmax and heart-rate recovery profile after an effort is pretty cool and useful for monitoring your health and fitness, but you will never know how to interpret them unless someone explains it to you or you already knew it from a previous athletic career.
  • They do not provide any pull-in. As anyone with a bank account knows, savings come from repeated effort over time. The same goes for health capital. However, as anyone with a bank account also knows, when you hit hard financial times you check your account much less often than when everything is going well, simply because it is rewarding in the latter case and painful in the former. The same thing happens with health: people who lack health but are ready to work on it are self-conscious about it and need additional motivation to last through the periods where no progress is happening.
  • It does not respond to an immediate worry: it is one of those products that are “good to have”, but whose absence does not trigger an “I need it RIGHT NOW” feeling.


With that in mind, I decided to participate in MedHacks 1.0 last weekend. My goal was to develop something that would provide an emergency warning for users who are either at high risk of stroke or undergoing one, so that they would not find themselves isolated while having a stroke. With my team, we managed to hack together a proof-of-concept prototype in about 24 hours, which took us into the finals. To do this, we used an audio mixing board to amplify the signal, Audacity to acquire the data on a computer, FFT and pattern matching to process the data and filter out loss-of-contact issues, and an Android app able to send out a message or call for help if the pattern changed.

Now, those are very simple components that could be compressed into a single sensor woven into a T-shirt, with the data beamed to a phone for analysis in the background. We would need some machine learning to detect the most common anomalies, followed by validation of the acquired EKG by human experts.
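To give a flavor of the signal-processing step, here is a minimal sketch of extracting a dominant heart-rate frequency from a sampled signal. It is not the code from the hackathon (which used FFT via Audacity-acquired data); it is a naive discrete-Fourier scan in pure Python, with made-up sample rates, over the 0.5-4 Hz band where heart rates live (30-240 bpm).

```python
import math

def dominant_frequency(samples, sample_rate, freq_range=(0.5, 4.0), step=0.05):
    """Estimate the dominant frequency (Hz) of a signal.

    Scans candidate frequencies across freq_range and returns the one
    with the largest spectral magnitude; a crude stand-in for picking
    the tallest FFT peak in the heart-rate band.
    """
    best_freq, best_mag = freq_range[0], -1.0
    f = freq_range[0]
    while f <= freq_range[1] + 1e-9:
        # Correlate the signal against a complex exponential at frequency f.
        re = sum(s * math.cos(2 * math.pi * f * i / sample_rate)
                 for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * i / sample_rate)
                 for i, s in enumerate(samples))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_freq, best_mag = f, mag
        f += step
    return best_freq

# Synthetic "heartbeat" at 72 bpm (1.2 Hz), sampled at 50 Hz for 10 s.
rate = 50
signal = [math.sin(2 * math.pi * 1.2 * i / rate) for i in range(rate * 10)]
bpm = dominant_frequency(signal, rate) * 60
```

An anomaly detector along these lines would then compare the estimated rate (and the shape around the peak) against the user's baseline, flagging sustained deviations rather than single noisy windows.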

However, the combination of a cheap, persistently monitoring device and an app able to exploit it opens large possibilities for fitness tracking for those who need it most.

  • The reason to purchase and use the monitoring device is no longer fitness; it is basic safety. And it can be offered by someone who worries about your health.
  • The basic functionality is really clear: if something is going wrong with you, we will warn you; if something is going really wrong, we will warn someone who can check on you or come to your rescue.
  • We can build upon the basic functionality, introducing our users to the dynamics of fitness the way games introduce competitive challenges: gradually, leaving you time to learn at your own pace.
  • We have very precise access to the amount of effort: your heart rhythm will follow if you are doing a strenuous directed activity, and we will guide you through it.
  • We were able to build a prototype with very common materials. Miniaturization and mass production would let us hit the lowest market range, at a price where a smart piece of athletic clothing costs only marginally more than the same “non-smart” piece.

Sound interesting? I am looking for someone with clinical experience in heart disease, a hardware hacker with experience in wearables, and someone to drive customer prospecting and sales.