New: Saturation mixing ratio is the ratio of water vapor to dry air in a parcel of air at the point of saturation. This is essentially the mixing ratio at which the parcel begins to condense.

New: Mixing ratio is the ratio of water vapor to dry air currently present in a parcel of air.

New: Relative humidity is the ratio of the mixing ratio to the saturation mixing ratio (how close a particular parcel is to its point of saturation, expressed as a percentage).
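As a quick sketch of that ratio (the function name and sample values are my own, for illustration):

```python
def relative_humidity(mixing_ratio, saturation_mixing_ratio):
    """Relative humidity (%) as the ratio of the actual mixing ratio to
    the saturation mixing ratio (both in g water vapor per kg dry air)."""
    return 100.0 * mixing_ratio / saturation_mixing_ratio

# A parcel holding 7 g/kg when saturation occurs at 10 g/kg is at 70% RH:
rh = relative_humidity(7.0, 10.0)  # 70.0
```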




Extended: Extended my practical knowledge of Bayesian mathematics for probability-based computations.

New: Understood how Taylor series approximations show that the difference between adjacent pixels is a reasonable approximation of the image gradient at that point.
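A minimal sketch of the idea (function names and pixel values are mine): the Taylor expansion gives f(x+h) - f(x-h) = 2h f'(x) + O(h^3), so a central difference of neighbors approximates the derivative, and for adjacent pixels the step size is simply h = 1.

```python
def central_difference(f, x, h=1e-5):
    # Taylor: f(x+h) - f(x-h) = 2h f'(x) + O(h^3),
    # so the scaled difference of neighbors approximates the derivative.
    return (f(x + h) - f(x - h)) / (2 * h)

def pixel_gradient(row, i):
    # For adjacent pixels the step size is 1, so the central difference
    # reduces to half the difference of the two neighbors.
    return (row[i + 1] - row[i - 1]) / 2.0

# Derivative of x^2 at x = 3 is 6:
d = central_difference(lambda x: x * x, 3.0)

# Gradient estimate in a row of intensities at index 2: (22 - 12) / 2 = 5.0
g = pixel_gradient([10, 12, 16, 22, 30], 2)
```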


New: Weather forecasters often leverage something called the Skew-T diagram. Skew-T diagrams are plots of data acquired from radiosondes (devices typically lifted by weather balloons).

More information here: http://www.theweatherprediction.com/thermo/skewt/

New: I learned how to read wind barbs. These are graphical representations of wind direction and magnitude given in a short-hand form.
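The standard decoding (a pennant/flag is 50 knots, a full barb 10, a half barb 5) can be sketched as a small helper (function name is mine):

```python
def barb_speed(flags, full_barbs, half_barbs):
    """Decode wind speed in knots from barb components:
    each pennant (flag) = 50 kt, full barb = 10 kt, half barb = 5 kt."""
    return 50 * flags + 10 * full_barbs + 5 * half_barbs

# One flag, two full barbs, and one half barb read as 75 kt:
speed = barb_speed(1, 2, 1)  # 75
```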

New: I learned the meaning of the terms isotherm and isobar.

New: I learned of the location of NOAA-based Skew-T diagrams and I learned how to read them (for the most part).


New: The GRAIL mission is a mission taking place around the moon, comprised of two separate satellites. The mission’s goal is to map the gravitational contour of the moon by leveraging a Ka-band (part of the microwave region of the EM spectrum) emitter whose sensitivity is so great it can be used to determine the relative displacement of the two satellites down to approximately one micrometer (the size of a human red blood cell). As the satellites orbit the moon, their displacement is monitored and the relative gravitational effects are noted, lending insight into the internal makeup of the moon.

Extended: An ordinary least squares approach to finding the best-fit 2D line on a standard Cartesian coordinate system can be viewed in two discrete ways. The first, which I’ve always known, is to treat the task as an optimization problem whereby each individual point is plugged into a degree-1 polynomial as such:

y = mx + b

In this particular system, and using the least-squares approach, we’re trying to solve for the m and b which minimize the squared error over all points. This means

\operatorname{argmin}_{m,b} \sum{(y-mx-b)^2}, i.e., find the values m and b which minimize the sum of squared residuals.

This is done by computing the sum (as above) and taking its partial derivative with respect to each variable. So, where

f(m,b) = \sum{(y-mx-b)^2}

we set

\frac{\partial f}{\partial m} = 0 and \frac{\partial f}{\partial b} = 0

and solve the resulting system for the optimal m and b, where the optimal equation is expressed by y = mx + b.

However, another way to view the problem is through the covariance cov(x,y) = E[(x - E[x])(y - E[y])] of the x and y variables: the optimal slope is cov(x,y) over the variance of x, m = cov(x,y)/var(x), and the intercept follows as b = E[y] - mE[x].
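The covariance view can be checked numerically in a short sketch (function name and sample points are mine):

```python
def ols_fit(xs, ys):
    """Fit y = m*x + b by ordinary least squares using the covariance
    view: m = cov(x, y) / var(x), b = mean(y) - m * mean(x)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n
    var_x = sum((x - mean_x) ** 2 for x in xs) / n
    m = cov_xy / var_x
    b = mean_y - m * mean_x
    return m, b

# Points lying exactly on y = 2x + 1 recover m = 2, b = 1:
m, b = ols_fit([0, 1, 2, 3], [1, 3, 5, 7])
```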


New: The precise definitions of mean and variance depend on whether we are working with a probability mass function or a probability density function.

New: For a probability mass function, where the outcomes are discrete, the mean is simply E[x] = \sum_x{p(x)x}. For a continuous distribution (a probability density function), we are required to integrate the function to retrieve any notion of probability, so we leverage E[x] = \int{p(x)x\,dx}. It can be noted that the mean is the expected value of the distribution.

New: Variance is similar in that you need to know which type of distribution you’re computing over. In the discrete sense it is var_x = \sum_x{p(x)(x - E[x])^2}; again, the continuous (density) form is the integral: var_x = \int{p(x)(x - E[x])^2\,dx}. The computation makes sense when all values of x across the distribution are used in the equations.
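Both discrete formulas can be sketched directly (the fair-die example and function names are mine):

```python
def pmf_mean(pmf):
    # E[x] = sum over x of p(x) * x
    return sum(p * x for x, p in pmf.items())

def pmf_variance(pmf):
    # var = sum over x of p(x) * (x - E[x])^2
    mu = pmf_mean(pmf)
    return sum(p * (x - mu) ** 2 for x, p in pmf.items())

# A fair six-sided die:
die = {x: 1 / 6 for x in range(1, 7)}
mu = pmf_mean(die)       # 3.5
var = pmf_variance(die)  # 35/12, about 2.9167
```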

New: Instability is one of the primary ingredients for a thunderstorm. It’s defined by how readily a discrete parcel of air, once displaced from its current position, continues moving rather than returning to where it started. The dynamics appear to be the result of colder air above warmer air, which makes rising parcels less likely to fall back down because they remain warmer than the ambient air around them.

New: Heat transfer between air molecules is low, so a discrete parcel of air will cool and warm more as a result of the thermodynamics of air pressure than of the ambient air temperature. This means that parcels of air cool at a fairly constant rate as an adiabatic process. The adiabatic lapse rate, then, is the rate at which a parcel cools as it rises through the atmosphere due to changes in its internal pressure. As a corollary, once the parcel reaches a particular altitude its water vapor condenses, releasing latent heat and thereby lowering the lapse rate. Therefore, there are two adiabatic lapse rates: dry and saturated.
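A hedged numeric sketch of the two-rate picture (the rates are rough textbook figures I'm assuming: the dry adiabatic rate is about 9.8 °C/km, and the saturated rate varies with conditions; 6 °C/km is a common approximation):

```python
DRY_LAPSE = 9.8        # deg C per km before condensation (approximate)
SATURATED_LAPSE = 6.0  # deg C per km after condensation (assumed rough value)

def parcel_temperature(surface_temp_c, altitude_km, condensation_km):
    """Temperature of a rising parcel that cools at the dry rate up to
    the condensation level, then at the lower saturated rate above it."""
    dry_part = min(altitude_km, condensation_km)
    wet_part = max(0.0, altitude_km - condensation_km)
    return surface_temp_c - DRY_LAPSE * dry_part - SATURATED_LAPSE * wet_part

# A 30 C surface parcel condensing at 2 km, sampled at 5 km:
# 30 - 9.8*2 - 6.0*3 = -7.6 C
t = parcel_temperature(30.0, 5.0, 2.0)
```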

New: Due to the adiabatic lapse rate (see above), a parcel of air cools at a relatively fixed rate. If the ambient air at a particular altitude is cooler than the parcel, the parcel’s buoyancy will be high, giving it a stronger tendency to rise. This tendency produces massive columns of air (thunderheads) and strong updrafts within the column; the end result is strong downdrafts as the air cools to the point where it returns to earth, along with frozen water (hail), etc. This whole dynamic is what starts a thunderstorm.


  1. New: The maximum likelihood weighting factor (for use in things such as the Kalman filter) is expressed by the equation b = (M\sigma_{x}^2)/(M^2\sigma_{x}^2 + \sigma_{z}^2), where \sigma_{x}^2 is the mean square deviation of the values being measured and \sigma_{z}^2 is the mean square error of the instrument doing the measuring. This combination can be used to weight a particular measurement against its expected value (in, say, filters).
  2. Expanded: The mean squared error of the maximum likelihood estimator can be computed as well, meaning we can update our weighting-factor variables as we continue to gather more information from experiment; this realization is what led to the Kalman filter’s recursive, non-batch nature. The computation is:
    \sigma_{\hat{x}}^2 = (\sigma_{x}^2\sigma_{z}^2)/(M^2\sigma_{x}^2 + \sigma_{z}^2). This effectively updates the \sigma_{x}^2 error estimate in the weighting function above (see 1), and therefore offers recursiveness to the overall computation.
  3. New: The overall computation, with the weighting factor included, then becomes \hat{x}_n = E + b(y_z - E), where E is the estimated value of the reading and y_z is the actual measured value (from experiment).
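The three equations above can be combined into one recursive update. This is only an illustrative sketch (function names and the sample numbers are mine, and the estimate-update form E + b(y_z - E) used here assumes M = 1, matching the notes):

```python
def ml_weight(M, var_x, var_z):
    # Weighting factor from (1): b = M*var_x / (M^2*var_x + var_z)
    return M * var_x / (M * M * var_x + var_z)

def update(E, var_x, y_z, var_z, M=1.0):
    """One recursive step combining (1)-(3): weight the measurement y_z
    against the estimate E, then shrink the estimate's variance per (2).
    The estimate update E + b*(y_z - E) assumes M = 1."""
    b = ml_weight(M, var_x, var_z)
    new_E = E + b * (y_z - E)
    new_var = (var_x * var_z) / (M * M * var_x + var_z)
    return new_E, new_var

# With equal variances and M = 1, the weight is 0.5: the estimate lands
# halfway between prediction and measurement, and the variance halves.
x, v = update(10.0, 4.0, 12.0, 4.0)  # x = 11.0, v = 2.0
```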


Extension: a probability mass function is a function mapping discrete outcomes to their associated probabilities. The value of the function is directly the probability of a particular input. For continuous distributions, however, things aren’t as straightforward. After all, the probability of any exact value under a continuous distribution is essentially nil. As a result, we talk about ranges, and integration is therefore required to determine a probability. Hence, a density function is a function which must be integrated in order to derive probabilities.
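A small sketch of integrating a density over a range (the midpoint rule, the uniform density, and the names are my own choices for illustration):

```python
def probability(pdf, a, b, steps=10000):
    """Numerically integrate a density over [a, b] (midpoint rule)
    to turn density values into an actual probability."""
    width = (b - a) / steps
    return sum(pdf(a + (i + 0.5) * width) for i in range(steps)) * width

# Uniform density on [0, 10): p(x) = 0.1 everywhere, so
# P(2 <= x <= 5) = 0.1 * (5 - 2) = 0.3
p = probability(lambda x: 0.1, 2.0, 5.0)
```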

Reminded: A weighted average has a very simple form: \sum_s{s\,n(s)} / \sum_s{n(s)}, where n(s) is the weight on value s (when the weights sum to one, the denominator drops out).
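For example (function name and values are mine), with counts as the weights n(s):

```python
def weighted_average(values, weights):
    # sum(w * v) / sum(w); with weights that sum to 1 the denominator
    # is 1 and this reduces to the expected-value form.
    return sum(w * v for v, w in zip(values, weights)) / sum(weights)

# (1*1 + 2*1 + 3*2) / (1 + 1 + 2) = 9 / 4 = 2.25
avg = weighted_average([1, 2, 3], [1, 1, 2])
```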

New: A thermal is the term given to a rising column of air that lacks the moisture necessary to form a cumulonimbus.