Posted by: Jim Masterson | 07/14/2016

Writing an LR(1) Parser Generator

About twenty-five years ago, I wrote an LALR(1) compiler-compiler loosely based on the syntax in “Principles of Compiler Design” by Aho and Ullman.  I wrote that program using Kernighan and Ritchie’s C language.  A few years later, I tried the more ambitious effort of reprogramming the compiler-compiler in Stroustrup’s C++.  Unfortunately, it wasn’t a very “clean” version of C++.  Although C++ is a powerful OOP language, it’s hard to wring the older C-style constructs out of the code.

I’ve since lost access to C compilers, so my code lay fallow for many years.  A few weeks ago, I decided to try converting it to Java.  One of my current goals is to write a C-to-Java converter, and it would be a great help to have a modifiable compiler-compiler at my disposal.  That conversion became quite a task (it took about 77 Java classes).

LR parsers are very powerful, but are difficult (if not impossible) to write by hand.  I wrote the original compiler-compiler on an MS Windows-based machine, so I didn’t have access to either YACC or LEX.  This meant I had to bootstrap the LR parser table.  To my amazement, the bootstrap was successful.

For the Java conversion, I still had the last parse table that my C++ compiler-compiler used (along with the source productions and lex statements).  I was hoping to use this conversion effort as a model for my C-to-Java converter to follow.  Unfortunately, I made some design decisions during the C++ modification that didn’t make the Java conversion straightforward.  Computers back then didn’t have much memory, so I was writing various structures and objects out to files.  Obviously, I/O slows down processing, but it’s better than stopping completely with an out-of-memory error.  Modern computers don’t have those old memory restrictions, so I got rid of the old file I/O.  To make a long story shorter, the Java conversion is mostly done.  I’m now adding new features to make my future converter effort easier.  The complete syntax of the compiler-compiler (I called the program CCOM) is here.  The optional operator (?) is available for the regular expressions used by the lexer but not yet available for the production statements of CCOM.

The next step is to bullet-proof the program and add useful diagnostics.  LR parsers are great for identifying input strings that belong to a specific grammar but are poor at error handling.  The first occurrence of an error causes them to stop processing.  The trick, then, is to restart the scan and find as many errors as possible.  Thanks to ACM (Association for Computing Machinery), I have access to multiple papers describing LR parser error repair.  I plan to implement one or more of these algorithms in CCOM and report my success (or failure) here.
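
For context, the simplest recovery scheme in the literature is panic mode: report the error, discard input tokens until a likely synchronizing token appears, pop parser states until one of them can continue from that token, and then resume the normal shift/reduce loop.  The sketch below only illustrates that idea and is not CCOM’s actual code; the ParseTables interface, the Token record, and the token kinds are placeholders I made up for the example.

import java.util.Deque;
import java.util.EnumSet;
import java.util.Iterator;
import java.util.Set;

class PanicModeRecovery {

    enum TokenKind { SEMICOLON, RBRACE, EOF, OTHER }

    record Token(TokenKind kind, String text, int line) {}

    // Stand-in for the generated LR tables; not CCOM's real interface.
    interface ParseTables {
        boolean canShift(int state, TokenKind kind);   // does ACTION[state, kind] contain a shift?
    }

    // Tokens that usually end a statement and are safe places to resume parsing.
    static final Set<TokenKind> SYNC = EnumSet.of(TokenKind.SEMICOLON, TokenKind.RBRACE, TokenKind.EOF);

    // Report the error, skip ahead to a synchronizing token, and pop states
    // until the parser can shift again.  Returns the token to resume with.
    static Token recover(Deque<Integer> stateStack, Iterator<Token> input,
                         ParseTables tables, Token offending) {
        System.err.printf("line %d: syntax error near '%s'%n", offending.line(), offending.text());

        // 1. Discard input up to the next synchronizing token.
        Token tok = offending;
        while (!SYNC.contains(tok.kind()) && input.hasNext()) {
            tok = input.next();
        }

        // 2. Pop parser states until one of them can shift that token.
        while (stateStack.size() > 1 && !tables.canShift(stateStack.peek(), tok.kind())) {
            stateStack.pop();
        }
        return tok;   // the normal shift/reduce loop continues from here
    }
}

Real error repair goes further than this (the papers I mentioned insert or replace tokens to keep the parse alive), but panic mode is the baseline most texts start from.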

Posted by: Jim Masterson | 06/30/2012

Comparing Model Results with Data

One of the interesting things about the Kiehl and Trenberth 1997 paper is that it’s possible to make a simple climate model based on it.  This diagram (Figure 1) repeats the paper’s classic (and somewhat overused) Fig. 7, but here I’m only using the data values from the paper’s figure.

Data from Kiehl and Trenberth 1997 Fig 7

Figure 1

Of course, some of these values are off by a factor of two or more.  It’s also obvious that the figure’s intended audience wasn’t other scientists; otherwise these values would include error ranges.  In any case, it’s possible to get a feedback model to stabilize at these values.  I have built such a model in an Excel spreadsheet.  One obvious restriction is that Excel doesn’t allow self-referencing cells, so I created a macro to bypass this limitation.

This simple climate model allows us to play with various parameters.  For example, to simulate increasing greenhouse gases (GHG), all we need to do is narrow the atmospheric window (shown as ω in the diagram).  The atmospheric window shown here is set at about 10% (40/390).  Reducing the window’s size means more of the surface radiation is absorbed by the atmosphere.  The advantage of doing this is that we don’t have to worry about which GHG is causing the effect.  We also don’t have to worry about the CO2-H2O coupling in the enhanced greenhouse effect (EGHE), because narrowing the window handles everything nicely.
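
For readers who want to experiment without the spreadsheet, here is a bare-bones, stand-alone sketch of the same sort of feedback calculation, written in Java instead of an Excel macro.  It is my own simplification, not the actual spreadsheet: it keeps only the solar input, the albedo, the window fraction, and the back radiation, and it ignores the thermals, evapotranspiration, and solar absorption by the atmosphere shown in Figure 1, so its equilibrium numbers won’t match the figure exactly.  The iteration loop plays the role of the macro that works around Excel’s self-referencing-cell restriction, and the window and albedo arguments are the two knobs used in the experiments below.

public class EnergyBalance {

    static final double SIGMA = 5.67e-8;   // Stefan-Boltzmann constant (W/m^2/K^4)

    // Iterate the surface and atmospheric fluxes until they stop changing.
    // The loop does the job of the Excel macro: it keeps feeding the back
    // radiation into the surface balance until the model settles down.
    // Returns { surface temperature, atmospheric temperature } in kelvin.
    static double[] equilibrium(double solar, double albedo, double window) {
        double surfaceUp = 390.0;   // initial guesses (W/m^2)
        double backRad   = 324.0;
        for (int i = 0; i < 100_000; i++) {
            double absorbed   = solar * (1.0 - albedo);             // shortwave reaching the surface
            double newSurface = absorbed + backRad;                 // surface energy balance
            double newBackRad = 0.5 * (1.0 - window) * newSurface;  // atmosphere re-emits half of what it absorbs downward
            if (Math.abs(newSurface - surfaceUp) < 1e-9) break;     // converged
            surfaceUp = newSurface;
            backRad   = newBackRad;
        }
        return new double[] {
            Math.pow(surfaceUp / SIGMA, 0.25),   // effective surface temperature
            Math.pow(backRad   / SIGMA, 0.25)    // effective atmospheric temperature
        };
    }

    public static void main(String[] args) {
        double solar  = 342.0;                  // solar input from Figure 1
        double albedo = (30.0 + 77.0) / 342.0;  // about 31%
        double window = 40.0 / 390.0;           // about 10%

        double[] base   = equilibrium(solar, albedo, window);
        double[] ghg    = equilibrium(solar, albedo, window * 0.8);   // narrow the window ("more GHG"); 20% cut is an arbitrary example
        double[] darker = equilibrium(solar, albedo - 0.01, window);  // darken the planet slightly; also arbitrary

        System.out.printf("GHG case:    dT_atm/dT_surf = %.2f%n",
                (ghg[1] - base[1]) / (ghg[0] - base[0]));
        System.out.printf("Albedo case: dT_atm/dT_surf = %.2f%n",
                (darker[1] - base[1]) / (darker[0] - base[0]));
    }
}

In this stripped-down version, narrowing the window warms the atmosphere faster than the surface, and darkening the planet does the opposite, which is qualitatively the same behavior as the Excel model; the exact percentages depend on the fluxes I left out.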

What happens when we narrow the atmospheric window?  The surface warms, but the atmosphere warms too.  In fact, the rate of atmospheric warming is greater than the rate of surface warming.  I’ve played with these values, and the atmosphere warms at about 130% to 160% of the surface rate.

This effect is due to the physical arrangement of the feedback loop.  It’s a requirement of the model.

I happen to know that the atmosphere isn’t warming at this rate.  The atmosphere is warming, but at a rate that is less than the surface rate.  So I looked for other warming possibilities.

Another way to warm the surface is to reduce the planet’s albedo–that is, make the planet darker.  This causes more radiation to be absorbed by the planet, which will also increase the surface temperature.

The planet’s albedo is due to both clouds and surface reflectivity.  From the diagram, we can calculate it at about 31% ((30+77)/342).  This is roughly the planet’s albedo, although the value may be off by 10% or more.

If we alter the planet’s albedo, we get similar results, but instead of a 130% to 160% range, it’s 60% to 90%.  That is, the atmosphere warms at a rate of about 60% to 90% of the surface.  This is more in the ball park of what we know about the current rate of atmospheric warming.

There’s another way to alter the surface temperature, but that requires changing the Sun’s input.  We’ve been told many times that the Sun’s output (the solar constant) is constant, which supposedly precludes that possibility.

What happens to the rate of atmospheric heating if we combine both the greenhouse effect (GHE) and the reduced albedo?  It turns out that the atmosphere behaves as if the GHE were acting alone.

So we have three cases: 1) GHGs acting alone; 2) reduced albedo acting alone; and 3) GHGs and reduced albedo acting together.

In cases 1) and 3), we’ll get a 130% to 160% atmospheric warming rate; and in case 2) we’ll get a 60% to 90% atmospheric warming rate.

So let’s try some actual data.  My first choice is the temperature record from GISS.  Although there are some problems with using this record, it has a couple of things going for it–it’s one of the standard records, and its data are easy to find.  I show their surface temperature record in Figure 2.

Figure 2

Obviously, this record only runs to 2011, because we haven’t completed 2012 yet.  It also uses anomalies.  The referenced source claims that I shouldn’t need anything other than anomalies.  It also says that if I must use temperatures, then I can add 14 °C to the anomalies and convert them to temperatures.  I actually do need temperatures, so here is the same data expressed as temperatures.  (This is another reason why I picked gistemp–they specified their reference temperature, which makes it easy to convert from anomalies to temperatures.)

Figure 3

The curve in Figure 3 should look the same as Figure 2.  That’s because I’m using the same data but with 14 added to each value.  Now that I have gistemp as temperatures, I can generate the atmospheric limits based on my Excel model.  The graph in Figure 4 shows five curves:  gistemp, 60% of gistemp, 90% of gistemp, 130% of gistemp, and 160% of gistemp.

Figure 4
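
Reproducing the bookkeeping behind Figure 4 is straightforward: add the 14 °C reference temperature to each annual anomaly, then multiply the result by each of the four factors.  A minimal sketch in Java (the anomaly values below are made-up placeholders, not the actual GISS numbers):

import java.util.LinkedHashMap;
import java.util.Map;

public class ScaledCurves {
    public static void main(String[] args) {
        // Placeholder annual-mean anomalies (degrees C) keyed by year -- substitute the real gistemp values.
        Map<Integer, Double> anomaly = new LinkedHashMap<>();
        anomaly.put(1978, 0.07);
        anomaly.put(1979, 0.16);
        anomaly.put(1980, 0.26);

        double reference = 14.0;                              // the reference temperature mentioned above
        double[] factors = { 0.60, 0.90, 1.00, 1.30, 1.60 };  // the five curves in Figure 4

        for (Map.Entry<Integer, Double> e : anomaly.entrySet()) {
            double temperature = reference + e.getValue();    // anomaly -> absolute temperature
            System.out.printf("%d", e.getKey());
            for (double f : factors) {
                System.out.printf("  %6.2f", temperature * f);
            }
            System.out.println();
        }
    }
}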

At this point I need rates and not absolute values.  Interestingly, anomalies provide that feature.  Figure 5 shows the same five temperatures graphed as anomalies.  Annual temperature anomalies are referenced to the annual temperature of a reference year.  For my purpose here I’m picking 1979 as the reference year.  The vertical dotted line in Figure 5 represents year 1979. (I’ll explain why I chose year 1979 later.)

Since 1979 is the reference year, all the curves will cross the zero axis in 1979 (though nothing prevents them from crossing the zero axis at other times too).

Figure 5
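
Re-basing to 1979 is just a subtraction: take each temperature series, subtract its own 1979 value, and every curve becomes an anomaly series that is zero in 1979.  A sketch, assuming the series is stored as a year-to-temperature map like the one above:

import java.util.Map;
import java.util.TreeMap;

public class Rebase {
    // Convert a temperature series into anomalies referenced to refYear.
    static Map<Integer, Double> toAnomalies(Map<Integer, Double> temps, int refYear) {
        Double ref = temps.get(refYear);
        if (ref == null) {
            throw new IllegalArgumentException("series has no value for " + refYear);
        }
        Map<Integer, Double> anomalies = new TreeMap<>();
        for (Map.Entry<Integer, Double> e : temps.entrySet()) {
            anomalies.put(e.getKey(), e.getValue() - ref);  // zero at refYear by construction
        }
        return anomalies;
    }
}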

If you fit trend lines to these curves and compare their slopes, they match the original relationship between the temperature curves rather closely.  Figure 6 shows this relationship with the equations of the linear trend lines displayed on the chart.  The slope of the gistemp trend line is 0.0073.  If we divide each of the other trend line slopes by this value, we get 0.6027, 0.8904, 1.3014, and 1.5890 respectively.  This is very close to the values of 60%, 90%, 130% and 160% that we started with.

Figure 6
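
The trend lines in Figure 6 are ordinary least-squares fits of anomaly against year, and the percentages come from dividing each slope by the gistemp slope.  A sketch of that calculation (same year-to-value map representation as above):

import java.util.Map;

public class TrendSlope {
    // Ordinary least-squares slope of value versus year (degrees C per year).
    static double slope(Map<Integer, Double> series) {
        double n = series.size(), sx = 0, sy = 0, sxy = 0, sxx = 0;
        for (Map.Entry<Integer, Double> e : series.entrySet()) {
            double x = e.getKey(), y = e.getValue();
            sx += x; sy += y; sxy += x * y; sxx += x * x;
        }
        return (n * sxy - sx * sy) / (n * sxx - sx * sx);
    }

    // Ratio of a scaled curve's trend to the gistemp trend, as reported for Figure 6.
    static double ratio(Map<Integer, Double> scaled, Map<Integer, Double> gistemp) {
        return slope(scaled) / slope(gistemp);
    }
}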

The goal here is to overlay atmospheric anomalies on these temperature curves and see where they line up.  If those atmospheric curves fall between the 130% and 160% temperature curves, then that will mean we’re looking at a GHE.  However, if those atmospheric curves fall between the 60% and 90% temperature curves, then that will mean we’re looking at an albedo effect.

Next we need the atmospheric anomalies.  Figure 7 shows these anomalies.

Figure 7

These anomaly curves have base year 1979.  Two of the curves (UAH and RSS) begin in 1979.  The other curves are referenced to year 1979, so all the curves must cross the zero axis in year 1979.  This is why I picked year 1979 earlier.  Comparing anomalies is like adding a list of figures.  You have to align the decimal points, or in this case, you have to use the same reference year.

These curves end in 2009, because that’s the limit with this reference.

Finally, Figure 8 shows my combined temperature anomaly graph, where the earlier reference temperatures are compared with the tropospheric temperatures.

Several things are apparent.  All the curves cross the zero axis in 1979–as they should; therefore, the year 1979 itself doesn’t tell us anything.  However, as we move away from 1979, the curves start showing a trend.  Most are well below gistemp, and some are even below the 60% lower limit curve.  This indicates that the more recent trends are not due to a GHE but to albedo decreases.

Figure 8

The temperatures prior to 1979 are not as clear; they seem to be more confused.  Additionally, the very hot year of 1998 was probably due to extensive water vapor from a strong El Nino.  I would have expected 1998 to clearly show a GHE, but it, too, is below the 130% limit temperature.

This may be a demonstration of Hansen’s manipulation of gistemp.  For example, my Death Valley graphs (Figure 9) show that the current curves are tilted in favor of more warming later and cooler temperatures earlier.

Figure 9

The combined anomaly curves in Figure 8 would occur if you tilt gistemp to show more warming in the present.  The earlier temperatures are used as reference temperatures and are also adjusted downward.  That may be why they show less albedo warming.

It’s also interesting to note that by adjusting gistemp to show what Hansen thinks is a GHE, he’s actually decoupling gistemp from the atmospheric temperatures.  He’s making it show that the GHE is less likely and an albedo decrease is more likely.  It would be nice to see what an undoctored gistemp shows.

Posted by: Jim Masterson | 02/12/2010

Death Valley Temperature Changes

Years ago (before 2003 and after 1998), I became interested in desert temperatures (specifically Death Valley). One of the predictions of greenhouse theory is that dry regions, like deserts and polar regions, will show the effects of CO2 warming more than other areas. This is because CO2 effects are masked by water vapor, so dry regions are the “canary in the mine” signal of GW. Unfortunately (for AGW), during the hot year of 1998, Death Valley had a cold year–third coldest in fact. I stored my data away and didn’t check Death Valley temperatures until recently. The current data show that 1998 is still a cool year, but something has changed. The temperatures now shown for Death Valley weren’t as I remembered them. So I pulled out my old data and checked. Below is a comparison of these datasets. The first graph is the pre-2003 plot of my saved data. The second plot is the current GISTEMP values. In the third plot, I overlay the two datasets. Apparently Hansen’s been busy “correcting” these temperature values during the last few years.

The linear trend slope of the pre-2003 data is 0.0143 °C/year and the current data has a linear trend slope of 0.0192 °C/year.

Have fun trying to figure out the temperature modification algorithm. I tried to check the original B91 forms, and that’s a lot of work. Too bad there isn’t a fancy OCR program that will scan these forms. The two years that I checked don’t match either dataset.

Death Valley B91 forms are handwritten with scribblings, crossouts, and random placement of various stamps (IDs and keypunch comments). Here is an example:

An OCR of THIS form would be nearly impossible. It’s easier to hand-enter the data than to babysit an OCR scan.

Posted by: Jim Masterson | 09/17/2009

US Housing Starts

Housing Starts

Posted by: Jim Masterson | 09/16/2009

Civilian Work Force

The plot shown here uses data from the U.S. Bureau of Labor Statistics.  The light blue dots are total civilian labor force, the light green dots are total civilian employed, and the pink dots are the total civilian unemployment rate.  The black dots are the seasonally adjusted total civilian unemployment rate.  The blue, green, and red lines are 12-month running means of the monthly values (the end-points are a problem–of course).  A 12-month running mean seems to do a better job of filtering out the month-to-month and season-to-season fluctuations and shows the overall trends better.  The vertical brown lines delineate the November elections of each President within the scope of the graph.
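
For the record, the smoothing here is nothing fancier than a 12-month moving average of the monthly values.  A sketch of a centered version (whether centered or trailing, the months near the ends of the record have no complete window, which is the end-point problem mentioned above):

import java.util.Arrays;

public class RunningMean {
    // Centered 12-month running mean.  Months whose 12-month window would run
    // past either end of the series are left as NaN -- the end-point problem.
    static double[] runningMean12(double[] monthly) {
        double[] smoothed = new double[monthly.length];
        Arrays.fill(smoothed, Double.NaN);
        for (int i = 6; i + 5 < monthly.length; i++) {
            double sum = 0.0;
            for (int k = i - 6; k <= i + 5; k++) {   // 12 consecutive months
                sum += monthly[k];
            }
            smoothed[i] = sum / 12.0;
        }
        return smoothed;
    }
}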

Unemployment rate is a lagging economic indicator.  If you want to tie it to actual events, then you’ll have to shift the times earlier by about six months.  Notice that we have one of the lowest unemployment rates in history as we near the end of Clinton’s Administration and year 2000.  This is the famous dot-com bubble.  Companies providing no services and no products were popping up everywhere, and investors were flocking to them like flies to honey.  Such companies are flushed out of the economy at the slightest downturn.  Unfortunately, when the economic downturn arrived and the subsequent flushing occurred, the large number of failing dot-coms put a heavy drag on the economy.

Posted by: Jim Masterson | 08/27/2009

Hello world!

Welcome to WordPress.com. This is your first post. Edit or delete it and start blogging!
