Thursday, November 12, 2009

OOPSLA Trip Report

Sunday


Functional programming in OO tutorial

Presented my tutorial. This is an overview of functional programming and how to integrate that paradigm into an OO environment. It contains example code for several key pieces of a Functional programming runtime, including: an interface for Function objects, an interface for Lisp-style lists/streams, an implementation of a tiny Functional library (map, foldl, foldr and transpose), a framework for tail-call elimination, and an assortment of little helpers and worked problems. I explain when the Functional style is useful and appropriate, and demonstrate several techniques ranging from Functionally derived Patterns to adopting a pure Functional style. The code, examples and slides are available via SVN at: http://svn.xp-dev.com/svn/FunctionalOO/lib/head and http://svn.xp-dev.com/svn/FunctionalOO/examples/head
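As a taste of the kind of code the tutorial covers, here is a minimal Java sketch of a Function-object interface plus map and foldl. The names and signatures here are my own invention for illustration; the tutorial's actual library differs in its details.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical interface names for illustration only.
interface Fn<A, R> {
    R apply(A arg);
}

interface Fn2<A, B, R> {
    R apply(A a, B b);
}

class FunctionalLib {
    // map: apply f to every element, collecting the results.
    static <A, R> List<R> map(Fn<A, R> f, List<A> xs) {
        List<R> out = new ArrayList<>();
        for (A x : xs) out.add(f.apply(x));
        return out;
    }

    // foldl: reduce the list from the left with an accumulator.
    static <A, R> R foldl(Fn2<R, A, R> f, R acc, List<A> xs) {
        for (A x : xs) acc = f.apply(acc, x);
        return acc;
    }

    public static void main(String[] args) {
        List<Integer> squares = map(x -> x * x, List.of(1, 2, 3));
        System.out.println(squares);                            // [1, 4, 9]
        System.out.println(foldl((a, x) -> a + x, 0, squares)); // 14
    }
}
```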


Monday

Barbara Liskov keynote

Barbara Liskov is a systems programmer who, in the mid '70s, developed and/or refined a series of programming language concepts while designing the research language CLU. Among these were: abstract data types, exception handling, iterators and, most famously, the relationship between substitutability and subtyping. She was motivated by a desire to create a language that would facilitate communication between programmers, to both formally define and test her ideas (particularly in the area of abstract data types, a precursor to Objects), and to do so in a way that was also performant. She placed high value on the qualities of expressiveness, simplicity, performance, ease and minimalism.

She placed a lot of emphasis on the idea that ease of reading program source is more important than ease of writing it, where comprehension is the key goal of reading. This is the driving force behind the famous _Goto considered harmful_ paper by Dijkstra as well as a long string of developments that followed: structured programming (Hoare & Morris), _Global variables considered harmful_ (Wulf & Shaw), encapsulation, exception handling, etc...

She noted the success of languages that support semantic extension (e.g. the introduction of new types, functions and variables) and the general failure of syntactically extensible languages (e.g. operator overloading), calling the latter "write only" languages. It turns out that semantic extensions are generally self-describing and easily assimilated into a cognitive model, while syntactic extensions are ambiguous at best and misleading at worst. One can infer that the success of a language feature depends critically on the novice reader's ability to effectively identify and research items in source code that are both of interest and whose exact nature is unknown. During the Q&A she expressed misgivings about Aspect Oriented Programming, comparing it to the use of gotos.
I believe that her discomfort stems from the non-obvious ways in which AOP modifies the behavior of a program. At the end of her talk she called for new abstraction mechanisms and more complete languages to ease the difficulties of programming in the age of the internet. I think that Barbara Liskov will be both intrigued and pleased by Gosu when she encounters it.
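The substitutability/subtyping relationship is easiest to see in the classic counter-example (my illustration, not from the talk): a mutable Square subclassing Rectangle type-checks fine, but silently breaks the contract that clients of Rectangle rely on.

```java
// A mutable Rectangle with an implied contract: setting width does not change height.
class Rectangle {
    protected int width, height;
    void setWidth(int w)  { width = w; }
    void setHeight(int h) { height = h; }
    int area() { return width * height; }
}

class Square extends Rectangle {
    // Preserving the square invariant silently changes inherited behavior.
    @Override void setWidth(int w)  { width = w; height = w; }
    @Override void setHeight(int h) { width = h; height = h; }
}

public class LspDemo {
    // A client written against Rectangle's contract.
    static int stretch(Rectangle r) {
        r.setWidth(5);
        r.setHeight(2);
        return r.area(); // the contract implies 10
    }

    public static void main(String[] args) {
        System.out.println(stretch(new Rectangle())); // 10
        System.out.println(stretch(new Square()));    // 4 -- not substitutable
    }
}
```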
Random notes:
Assumptions that one piece of code must make about another are a type of connection that must be included under the general umbrella of "coupling".
Exception handling should include the option to "fail". I'm not sure what she meant by this. Is System.exit() not enough? In Erlang systems, processes in a severe error state shut down and are then restarted by other processes. Part of Ms. Liskov's definition of "fail" included the ability to restart and so the Erlang model may be more in line with her thinking.
On inheritance: Type inheritance is an unequivocally Good Thing; implementation inheritance, not so much. She says that implementation inheritance breaks encapsulation (because subtypes have a view into their parent types). During this segment she mentioned that Smalltalk completely lacks this kind of encapsulation, an assertion to which Ralph Johnson (a thought-leader among Smalltalkers) took some exception during the Q&A. My opinion: In practice, her assertions are borne out. From diamond-shaped inheritance hierarchies, to the "fragile base class" problem, to the addition of "mix-ins" to Scala, there is evidence that delegation is a healthier and more durable abstraction than implementation inheritance.
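A quick sketch of what delegation buys here (my example): the wrapper composes against an interface rather than extending an implementation, so it can neither see nor accidentally depend on the base class's internals.

```java
// Type inheritance: both classes share an interface, not an implementation.
interface Counter {
    void add(int n);
    int total();
}

class BasicCounter implements Counter {
    private int total = 0;
    public void add(int n) { total += n; }
    public int total() { return total; }
}

// Delegation: LoggingCounter holds a Counter instead of extending one,
// so BasicCounter's encapsulation stays intact.
class LoggingCounter implements Counter {
    private final Counter inner;
    LoggingCounter(Counter inner) { this.inner = inner; }
    public void add(int n) {
        System.out.println("add(" + n + ")");
        inner.add(n);
    }
    public int total() { return inner.total(); }
}

public class DelegationDemo {
    public static void main(String[] args) {
        Counter c = new LoggingCounter(new BasicCounter());
        c.add(2);
        c.add(3);
        System.out.println(c.total()); // 5
    }
}
```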
Modularity is based on abstraction.
During the Q&A Guy Steele asked Ms. Liskov for advice about how to design a more advanced iterator mechanism in a new language (which I imagine to be the Fortress language that he is developing to replace Fortran). She demurred, saying that she would content herself with a solution that covered most basic needs and that could be extended by hand in order to cover the rest. I was intrigued by this exchange because in my own talk I introduced the concept of streams borrowed from the Scheme programming language, and this notion seems to be a candidate solution to Mr. Steele's dilemma. Since he is one of the inventors of Scheme I can only assume that he has already considered and rejected this solution, but I am very curious as to why.
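For reference, a Scheme-style stream is just a head paired with a lazily evaluated, memoized tail; here is a minimal Java rendering (my own sketch, leaning on Java 8's Supplier for the delayed tail).

```java
import java.util.function.Supplier;

// Minimal Scheme-style stream: a head plus a lazily evaluated, memoized tail.
// (Unrelated to java.util.stream.Stream; the name echoes Scheme usage.)
class LazyStream<T> {
    final T head;
    private final Supplier<LazyStream<T>> thunk;
    private LazyStream<T> tail; // memoized on first access

    LazyStream(T head, Supplier<LazyStream<T>> thunk) {
        this.head = head;
        this.thunk = thunk;
    }

    LazyStream<T> tail() {
        if (tail == null) tail = thunk.get();
        return tail;
    }

    // The infinite stream n, n+1, n+2, ... -- forced only as far as you walk it.
    static LazyStream<Integer> from(int n) {
        return new LazyStream<>(n, () -> from(n + 1));
    }

    public static void main(String[] args) {
        System.out.println(from(1).tail().tail().head); // 3
    }
}
```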
A short write-up of this talk along with links to the papers mentioned can be found at: http://cacm.acm.org/blogs/blog-cacm/49502-the-power-of-abstraction-barbara-liskovs-oopsla-keynote/fulltext

Random conversations

Don Roberts & John Brant
Working on a project where they are translating a 1.5 million line Delphi program into C#. Their program (written in Smalltalk, of course) parses the source into an AST for analysis and translation targeting, and then fires fairly simple substitution rules to perform the translation. It turns out that they had to translate some of the UI code by hand because the underlying models were too different (despite sharing many elements and names). I understand that a demo of the engine that they wrote to do this (called SmaCC) can be found at: blazonres.com.
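I wasn't shown the internals, but the rule-firing approach can be sketched in a few lines of Java. Everything here, node kinds included, is invented for illustration and is nothing like the real engine:

```java
import java.util.List;
import java.util.stream.Collectors;

// Toy rule-based AST translation; node kinds are invented for illustration.
record Node(String kind, String text, List<Node> children) {
    static Node leaf(String kind, String text) { return new Node(kind, text, List.of()); }
}

class Translator {
    // Each case is a "substitution rule" mapping a source construct to target
    // text, e.g. a Delphi begin/end block to a C# brace block.
    static String translate(Node n) {
        return switch (n.kind()) {
            case "block" -> "{ " + n.children().stream()
                    .map(Translator::translate)
                    .collect(Collectors.joining(" ")) + " }";
            case "assign" -> translate(n.children().get(0)) + " = "
                    + translate(n.children().get(1)) + ";";
            case "ident", "number" -> n.text();
            default -> throw new IllegalArgumentException(n.kind());
        };
    }

    public static void main(String[] args) {
        // Roughly: Delphi "begin x := 1 end" -> C# "{ x = 1; }"
        Node prog = new Node("block", "", List.of(
                new Node("assign", "",
                        List.of(Node.leaf("ident", "x"), Node.leaf("number", "1")))));
        System.out.println(translate(prog)); // { x = 1; }
    }
}
```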

Tuesday

Flapjax

Poster & demo
JavaScript library + (optional) language extensions for concurrency.
Based in part on the "synchrony hypothesis", but I don't know what that actually means.
A major goal is to make concurrent programs deterministic.
Two new abstractions:
Behavior: a variable whose value changes due to forces that are external to the program. The value of such a variable may always be queried, but cannot be derived by any other means. [I imagine that these values must be frozen at Event boundaries in order for determinism to be achievable.]
Events: a special kind of container that produces event objects in response to OS events (including timers, interrupts, etc...) Methods can be attached to the container to handle these events as they occur. Alternatively, queued events may be pulled from the container as the program becomes ready to handle them. If no events are queued the default behavior is to block the requestor until one becomes available.
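My own reading of the two abstractions, sketched in Java for consistency with the rest of these notes (Flapjax itself is a JavaScript library, and its real API differs):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicReference;

// Behavior: a value set by the outside world; the program can only read it.
class Behavior<T> {
    private final AtomicReference<T> value;
    Behavior(T initial) { value = new AtomicReference<>(initial); }
    T sample() { return value.get(); }          // always queryable
    void externalSet(T v) { value.set(v); }     // driven from outside the program
}

// Events: a container of queued event objects; pulling blocks when empty.
class Events<T> {
    private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();
    void fire(T event) { queue.add(event); }    // from an OS callback, timer, etc.
    T next() throws InterruptedException {
        return queue.take();                    // blocks until one is available
    }
}

public class FlapjaxSketch {
    public static void main(String[] args) throws InterruptedException {
        Behavior<Integer> mouseX = new Behavior<>(0);
        mouseX.externalSet(42);
        System.out.println(mouseX.sample()); // 42

        Events<String> clicks = new Events<>();
        clicks.fire("click");
        System.out.println(clicks.next()); // click
    }
}
```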

Type and Effect system for Java

Short preso.
I didn't really "get" this, and my notes are spotty: "Deterministic execution with parallelism for optimization"
Functional, SIMD, explicit dataflow
Fork & join style blocks:

  • foreach
  • cobegin
These were clearly the entry points to (potentially) concurrent operations.
Data protection mechanisms: region, path list, index-parameterized arrays, owner regions.
These each define a data set that must be modified in a transactional manner [my interpretation].
These declarations are structured so that the compiler can make some scheduling decisions based on static typing, e.g. two mutating operations may run concurrently because the memory areas that they affect are disjoint.
Commutative operation: an operation that must be atomic, but does not have to be run in a particular order relative to other operations.
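A sketch of the two ideas as I understood them, in plain Java (DPJ's actual approach is Java plus region annotations checked statically; ordinary Java can only approximate the effect at runtime):

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.IntStream;

public class DisjointDemo {
    static long compute() {
        int[] data = new int[1000];

        // Disjoint regions: each parallel task writes only its own index
        // range, so the writes can be scheduled concurrently without conflict.
        int half = data.length / 2;
        IntStream.range(0, 2).parallel().forEach(part -> {
            int lo = part * half, hi = lo + half;
            for (int i = lo; i < hi; i++) data[i] = i * 2;
        });

        // Commutative operation: each addition must be atomic, but the
        // additions may run in any order, so any schedule gives one answer.
        AtomicLong sum = new AtomicLong();
        IntStream.range(0, data.length).parallel()
                 .forEach(i -> sum.addAndGet(data[i]));
        return sum.get();
    }

    public static void main(String[] args) {
        System.out.println(compute()); // 999000, regardless of schedule
    }
}
```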
More at: dpj.cs.uiuc.edu

Tuesday, May 12, 2009

Why Net Neutrality Is Good For Telcos

One tactic of political debate is to influence public perception of terms by subtly changing or obscuring their meaning. What I describe here will be recognized by some as Net Neutrality, but will be seen by others as a distortion of that concept. In either case, this is a proposal that would resolve the Net Neutrality debate by framing it in terms of qualities of service versus uses of the service. I argue that telecommunications companies, companies that maintain the "pipes" of the Internet, are utility companies that provide a commodity service, and that their long term health, as well as the health of the industry that they serve, will be best maintained if they are regulated in a way that prevents them from having conflicts of interest with their various customers.

Communications bandwidth is a commodity resource much like electricity and water. Like these other resources the variety of uses for this bandwidth is nearly unlimited. The consistent, reliable availability of these commodities allows businesses to constantly deliver new and innovative products, and allows consumers to use them, thus driving American productivity and our economy.

If other utilities controlled the uses to which their resources were put the resulting drag on our economy would ruin our ability to be competitive in the global marketplace. Imagine what would have happened if we had to pay for light, air conditioning and access to radio and television all separately, instead of just paying for electricity. Imagine what would happen if farmers had separate price lists for corn irrigation and wheat irrigation. By separating the availability of a commodity from the uses to which it is put, we create an economy that is agile and productive.

Like other commodities, bandwidth has several properties that can be varied to make it more suitable for various purposes. For example, some kinds of data transmission can tolerate the loss of some percentage of information. This could be compared to the property of purity in water -- some applications of water (e.g. irrigation, cement mixing) can tolerate the presence of impurities and microorganisms, while others, like drinking and cooking, cannot. It makes sense for utility companies to offer their products in different packages that provide varying qualities of service each tuned to a different segment of the market. It does not, however, make sense for the utility companies to monitor or control the uses to which these quality-controlled commodities are put. For instance, it would be completely inappropriate for the electric company to revise my bill because I had used light bulb electricity in my coffee maker.

Telecommunications companies that provide data channels and connectivity to the Internet should be returned to common-carrier status. They should offer products based on industry standard qualities of service (the same way that electric companies offer 120 or 240 volt electricity but not 178.3 volt electricity, for example.) They should not base their pricing or availability on the use to which these services are put, and they should have no visibility into nor responsibility for the data that they carry. Separating the qualities of the product from the use to which it is put allows both users and providers to maximize their levels of innovation and productivity.

Telecommunications utilities resist this sort of plan on the basis that it represents excess regulation and restricts their ability to profit. In reality, separating quality of service from type of use approximates the minimal possible level of regulation and creates a marketplace where these utilities can achieve maximal productivity and profit. If the telcos are allowed to influence the content that flows through their networks then market forces will compel them to limit the types and availability of the content in a cycle that will ultimately result in a small amount of content with high production value but limited accessibility and high prices. They will, in fact, be compelled to regulate the content industry and to do so in a highly destructive manner. This sort of regulation, imposed by the private sector and driven by blind profit motive, is far more restrictive (to the point of being excessive) than the plan presented here. If telecommunication utilities are allowed (and therefore effectively forced) to compete based on content rather than service, they will have significantly less incentive to invest in the infrastructure required to deliver the highest level of service. Ultimately they will have high ROI, but relatively lower levels of absolute profit. If they are restricted from competing based on content then they will be forced to compete based on service, resulting in higher quality infrastructure and greater opportunity and innovation in the content creation industry, spurring demand for higher volume and quality of telecommunications services, resulting in a virtuous cycle of real growth in productivity.

Businesses are often forced to make long-term sacrifices for short term profits. Here is an opportunity to do something that, although they won't like it, will be better for them and our country in the long run.