A report by Jacob Brickman
Fresh mountain air, forests, a whispering brook, a beautiful lake, balmy weather – this was the idyllic setting for the 2017 APL Implementers Workshop, which took place September 16-20 at Syracuse University’s Minnowbrook Conference Center in upstate New York, in the Adirondacks, on the shore of Blue Mountain Lake. Even though I have been taking part in these workshops for several decades now, this place never fails to work its magic from the moment one arrives there. The charming timber main lodge, the comfortable rooms, the delicious food, the friendly crew, and the conference room right at the edge of the lake all conspire to create ideal conditions for deep intellectual concentration and exchange of ideas. Much of the value of the Minnowbrook workshops has also been in the ability to have deep, one-on-one conversations, and in the well-lubricated “evening seminars.” This year’s workshop was very well attended and I found it particularly intense and absorbing, so much so that when I got back to New York City it felt like I had been on an extended trip abroad and had to re-acclimate.
The workshop was convened by Roy Sykes, who, after a hiatus, took over this role in 2007 from Garth Foster, the original organiser of the Minnowbrook workshops, who had retired from Syracuse University in 2001. Roy has run the last six workshops, in 2007, 2010, 2011, 2013, 2015, and 2017. Roy reserved the location, handled the finances, drew up the invitee list, produced materials, set the schedule, and generally herded the APL cats. Jon McGrew, as usual, produced the high-quality printed materials that have been the hallmark of these workshops. David Liebtag was in charge of lubrication, both getting some great brews and storing leftover booze from the last workshop. All in all, the workshop ran without a hitch thanks to the work of these gentlemen. Roy’s wonderful Border Collies, Tess and Grace, contributed to the relaxed and friendly atmosphere with their playfulness, intelligence, and unconditional love. It is a special feeling to be looked straight in the eye by a non-human being and to see that there is somebody home behind its eyes.
The presentations covered a wide range of topics, from new language primitives to language and array fundamentals, applications and robotics, as well as reminiscences and future directions. The opening session on Saturday evening started with the participants introducing themselves and saying a few words about their background and achievements. Since APL turned 50 in November 2016, we were particularly honoured to have with us Larry Breed, one of the implementers of the original APL. Most of the participants were long-time APL veterans, but it was also encouraging to see new, younger ones, who can carry forward and further develop the concepts of APL and array languages. Punsters were also well represented, which is perhaps not surprising, considering that the multiple meanings of the APL symbols make APL a very punny language indeed.
Mary Helen Foster, Garth Foster’s wife, spoke touchingly about her husband’s work with APL. She mentioned “The Big Deal Workshop” of 1977, in which most of the APL implementers of that time took part, and some of whom were also present this year. She said that Garth’s work can best be described with the words “passion, community, opportunity.” A discussion and recollections about the workshops followed. There was general agreement that the informal and confidential setting was crucial to the free flow of ideas, so that even competing implementers could openly exchange information. Both Mary Helen and Garth, who was also there, were thanked with a sustained round of applause.
Bob Smith, the implementer of the NARS2000 APL interpreter, spoke next about his recent progress. Bob, once known as “Boolean Bob” in his STSC days, has been using his interpreter as an experimental platform on which to test new language features. He talked about his implementation of hypercomplex numbers (quaternions and octonions) as primitive data types in the language and the extension of primitive functions to them, the variant operator, and multiple-precision numbers. Bob concluded with the matrix operator, which extends some functions to matrices as a whole, rather than item-wise. Bob gave a second talk on the “Twelvefold Way”, a unified organizing principle for 12 of the most important combinatorial functions, which he implemented as a single primitive combinatorial operator. Bob has been doing much pioneering work in implementing new functionality, and his talks are always a geek’s treat. This year’s was no exception. All of his papers, as well as the interpreter, can be found at [APL Projects](http://sudleyplace.com/APL/), and are highly recommended.
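NARS2000’s combinatorial operator and its syntax are Bob’s own design and are not reproduced here; purely as an illustration of the underlying mathematics, a few of the twelve counts (functions from k balls to n boxes under various distinguishability and injectivity/surjectivity constraints) can be sketched in Python:

```python
from math import comb

# Three of the "Twelvefold Way" counts: placing k balls into n boxes.

def sequences(k, n):
    """Distinguishable balls and boxes, no constraint: n**k."""
    return n ** k

def selections(k, n):
    """Indistinguishable balls, distinguishable boxes, injective: C(n, k)."""
    return comb(n, k)

def surjections(k, n):
    """Distinguishable balls and boxes, onto: inclusion-exclusion."""
    return sum((-1) ** j * comb(n, j) * (n - j) ** k for j in range(n + 1))

print(sequences(3, 2), selections(3, 5), surjections(3, 2))  # 8 10 6
```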
Sunday morning started with a talk by Artem Shinkarov on transfinite arrays, i.e. arrays with infinite index sets. Together with Sven-Bodo Scholz, he has developed a lambda calculus to describe such arrays, which makes it possible to unify the treatment of arrays and streams. They have also implemented the functional language Heh to handle infinite arrays, including infinite shape arithmetic. See [A programming language with infinite arrays](https://github.com/ashinkarov/heh). Artem gave a second talk on SIMD vectorization for array languages. The talk was based on his PhD thesis, in which he explores data layout transformations for improved performance. See [Data Layout Types](http://ashinkarov.github.io/publications/asv-thesis.pdf).
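Heh’s semantics are its own; as a loose illustration of the idea only, an array with an infinite index set can be modelled in Python as a mapping from indices to values, with element-wise operations defined lazily so the array is never materialized:

```python
# Illustrative model (not Heh): an "infinite array" as an index->value
# function, indexed and combined element-wise without materialization.

class InfArray:
    def __init__(self, f):
        self.f = f  # maps a non-negative integer index to a value

    def __getitem__(self, i):
        return self.f(i)

    def __add__(self, other):
        # Element-wise addition, evaluated per index on demand.
        return InfArray(lambda i: self.f(i) + other.f(i))

naturals = InfArray(lambda i: i)      # 0 1 2 3 ...
squares = InfArray(lambda i: i * i)   # 0 1 4 9 ...
s = naturals + squares
print(s[10])  # 110
```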
Jacob Brickman (yours truly) gave a talk on the mathematical foundations of arrays, in which he started from the ZFC axioms of set theory and gradually led up to the definition of arrays. This work showed that the cause of the splintering of APL into two array systems (nested, in APL2, and boxed, in Sharp APL) lay not in the extended APLs, but in the original flat APL, because of the behaviour of bracket indexing with simple scalars (e.g. V). Because extended APLs preserved compatibility with the original APL, this behaviour was carried forward. This led directly to the behaviour of APL2’s enclose (⊂) on simple scalars (it returns the scalar unchanged), because ⊂ is defined in terms of indexing: ⊂X←→(X X), which implies that APL cannot have depth-1 scalars in either array system. It is possible to define an APL with depth-1 scalars, i.e. where (2 2)←→⊂2 is a depth-1 scalar, rather than depth 0. Such a system would do away with the problem of “collapsing towers” and would have consistent depth behaviour. This system is unlike that of Sharp APL, where (2 2)≠<2. Sharp APL has a flat array system extended laterally with user-defined depth-0 scalars produced with the box function <. The boxes are equivalent to dynamically created single-member C structs, so < creates new data types, not new arrays, because < is unrelated to array formation. It is in fact possible to consistently extend the Sharp system to have nested arrays, and in that system we would still have ⊂<2←→<2, i.e. Sharp APL has the same behaviour as APL2 on an even larger class of simple scalars, because of the common origin of the dialects. The talk gave rise to many animated questions and discussions.
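The distinction under discussion can be caricatured in Python; the class and function names below are illustrative and belong to no APL. In an APL2-like system, enclose “floats” on a simple scalar and returns it unchanged, whereas a Sharp-like box always creates a new value one level deeper:

```python
class Box:
    """A Sharp-style box: a new one-slot value, whatever the contents."""
    def __init__(self, contents):
        self.contents = contents

def depth(x):
    # Simple scalars have depth 0; each box adds one level.
    return 1 + depth(x.contents) if isinstance(x, Box) else 0

def enclose_apl2(x):
    # APL2-style enclose: a simple scalar is returned unchanged.
    return x if not isinstance(x, Box) else Box(x)

def box_sharp(x):
    # Sharp-style box: always wraps, so the result differs from x.
    return Box(x)

assert enclose_apl2(2) == 2           # simple scalar unchanged, depth 0
assert depth(box_sharp(2)) == 1       # boxing adds a level
assert depth(box_sharp(box_sharp(2))) == 2
```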
Bob Bernecky gave a presentation on FALCON – the Functional Array Language COmpiler Nexus. Bob went over the history of compilation of APL and functional array languages, which he said was characterized by different approaches, duplication and wasted effort, because different projects have used different front ends, intermediate languages (IL), back ends, runtime libraries and optimizations. He proposed the FALCON project, which would remedy this situation in a way similar to the gcc model, by providing a single parser per source language, a single SSA data-parallel, functional array intermediate language (PAIL), a single set of optimizations for all languages, a single code generator per target system, and a single runtime library per target system.
Sven-Bodo Scholz spoke about compiler-enabled GPU performance in SAC (Single Assignment C). His talk dealt with leveraging the highly parallel, energy and cost effective processing power of GPUs for efficient execution of array computations. Using the functional array language SAC, Bodo presented some of the challenges of translating high-level array operations into lower-level CUDA code for GPUs. The first challenge is to find the operations suitable for execution on GPUs, because not all operations benefit from it. The second challenge is to move arguments and results from main memory to the GPU and back. As these moves have a non-negligible latency, good use of GPUs requires that data transfers be minimized as much as possible. Two optimizations and their impact on a set of example programs were shown. The third challenge arises from the different forms of memory available in GPUs, which have different access characteristics and transfer costs. Smart use is crucial for getting the performance that GPUs offer in principle. The talk illustrated this using matrix transpositions. The final challenge comes from the different forms of memory management supported by the latest hardware and software stack from NVIDIA. Transfer latency and bandwidth both depend on the actual hardware used, their configuration, as well as application-specific demands. The talk showed that while naive compilation of high-level array operations for execution on GPUs is fairly simple, generating efficient GPU code requires non-trivial optimizations.
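The second challenge, minimizing host-to-device traffic, can be sketched abstractly; everything below is an illustrative model, not SAC’s implementation. Copying once and keeping intermediate results on the device avoids a round trip per operation:

```python
# Illustrative model: count host<->device transfers for a chain of
# array operations, naive vs. transfer-minimized scheduling.

transfers = 0

def to_device(x):
    global transfers; transfers += 1; return x

def to_host(x):
    global transfers; transfers += 1; return x

def gpu_op(x):
    return [v + 1 for v in x]  # stand-in for a GPU kernel

def naive(x, n):
    # Copy to the device and back around every single operation.
    for _ in range(n):
        x = to_host(gpu_op(to_device(x)))
    return x

def fused(x, n):
    # Copy once, run the whole chain on the device, copy back once.
    x = to_device(x)
    for _ in range(n):
        x = gpu_op(x)
    return to_host(x)

data = [1, 2, 3]
transfers = 0; naive(data, 4); n1 = transfers
transfers = 0; fused(data, 4); n2 = transfers
print(n1, n2)  # 8 2
```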
Bodo gave a second talk on memory management for distributed array programming, in which he showed a new way of mapping array operations into parallel computations on computational clusters. He showed that the absence of side effects in functional array languages enables a new form of distributed shared memory which offers the benefits of a globally shared address space without costly housekeeping. Key to this proposal is the observation that the context of array operations substantially simplifies the identification of read and write operations. Using SAC, Bodo showed that performance similar to that of state-of-the-art PGAS libraries (Partitioned Global Address Space) can be achieved despite a non-partitioned globally uniform address space.
Aaron Hsu gave three talks. The first was on his co-dfns compiler, which gives APL programmers working on large-scale problems easy high-level access to a GPU. He described the construction and architecture of the compiler, which is written in a fully data-flow, data parallel style using function composition without looping, branching, pattern matching, or other explicit control flow. It relies on traditional APL primitives to do the work.
Aaron’s second talk was on patterns and anti-patterns in APL. The latter are what he considers bad coding and design practices. He addressed the difficulty of getting programmers beyond the beginner’s plateau, where they understand the semantics of APL but struggle to write non-trivial APL code in a good style. Aaron covered eight design patterns that run counter to traditional software-engineering best practices, arguing that programmers who have internalized those practices find it hard to appreciate or produce good APL code, which often appears to contradict them. Aaron advocates an extremely terse programming style, with no control structures and very short, non-descriptive variable names. He is also against comments, because he argues that they make the code less readable. This talk sparked a lively discussion. His full presentation can be found at [APL Style: Patterns/Anti-patterns](https://sway.com/b1pRwmzuGjqB30On?ref=Link).
Aaron’s third talk was on APL fonts. He discussed his dissatisfaction with the current state of APL fonts, arguing that they are fixed-width and appear either too casual or too dated. He made a plea for an open, free, and widely available set of modern fonts in serif, sans-serif, and fixed-width styles that can be used for presentations, academic publishing, and programming. He wants to take advantage of modern typographical technology to enable a more mathematical style of presentation that gives the viewer a feeling that what is presented is primarily mathematics, rather than code. During all his presentations, Aaron used the most beautiful cursive handwriting probably seen at any conference, except perhaps at a calligrapher’s convention.
Monday started with Joe Blaze, of APLNow and APLNext, who talked about the latest developments in his two interpreters. He first gave a brief history, going all the way back to STSC APL*PLUS. APL+Win, developed by APLNow, has improved SSE2 performance, code folding and outlining of APL source code, ultra-high resolution display support, the APLNext C# Script Engine, enhanced source code syntax colouring, and the APLNext Supervisor for application-level multi-threading. A neat new feature is an on-screen keyboard with virtual key regions, which allow the entering of characters by clicking on the appropriate quadrant of the virtual key, without needing modifier keys. Another useful new feature is the enhanced APL programmer GUI with scrollable regions containing output from APL executable statements, where session window scrolling is now programmer controlled. Multiple session ‘skins’ are available for the traditional format (results in-line), results in-line and scrollable, and results in a separate window from executable statements. The second interpreter, VisualAPL, developed by APLNext, is a .Net, OOP programming language like C#, VB.Net, etc., which emits Common Intermediate Language (CIL) code and can be interpreted or Microsoft JIT compiled at the programmer’s option. The compiled code can be deployed as interoperable .Net assemblies. VisualAPL has a 64-bit memory space. Development can be done in Visual Studio or the APLNext Cielo Explorer programmer GUI. C# code can be entered in-line with APL code. One can write traditional APL functions, .Net methods, lambda expressions and anonymous functions. Joe also outlined the issues of most importance to APL customers: performance, speed of processing, interfaces to non-APL tools and environments, continuity between versions.
Cory Skutt gave a brief presentation on parametrizing array systems. He said that it is possible to use a few system variables whose settings control whether the array system behaves like a nested system, a boxed system, or an array system from another language.
Ray Polivka and Joe Blaze led a joint session. Ray led a discussion on teaching and promoting APL. He said he had introduced APL to some students at Vassar College. He suggested that presentations at ACM groups might be helpful. Hackathons were also suggested. One question that was debated was what programming language children should be first introduced to. A number of people agreed that Python was a good choice. Joe then led a discussion on attracting new APL programmers.
Morten Kromberg gave a cool demonstration of DyaLegoBot – a robot made up of a Lego Mindstorms kit, with the control unit replaced by a BrickPi3, which contains a Raspberry Pi computer. The brick comes with a Python library. Dyalog APL was running on the Raspberry Pi, and, through Dyalog’s interface to Python, Morten was able to control the robot from APL. The robot was equipped with a colour sensor pointed at the floor and a swivelling infra-red sensor for obstacle detection. The robot successfully navigated through a maze, avoiding walls and backing out of dead ends, and stopped at the exit upon detecting a coloured paper on the floor.
On Tuesday morning, Morten gave a presentation on the second 50 years of APL. He said that one of the main directions of development for APL is going to be integration with different environments and languages, in addition to the implementation of new functions and operators. Morten’s outlook on the future of APL is optimistic.
Jay Foad described Dyalog’s experience of incorporating language features from J (e.g. the rank, key and stencil operators) into Dyalog APL, and his own struggle to understand the differences between the array systems used by the two languages, in particular the behaviour of the enclose and disclose functions. A lively discussion followed.
On Monday evening, Jon McGrew presented a proposal for an additional APL data type, called “Symbols.” He described the symbols data type within A+, which is Morgan Stanley’s in-house APL system. McGrew worked in A+ Development and Support at Morgan Stanley, and taught their A+ classes. Symbols take the form of `abc and are syntactically distinct from variables. He discussed some cases of potential code speed-ups and simplification, and instances of improved code readability using the symbol data type. He spoke highly of the benefits that this data type provides, and urged the developers of other APL systems to look at this and consider implementing it in their own systems, directing them to the [A+ Reference Manual](http://aplusdev.org) for further information.
On Tuesday morning, McGrew gave a brief presentation observing the fiftieth anniversary of APL with a tribute to Al Rose, an early APL educator. McGrew described the work that Al Rose did in the late-1960s by carrying a 120-pound “portable” 2741 paper terminal to customer sites to demonstrate APL on dial-up phone lines… and then McGrew unveiled the actual terminal that Rose lugged around with him, built into two heavy custom-built suitcases.
Jim Brown gave an entertaining and informative talk on his personal history with APL, going all the way back to the beginning of APL. Like many of us, he too was captivated by the power and economy of expression of APL. He had the privilege of working with the original design team of APL (Iverson, Falkoff, Breed, Lathwell). He later went on to design and implement APL2, and he spoke about how some of the design choices were made. Jim’s talk brought home the point that behind the formalism and rigour of a computer language there are warm-blooded human beings, whose passion and fallibility are reflected in their work. His talk can be found at [On APL’s History](http://aplwiki.com/OnAPLsHistory).
Stephen Mansour talked about taming statistics with TamStat – a statistical package using defined operators. TamStat leverages the power of APL to do many of the usual statistical calculations, and the use of operators gives it additional power and flexibility, while at the same time reducing the amount of code needed. TamStat also has an interface to R. TamStat can be found at [Taming Statistics with TamStat](http://www.tamstat.com).
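TamStat’s actual operator names are not reproduced here; purely as an illustration of the defined-operator idea, an operator in the APL sense corresponds to a higher-order function, taking a function (say, a distribution’s CDF) and deriving a related statistical function from it:

```python
from math import erf, sqrt

def normal_cdf(mu, sigma):
    # Returns the CDF of a normal distribution as a function of x.
    return lambda x: 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def tail(cdf):
    # "Operator": derives the upper-tail probability from any CDF.
    return lambda x: 1 - cdf(x)

p = tail(normal_cdf(0, 1))
print(round(p(1.96), 3))  # 0.025
```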
On Tuesday evening we had one of the highlights of the workshop in Stanley Jordan’s presentation on APL in 3D modelling and music production. Stanley has been using APL for many years for both creating music and for investigating various concepts of music theory, like modes and scales. He says that APL is a “music language”. He has developed a very intricate application, which itself is almost like a work of art, for musical experimentation and production. Stanley has also been involved in data sonification, i.e. associating sounds with data, which makes it possible for the human ear to recognize patterns that the eye may not so easily see. He has also gone in the other direction, by using his application to visualize certain aspects of music or of music theory. For example, he has produced a multidimensional, colour-coded diagram of the music modes (e.g. lydian, aeolian, etc.) and their relationships, which can be rotated to bring various modes into the foreground and explore them. At the end of his talk, Stanley treated us to his wonderful guitar playing.
Wednesday morning (the last day) started with a discussion session moderated by Joe Blaze and Steve Mansour on the state of APL today. Some strategies to increase familiarity with, and the use of, APL were suggested: present papers and demonstrations at the JSM statistics conference, create APL libraries that can be used from other languages, create YouTube videos, get involved in problem-solving sites, and take part in hackathons and maker fairs. It was also pointed out that most users are interested in solutions to their problems, rather than a specific language, and that most languages are available for free and have a large number of libraries, neither of which is generally true for APL. Python was mentioned as a good language not only for introducing people to programming, but also for being able to get a project off the ground quickly and do significant work. Python has an enormous, and growing, number of libraries, interfaces and specializations (e.g. NumPy), and it has even replaced Scheme as the language of the introductory computer science course at MIT. APL could learn something from Python, but Python could also learn something from APL, especially in the handling of arrays and economy of expression.
David Liebtag gave a brief presentation on his new start-up company, Naps International. In these days of hectic schedules and never enough time for sleeping, Naps has a dedicated crew to whom we can outsource our napping, thus freeing us to have even more hectic schedules. This was a brilliant, long-overdue idea. Now if somebody can come up with a way to outsource our eating, we might be able to solve the obesity problem. For more information, go to [Naps International](http://davidliebtag.name/naps/).
Finally, Larry Breed showed and spoke briefly about a number of APL memorabilia, among which was a large photographic plate of the Ulam prime number spiral, which was plotted with APL.
The workshop concluded with the announcement of the next one, which will take place Wednesday-Sunday, September 18-22, 2019.
References and links
- APL Projects (Smith) [http://sudleyplace.com/APL/]
- A programming language with infinite arrays (Shinkarov) [https://github.com/ashinkarov/heh]
- Data Layout Types (Shinkarov) [http://ashinkarov.github.io/publications/asv-thesis.pdf]
- APL Style: Patterns/Anti-patterns (Hsu) [https://sway.com/b1pRwmzuGjqB30On?ref=Link]
- A+ Reference Manual [http://aplusdev.org]
- On APL’s History (Brown) [http://aplwiki.com/OnAPLsHistory]
- Taming Statistics with TamStat (Mansour) [http://www.tamstat.com]
- Naps International (Liebtag) [http://davidliebtag.name/naps/]
The British APL Association promotes the vector programming languages derived from Iverson’s mathematical notation.