The 20th Tcl/Tk
Conference was held at the Bourbon Orleans Hotel in New Orleans, the
site of the first non-Berkeley Tcl/Tk conference.
Our special guest spoke about the history of Tcl and Tk and about
the features of the language that have led to its continued viability.
The keynote speaker was Karl Lehenbauer, the developer of TclX and co-founder
of FlightAware. Karl discussed how Tcl (Apache/Rivet) is used on the
FlightAware website, and described how FlightAware analyzes and reports an
incredible amount of information every second.
XML is the most popular standard for representing structured data as text. Yet XML has one major drawback: it is difficult for human beings to read and edit. Many simpler alternative data formats exist, but none of them has reached the global support XML enjoys. This paper instead proposes a simplified representation of XML data (SML for short), inspired by Tcl syntax, that is strictly equivalent to XML. SML data files are smaller and much easier for mere humans to work with. I'll present a Tcl script for doing completely reversible XML <-> SML conversions.
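The paper defines the actual SML syntax; purely to give the flavor of a Tcl-inspired mapping, a small XML fragment and a hypothetical SML rendering of it might look like this (the rendering below is a guess for illustration, not the paper's specification):

```
<!-- XML -->
<book id="42">
  <title>Practical Programming in Tcl and Tk</title>
  <author>Brent Welch</author>
</book>

# Hypothetical SML equivalent, in a Tcl-list style:
# attributes written inline, child elements in a braced body
book id=42 {
  title {Practical Programming in Tcl and Tk}
  author {Brent Welch}
}
```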
This paper describes the motivation for and implementation of two Web-oriented protocols for the Internet of Things. While building upon the ubiquity and flexibility of the Web, WebSockets and STOMP minimise headers to the benefit of payload, thus adapting to the scarce resources available in IPv6 mesh networks. The websocket library provides a high-level interface to open client connections and further exchange data, along with the building blocks necessary to upgrade incoming connections to a WebSocket in servers. The STOMP library offers a near-complete implementation of the latest STOMP specification, together with a simple broker for integration in existing applications.
I introduce a Tcl-based framework for the development of dynamic web sites. It was written as an attempt to get away from object-oriented web development paradigms, which have largely failed to deliver the advantages that dynamic scripting tools ought to provide. It leverages some of Tcl's unique features, such as hierarchical namespaces, module-based package management, and extensive call stack introspection, in order to provide a hierarchy of composable dynamic services in response to a URI, inspired by service-oriented programming techniques.
Performance drove the inception of the TyCL compiler: the possibility of running Tcl programs faster and with a smaller memory footprint. Achieving this required a series of trade-offs at the syntax and functionality levels, as well as the inclusion of new syntax and features. The question now is: how does TyCL try to improve the performance of such programs? In other words: how does it do it, and how well does it do it? This paper describes TyCL's internal processes and data structures and provides a basic preliminary performance analysis of the compiled programs it generates, compared against the Tcl VM/interpreter.
Performance analysis of applications written in a combination of a compiled language (e.g., C) and an interpreted language (e.g., Tcl) can be challenging. Sampling profilers record a program's call stack at regular intervals; an analysis of the addresses on the call stack can provide the C-language stack, but has no knowledge of the Tcl commands being executed at that instant. A Tcl-specific profiler can show usage of Tcl code, but doesn't include the C code that may have been used to implement functions accessed from Tcl.
We present an approach to combining compiled (C) and interpreted (Tcl) call stack information for profiling purposes. Our application (QuestaSim) has a built-in sampling profiler that unwinds the call stack at regular intervals. We have integrated Tcl call stack information to provide a more complete picture of program state at the sample times.
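On the script side, Tcl already exposes the interpreter's view of the call stack through [info frame] introspection. As a minimal sketch of the kind of information that can be merged with C-level samples (the paper's profiler gathers this inside its C sampling handler, not from script level):

```tcl
# Walk the Tcl-level call stack using [info frame] introspection.
# A script-level sketch only; a profiler would collect the same
# data from C at each sample point.
proc tclStack {} {
    set frames {}
    # [info frame] with no argument gives the current frame number;
    # walk outward from the caller's frame down to the outermost one.
    for {set i [expr {[info frame] - 1}]} {$i > 0} {incr i -1} {
        set f [info frame $i]
        if {[dict exists $f cmd]} {
            lappend frames [dict get $f cmd]
        }
    }
    return $frames
}
```

Calling tclStack from inside nested procs returns the commands active at that moment, which is exactly the information missing from a pure C-level sample.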
This paper describes the techniques used to identify the performance changes, and the modifications that improved thread behavior, in Tcl 8.5 compared to Tcl 8.4, as shown by Phillip Brooks in his paper 'Pulling Out All The Stops - Part II' last year.
TAO/TK is a high-level architecture for TclOO-based applications, currently used by T&E Solutions to power the Integrated Recovery Model. TAO is a dialect of TclOO which uses an embedded SQLite database to allow properties, class methods, and options to be inherited just like methods. TAO/TK is a set of base classes which provide a foundation for complex user interactions, megawidgets, and automated generation of data entry screens. This paper describes the basic usage of TAO/TK, as well as providing details on its inner workings.
A couple of years ago we decided that the application window used to display program source file text was inadequate in a number of ways. The basic problem was the performance of a few significant features: (1) keyword/syntax coloring, (2) inline annotations, (3) large files. After reviewing and testing a number of applications and widgets, we narrowed the field down to two options: customizing the Tk text widget, or porting Scintilla to Tk. This paper will discuss the requirements and implementation challenges with ScintillaTk.
A dynaform is a specification of a dynamic data entry form: one whose content and layout can change based on the input of the user. Consider a GUI for entering rules for filing e-mail messages into folders: each rule can have many different forms, depending on the desired criteria. The user must first select the kind of criterion; each criterion has its own set of parameters, and the user's choices for those parameters might result in a further series of choices. The dynaform mechanism consists of a little language for specifying such a set of related inputs, infrastructure for processing that language, and the dynaview widget, which can display any desired dynaform and accept user input.
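The abstract doesn't reproduce the little language itself; as a purely hypothetical sketch (all command and field names below are invented for illustration), a dynaform for the e-mail filing example might be specified along these lines:

```
# Hypothetical dynaform spec for an e-mail filing rule.
# Every name here is invented; the paper defines the real language.
dynaform define emailRule {
    selector criterion "File the message when" {
        case sender "the sender matches" {
            text pattern "Address pattern"
        }
        case subject "the subject contains" {
            text keyword "Keyword"
            selector match "using match type" {
                case exact "exact text" {}
                case glob  "a glob pattern" {}
            }
        }
    }
    enum folder "Move it to folder" -values {Inbox Archive Spam}
}
```

Selecting a different criterion would cause the dynaview widget to display that case's fields, which is what makes the form dynamic.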
The transition of nuclear and particle physics data acquisition systems to high-speed waveform digitization technology allows a great deal of simplification in electronics setups. There are trade-offs, however. So-called 'digital data acquisition systems' require higher data transmission bandwidths and can be challenging for the uninitiated to set up. This paper presents a software product being developed to address the first stages of setup. In this work the large and unfamiliar parameter space of the waveform digitizer is hidden behind a cognitive model that is familiar to most experimenters: that of a digital oscilloscope.
The cmdr (pronounced 'Commander') framework is a new package for the declarative definition of large and complex application command lines, spiritually derived from the CloudFoundry Ruby gem 'mothership'. The paper describes its internals and features, and how these enable the easy definition of complex hierarchies of nested commands with options, positional arguments and invisible state, complex validation, and (cross-argument) dependencies.
Public key cryptography, also known as asymmetric cryptography, is a mechanism by which a message transformed (encrypted or signed) with one key of a pair can only be reversed (decrypted or verified) with the related, but separate, other key: one key private, one public. The public key infrastructure (PKI), which supports the trust model by which two parties that have no previous knowledge of each other can verify each other's identity through trust anchors, is implemented using digital signatures made possible by public key cryptography. Being able to participate in PKI directly from Tcl scripts makes many interesting and useful applications possible. The PKI module in Tcllib implements the various PKI-related standards and algorithms, including PKCS#1, X.509, and RSA, to let Tcl scripts accomplish this goal. This paper aims to explain the high-level concepts related to PKI, describe the scope of the various standards and algorithms implemented, and explore the possibilities of PKI in Tcl.
Tcl/Tk 8.6 was announced at last year's conference to much fanfare. Despite all of the hard work on the part of the core team, however, much work still needs to be done by the community to support software deployed in the field. While 8.6 looks the same from the command line, there have been a lot of changes under the hood that developers need to be aware of, particularly with binary extensions to Tk. This paper will describe my experiences and lessons learned updating the TkHTML extension and the Canvas3d extension to operate under 8.6.
One of the key limitations of Tcl 8 is the use of a C int to specify lengths of various things (lists, strings, memory allocations) in arguments to Tcl C routines. The effect is that Tcl has size limits more constraining than those imposed by available memory on most modern systems. Escaping these constraints and having the full size_t indexed memory space for our Tcl values is a key requirement of the Tcl 9 interface.
One difficulty with this migration is that as the potential size of a structure is increased, the storage needed for an index value into the structure is also increased. For structures like trees or ranges or lists, the increased storage requirements for index values will be noticeable. This is analogous to the swelling of programs when moving from a 32-bit system to a 64-bit system. Since the pointers are fatter, the programs become fatter too.
Although we want Tcl 9 to have the ability to grow structures beyond the Tcl 8 limits, it will remain true that most scripts solving most problems will not need such space. For them, a simple-minded "int" to "size_t" search and replace conversion will swell them in size for no real benefit. Such a swelling might even be an impediment to Tcl's continued breadth of portability "to weird places like routers."
This presentation outlines selected data structure designs that attempt to tackle both halves of this scaling problem: data structures designed to grow through the full size_t space without artificial limitation, yet in ways that minimize the swelling compared to their Tcl 8 counterparts for most everyday scripts and programs. Featured in the presentation will be an examination of the traditional Tcl string growth algorithm, the adaptation to Brodnik arrays, and a scheme for making unlimited [expr] parse trees with an even more compact storage representation than is present in Tcl 8.
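The Brodnik arrays mentioned above are the resizable arrays of Brodnik et al., in which a flat index is mapped to a (block, offset) pair so that only O(sqrt n) slack space is ever allocated. The locate step of the published algorithm can be sketched in a few lines of Tcl (this is the textbook computation, not necessarily the exact scheme chosen for Tcl 9):

```tcl
# Map a flat 0-based index to a {block offset} pair, following the
# locate step of Brodnik et al.'s resizable arrays.  Superblock k
# holds 2^floor(k/2) blocks, each of size 2^ceil(k/2).
proc brodnikLocate {i} {
    set r [expr {$i + 1}]
    # k = index of the leading 1 bit of r (bit length minus one)
    set k 0
    while {(1 << ($k + 1)) <= $r} { incr k }
    set eBits [expr {($k + 1) / 2}]   ;# ceil(k/2): offset-in-block bits
    set bBits [expr {$k / 2}]         ;# floor(k/2): block-in-superblock bits
    set e [expr {$r & ((1 << $eBits) - 1)}]
    set b [expr {($r >> $eBits) & ((1 << $bBits) - 1)}]
    # number of data blocks preceding superblock k
    set p [expr {(1 << $eBits) + (1 << $bBits) - 2}]
    return [list [expr {$p + $b}] $e]
}
```

For example, index 7 lands at offset 0 of block 4, the first block of size 4; no block more than doubles its predecessor's size, which is what bounds the slack space.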
The Tcl interpreter has an evaluation strategy of parsing a script into a sequence of commands, and compiling each of those commands into a sequence of bytecodes that will produce the result of the command. I have made a number of extensions to the scope of commands that are handled this way over the years, but in 2012 I started looking at a new way to do the compilation, with an aim to eventually creating an "interpreter" suitable for Tcl 9. This paper will look at the changes made (some of which are present in 8.6.0) and the prospects for future directions.
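Not from the paper, but the compilation stage it describes can be observed directly in Tcl 8.6, which exposes the bytecode compiler's output through an explicitly unsupported introspection command:

```tcl
# Show the bytecode that Tcl 8.6's compiler emits for a small script.
# The command lives in tcl::unsupported because its output format is
# not a stable interface.
set bytecode [tcl::unsupported::disassemble script {
    set x 1
    incr x
}]
puts $bytecode
```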
Two case studies are described in which Tcl played a key role in integrating diverse detector data acquisition systems into a coherent, integrated data acquisition system. At the NSCL, Tcl and NSCLDAQ were used to integrate the recently upgraded S800 Spectrograph data acquisition system with that of GRETINA, a large segmented germanium gamma-ray tracking detector. At Argonne National Laboratory, the CHICO-II detector system, using a tailored version of NSCLDAQ, was integrated with the Digital Gammasphere detector system. The scope of the integration will be discussed, as well as the experience gained from campaigns with these systems.
A gofer value is a data value that tells the application how to retrieve a desired piece of data on demand, according to some rule, and a gofer type is the code that validates gofer values and retrieves the data on demand. The advantage of using a gofer is that the value returned can change over the lifetime of the gofer value as the state of the application changes. The gofer infrastructure allows the definition of gofer types consisting of many different rules, with support for GUI creation and editing of gofer values.
For example, the user may desire that a particular simulation input affect all civilian groups residing in a particular set of neighborhoods. Instead of listing the groups explicitly, the user chooses the relevant rule and the neighborhoods of interest; at each time step, the simulation can determine the groups that currently reside in the chosen neighborhoods.
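As a purely hypothetical sketch of that idea (rule and data names below are invented for illustration), a gofer value can be a dict naming a rule plus its parameters, evaluated on demand against current application state:

```tcl
# Hypothetical gofer sketch: the value stores the rule, not the result.
# All names and data here are invented for illustration.
set residents {
    N1 {G1 G2}
    N2 {G3}
    N3 {G4 G5}
}

# A gofer value: "the civilian groups residing in these neighborhoods"
set goferValue {rule RESIDING_IN nlist {N1 N3}}

proc goferEval {goferValue} {
    global residents
    switch -- [dict get $goferValue rule] {
        RESIDING_IN {
            set groups {}
            foreach n [dict get $goferValue nlist] {
                lappend groups {*}[dict get $residents $n]
            }
            return $groups
        }
        default {
            error "unknown gofer rule"
        }
    }
}

puts [goferEval $goferValue]
```

Because goferEval consults the residency data at call time, the same gofer value yields a different group list after the simulation moves groups between neighborhoods, which is the point of the mechanism.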
Bruce Ross, Clif Flynt
The second product to market is the one nobody hears about. Tcl/Tk provided the tools and techniques to take an idea from an entrepreneur's mind and turn it into a releasable product in under one man-year. The techniques used to merge multiple libraries and executables into a new GUI will be examined. The basic application and need, along with SWIG, tktest, SQLite, tabs, and cookit, will be discussed.
A Tcl bean is a TclOO class representing a simulation entity with associated data. The set of all beans can be saved and later restored. Beans can own other beans, and the bean infrastructure supports bean deletion with undo, including cascading deletion, and bean copy and paste. The paper describes the bean implementation, including the current constraints on bean classes, as well as lessons learned while coming to grips with the TclOO framework.