tag:blogger.com,1999:blog-43680625361104712712024-03-13T21:54:42.103-07:00reparrotLet's see if this works.Christoph Ottohttp://www.blogger.com/profile/05589658458274816699noreply@blogger.comBlogger12125tag:blogger.com,1999:blog-4368062536110471271.post-88739422846111614912013-02-15T23:51:00.000-08:002013-02-15T23:51:22.549-08:00It's Been QuietIt's been quiet. Too quiet.<br />
<br />
Interest in Parrot has waned over the past 18 months. The most recent flurry of activity happened when Allison Randal brought up the fact that The Parrot Foundation was in shambles and <a href="http://irclog.perlgeek.de/parrot/2013-02-09#i_6431975">suggested</a> shutting it down. This naturally brought up the state of Parrot itself and what the future holds for it, if anything. The situation is perhaps less than ideal. The short answer is that Parrot's immediate prospects are iffy at best, but there is at least one niche where Parrot still has a chance to shine.<br />
<br />
The surface problem with Parrot is that there’s a lack of people who can find the tuits to hack on it these days. Different people have their own analyses as to why this is happening. My best answer is that Parrot doesn’t have a compelling value proposition. Hosting every dynamic language was pretty revolutionary around the time Parrot was started more than a decade ago. Today that’s no longer the case, and the bigger language runtimes like the JVM, CLR and JavaScript (not a VM but a very popular compilation target) can run circles around Parrot on most of the axes that matter.<br />
<br />
Those of us who care about Parrot need to find a way to make it matter and to do so quickly.<br />
<br />
Rakudo is the current most complete and active language implementation that runs on Parrot, and even *it* is moving toward running on many backends. Parrot’s best bet is to focus exclusively on supporting Rakudo and give it a reason to stick around. If supporting all dynamic languages was ever a good idea for Parrot, that’s no longer the case. The reality of Parrot’s effective niche has become much harder to ignore. The best move is to adapt accordingly.<br />
<br />
Parrot has been inactive (among many reasons) because its developers can see that the goal of hosting all dynamic languages isn’t realistically attainable given Parrot's current resources. With a new and more tightly defined plan, Parrot has a fighting chance to find a useful niche.<br />
<br />
Parrot's new niche and reason for existence needs to be to support Rakudo and nqp until those languages either fail, succeed, or have no further use for Parrot.<br />
<br />
This will be a liberating shift for Parrot. The official policy is now “make nqp and Rakudo better”. Within that constraint, any change is welcome. In a bit more detail, the two goals by which any potential change should be judged are:<br />
<br />
1) Does it provide a benefit to Rakudo, especially a *measurable* *non-theoretical* benefit?<br />
<br />
If a change makes Rakudo happy, sold! This includes requested features, optimizations, bug fixes and the like. This is *the* primary concern and the best way to provide value to nqp and Rakudo.<br />
<br />
2) Does it make Parrot’s code simpler without increasing complexity elsewhere?<br />
<br />
Simplifying Parrot is valuable, but only in a much more indirect way. This goal is a distant second in importance to performance improvements. That said, simplifying Parrot is still helpful. Some of Parrot’s problems come from the decade of accumulated cruft. A simpler Parrot is more approachable and easier to profile, maintain and debug. Simplicity should be pursued as long as that simplicity doesn't mean shuffling complexity elsewhere and *especially* if the simplification comes with a performance bump.<br />
<br />
That’s all there is to it. With simple and immediate rules rather than a slow and deliberate deprecation policy, half-done features that were kept around for years “just in case” can safely be removed.<br />
<br />
Another implication of all this is that our deprecation and support policies are going away. They were well-intentioned but better suited to a project in a much more mature and stable state. Our new support policy is “we’ll try to fix bugs and keep nqp running”. We’ll continue to make monthly releases, but they will not be labelled as “supported” or “developer” as in the past.<br />
<br />
Observers of Parrot will note by now that this isn’t the first time that Parrot has tried something radical. This isn’t even the first time that *I’ve* tried something radical. What's different this time is that we’re no longer trying to be all things to all languages; we’re trying to be one thing to one language that’s already our customer. This will still involve a ton of work, but the scope reduction shrinks the task from Herculean to merely daunting.<br />
<br />
So here’s where you, the reader, come in. Whether you’ve hacked on Parrot in the past or came for the lulz and accidentally got interested, you can help. The big goals are to make Parrot (and by extension nqp and Rakudo) smaller and faster. Below are a few specific ways you can help. Whatever you do though, don't make any changes that will be detrimental to nqp and Rakudo, and coordinate any backwards-incompatible changes before they get merged into Parrot master.<br />
<br />
Grab a clone of <a href="https://github.com/parrot/parrot">Parrot</a> and <a href="https://github.com/perl6/nqp/">nqp</a>. Build and install them. Play with the <a href="https://github.com/parrot/parrot/tree/sixparrot">sixparrot</a> branch, where some initial work is already in progress. Already there? Great! The next steps are a little harder.<br />
<br />
Remove code paths that nqp doesn’t exercise. This can mean single if statements or whole sections of the source tree. Tests are the same as code; if nqp’s and Rakudo’s tests don’t exercise them, out they go. Tests exist to increase inertia, but they are only useful to the degree that they test useful features. When in doubt, either ask in #parrot or just rip it out and see what happens.<br />
<br />
Relatedly, profile and optimize for nqp. If you like C, break out valgrind, build out a useful benchmark and see how fast you can make it run. If you find some code that doesn’t seem to be doing anything, you’ve just found an optimization!<br />
<br />
Learn nqp and Perl 6. There’s been a lack of tribal knowledge about nqp’s inner workings ever since Parrot started distancing itself from Rakudo. We need to reverse that tendency so that nqp is regarded as an extension of Parrot.<br />
<br />
Overall, the next few months will be interesting. I don't know if they'll result in success for Parrot, but I'm willing to give it one more shot.Christoph Ottohttp://www.blogger.com/profile/05589658458274816699noreply@blogger.com2tag:blogger.com,1999:blog-4368062536110471271.post-48653725412876161842011-07-30T11:44:00.000-07:002011-08-01T20:04:14.875-07:00M0 Roadmap Goals for Q3 2011M0 has been coming down the pipeline for several months. It's still pretty raw and has a number of known functionality holes, but it's getting better by the week. I'd like to make the next few stages of M0 part of our official roadmap, so this post spells out the overall plan and what I think we can accomplish in the next three months.<br /><br />M0 currently exists as a fairly hacky Perl 5 prototype. This is of necessity because Perl isn't generally intended to operate at the level that M0 requires. Perl is still serviceable as a prototype implementation language, but the form that will be integrated into Parrot will be written in C. There will be many stages between now and when the M0 migration is complete, but the goal I'll focus on is noop integration. I'll explain what I mean by that below.<br /><br />I see Parrot's migration to M0 falling into eight stages:<br /><br /><span style="font-size:130%;">M0 Prototype</span><br /><br />We're working out bugs in the Perl 5 M0 interpreter and making certain that M0 will be a sufficient foundation for Parrot. M0 may change significantly but we're making an effort to stabilize it.<br /><br /><span style="font-size:130%;">C89 Implementation</span><br /><br />We're happy with M0 and have a reasonably efficient compiler-agnostic implementation of M0, written in C89, which passes all tests. Separate compiler-specific implementations are fine, but not a priority.<br /><br /><span style="font-size:130%;">Noop Integration</span><br /><br />C/M0 is linked into libparrot and exposes an interface that C code can use to call into M0 code. 
At this point no subsystems have been reimplemented in M0.<br /><br /><span style="font-size:130%;">Mole</span><br /><br />We specify and implement Mole, which will be a C-family language that compiles directly to M0. Writing M0 is painful (this was an explicit design goal), so Mole is what a large chunk of the M0 code that implements Parrot will be written in. M0 bytecode is what Parrot will actually run, so other code generation possibilities exist.<br /><br /><span style="font-size:130%;">Early Integration</span><br /><br />We've started moving subsystems over to M0. The order in which subsystems will be converted hasn't been determined yet, but producing a complete list and making sure we're aware of the dependencies will prove important.<br /><br /><span style="font-size:130%;">C/6model in Core</span><br /><br />The lack of a solid implementation of 6model in core will eventually become a blocker. Implementing our current object semantics in M0, only to switch to 6model later, isn't a wise use of our hackers' tuits.<br /><br /><span style="font-size:130%;">Pervasive Integration</span><br /><br />At this point, everyone can jump in. We have a couple major subsystems converted and have worked most of the kinks out of the process of translating C into M0. We'll be converting every subsystem that we can find to M0 and will have plenty of example code and documentation to lower the barrier to entry.<br /><br /><span style="font-size:130%;">Complete Integration</span><br /><br />Parrot has a fairly small core of C code consisting of little more than the M0 VM and the GC.<br /><br /><br />Committing to a timeline can be tricky. It's much more important to have an M0 that's thoroughly well thought-out than one that's usable by a certain date. That said, the M0 spec and prototype are coming along nicely. Completing the "Noop Integration" stage and possibly getting a solid Mole compiler by the 3.9 release are reasonable goals, depending on how many interested parties make themselves known. 
I'm happy to see that whiteknight has made C/6model one of his roadmap goals. C/6Model in Core is largely orthogonal to M0 except that it needs to be integrated and solid before we start translating Parrot's object-related C code into Mole.Christoph Ottohttp://www.blogger.com/profile/05589658458274816699noreply@blogger.com0tag:blogger.com,1999:blog-4368062536110471271.post-64514464267371243042011-07-20T00:06:00.000-07:002011-07-20T00:06:08.402-07:00When Interpreters CollideNote: this post is about implementing an <a href="http://www.modernperlbooks.com/mt/2011/07/less-magic-less-c-a-faster-parrot.html">M0 interpreter</a> in Perl and is more a lightly edited braindump than a polished presentation of a concept.<br />
<br />
Recently some test failures in M0's test suite revealed that the prototype Perl interpreter had been sneaking some of its perl-nature into the implementation. The M0 assembler had been storing all values as strings and the interpreter had been secretly using its perlishness to convert the number-like values into ints at runtime. This doesn't work well for an M0 implementation because M0 needs to be very specific about the low-level behavior of an implementation and the way it treats registers.<br />
<br />
Perl is not C, and the basic problem I'm running into is that Perl is not designed to operate at the low level that M0 (as it currently stands) requires. M0 is all about bytes and assigning meaning to the value in a register by using certain classes of ops on it. Perl is much higher-level and doesn't even have a particularly strong distinction between strings and integer values. If I want Perl strings to have byte-oriented C-like semantics, it means that I'll be widely (ab)using the bytes pragma and pack/unpack. This is doable, but it's also torturing Perl into implementing something even further from its intended use case than the current (and subtly-incorrect) M0 implementation already is. sorear rightly freaked out when he looked at the M0 interp code, because it's doing something that Perl wasn't intended to do and something that Perl isn't particularly well-suited to.<br />
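To make the mismatch concrete, here's a small illustrative snippet (invented for this post, not actual M0 interpreter code) contrasting Perl's automatic numification with the explicit pack/unpack style that C-like register semantics require:

```perl
#!/usr/bin/perl
# Illustrative only -- not real M0 interpreter code. The toy "register"
# below is invented for this example.
use strict;
use warnings;

# Perl's perlishness: a number-like string silently becomes a number.
my $reg = "42";          # assembler stored a number-like string
my $sum = $reg + 1;      # 43 -- no explicit conversion anywhere

# For C-like semantics, pin values down with pack/unpack and make the
# wrap-around explicit, as a C uint32_t would behave.
my $bytes   = pack 'V', 0xFFFFFFFF;   # 4 little-endian bytes
my $val     = unpack 'V', $bytes;
my $wrapped = ($val + 1) % 2**32;     # 0, like unsigned overflow in C

print "$sum $wrapped\n";
```

The pack/unpack round trip is what makes byte order and overflow behavior explicit instead of accidental, which is exactly the specificity the M0 spec demands of an implementation.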
<br />
Still, JavaScript has been used to emulate at least the x86, 6502, Z80 and 5A22, with surprisingly reasonable performance. Arguably that's also pretty far from JavaScript's intended use case, and still it works. This may just be an issue of finding the least hacky way to do something inherently very hacky.<br />
<br />
The alternative is to specify M0 to have flexible underlying semantics, but I don't know that it'd be either practical or advisable to go too far down this road. It's worth giving some thought to making the M0 spec minimally unnatural to implement in a high-level language, but M0 is by its nature a low-level beast. Implementations are bound to reflect that to some degree.<br />
<br />
In the end, the best way forward will probably be to plow through the craziness of implementing a simplified CPU in Perl and look forward to building on chromatic's C implementation, where the intent of the implementation language is much closer to the aim of the project.Christoph Ottohttp://www.blogger.com/profile/05589658458274816699noreply@blogger.com1tag:blogger.com,1999:blog-4368062536110471271.post-42076348077239961452011-07-03T13:24:00.000-07:002011-07-05T13:23:09.535-07:00Parrot Weekly News for July 3rd, 2011Welcome to the first edition of PWN. At YAPC::NA, long-time developer chromatic expressed frustration at the fact that Parrot as a community hasn't been effective in communicating the knowledge of its members. IRC, while great for immediate communication, doesn't lend itself to transparency for those who don't have time to hang out on #parrot 24/7 or to follow our irc logs. My hope for this newsletter is to make Parrot's development more transparent, even for those who have only an hour or two per week to keep up with Parrot. I also hope that this will serve as a common channel of communication for all Parrot developers in order to provide a basic understanding of what's been happening in Parrot and what's needed.<br />
<br />
<span style="font-size: large;">YAPC::NA</span><br />
<br />
The past week contained YAPC::NA, a grassroots Perl conference organized by the Perl community for the Perl community. There were three Parrot-related talks given by kid51, dukeleto and me, and one Perl 6 talk given by colomon. There was also a well-attended Parrot/Perl6 BoF session on Tuesday and a hackathon on Thursday. The hackathon was largely focused on coding and didn't generate significant directed discussion.<br />
<br />
<span style="font-size: large;">kid51's 10 Questions</span><br />
<br />
kid51 had a short talk in which he raised a number of important questions about OSS projects in general. He then proceeded to apply those questions to Parrot, with less than stellar results. He made some good points, particularly that Parrot needs to become production-ready before it can be considered a true success, that Parrot needs to have a better-defined purpose and focus, and that the project needs to "get to the point". Asking tough questions isn't usually fun, but kid51 did Parrot a great service by honestly and directly pointing out some of the flaws of our community. I hope his feedback will lead to positive changes in the way we look at ourselves and the products we're producing.<br />
<br />
kid51's slides and a recording of his talk are <a href="http://lists.parrot.org/pipermail/parrot-dev/2011-June/005960.html">here.</a><br />
<br />
<span style="font-size: large;">dukeleto's Visual Introduction to Parrot</span><br />
<br />
dukeleto presented an introduction to the world of Parrot. His intent was to give Parrot newbies a high-level overview of Parrot, its community and its ecosystem. It was lighter in content due to being targeted toward less experienced audiences. Nevertheless, it was an entertaining talk for people who already knew Parrot and provided a novel metaphor for understanding VTABLEs. Once we're based on 6model, I look forward to seeing what kind of metaphor he comes up with.<br />
<br />
dukeleto's slides are <a href="http://www.yapc2011.us/yn2011/talk/3303">here</a>.<br />
<br />
<span style="font-size: large;">cotto's State of Parrot</span><br />
<br />
I presented a talk on the state of Parrot just after dukeleto's talk. I covered developments in Parrot over the past year, some of the issues we need to deal with and what we expect the future to hold. The short version is that there are a number of problems that are keeping Parrot from realizing its potential, but I think we have it within ourselves to overcome them and to produce an exciting production-ready virtual machine with some novel and useful properties.<br />
<br />
My slides are <a href="http://www.yapc2011.us/yn2011/talk/3311">here</a>.<br />
<br />
<span style="font-size: large;">colomon's Numerics in Perl 6</span><br />
<br />
colomon gave a worthwhile talk about performing numerical calculations in Perl6, both in Rakudo and Niecza (pronounced "niecha"). The talk was a good display of how people are using code that's built on top of Parrot and Rakudo. As with all beta software, there were places where colomon ran into holes in the implementations of both Niecza and Rakudo, but the talk was hopeful and made me proud to be a Parrot hacker.<br />
<br />
His slides are <a href="http://www.harmonyware.com/perl/p6numerics/">here</a>. <br />
<br />
<span style="font-size: large;">Parrot/Perl6 BoF</span><br />
<br />
The Perl6 and Parrot BoF session was considerably more organization-focused than most attendees were expecting. Although the majority of attendees were from Parrot, Perl 6 (Larry Wall) and Rakudo (colomon) were also represented. A primary point was that Parrot needs to get better at communicating communal knowledge among its members and users.<br />
<br />
Someone also suggested an intriguing way of reframing participation in Parrot. Many of us developers work to scratch our own itches, but the question "What would you be doing if the Parrot Foundation were paying you a salary?" provided a new way to look at how we manage Parrot and spawned a <a href="http://lists.parrot.org/pipermail/parrot-dev/2011-June/005963.html">couple</a> <a href="http://lists.parrot.org/pipermail/parrot-dev/2011-June/005965.html">threads</a> on parrot-dev. For my part, this question provided the motivation for putting together this newsletter. I hope it will also provide a motivation for all developers to take a more complete view of Parrot.<br />
<br />
<span style="font-size: large;">Room For Improvement</span><br />
<br />
In this section of the newsletter, I will highlight areas of Parrot that are ripe for optimization. Due to YAPC::NA this newsletter is already filling up quickly, so I'll highlight just one area.<br />
<br />
config_lib.pir creates a hash that contains all data picked up by Configure.pl during configuration. It has more than 250 entries, the majority of which don't provide any useful information. Figuring out which entries in the hash are necessary and removing all the rest will help trim Parrot's startup time and make parrot_config a bit easier to sort through. If you're interested in this, drop by #parrot or parrot-dev and chances are good that someone will be able to put you to work.<br />
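As a sketch of what the trimming might look like (the key names below are made up for illustration; they are not the actual contents of the config hash), the idea is to reduce the hash to a whitelist of entries that something actually reads:

```perl
#!/usr/bin/perl
# Hypothetical sketch: whittling a config hash down to a whitelist of keys.
# Key names here are invented, not real parrot_config entries.
use strict;
use warnings;

my %config = (
    VERSION      => '3.4.0',
    prefix       => '/usr/local',
    osname       => 'linux',
    old_test_var => 'unused',     # candidates for removal
    build_cruft  => 'unused',
);

# Keep only the entries something actually consumes; drop the rest.
my @needed  = qw(VERSION prefix osname);
my %trimmed = map { $_ => $config{$_} } @needed;

printf "kept %d of %d entries\n", scalar keys %trimmed, scalar keys %config;
```

The real work, of course, is in figuring out which of the 250-plus entries belong on the whitelist.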
<br />
Other possible areas for optimization are listed on the following pages on our wiki.<br />
<a href="http://trac.parrot.org/parrot/wiki/PerformanceImprovements">http://trac.parrot.org/parrot/wiki/PerformanceImprovements</a><br />
<a href="http://trac.parrot.org/parrot/wiki/chromaticTasks">http://trac.parrot.org/parrot/wiki/chromaticTasks</a><br />
<a href="http://trac.parrot.org/parrot/wiki/PCCPerformanceImprovements">http://trac.parrot.org/parrot/wiki/PCCPerformanceImprovements</a><br />
<br />
<span style="font-size: large;">Submitting</span><br />
<br />
If you see an interesting conversation on either #parrot, parrot-dev or #perl6, please mark it by saying "PWN". When preparing this newsletter, I'll search through <a href="http://irclog.perlgeek.de/parrot/today">irclog</a> (moritz++) for any mentions of "PWN" and add a summary of the conversation to the next edition of PWN.Christoph Ottohttp://www.blogger.com/profile/05589658458274816699noreply@blogger.com0tag:blogger.com,1999:blog-4368062536110471271.post-10268161086990680922011-05-15T14:03:00.000-07:002011-05-15T19:29:00.580-07:00Thoughts on the PDSA number of useful conclusions and targets came from the Q2 2011 Parrot Developers Summit that happened yesterday. This post will contain a summary of the event and my take on what we'll be doing as a result. Props go out to kid51 for organizing an agenda for the meeting and keeping us more-or-less in line. Strict organization isn't vital for an irc meeting, but he did a good job of making sure that our limited time was used effectively.<br />
<br />
We started out reviewing the state of our previous roadmap goals.<br />
<br />
The Deprecations-as-Data goal was substantially met. I love this goal because it has potential to make life easier for our users (especially Rakudo) by expressly delineating what features are going to need upgrading. A recent issue with nci and the 't' type demonstrates that we still have more room for improvement. (pmichaud and whiteknight discussed a proposed solution after the meeting, but it needs a little experimentation first.) My hope for data-based deprecations is that we end up with a better early warning system that alerts Parrot's users and gets discussions started before things break horribly. pmichaud's concern was that the web tends toward passivity and that what's needed is active notification of pending and actual removals. I think this will be a boon.<br />
<br />
whiteknight's IMCC Isolation goal is making excellent progress. pmichaud commented that it's had no negative impact on Rakudo's development, which is impressive given its scope and invasiveness. IMCC isn't yet an optional component, but it's quite possible to run libparrot without initializing IMCC at all. Excising it completely is quickly becoming a possibility. whiteknight has been doing a bang-up job and isn't showing any signs of slowing down.<br />
<br />
The third goal, which dukeleto and I have been working on, is getting M0 prototyped. dukeleto's working on the assembler and I've got the interpreter, both being written in Perl 5 with the binary M0 format (".m0b") being the only interaction between them. The punchline is that the interpreter is fully-implemented with stubs for all ops and the assembler is a couple weeks from being usable, depending on duke's tuits. On the one hand I'm a little disappointed that we don't have a fully usable prototype, but it is what it is. Even once both prototypes are "complete", there are several questions we need to get together with allison and/or chromatic to answer. Our M0 plan is to get the prototypes as complete as we know how and to have another meeting where we get all our questions answered, possibly even hacking the last few needed bits into the prototypes as we meet.<br />
<br />
Once we moved away from the retrospective, pmichaud quickly asked what Parrot's plans were concerning Rakudo. He specifically asked if Rakudo should consider itself officially blessed in developing against master rather than a release (we said "yes"), and if we planned to use Rakudo for regular benchmarking. This second concern is especially important because Rakudo has seen some significant performance regressions in the last couple months, in spite of the introduction of the new generational mark & sweep GC. The expectation is that regular performance testing would have brought this to light sooner and that once it's in place, we'll be more conscious of how our changes affect Rakudo's performance. We've had a distinct lack of benchmarking in the last few months. I hope this is the first of many attempts to revitalize our efforts to improve performance.<br />
<br />
On the same note, Codespeed (which runs speed.pypy.org) was mentioned as a possibility. I remember mentioning this in the past without effect, but hopefully the time was right at PDS. We didn't formally ask for someone to investigate it though. I hope it doesn't get dropped on the floor again.<br />
<br />
The next PDS was scheduled for July 30th or 31st, which seems comfortably far away from any known conferences. whiteknight volunteered to set up a Doodle, which is proving to be a very handy tool for scheduling these things.<br />
<br />
The next topic to come up was profiling. While working on Rakudo, pmichaud hacked out a very quick and dirty sub-level profiler that immediately pointed out an important hotspot. This indicated to me that we need to up the game of the profiling tools that we provide as part of Parrot. whiteknight and I were on the same page, so one of our new roadmap goals is to dig into the current profiling runcore, find out what's keeping it from being useful and fix it. It currently depends on IMCC to get its information about the currently running code, so there's potential for much yak-shaving. On paper the goal is only to investigate. I hope we can get much more done. I love providing useful tools to people, so I'm glad to have a chance to redeem the profiling runcore. Unfortunately having whiteknight work on profiling will mean that he won't be spending as much time figuring out how to apply 6model to Parrot, but that's what it means to have priorities.<br />
<br />
A third concern was raised by pmichaud, who said that it's difficult to gauge what Parrot's leadership thinks about certain issues. One of the triggers in this case was my rather foolish removal of the initialization of Parrot's PRNG (pseudo-random number generation) using the system clock. At the time Peter Lobsinger made the reasonable-sounding argument that there's no single way to correctly do PRNG that will satisfy the needs of every possible use case. After too little thought, I decided to interpret that as meaning that it didn't matter that I'd changed Parrot's PRNG behavior because Rakudo should be doing what makes sense for them. This ended up being a bad idea that caused some pain for Rakudo, and while I eventually reinstated PRNG initialization from the system clock and later from the system entropy pool, it showed the need for a better-delineated interface to gather opinions from Parrot's developers as a whole. To this end, whiteknight and I will serve as ombudsmen of sorts for when technical decisions end up harming users and need to be appealed. I don't think we'll need to put on our ombudsmen hats often, but we'll be glad to have them when we do.<br />
<br />
Breaks in compatibility are inevitable, but what whiteknight and I hope to achieve as ombudsmen is to make sure that users have a respectful ear and will get fair consideration for their problems. A disconnect between the needs of our users and our goals is very unhealthy and can only harm both parties.<br />
<br />
Overall, it felt like a very productive and well-organized discussion. pmichaud did a great job of representing Rakudo's concerns and I think that the coming months will see several improvements in Parrot's process and tools to make it a better platform for Rakudo to build on.Christoph Ottohttp://www.blogger.com/profile/05589658458274816699noreply@blogger.com0tag:blogger.com,1999:blog-4368062536110471271.post-40620113709593002922011-05-01T10:44:00.000-07:002011-05-01T10:44:08.212-07:00M0ving Forwarddukeleto and I shared a hotel room at LinuxFestNorthwest and had a great opportunity to talk about M0 after our respective talks. We went over the state of the spec and what the best way forward might be. We also tried to look at what the future M0-based Parrot workflow will look like and how we can get there, though we got distracted before the crystal ball was delivered.<br />
<br />
First, dukeleto mentioned that M0 is less discoverable than it needs to be, especially for a project that we expect to become Parrot's new foundation. He suggested that we write a document that someone can read to get a clear 10,000 foot view of M0 and how its pieces fit together, a glossy brochure of sorts. This could be either an introductory section in the M0 spec or a separate document. The important thing is to have something we can point people at so that dukeleto and I aren't the only ones who can readily articulate what M0 is and where M0 is headed.<br />
<br />
We also made some updates to the spec to make getting values from the variables table less confusing. This is fairly minor in the scheme of things, but so is Perl's "say".<br />
<br />
Last of all, we hammered out a plan for how to get a working M0 prototype assembler and interpreter.<br />
<br />
atrodo has been very valuable in providing his prototype Lorito implementation, both in his documentation and in the way he's had to bring assumptions to the surface to get a runnable interpreter. His implementation differs from the spec in a number of ways (many of which are because it predates the spec), but it's been helpful in those places because it shows us what we want by counterexample. The next (brief) stage was a set of prototype PIR dynops of M0 I hacked together. This was great to get some runnable code that was close to the spec, but it very quickly ran into the impedance mismatch between the high level of PIR and the low level of M0. The effort on the m0 prototype dynops wasn't wasted, but they've reached the limit of their usefulness.<br />
<br />
The next step we've decided to take is to implement a separate prototype M0 assembler and interpreter. dukeleto will be working on the assembler and I'll do the interpreter, both based on the M0 spec in the m0-spec branch on GitHub. The only interface between the two will be M0's binary representation, so we can easily change one without needing to modify the other. We're trying to converge on the structure of both the interpreter and assembler, but we expect this to be the last prototype rather than a final implementation. We'll also be writing tests against both the interpreter and assembler which we can later use against any future implementations.<br />
<br />
dukeleto has started hacking in the m0-prototype branch in src/m0 and managed to get some very basic tests passing before he went to sleep. We'll both be using Perl 5.10 as an expedient, since we don't expect these projects to serve as more than prototypes. As a temporary measure one of us will need to hand-generate a couple simple bytecode files to verify that the assembler is working correctly. These files will live in t/m0 in the branch. The test code will be a minimal hello world program and a slightly more complex multi-chunk M0 program to help iron out inter-chunk interaction. We haven't decided on what the complex example will be yet. This is a part of the spec we'll need to work on as we come to understand what implementation makes the most sense.<br />
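A hand-generated bytecode file of the sort described above can be put together with pack. The layout below (a four-byte magic string, an op count, then 32-bit ops) is invented for illustration and is not the real .m0b format from the M0 spec:

```perl
#!/usr/bin/perl
# Toy example of hand-generating a binary "bytecode" file and reading it
# back. The layout (magic, op count, 32-bit ops) is made up; consult the
# M0 spec for the real .m0b format.
use strict;
use warnings;
use File::Temp qw(tempfile);

my @ops = (0x01020304, 0xdeadbeef);           # two fake 32-bit ops
my ($fh, $file) = tempfile(UNLINK => 1);
binmode $fh;
print {$fh} pack('a4 V V*', '.m0b', scalar @ops, @ops);
close $fh;

# Read it back and verify the round trip.
open my $in, '<:raw', $file or die "open: $!";
read $in, my $buf, -s $file or die "read: $!";
close $in;

my ($magic, $count, @got) = unpack 'a4 V V*', $buf;
die "bad magic\n" unless $magic eq '.m0b';
printf "%d ops, round-trip %s\n", $count,
    ("@got" eq "@ops" ? 'ok' : 'FAILED');
```

Keeping the writer and reader as mirror-image pack/unpack templates is also a handy way to keep an assembler and interpreter agreeing on a binary format while it's still in flux.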
<br />
Overall, rooming together at LinuxFestNorthwest has been very helpful in moving M0 forward. Both of us have used the opportunity to bounce ideas off each other and to get the M0 train out of the station. We're still a couple stages (and probably one more face-to-face meeting with allison and/or chromatic) away from a final implementation, but we can see the light at the end of the igloo, and it's looking pretty good.<br />
<br />
There are a couple of things that still need to get done. In the interest of keeping them from getting dropped on the floor, they are:<br />
<br />
<ul><li>Map out what a future M0 workflow will look like and what we need to do now to make it possible.</li>
<li>Make M0's roadmap and status more discoverable by making a glossy brochure that will communicate the idea effectively to someone who hasn't heard of M0 before.</li>
</ul>Christoph Ottohttp://www.blogger.com/profile/05589658458274816699noreply@blogger.com0tag:blogger.com,1999:blog-4368062536110471271.post-46514710700304762772011-01-10T00:23:00.000-08:002011-01-10T00:23:39.121-08:00Thanks, Code-In students!This post is intended for students who've participated in this year's <a href="http://socghop.appspot.com/">Google Code-In</a>, specifically for those who worked on some of the tasks for <a href="http://parrot.org/">Parrot</a>. If you worked on another project, this post is still for you. Just ignore the Parroty bits.<br />
<br />
When we Parrot developers first decided that Parrot would be participating in Google's new Code-In program (gci), I was quite skeptical. Most of our initial tasks were for translations, and many didn't seem to me like they'd help Parrot as a project, especially since Code-In was a new (and untested) initiative. If you'd asked me what I thought before the start of gci, I'd have said that I had low expectations but would be glad to be proven wrong.<br />
<br />
I'm glad to say that the amount and quality of the contributions we've received from gci students have proven me very wrong. We've had a few low-quality results, but the large majority have been of excellent quality. Over the course of gci, we've added thousands of lines of tests and code, squashed lots of bugs and had several reported, and increased our test coverage by about 3.5%, all of which represents a great deal of work for a large project like Parrot. As gci progressed, we've even been able to bump up the difficulty of our "difficult"-rated tasks substantially to challenge our most ambitious students. Parrot is much better off because of the efforts of all of you.<br />
<br />
But gci itself isn't what this post is about. Now that gci is over, you students will have the opportunity to continue hacking on OSS projects such as Parrot, but you won't be doing it for the artificial currency that Google has been kind enough to create. If you continue, you'll be working for the same reasons any other developer hacks on an OSS project: for scratching an itch, for the excitement of having people use something you've helped build, and for the ability to contribute to something useful that's much bigger than anything one person could create.<br />
<br />
The Parrot project will welcome your contributions, as I'm sure any other gci projects will also do. Google gave you a motivation to get over the initial hump of finding a project and figuring out a couple of accessible things to contribute to, but now it'll be your job to keep going. Much of OSS development happens because people are scratching their itches*. I have mine**, and I hope some of you gci students will find your own itches to scratch too. Along the way you'll run into all kinds of roadblocks, from broken libraries to half-assed implementations to outright lies in documentation, but those are just some of the hazards of building something new. The best you can do is shave the requisite yaks*** so the road won't be as bumpy for the next hacker, and get back on track to making something awesome.<br />
<br />
I hope to see all of you continue to make contributions to Parrot after the end of gci. Your incentives will be different from now on, but they'll also become much more exciting. If you're interested and don't know quite what you want to do, we'll always try to help you find something awesome to keep you busy. Please stick around and keep on hacking!<br />
Thanks,<br />
<br />
Christoph Otto<br />
Architect, Parrot VM<br />
<br />
<br />
<br />
* This includes corporate-sponsored OSS development, where you get hired to scratch an itch. gci and OSS experience looks good when companies search for these kinds of people.<br />
<br />
** My personal itch is to make a PHP interpreter on Parrot that can interoperate with other Parrot-based languages. Yeah, it's a big itch.<br />
<br />
*** Yak shaving means solving problems to solve problems to solve problems, etc. You may end up doing a lot of that in Parrot since it's more meta than many projects. Christoph Ottohttp://www.blogger.com/profile/05589658458274816699noreply@blogger.com1tag:blogger.com,1999:blog-4368062536110471271.post-28070846102314165982010-12-14T23:51:00.000-08:002010-12-16T23:20:01.099-08:00Notes from the Lorito Braindump - ContextsLast Thursday, allison, chromatic, dukeleto and I met to discuss the direction that Lorito was taking and to try to get as much as we could out of chromatic's head and into the wider world. As it turns out, we came up with some significant changes in the design of Lorito as an interpreter, but I think they'll end up being quite beneficial once they solidify a bit. The following summary is a bit less warty and incomplete than the rough notes I nopasted to #parrot as soon as I'd typed them up after the meeting, but there are still a number of unanswered questions. I'll recap these at the end.<br />
<br />
<span style="font-size: large;">Terminology</span><br />
<br />
<b>M0</b> - Lorito ops. Think of it in terms of magic: M0 has no magic, i.e. no complex behaviors or subtleties. Higher levels are M1 (anything built from M0, e.g. PIR), M2 (nqp-rx and winxed) and M3 (Rakudo and Partcl).<br />
<br />
<span style="font-size: large;">Context is the new Interp</span><br />
<br />
<span style="font-size: small;">The biggest decision we made was that contexts would play <i>most</i> of the roles that the interpreter currently fills. They will contain all the mutable state needed by a running program. This includes the PC, registers, return PC, exception handler PC, exception payload and a pointer to the calling context. Some things such as bytecode segments and iglobals will still belong to the interp, but it will be going on a pretty severe diet for Lorito. The GC may or may not live in the interp. We'll flesh this out as we go.</span><br />
<br />
<span style="font-size: small;">Having an explicit PC also means that a dedicated goto op is no longer necessary in M0. Jumping around within (or between) bytecode segments simply means that the PC is explicitly set to an address rather than automatically incremented. We can also allow the PC to escape into the system stack for ffi, though this idea hasn't been sanity-checked yet and may in fact be insane. This is all, of course, very low-level M0 stuff. Higher-level languages will have all of the proper control flow constructs.</span><br />
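<br />
<span style="font-size: small;">A tiny dispatch loop shows what "no goto op" means in practice. Everything here (the Context layout, the op names) is invented for illustration rather than taken from the M0 spec, but the control-flow idea is the same: a jump is just a store into the context's PC field.</span><br />

```python
# Illustrative sketch only: the PC is ordinary mutable state in the context,
# so a conditional branch is just an op that stores a new value into it.
class Context:
    def __init__(self, code):
        self.code = code        # list of decoded (op, a, b) tuples
        self.pc = 0
        self.regs = [0] * 4

def run(ctx):
    while ctx.pc < len(ctx.code):
        op, a, b = ctx.code[ctx.pc]
        ctx.pc += 1                     # default: fall through to the next op
        if op == "set":                 # regs[a] = literal b
            ctx.regs[a] = b
        elif op == "add":               # regs[a] += regs[b]
            ctx.regs[a] += ctx.regs[b]
        elif op == "dec":               # regs[a] -= 1
            ctx.regs[a] -= 1
        elif op == "goto_if":           # "jumping" = explicitly setting the PC
            if ctx.regs[a]:
                ctx.pc = b
    return ctx.regs

# A three-iteration loop: accumulate r0 into r1 while counting r0 down.
prog = [("set", 0, 3), ("set", 1, 0),
        ("add", 1, 0), ("dec", 0, 0), ("goto_if", 0, 2)]
```

<span style="font-size: small;">Nothing in the loop body special-cases control flow; the branch op mutates the PC the same way "set" mutates a register.</span><br />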
<br />
<span style="font-size: small;"> It's important to realize that M0 is designed to be as powerful as C, just easier to analyze. If an attacker can get a context to execute arbitrary M0, that'll be sufficient to own a machine. Security will be present, but it will live above M0, e.g. M0 bytecode verification or modification of the current context.</span><br />
<br />
<span style="font-size: small;">Each context will also have its own REPR and HOW according to jnthn's <a href="http://6guts.wordpress.com/2010/10/15/slides-and-a-few-words-on-representation-polymorphism/">6model</a> work. What this means is that we plan on using the MOP as the basis of our contexts. A context will have control over how it implements cloning and subclassing. This will give us numerous specialization possibilities. We can make contexts that only allow a restricted subset of operations for something like <a href="http://pl.parrot.org/">PL/Perl6</a> or a more static-oriented context for low-power embedded or mobile platforms. A context can decide that it will no longer allow itself to be subclassed or cloned, and there'll be no way to do so without circumventing the MOP. All security concerns need a great deal of thought and scrutiny, but I believe that this will give us a solid foundation to build on.</span><br />
<br />
<span style="font-size: small;">We will also take advantage of representation polymorphism to allow for different types based on differing storage constraints, e.g. compactness, speed, or compatibility with calling conventions.</span><br />
<br />
<span style="font-size: small;">The current context will be the first argument to each M0 op. We're now going with a fixed-length, four-argument op format. The context may be implicit or explicit, depending on what we can figure out. A fixed op width will go a very long way toward simplifying any code that needs to work with bytecode. It will be a most welcome change to get away from pbc and its variable-length (and occasionally variadic) ops. It'll be a joy to rip that code out. We need to make sure that this doesn't cause enough pain in other places to cancel out the benefit.</span><br />
<br />
<span style="font-size: small;">During the discussion, chromatic wondered out loud if there were a way to make contexts immutable. I'm not entirely sure what he meant, but I'm recording the question here to try to keep it from being forgotten.</span><br />
<br />
<span style="font-size: small;">With the context-based approach, on function invocation (or any CPS-based control flow changes), a clone of the context is created and given a pointer to its caller. When this happens, data from the calling context will be COW'd to the called context to avoid excessive memory usage.</span><br />
<br />
<span style="font-size: small;">One of my burning questions was how CPS could work in a low-level assembly language where there weren't any continuations or closures. The answer is that we'll fake it by using the context as a continuation. We can get at a context's guts by a few simple loads and derefs. I'm a little fuzzy on the details, but I can at least see how it's possible to do CPS in M0 with a bit of hand-waving.</span><br />
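<br />
<span style="font-size: small;">Here's one way to picture it, with all names and ops invented for this sketch: because each context carries its own PC and a pointer to its caller, the calling context <i>is</i> the continuation, and returning just means resuming the caller at its saved PC. (The real design would also COW register data into the clone, which this sketch skips.)</span><br />

```python
# Sketch: contexts as poor-man's continuations. "call" creates a new context
# linked to its caller; "return" stores a value in the caller and resumes it.
class Context:
    def __init__(self, code, caller=None):
        self.code = code
        self.pc = 0
        self.caller = caller
        self.result = None      # slot the callee's "return" writes into

def run(root):
    ctx = root
    while ctx is not None:
        if ctx.pc >= len(ctx.code):
            ctx = ctx.caller            # fell off the end: resume the caller
            continue
        op, arg = ctx.code[ctx.pc]
        ctx.pc += 1                     # saved PC already points past the call
        if op == "call":
            ctx = Context(arg, caller=ctx)   # control moves into the callee
        elif op == "return":
            ctx.caller.result = arg
            ctx = ctx.caller            # continue the caller where it left off
    return root.result

callee = [("return", 42)]
caller = [("call", callee)]
```

<span style="font-size: small;">The "few simple loads and derefs" amount to reading and writing the caller's fields directly, which is all the return op does here.</span><br />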
<br />
<span style="font-size: small;">I had originally intended to reformat all of my notes into a nice post, but it's already close to bed time and I'm only through the first point. The rest of my notes will have to wait for another day. Until then, here are some of the remaining unanswered questions:</span><br />
<ol><li><span style="font-size: small;">What kind of data belongs in the interp, and what do we need in the context? The answers are settling, but there's still some uncertainty.</span></li>
<li><span style="font-size: small;">Where does the GC live? Is it a separate context, part of the interp or something else?</span></li>
<li><span style="font-size: small;">Is manipulating the PC a reasonable primitive to build an ffi on top of?</span></li>
<li><span style="font-size: small;">What pain will be caused by fixed-argument ops? Is it a worthwhile trade-off?</span></li>
<li><span style="font-size: small;">How would an implicit context as the first argument to each op work?</span></li>
<li><span style="font-size: small;">Is it possible to have immutable contexts and to do so more efficiently than straightforward COW'd contexts?</span></li>
</ol>Christoph Ottohttp://www.blogger.com/profile/05589658458274816699noreply@blogger.com2tag:blogger.com,1999:blog-4368062536110471271.post-86178754355183585512010-12-06T23:26:00.000-08:002010-12-07T00:39:51.624-08:00Roadmaps: Fact or Fiction?Parrot's roadmaps haven't historically been a great source of encouragement or accurate information. Our goals have often been overly optimistic, with the result that most of the time spent dealing with our roadmap has gone into pushing back uncompleted tasks. The current system has been based on tickets attached to a specific version of Parrot, in the hope that they would be completed by the time that version of Parrot rolled around. Sometimes the tasks had champions, sometimes not.<br />
<br />
Unfortunately these tickets are often placeholders for ideas that are fully-formed only in the mind of one person. This prevents otherwise willing developers from jumping in and makes tasks hard to re-start after a break. There are also tasks that have received a good deal of attention but that simply haven't been completed. These tasks make the roadmap into a reminder of what we haven't accomplished rather than a list of our accomplishments and a source of encouragement.<br />
<br />
Parrot's hackers have been hard at work making valuable contributions, but work has been largely independent of the current roadmap. It's always a challenge to keep an accurate roadmap in a project based on volunteer tuits, but whiteknight and I are sure that we can do better.<br />
<br />
He and I chatted briefly on <a href="http://irclog.perlgeek.de/parrot/2010-12-07#i_3063531">#parrot</a> earlier this evening about how we want to structure Parrot's roadmap in the future. What we'd propose follows:<br />
<br />
The roadmap will be based on major versions (essentially calendar years). Each year at the post-x.0 Parrot Developer's Summit, we will finalize the roadmap for that year. This roadmap will be wiki-based, since the wiki integrates nicely with Trac's ticket system but also allows a more flexible structuring of information. We will have a solid plan for the next year centered around the supported (.0, .3, .6 and .9) releases. The roadmap will list only major features which have a champion* and which we are confident we will be able to deliver. If we aren't confident of being able to deliver a feature in time for a supported release, it's better to have a release with no planned roadmap items than to have a pleasant fiction. We will also have a fuzzy plan for the following year, though it shouldn't be considered binding. Anything beyond two years will be planned only in a very general sense. We will maintain a wishlist for tasks which we want to undertake but don't have any dedicated volunteers, so that such features won't be lost or clog up the roadmap.<br />
<br />
Parrot has an unfortunate history of over-promising and under-delivering. This has not helped our reputation among other OSS hackers and I want us to correct the trend. I want our new roadmaps to center around promising only what we're highly confident of being able to deliver. Establishing a track record will take time and effort, but two or three years from now I want to be able to look back with pride and say that we proved we could deliver what we promised.<br />
<br />
<br />
*In this case, a champion means that this person is dedicated to seeing a feature to completion. "Owner" is another way of communicating the idea.Christoph Ottohttp://www.blogger.com/profile/05589658458274816699noreply@blogger.com1tag:blogger.com,1999:blog-4368062536110471271.post-67369074201706114792010-11-23T12:56:00.000-08:002010-11-23T12:56:05.460-08:00What happened in the dynop_mapping branch?Several months ago, fperrad opened a <a href="http://trac.parrot.org/parrot/ticket/1663">ticket</a> complaining about difficulties with dynops*. (For those just joining us, dynops are libraries which can be loaded at compile-time to create user-defined PIR ops.) Parrot worked reasonably well with one or fewer dynop libraries loaded, but the problem people were seeing occurred when multiple dynop libraries could be loaded in a different order. At the time, Parrot's approach to storing ops in bytecode was simply to store the op's number directly in bytecode, followed by any arguments needed by the op. When a dynop library was loaded, its ops would simply be appended to the interpreter's op tables and those offsets would be stored in bytecode.<br />
<br />
You might already see where this is going. When dynop libraries were loaded in a different order between compilation and loading, the dynop offsets within the interpreter's op tables would no longer be valid and hilarity would ensue. Also, by hilarity I mean segfaults. This could happen when a program which loads foo_ops followed by bar_ops was compiled to pbc. If that pbc were then loaded by a separate program which loaded bar_ops first, boom.<br />
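<br />
The failure mode is easy to reproduce in miniature. The library and op names below are invented; the point is only that an absolute index into a load-order-dependent global table is meaningless once the load order changes:<br />

```python
# Toy model of the old scheme: op numbers stored in bytecode were absolute
# offsets into a single table built by appending dynop libraries in load order.
CORE_OPS = ["say", "set", "ret"]

def build_op_table(dynop_libs):
    table = list(CORE_OPS)
    for lib in dynop_libs:
        table.extend(lib)       # each library's ops simply appended
    return table

foo_ops = ["foo_a", "foo_b"]
bar_ops = ["bar_a"]

# At compile time, foo_ops was loaded before bar_ops...
compile_table = build_op_table([foo_ops, bar_ops])
stored_op_num = compile_table.index("bar_a")    # this offset goes into the pbc

# ...but another program loads bar_ops first, and the stored offset now
# resolves to a completely different op.
load_table = build_op_table([bar_ops, foo_ops])
```

In Parrot the misresolved "op" was a function pointer, so the result was a segfault rather than a polite wrong answer.<br />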
<br />
plobsing took it upon himself to fix Parrot so that dynop libraries could be loaded in any order without invalidating previously compiled bytecode. His solution was to do away with the per-interpreter op tables and move the op tables down into the excessively-capitalized <a href="https://github.com/parrot/parrot/blob/12740ed9f9ea57587d4ee2afbb3f1df045640884/include/parrot/packfile.h#L247">PackFile_ByteCode struct</a>. When ops are added by a bytecode segment, they're given entries in the op mapping table. The offset into those tables is what's stored in bytecode. The first op used will always get 0x00, the second will get 0x01, etc., no matter what the ops are. If you've been looking at pbc_dump's disassembly output, this is why the op numbers don't correlate with the numbers in src/ops/core_ops.c after the dynop_mapping merge. As part of the process of wrapping my head around plobsing's changes, I modified pbc_dump to output op mappings as well as the disassembled ops:<br />
<div style="font-family: "Courier New",Courier,monospace;"><br />
</div><div style="font-family: "Courier New",Courier,monospace;">cotto:/usr/src/parrot $ ./pbc_dump -d hello.pbc</div><div style="font-family: "Courier New",Courier,monospace;"><snip></div><div style="font-family: "Courier New",Courier,monospace;">BYTECODE_hello.pir => [ # 5 ops at offs 0x30</div><div style="font-family: "Courier New",Courier,monospace;"> map #0 => [</div><div style="font-family: "Courier New",Courier,monospace;"> oplib: "core_ops" version 2.9.1 (3 ops)</div><div style="font-family: "Courier New",Courier,monospace;"> 00000000 => 00000164 (say_sc)</div><div style="font-family: "Courier New",Courier,monospace;"> 00000001 => 00000022 (set_returns_pc)</div><div style="font-family: "Courier New",Courier,monospace;"> 00000002 => 0000001d (returncc)</div><div style="font-family: "Courier New",Courier,monospace;"> ]</div><div style="font-family: "Courier New",Courier,monospace;"> 0000: 00000000 00000000 say_sc</div><div style="font-family: "Courier New",Courier,monospace;"> 0002: 00000001 00000001 set_returns_pc</div><div style="font-family: "Courier New",Courier,monospace;"> 0004: 00000002 returncc</div><div style="font-family: "Courier New",Courier,monospace;">]</div><div style="font-family: "Courier New",Courier,monospace;"><snip></div><br />
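<br />
The same idea in miniature (names invented, logic simplified): each bytecode segment numbers its ops densely in order of first use and keeps a table mapping those local indices back to (library, op number) pairs, which is exactly the shape of the map in the hello.pbc dump.<br />

```python
# Sketch of per-segment op mapping: bytecode stores small segment-local
# indices; this table resolves them to (library name, op number in library).
class SegmentOpMap:
    def __init__(self):
        self.table = []     # local index -> (lib, op_num)
        self.index = {}     # (lib, op_num) -> local index

    def map_op(self, lib, op_num):
        """Return the segment-local index for an op, assigning one on first use."""
        key = (lib, op_num)
        if key not in self.index:
            self.index[key] = len(self.table)   # first op used gets 0, etc.
            self.table.append(key)
        return self.index[key]

    def resolve(self, local_index):
        return self.table[local_index]

# Mirroring the dump: three core ops, numbered in order of first use.
m = SegmentOpMap()
for op_num in (0x164, 0x22, 0x1d):      # say_sc, set_returns_pc, returncc
    m.map_op("core_ops", op_num)
```

Because the mapping travels with the segment, the absolute positions of ops in any interpreter-wide table no longer matter, and dynop load order becomes irrelevant.<br />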
Since I'm trying to reimplement the code in the PackFile PMCs, it was important to figure out how this code works at a low level so that non-imcc code can once again build a valid pbc file. For this to work, the PackFile PMCs need to be updated to do the same thing that imcc's pbc code does now. The first question, then, is what exactly the current code does. This breaks down into three stages: loading a packfile from a stream (usually a file), executing loaded bytecode and serializing bytecode to a stream.<br />
<br />
Execution is the simplest change. In C, it means that code that deals with ops now needs to perform lookups on a packfile bytecode segment's op tables rather than on the interpreter's (now removed) global op tables. There are two important tables: op_info_table, which contains information on ops such as their names, family and arguments, and op_func_table, which contains a list of pointers to the op functions. There's also save_func_table, which is used as temporary storage when something messes with op_func_table. These three pointers now live in the PackFile_ByteCode struct, so most code that deals with ops only needs to be changed as follows:<br />
<div style="font-family: "Courier New",Courier,monospace;">- op_info_t * const op_info = interp->op_info_table[*base_pc];</div><div style="font-family: "Courier New",Courier,monospace;">+ op_info_t * const op_info = interp->code->op_info_table[*base_pc];</div>The value of *pc will generally be lower, but that's an implementation detail.<br />
<br />
For storing and loading, the PackFile_ByteCode_OpMappingEntry, PackFile_ByteCode_OpMapping and PackFile_ByteCode structs (see <a href="https://github.com/parrot/parrot/blob/master/include/parrot/packfile.h#L235">https://github.com/parrot/parrot/blob/master/include/parrot/packfile.h#L235</a> ) are used. Because the bytecode segment (the PackFile_ByteCode struct) now contains op maps, the op maps need to be stored and loaded before the bytecode segment can be meaningfully used. An op map (PackFile_OpMapping) consists of an array of entries, with each entry containing all the mappings which use the same library. In the simple case where all ops are core, the op map will have only one entry, for the "core_ops" library. "core_ops" is the name for the ops that are built as part of Parrot and are always available. There will be another op mapping entry for each loaded dynop library such as "perl6_ops" or "math_ops".<br />
<br />
The contents of an op mapping entry are minimal. The PackFile_OpMapping_Entry contains the name of the library (*lib), the number of ops (n_ops) and two arrays called lib_ops and table_ops. table_ops is an op's number according to the op mapping table and lib_ops is its number within an op library. When imcc needs to look up an op's number (using <a href="https://github.com/parrot/parrot/blob/12740ed9f9ea57587d4ee2afbb3f1df045640884/compilers/imcc/pbc.c#L649">this function</a>), it will ensure that the necessary library is loaded and perform a linear search through all mapped ops and all loaded op libraries, looking for an op with the correct function pointer. When it finds a previously unmapped op, it will add it to the entry for the right library and return its index.<br />
<br />
This is a problem because a single packfile implementation isn't good enough. We actually have five. And by five, I mean two. The first implementation is the one that works and is implemented as C structs and functions. The second implementation is a PMC-based interface which is intended to allow PIR code to generate valid pbc. (It also allows the generation of wildly invalid pbc with hilarious results, but that's an unintended benefit.) The PMCs are what PIRATE uses to generate pbc that worked with Parrot before the dynop_mapping branch merged. The packfile PMCs are largely untested apart from PIRATE, so because the pbc format change didn't cause any new failures for those PMCs, they were never updated.<br />
<br />
The packfile PMCs are important because they're the future. imcc, which is our current PIR compiler, is widely disliked and has been used by at least one developer to frighten his children. imcc's code has several performance issues, poor maintainability and an undesirably low bus number. It's also tied into Parrot's internals much too tightly for anyone's good. Once PIRATE is ready, I want us to be able to rip out imcc and use the parrot executable as nothing more than a bytecode interpreter. In addition to decoupling imcc from Parrot, this will let us use more self-hosted tools and will help us work out how to make pbc manipulation more accessible to Parrot's external users.<br />
<br />
Making the Packfile PMCs opmap-aware is an important step because it will mean that pir code will once again be able to produce valid pbc. From there, world domination is a smop.<br />
<br />
<br />
* As always, Parrot is better off because people mentioned problems they ran into. There was some pain in the interim, but Parrot is more robust as a result of the reports we received.Christoph Ottohttp://www.blogger.com/profile/05589658458274816699noreply@blogger.com1tag:blogger.com,1999:blog-4368062536110471271.post-42428293709527942732010-10-25T19:19:00.000-07:002010-10-25T19:27:41.366-07:00Parrot's Teams: Five ScenariosParrot's concept of teams was rushed into service without being fully formed. That doesn't make it an automatic disaster, but it does mean that we're figuring out inter-team and intra-team dynamics as we go. To help this process along, here are some hypothetical (or not) events and my best guess as to how the different teams would interact in addressing them. After each example, I've tried to list the major advantages and disadvantages that the team structure creates, but more are welcome. Note that these cases are idealized somewhat and are still speculative. Real life is always messier.<br />
<br />
<span style="font-size: large;">1: Research Paper</span><br />
<br />
We've got a couple of developers who keep their eyes peeled for new research papers, and we're always glad to use relevant research to improve our code. If someone presents us with some research that they think is relevant to Parrot, here's how I'd envision our process working:<br />
<br />
<ul><li>Someone posts to parrot-dev or #parrot saying that they found a research paper we should consider.</li>
<li>The architecture team takes the lead and looks over it, explicitly soliciting feedback from the community and from other teams.</li>
<li>If the improvements look viable, the architecture team says so and writes up the algorithm as it's relevant to Parrot on the wiki, along with any relevant notes.</li>
<li>The architecture team puts out the call for someone to implement the code.</li>
<li>A Parrot hacker picks up the project.</li>
<li>Someone from the architecture and product teams follows the progress of the branch and reviews commits.</li>
<li>As the branch stabilizes, the product team benchmarks it (or ensures that it's benchmarked) to demonstrate a meaningful improvement.</li>
<li>As the branch stabilizes, QA also makes sure that it has good test coverage and documentation.</li>
<li>As the branch gets ready for merging, the product team checks that external projects won't be disrupted by the change.</li>
<li>The code is merged, well-documented and tested and doesn't break anything for Parrot's users.</li>
</ul><br />
<b>advantages</b>: Teams will ensure that Parrot has a unified direction as new research comes to our attention. They'll also give us a clear path from paper to mergable code and will help enforce a higher bus number for new code, in addition to ensuring that code is documented and tested before it gets merged.<br />
<b>disadvantages</b>: There will be a higher barrier to entry and increased dependence on the architecture team.<br />
<br />
<br />
<span style="font-size: large;">2: Significant Design Change</span><br />
<br />
Say that a Parrot developer proposes a significant design change to address a bug or misfeature. An example of this is Peter Lobsinger's dynop_mapping merge, which made some small but significant changes to bytecode. The branch did a good job of solving the problem at hand, but one important test and a significant external project (examples/pir/make_hello_world_pbc.pir and PIRATE, respectively, which will be the subject of a later post) broke because of it and have yet to be fixed. Here's how the process might work with teams in full force:<br />
<br />
<ul><li> Someone files a ticket or posts to parrot-dev or #parrot about a design flaw in Parrot that requires some redesigning.</li>
<li> A Parrot hacker steps forward to fix it.</li>
<li> Said hacker figures out a fix and discusses it with the architecture team.</li>
<li> The architecture team reviews it and either gives the ok or helps iterate the design.</li>
<li> The hacker starts implementing his changes.</li>
<ul><li>While hacking, he describes the API consequences to the product and QA teams, who update the relevant docs and/or add tests.</li>
</ul><li>When the code is ready to merge (and ideally while the branch is being developed):</li>
<ul><li>the architecture team reviews the code for bugs and to make sure design changes go as planned.</li>
</ul><ul><li>the product team reviews the code for user-facing changes.</li>
<li>QA makes sure that the changes are well-tested and documented.</li>
</ul><li> The code is merged, the relvant ticket is closed and everyone's happy.</li>
</ul><br />
<b>advantages</b>: Parrot maintains a unified direction across design decisions. The team structure ensures that code is well-reviewed for different aspects while it's being worked on and that when coding is done, the branch will be (mostly) ready to merge.<br />
<b>disadvantages</b>: This process will take more effort from the originator of the fix to explain his thinking and to answer questions during code review. This will raise the bus number of the code, but will also raise the barrier to entry.<br />
<br />
<br />
<span style="font-size: large;">3: API Overhaul</span><br />
<br />
Let's say that we decide that some part of our API needs a massive overhaul. An example of this may be coming soon: Andrew Whitworth has expressed some distaste at the state of Parrot's embedding API and may soon take a much-needed jackhammer to it. Here's how I envision the process working with teams:<br />
<br />
<ul><li>The product team decides that an API needs massive refactoring in order to be useful to users, either through review or due to user feedback.</li>
<li>The product team figures out what the API should look like.</li>
<li>The product team hacks everything together in a branch.</li>
<li>QA looks at the branch to make sure that the new API functions are well-tested and that upcoming deprecations are documented.</li>
<li>The architecture team does a brief review for sanity.</li>
<li>After the proper time for deprecations has passed, the changes are merged into trunk, causing much user jubilation.</li>
</ul><br />
<b>advantages</b>: API changes will have more dedicated code review with a specific aim. More people will be looking over code changes and will be familiar with what will be merged into trunk.<br />
<b>disadvantages</b>: The refactor will be more sensitive to tuit shortages on the part of different teams.<br />
<br />
<br />
<br />
<span style="font-size: large;">4: Lorito</span><br />
<br />
Lorito is an upcoming major reenvisioning of Parrot at a low level. Currently most of Parrot is written in C and PIR, and the impedance mismatch between the two is a significant bottleneck. Lorito will be a very low-level and minimalist set of ops which will provide sufficient power to reimplement most of the C components of Parrot, eliminating the impedance mismatch, among other benefits. Here's one way Lorito could become a reality:<br />
<br />
<ul><li>We decide that Lorito is a good idea.</li>
<li>The architecture team leads the effort to figure out a rough timeline and order of events.</li>
<li>The architecture team leads the design and documentation effort to work out what a Lorito VM will look like. Everyone is actively encouraged to participate.</li>
<li>Volunteers are solicited to implement prototypes to find holes in the design. These holes are filled in as they're discovered.</li>
<li>As the design stabilizes, the product team looks at Lorito from a product perspective, helping further refine the design.</li>
<li>Once the design is settled, hacking on the final implementation begins in earnest according to the timeline.</li>
<li>The architecture, product and QA teams review major branches for design, test coverage and documentation as they progress.</li>
<li>After much effort, we are able to use Lorito overlays* as a replacement for internal Parrot components currently implemented in C.</li>
</ul><br />
<b>advantages</b>: There's a consistent force ensuring that progress is made and a well-defined timeline. All relevant parties have opportunity to voice their concerns and influence the final product.<br />
<b>disadvantages</b>: The process depends on having input from different teams and will be sensitive to tuit shortages.<br />
<br />
* By "Lorito overlay", I mean anything that compiles down to Lorito ops.<br />
<br />
<br />
<span style="font-size: large;">5: Major Security Vulnerability</span><br />
<br />
Let's say that a major security vulnerability is discovered and made known to Parrot's developers. For this example, say that the latest supported release was 3.9.0 and that the latest developer release was 3.11.0. Here's how we'd deal with this to ensure a minimal turnaround time:<br />
<br />
<ul><li>The issue is raised and both 3.9.0 and 3.11.0 are found to be vulnerable. Consistent with our support policy, the supported 3.9.0 release needs to be fixed.</li>
<li>Someone writes a proposed fix, either as a patch or a branch, depending on the vulnerability.</li>
<li>Representatives from the QA, product and architecture teams briefly meet to make sure that the fix is sane (architecture), that it is valid, tested and documented as fixed (QA), and that it doesn't negatively impact users (product).</li>
<li>The fix is committed to trunk, along with a backported version for 3.9.0. QA makes sure that a new 3.9.1 release is produced and distributed with appropriate notification.</li>
</ul><br />
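Mechanically, the commit-to-trunk-then-backport steps above could look something like the following, demonstrated in a scratch repository. This assumes a git workflow; the branch and tag names (master, maint-3.9, RELEASE_3_9_1) and the fix itself are illustrative, not Parrot's actual conventions:

```shell
# Illustrative backport flow in a throwaway repo (names are hypothetical).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# History: 3.9.0 was released, then development continued on trunk.
echo 'parrot 3.9.0' > VERSION
git add VERSION && git commit -qm 'release 3.9.0'
git branch maint-3.9                      # supported release branch
echo 'parrot 3.11.0' > VERSION
git commit -qam 'development release 3.11.0'

# The security fix lands on trunk first...
echo 'bounds check added' > security.fix
git add security.fix
git commit -qm 'fix hypothetical vulnerability'
fix=$(git rev-parse HEAD)

# ...then is backported to the supported branch and tagged as 3.9.1.
git checkout -q maint-3.9
git cherry-pick -n "$fix" && git commit -qm 'backport security fix to 3.9'
git tag -a RELEASE_3_9_1 -m '3.9.1 security release'
git tag -l
```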
<b>advantages</b>: We provide a known-good fix in a timely manner, along with a regression test to ensure that the bug doesn't resurface.<br />
<b>disadvantages</b>: The structure requires some synchronization of schedules.<br />
<br />
<br />
<br />
I hope that this provides a good idea of what I think the teams will look like as they work together to improve Parrot. Nothing's set in stone yet, but my hope here is to provide a starting point for further discussion.<br />
Internal organization of the architecture team is a subject for another day.<br />
<br />
<span style="font-size: large;">Parrot has a new architect. What now?</span><br />
<br />
Close followers of Parrot have probably noticed that Allison Randal, our esteemed architect, hasn't been very active over the last few months. After her <a href="http://allisonrandal.com/2010/08/20/ubuntu-ta-intro/">recent announcement</a> that she'd been hired as Technical Architect for an obscure Linux distribution called "<a href="http://www.ubuntu.com/">Ubuntu</a>", folks might be wondering what Parrot's future looks like. This is doubly true because the architect position has had a bus number of one: if Allison were hit by a bus or otherwise incapacitated, there was no structure in place to ensure that someone could step up and keep Parrot moving in a consistent direction.<br />
<br />
Burnout has also been a problem for Parrot's past architects, partly because the architect ended up being responsible for managing most of Parrot. We've done a great job of making the Release Manager's job a straightforward process that can be performed by any Parrot developer with a commit bit. The Release Manager position, however, has been the exception: most of the interesting roles, e.g. managing Parrot as a product or working with the wider OSS community, haven't been formalized and have fallen to the architect in the absence of someone willing to take the lead. Allison is a capable leader and an A-list hacker, but Parrot has passed the point where it can be managed by a single volunteer, even one of her caliber.<br />
<br />
It was in this environment that Jim Keenan put together a <a href="http://parrot.org/content/pacific-northwest-parrot-developers-gathering-summary">meeting of Parrot developers</a> in Portland, Oregon. Many topics were discussed, among them a restructuring of Parrot to split responsibilities into separate roles. Andrew Whitworth has already covered the idea in its current state, which will undoubtedly change as we progress. The end result is that we'll be splitting responsibilities into <a href="http://trac.parrot.org/parrot/wiki/ParrotTeams">5 teams</a>, only one of which will cover architecture. We'll be solidifying the structure and formally voting on leads in the coming weeks, but interim leads have already volunteered for most available positions to get the process bootstrapped. Andrew is provisionally in charge of the Product Management team and, in addition to posting <a href="http://wknight8111.blogspot.com/2010/10/parrot-teams.html">some thoughts</a> on the team structure, has already <a href="http://wknight8111.blogspot.com/2010/10/product-management-team.html">started fleshing out</a> his vision for that team.<br />
<br />
Then at last Tuesday's #parrotsketch meeting, Allison announced that she would be stepping down immediately, and that she had chosen me to succeed her as head of the architecture team.<br />
<br />
What this means for Parrot's immediate future is that while I'll be the closest analog to Allison, Parrot won't rest primarily on my shoulders in the same way that it did on previous architects'. It will be the architecture team's job to look to the future and determine where Parrot needs to go, but other jobs will be delegated to different teams, allowing all of us to specialize without letting anything important fall by the wayside.<br />
<br />
Allison mentioned that after the meeting, she felt like a huge weight had been lifted from her shoulders. She plans on staying with Parrot as a developer, but will be focusing most of her energy on <a href="https://launchpad.net/%7Epynie-dev">Pynie</a>. For those of us wondering what Parrot's future looks like, we now have part of the answer and a reason for optimism. It will take some time until we figure out just how the different teams will interact and what it means to be on a team, but the new team structure promises to help us become a more focused community and to produce a high-quality production-ready platform for interoperable dynamic language implementations.