Archives for October 2002
Uploaded a new article and sample code, which demonstrates how to extend Dolphin's standard menu behavior.
Uploaded a new version of My Dolphin Environment. This fixes a bug in one of the scripts.
I had a quick look at InfoSnorkel and it does build local html files from RSS feeds. Unfortunately my trial has expired, so I can't test to see what they do with script or object tags in the description.
I am not sure whether I will maintain this, nor whether I am breaking any acceptable use policy.
My program polls the Recent Changes page (at this stage I will probably run it once per day). It then extracts the links from the page and generates new http requests for the first 15 links it finds. Resources retrieved in the past are persisted to disk, along with their http headers. This allows me to include both "If-None-Match" and "If-Modified-Since" headers with the requests, so the majority of the time the server can respond with a 304 Not Modified.
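The conditional-request idea is language-agnostic. A minimal sketch in Python (the original is Dolphin Smalltalk; the cache-metadata shape here is purely illustrative):

```python
import urllib.request

def conditional_request(url, cached_meta):
    """Build a request that lets the server answer 304 Not Modified.

    cached_meta holds the ETag and Last-Modified values saved from the
    previous successful retrieval of this resource (hypothetical layout).
    """
    req = urllib.request.Request(url)
    if cached_meta.get("etag"):
        req.add_header("If-None-Match", cached_meta["etag"])
    if cached_meta.get("last-modified"):
        req.add_header("If-Modified-Since", cached_meta["last-modified"])
    return req
```

On a 200 response the body and its validator headers would be persisted to disk; on a 304 the copy already on disk is simply reused.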
To create the RSS items, I run the html through a rewriter, which removes nasty html tags such as object, script, iframe, etc. It also removes links and images (but retains the content of link tags). The thinking behind this is that it probably isn't the best idea for users to be creating wiki pages by clicking on links in their RSS reader. The rewriter also limits the size of the item's description, replacing nodes with a "." once a certain threshold has been reached.
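A sketch of that kind of rewriter, using Python's html.parser rather than the actual Smalltalk framework (the tag lists and the size threshold are illustrative assumptions):

```python
from html.parser import HTMLParser

DROP_WITH_CONTENT = {"script", "object", "iframe"}  # tag and content removed
UNWRAP = {"a"}                                      # tag removed, content kept
DROP_EMPTY = {"img"}                                # tag removed

class ItemRewriter(HTMLParser):
    """Strip unsafe tags and cap the size of an RSS item description."""

    def __init__(self, max_chars=500):
        super().__init__()
        self.out, self.size = [], 0
        self.max_chars = max_chars
        self.skipping = 0          # depth inside a dropped element
        self.truncated = False

    def handle_starttag(self, tag, attrs):
        if tag in DROP_WITH_CONTENT:
            self.skipping += 1
        elif not self.skipping and tag not in (UNWRAP | DROP_EMPTY) and not self.truncated:
            self.out.append("<%s>" % tag)

    def handle_endtag(self, tag):
        if tag in DROP_WITH_CONTENT:
            self.skipping = max(0, self.skipping - 1)
        elif not self.skipping and tag not in (UNWRAP | DROP_EMPTY) and not self.truncated:
            self.out.append("</%s>" % tag)

    def handle_data(self, data):
        if self.skipping or self.truncated:
            return
        if self.size + len(data) > self.max_chars:
            self.out.append(".")   # replace the overflow with "." as in the post
            self.truncated = True
        else:
            self.out.append(data)
            self.size += len(data)

def rewrite(html, max_chars=500):
    rewriter = ItemRewriter(max_chars)
    rewriter.feed(html)
    return "".join(rewriter.out)
```

Dropping the content of script/object/iframe while keeping the text of anchors gives readable items that can't trigger script or accidental wiki edits.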
The whole thing could be achieved far more easily on the server side; however, it was fun to do, and it made for a simple use case for the framework I am spending my weeks on.
I needed to test a WinInet function this week. One of my early Dolphin 4 projects was a wrapping of the WinInet dll. This was the first time I had used or looked at this code since completing it, and I found it interesting to see how my Smalltalk style has changed, and what I would do differently if I were doing it as a project now.
The first thing that struck me was the code formatting and layout, which reminded me of how much time I used to spend thinking about issues like: should the #ifTrue:ifFalse: message be on one line or two? Now, with the Refactoring Browser's "Reformat/Accept", the layout of a method doesn't get a first thought, let alone a second thought. If a method doesn't look right after being automatically reformatted, I start thinking about refactoring.
The first thing I did with the code was to use my IDE Extension to reformat the source of all packages.
The second thing that struck me was the uselessness of Dolphin 4's autogenerated comments for accessors and debugger stub methods. I was surprised to see so many comments of the form "Private - this answers the foo instance variable" and "Private - this method was generated .... and is yet to be implemented". I think I must have started seeing those comments as just a ruled line between the method signature and body.
In most cases I had also left the generic autogenerated argument name "anObject" rather than changing it to include type information. Now, as a rule, I always rename my arguments to include type information, and I work to maintain this as I refactor my code. I think it improves the readability of the code, and means that the argument can serve as a quick way to open a browser on a class ... ie "bar: aFoo" ... I can select "Foo" and "browseIt".
The major problem I have with the code is the lack of comprehensive tests. This was written soon after I started writing tests, and I remember making excuses to myself that it was too hard to write tests for.
These days, that just isn't an option for me; there is no choice, nothing to excuse ... code with tests is useful and maintainable, code without tests is close to useless; like a shanty town or ground slime.
What could I do with this code? ... I opened up the demo FTP application and banged away for 5 minutes ... it seemed to work, whoopee. Aside from the issue of bugs being introduced by moving from Dolphin 4 to 5, the major issue in reusing this code would be if I had a need for a sub-set of its functionality that I wanted to tear out into a new project. Without tests, how would I know where to fix up the rips from that tear? I don't need tests to tell me how to fix the rips, I have a brain to do that, but I do need them to tell me where to look. This may sound violent, but I find that I often don't see the natural modules in my code until after it is written. If I can't tear them out and cleanly separate them later, then my projects can't be built using layers.
When I wrote this code, I was still using lazy initialization for most of my instance variables. Now I very rarely use lazy initialization, and most of my classes have an initialize method. I know this is a controversial issue, but for me, an initialize method works better. From a communication perspective, it means that all the initialization code is in the one place. More important to me is that it gives me greater control over the state changes of my objects. Many of my projects involve multiple processes, and I consider how an object changes state an important design decision. As an example, certain objects (and their object graph) may not change state once created/initialized and are therefore naturally safe to participate in multiple processes ... letting the initialization take place in stages, in response to message sends, is too loose, and too hard for me to reason about.
I know there are times when lazy initialization is useful, but since in my own code I am not expecting an accessor method to do anything more than set/get an instance variable, I will typically try to give that method a name which communicates that it is both accessing and initializing.
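The contrast, sketched in Python rather than Smalltalk (the class and method names are made up for illustration):

```python
def default_rules():
    return ["rule-a", "rule-b"]

class LazyChecker:
    """Lazy initialization: state appears piecemeal, on first access."""

    def rules(self):
        if not hasattr(self, "_rules"):
            self._rules = default_rules()   # accessor quietly initializes
        return self._rules

class EagerChecker:
    """All initialization in one place. The object never changes state
    after construction, so it is naturally safe to share between
    processes/threads."""

    def __init__(self, rules=None):
        self._rules = tuple(rules if rules is not None else default_rules())

    def rules(self):
        return self._rules
```

With the eager version, every state change is visible in the constructor; with the lazy one, the object's state depends on which messages happen to have been sent so far.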
I found many uncategorized methods. I think I have higher standards now and wouldn't release code in that state, but I also now use categories, public/private, and source object renaming more naturally in my development process. For lack of a better description, it is a bit like doodling ... sometimes a cool idea can jump out of a doodle. For me, Browser doodling is changing methods from public to private, (re)categorizing them, and renaming.
I certainly spotted a fair bit of duplication without specifically looking for it. The duplication was easiest to see at the method level (possibly the code reformatting made it easier to spot), but I don't think that duplication elimination was as important to me then as it is now.
From a design perspective, I think I "over-wrapped" the library ... I tried to offer too much, which meant that I was making decisions on how I thought the library would be used. I was fairly happy with how I had packaged the project; I had separated the functions/structures into their own packages (and for what I needed to use this week, I only installed two of the smaller packages).
My main problem was with the SWWinInetHInternet package, the package containing the classes that wrap the functionality of the dll. This is what I feel I over-engineered. For example, I remember spending a lot of time trying to write adaptors for all the WinInet option settings, so that they could be get/set in a Dolphin Published Aspect inspector ... useless code. As I see it now, this kind of functionality might be appropriate for a higher layer package, but not in the layer that directly wraps the dll.
If I were to redo this project now, I think I would split this package into two layers. As it stands, I have classes that both provide helper methods to access the dll functions and do fairly complex state management (they arrange themselves into a tree, do finalization, and in some places manage weak references). The state management functionality makes too many assumptions as to how the library will be used. I would have classes with helper methods as a lower layer, and then probably build the state management as a separate higher layer which a user of the code could choose to use, but not be forced to. The layer separation would also offer the opportunity to introduce mock objects for testing.
I was surprised at how many temporaries I used. I don't specifically avoid the use of temporaries, but I use far fewer of them now. This didn't seem to be related to methods being too long, as I found many methods like:
tempNoun := self noun adjective.
self verb: tempNoun.
This is like introducing a stutter into the code. The only reason I can come up with is that I was not comfortable in the debugger, and the temporary was a way to see the intermediate state of a method. Now I would either step into the method, or look in the right place in the debugger for the intermediate objects, or simply evaluate an expression from within the debugger to open a new inspector/debugger.
The code had no method comments, but a few classes did have comments. I now do write method comments, as I have found them somewhat useful when studying other people's code. I don't automatically write method comments, and I don't write comments describing what a method does, but if I feel it will help communicate the intent of the code, I will try to provide some "context" for the method.
In studying my old code, what I would have most liked to have seen was class examples that I could run. I have been doing this with my recent projects and I find that it is an easy form of documentation to maintain ... the RB maintains it for me.
How did I ever do without ifNotNil: [:object | ] ?
I spotted a bit of cruft/duplication, for example a number of methods that just sent a single message to self. It reminded me of how tedious renaming was without the RB. For methods, I would open two browsers, one on the implementors and one on the senders, and manually do the renaming. Error prone, and for methods with many senders/implementors, the decision to rename something had a cost that had to be considered. Even renaming classes meant going through all references and manually renaming. The RB has made this zero-cost, and as a result I think less cruft builds up in my code. Certainly the cruft I spotted would be swept away by the RB without a moment's thought.
I fixed a few bugs and moved the project from www.chartexplorer.com to dolphinharbor. I didn't make any of the changes I have just talked about ... how could I ... there are no tests :(
This weekend I played around with DirectX Transforms. I found 4 type libraries, and while Dolphin had a couple of problems generating the interfaces because of multiple outputs, it was easy to clean up and get them working. These controls are what put this kind of gunk on webpages, but I found it easy to apply them to my own bitmaps.
I found some cool transforms by MetaCreations that were described in a Microsoft type library, and apparently included with IE5. I would like to use them in my project, but they require a copyright string, and the company seems to have dissolved. I have found numerous websites that supply the required strings, but I am not sure what the legal status/requirements are if I want to use them in a commercial project.
Last week I spent some time evaluating whether to make more use of the WebBrowser control to provide an alternate UI for my project. I would get a lot of functionality for free, and be able to offer first time users a familiar interface which they could use to explore the rest of the program. As I found out this weekend, it would also allow me to add eye candy to the UI without much work.
I was able to get the customizations working except for this one, and I am sure that a Dolphin/COM expert could find some way of doing it.
One thing I don't need is the http engine; I want to use my own framework in its place, which gives me a way to queue http requests, control how and what requests are sent, and control how the retrieved contents are processed. That is the whole point of my project.
I have been able to shove a string of html into the WebBrowser control, but it is a bit trickier to fool it into believing that the string came from a specific url, which is required to form absolute urls from the relative hrefs in the html. I do rewrite the html, so I could include a base element, but there may still be security risks for the user. Probably the easiest way is to include a small proxy server that hands the http requests back to my http client. That way I can customize and control it from the inside and outside ... it also means I can rely more on Smalltalk code rather than COM interface magic.
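Injecting a base element during the rewrite is straightforward. A Python sketch of the idea (the real rewriter is Smalltalk, and real pages would need more robust head detection than a plain substring search):

```python
def inject_base(html, base_url):
    """Insert a <base> element so relative hrefs resolve against base_url."""
    tag = '<base href="%s">' % base_url
    head_at = html.lower().find("<head>")
    if head_at >= 0:
        at = head_at + len("<head>")
        return html[:at] + tag + html[at:]
    return tag + html   # no head element: prepend and hope for the best
```

The proxy-server route avoids this entirely, since the control then sees a genuine url it requested itself.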
First Groove, now Mitch Kapor. Jerry, you may be getting your web-based Agenda replacement for free. Why Python? Why not Squeak, starting with Celeste's categories and filters!? Found some more details on this page ... an interesting mix of projects.
This looks cool ... and another nail in the Terra idea's coffin.
I have upgraded to Bottomfeeder v2.0. I had not noticed this before, but in the program's settings, you can change the "Look" to various OS settings. I now have Bottomfeeder running with an OS/X Aqua look on my w2k machine.
I am subscribed to a couple of feeds that had one or two items that were repeatedly showing up as "unread". They are now showing as "read" in the new version. Thanks again James and David!
Did a bit more work on the blog today.
I used the Smalllint CHB plugin a fair bit this week, and I am happy with the way it is working. I think I will wrap it up and send it through to a couple of testers this week for some feedback. I am definitely making more use of Smalllint now that it is only one click away.
I also added a couple of new lints to my LintAdditions package; "Not sent from TestCases" and "Only sent from TestCases". The first is handy for locating UnitTest holes, the second for locating methods that can be deleted ... which is one of my favorite things to do.
From: Lambda the Ultimate
Philippe Mougin. F-Script: Smalltalk Scripting for Mac OS X. ESUG 2002
F-Script is a scripting language that was inspired by both Smalltalk and APL. The language was discussed here previously.
These slides show how F-Script is integrated with Cocoa (the OS X framework).
I know one passionate Mac OS/X Smalltalk addict who may be interested in this :)
I got a message from Jerry this morning saying that he had made some progress with Groove/Dolphin. I downloaded his file, which consisted of about 50 packages and a sample workspace script. Wow! I counted ... it installed 550 COM interface classes.
The script that Jerry had included initialized Groove, located my account, logged me in, accessed all my "shared spaces", opened the one I have been using with Jerry, and then, ta-da ... opened the discussion tool!
I had been seeing Groove/Dolphin ideas as Dolphin components embedded in Groove; this has opened my eyes to the idea of Dolphin driving Groove tools.
Unfortunately neither Jerry nor I have a lot of time to spend on this right now, but it has certainly raised my level of interest. Thanks Jerry!
Update on the Terra problem.
I have just uploaded a patched version of the "Terra.exe" SOAP client, and a new "ToGoSprayClients.zip" file, which contains all the samples.
I simply changed the hardcoded url to another hardcoded url. I am planning to do more work on this application, but don't have the time right now. The url I have switched to is the one that they listed at XMethods.
I tried out the other clients, and amazingly, all but the Weather service were still responding. When I was developing Spray, listed services were dropping off like flies, so it is good to see these ones have stuck around.
If you are using the source version, you need to locate the WSDL in the TerraService(class)>>externalWSDL method, and at the bottom of this method, change the service/port/soap:address location attribute to "http://terraservice.net/TerraService.asmx".
Look familiar? :) I am writing this from the comfort of a Dolphin workspace!
What I got done:
A good indication of lack of sleep ... a transparent CHB
I have been tossing up various options for uploading content to a server, and strategies for keeping the server synchronized. I want the basis of the design to be that the local filesystem is the primary data source of the blog, with the constraint that the system must work with standard 3rd party HTTP servers.
While using something like Availl would be cool, I doubt I could convince my service provider to install it on their servers (however, I am fairly sure I could convince the dolphinharbor administrator to :) While I will initially use FTP, I would like to avoid the need for it.
The user can add/edit/delete files on the local filesystem. The directory structure of the local filesystem is mirrored on the server as a url namespace. A method is needed to identify differences between the local filesystem and the server, and upload those differences to the server.
WebDAV is the obvious technical solution. It would allow me to attach my own properties to the resources; I would set a timestamp property when PUTting the resource, and use that to test whether the local copy of the resource (the document) had changed. With a WebDAV PROPFIND, you get a response which not only gives you the namespace layout of a container/collection resource, but also the properties of the container resource and its child resources. This would allow me to enumerate both the local filesystem documents and the server resources to identify the differences.
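Picking the hrefs and timestamps out of the PROPFIND multistatus response is mechanical. A Python sketch (the element names come from the WebDAV spec; using getlastmodified rather than a custom timestamp property is my simplification):

```python
import xml.etree.ElementTree as ET

DAV = "{DAV:}"  # WebDAV elements live in the "DAV:" XML namespace

def parse_multistatus(body):
    """Map each href in a Depth: 1 PROPFIND response to its getlastmodified."""
    resources = {}
    for response in ET.fromstring(body).findall(DAV + "response"):
        href = response.findtext(DAV + "href")
        modified = response.findtext(".//" + DAV + "getlastmodified")
        resources[href] = modified
    return resources
```

Comparing that map against the local directory listing gives the set of documents to PUT.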
A plus for this method is that I have a working WebDAV client in Dolphin that builds a tree of the server namespace. It would be quick for me to implement this solution.
The problem with WebDAV is that my current service provider for this host does not have it installed. It is almost tempting to switch the blog over to dolphinharbor which is running mod_dav, but I would still like to find a more general solution.
Only a passing thought, but we can use the local filesystem names to construct the server resource urls (since no resources are created on the server except by the one client). We could do an HTTP HEAD on each resource and use either a 404 response, or the HTTP date and cache headers, to determine if the resource needs to be created/updated.
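The per-resource decision reduces to a small pure function. A Python sketch (the status codes are standard HTTP; the exact comparison policy is my reading of what the post intends):

```python
from email.utils import parsedate_to_datetime

def needs_upload(local_mtime, status, last_modified_header=None):
    """Decide whether a local document must be PUT to the server.

    local_mtime: timezone-aware datetime of the local file.
    status: status code from a HEAD request on the resource's url.
    last_modified_header: the Last-Modified header value, if any.
    """
    if status == 404:                  # resource was never uploaded
        return True
    if last_modified_header is None:   # nothing to compare against; play safe
        return True
    return local_mtime > parsedate_to_datetime(last_modified_header)
```

One HEAD per resource is chattier than a single PROPFIND, which is the price of staying within plain HTTP.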
Along the same lines is to force a document/url naming scheme that includes a hash of the content in the name. Therefore a resource with the same name always has the same content. This is nice because it avoids many naming problems, but it may become restrictive. I guess it could be used as a lower layer for a versioning based framework, but I don't need that.
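A sketch of such a naming scheme in Python (the digest algorithm, truncation length, and name layout are all my assumptions; the post doesn't specify a scheme):

```python
import hashlib

def hashed_name(name, content):
    """Embed a digest of the content in the file name, so a resource with
    a given name can never change content behind the reader's back."""
    digest = hashlib.sha1(content).hexdigest()[:8]
    stem, dot, ext = name.rpartition(".")
    if not dot:
        return "%s-%s" % (name, digest)
    return "%s-%s.%s" % (stem, digest, ext)
```

Updating a post then means uploading a new resource and a new index entry rather than overwriting in place, which is what makes the 404-check sufficient for synchronization.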
I think this is the way I will go. I think the idea is RESTful, but I am no REST expert. I like it, because it fits in nicely with the client work I have been doing for my main project. (The project is a framework for managing local copies (documents) of server resources, and various pluggable strategies for extracting and following the links from the retrieved documents.)
The process would be that after a successful synchronization between a local filesystem directory and a server container, two index files would be uploaded. One would be the typical html human readable index. The other would be an xml file containing links to the child resources (XLink?) and timestamps. Before a successful synchronization, this resource would be retrieved and the links parsed. Posts would be stored in one container, Articles in another. Certain error conditions could lead to situations where resources had been created, but are not in the index file, however this would only result in them being replaced next synchronization.
The index file, or the information in it, could be stored on the client, but I think it is more RESTful to have it on the server where it could be used for other purposes.
All I would need from the server is the ability to do an HTTP PUT to it.
Radio has been working well for me, but I decided to have a look at what else is available. I found this Blogging Software Roundup article a good summary.
I didn't realize that Radio was fairly unique in that it uses a local server, and constructs the content locally (like CityDesk) before uploading. The other solutions involve either 1/ storing info on a server and using Perl etc. to construct the content dynamically, or 2/ using a 3rd party provided web application.
I like Radio's model.
I am thinking a good project for this weekend would be to roll my own "blogging" software. I don't need any advanced features; all I need to do is construct an index page and RSS file from a number of news items. The bottom line is that it could be done with Notepad and an FTP client.
Down the track it might be a good reason to port STT.
I will probably stick with a 3rd party FTP client for the moment. A couple of years ago I wrote an FTP client, but it uses WinInet, and I don't like the thought of using that. Maybe I could use HTTP's PUT ... mmm, maybe this could be nice and RESTful.
... So, this blog may have a new (and very basic) look and feel next time you visit. It may also just not be here if something goes wrong :)
I got a report that the Spray Terra client app is currently not working. It looks like terraserver.net is not responding, but terraservice.net is. The terraserver.net endpoint is hard coded into the "ToGo" application (along with the whole wsdl), and I would need to re-deploy it if the outage on terraserver.net is permanent. I remember at the start of the year using terraservice.net, but having to switch to terraserver.net when the former went offline. Back and forth.
I have been thinking about doing some more work on this app, and doing a shareware release, but it is issues like this that worry me. I could make it more tolerant, for example by allowing a new wsdl to be downloaded, but still, the application's continued viability depends on Microsoft's continued good-will and Spray/.NET interoperability.
I am still trying to work out what level of support I need to offer for skinning. Actually, I am still trying to work out what skinning is.
My conclusions so far are:
At minimum I want my applications to be "skin friendly" ... I just have no idea what that means in practice.
I think I need to bite the bullet and upgrade my development machine to XP.
The new Bottomfeeder version has been working well. A couple of nice UI improvements, and it is doing a great job of parsing all the different RSS feeds ... I have read that isn't the easiest thing to do with the current proliferation of RSS versions. Thanks James and Dave.
I had two small projects I wanted to do this weekend, and I ended up finding the answer to both in the same place.
I wanted to add the common "Open/Open With ..." menu commands to my applications, and I wanted to find a way to open the content of an email in a user's default email program. It was one of those weekends where I ended up spending 95% of my time trawling through MSDN for a best practice solution that will work on most flavors of Windows.
For my current project, I don't need reliable email ... the email function is a cosmetic additional feature. I played around with Dolphin 5's new CDO interfaces. They were easy to use and would do what I wanted, but they are for Win2000 only. I was able to send attachments, and also use the IMessage interface to generate an email with the lovely MHTML content-type.
The other options were MAPI, Simple-MAPI, CMC, and older versions of CDO. At this point I started to get a headache. On my own machine (which has never had Office or Outlook installed), the recommended practice for determining which options I could use came up with Simple-MAPI only. I found it interesting that I had been using CDO 5 minutes before that. Arrrg.
I decided to take a leaf out of IE's book, and have a look at how "Send To"/"Send" works. Since Dolphin requires IE, I think it is safe to assume that on most of my users' machines there will be a "sendto" folder with a "Mail Recipient.MAPIMail" file. I found that I could drag/drop files onto "Mail Recipient.MAPIMail", and I could copy from the Shell and paste to "Mail Recipient.MAPIMail", but I couldn't copy from Dolphin. This is a pretty good indication that it uses OLE DataTransfer, and I didn't want to have to implement my own IDataObject for this.
Back to the first project ... I had decided to use the "Scriptable Shell Objects" for this. The alternative was to trawl through the registry collecting the information myself, but these interfaces make it very easy.
I found I could create a Folder and ask it for an item's "Verbs", which are effectively what is displayed in a Shell context menu for the file. To perform the command, I send the #doit message to the verb's interface. I have only poked at it so far, but it looks to be the simplest way to do what I need, and fingers crossed, it will work as expected when I test on Win98/2000 and XP.
Back to the second project ... my stumbling block with "send to ... mail recipient" was that I couldn't easily put content on the OLEClipboard in Dolphin. I decided to see if I could use these Shell verbs to do the work for me. I created a file with my content, then got the "Copy" IFolderItemVerb for that file and sent #doit. I then found the "Paste" IFolderItemVerb for "Mail Recipient.MAPIMail" and sent it #doit. Yippee! It worked.
The bottom line is that I think I have a solution for both that should just work. I have reached my MSDN and Windows saturation point for this week, and I need to do some pure Smalltalk to calm my nerves, but I will come back to this and implement it properly.
"It all adds up to one more piece of evidence supporting my hypothesis that we are about to enter a golden age of desktop software ..."
Groove is a great example. Jerry and I have been playing with it this week, and it is both a powerful tool and very easy to use. While it is browser-like (and it seems to use browser components), its user interface has been designed for a specific purpose, and that, IMO, is what makes it so effective.
Zinio is also worth checking out. I was surprised at how readable their content is. It uses a nifty automatic blurring/zooming which first gives me the context of the page, and then brings it into focus. It feels very natural to me. They have plenty of free trial magazines, and seem to be adding new magazines regularly.
I think the plug-in needs some UI tweaking. Currently I am using the same ValuePresenter in both the Shell and the plugin, but I found that the Shell's rule tree is not as usable when living in the plugin. Because the lint is for a single class only, there are fewer problems, and I don't see the need to be able to select individual rules to run.
I am thinking of changing the tree to a filtering list which just shows the rules with problems. This would mean that you could switch to the lint pane and, at a glance, see the problems. I tried auto-expanding the tree, but I still needed to scroll up and down. (Lazy in the extreme, but I think it will be worth the effort!)
I didn't make many posts to this blog during the week; however, last weekend I set up a separate blog which I did make regular posts to. The goal of the second blog is to keep a journal of marketing ideas for my current project, for review by my marketing department (my sister!). It is a one-way dialog, but it is working well for the moment.
I have a bad habit of ignoring the rest of the world when I am in the middle of a project that is going well. I have a stack of email, newsgroups, news, and blogging to catch up on this weekend. During previous projects I have gone for weeks before attempting to catch up, but for this project I have decided to put the code aside every weekend and do other things. Two-line monosyllabic emails are a good indication my current project is going well. A two-page ramble means I am stuck or in between projects.
I see that http://safari.oreilly.com have a new look and feel. My first set of books was: "Network Security with OpenSSL", "Peer to Peer", "Programming Jabber" and "Software Architect Bootcamp". What I need right now is more books on User Interface design. This book on MSDN is a good reference, and Joel on Software has some great information.
Copyright 2002 Steve Waring