Serge van den Oever [Macaw]

SharePoint RIP. Azure, Node.js, hybrid mobile apps

  • Macaw Discussion Board on Quick Launch bar

    If you create an instance of a list you get the question: “Display this XXX on the Quick Launch bar?”. If you select “Yes”, a link to the list is displayed in the quick launch bar of your site. The Macaw Discussion Board has an issue that even if you select “Yes”, it does not appear on the quick launch bar.

     If you go to “Modify settings and columns” of a Macaw Discussion Board list and then select “Change general settings”, you can again answer the question “Display on the Quick Launch bar?”. If you answer this question with “Yes” the link to the Discussion Board becomes available on the quick launch bar.

    This same procedure is needed for tools that utilize this setting for displaying lists in their navigation. A good example is the Advis Site Navigator that only displays lists with this setting set to true.

  • MacawSharePointSkinner 1.0.0.1 released

    Welcome to the MacawSharePointSkinner. MacawSharePointSkinner is a tool designed to enable non-intrusive modifications to the visual and functional design of SharePoint. The tool can be used for both Windows SharePoint Services 2.0 and for Microsoft Office SharePoint Portal Server 2003. Actually, it can be used for any web site utilizing the ASP.NET technology. Download at http://www.gotdotnet.com/Workspaces/Workspace.aspx?id=3ed68681-ae28-4d33-8c36-403e6af7fa11 UPDATE: can now be found at http://www.codeplex.com/SharePointSkinner.

     

    One of the major issues that we encounter in the implementation of SharePoint within organizations is that organizations want modifications to the visual and functional design that are almost impossible to implement without a major overhaul of the standard files and templates provided with SharePoint. SharePoint is constructed as a kind of standard product that is best used out of the box. Some design can be applied by specifying themes (for team sites) or by modifying CSS stylesheets (for the portal). The possibilities here are limited however, and changes to the actual HTML that is rendered result in changes to hundreds of standard files.

     

    When implementing customer requested visual modifications, one of the big problems that we encountered in making extensive modifications to the files and templates delivered with SharePoint was that the rendering of the same HTML is implemented differently by different pages. Some pages contain the actual HTML that is outputted and can be easily modified. Other pages contain server controls that do the rendering of the same HTML. These pages are almost impossible to modify. Another problem is that modifications must often be made to hundreds of pages.

     

    The approach that MacawSharePointSkinner takes is two-fold:

     

    Text Replacements – MacawSharePointSkinner lets SharePoint render the final HTML, and just before this HTML is sent to the browser MacawSharePointSkinner makes the needed modifications to this HTML. This is done in such a way that no modifications are needed to the internal files of SharePoint, so it is non-intrusive. Another advantage is that it will survive service packs (although the output HTML may change in a service pack!) and template modifications.

     

    Url Redirections – MacawSharePointSkinner can translate requested URLs into other URLs. This allows you to redirect standard SharePoint URLs to your own URLs.

     

    MacawSharePointSkinner is implemented as an HttpModule that provides functionality for url replacements and powerful replacements in the HTML output rendered by SharePoint.

     

    I will not describe the inner workings of an HttpModule; for more information have a look at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconhttpmodules.asp.

     

    1         How to install MacawSharePointSkinner

    1.1      Introduction

    MacawSharePointSkinner is an HttpModule. HttpModules are configured in the web.config of your ASP.NET web site. SharePoint is an ASP.NET web site. The required DLL is installed in the Global Assembly Cache (GAC).

    1.2      Procedure

    Follow the steps below for installation:

     

    Step 1 – Deploy the DLL Macaw.SharePoint.Skinner.dll from the Release directory to the GAC by dragging[1] it to the directory c:\windows\assembly using Explorer.

    Step 2 – Make a directory to contain the MacawSharePointSkinner configuration file, for example c:\MacawSharePointSkinnerConfig. Copy the files SkinConfig.xml and SharePointSkinner.xsd to this directory.

    Step 3 – Open the web.config files of the portal for which you want to enable the MacawSharePointSkinner functionality and of the SharePoint /_layouts virtual directory in Notepad or another text editor. The portal web.config can be found in the virtual directory of the portal (when SharePoint is configured on the default web site this is c:\inetpub\wwwroot); the /_layouts web.config is C:\Program Files\Common Files\Microsoft Shared\web server extensions\60\TEMPLATE\LAYOUTS\web.config.

     

    In steps 4 through 6 the required additions to the configuration file are shown in the fragments below. The other lines are only there to give you the context where to make the modifications.

    Step 4 – Enable support for an appSettings section:

    :<configSections>
      <section name="appSettings" type="System.Configuration.NameValueFileSectionHandler, System, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
      <sectionGroup name="SharePoint">:

    Step 5 – Add the appSettings section with the following keys:

    MacawSharePointSkinner-ConfigFile – path of the configuration file. Must be a file system path, not a URL.

    MacawSharePointSkinner-Logging – ‘on’ or ‘off’ to enable or disable debugging information in comments in the page.

    :</configSections>
    <appSettings>
      <!-- MACAW: configuration for MacawSharePointSkinner -->
      <add key="MacawSharePointSkinner-ConfigFile" value="c:\MacawSharePointSkinnerConfig\SkinConfig.xml"/>
      <add key="MacawSharePointSkinner-Logging" value="on"/>
      <!-- MACAW: end of configuration for MacawSharePointSkinner -->
    </appSettings>
    <SharePoint>:

    Step 6 – Add the MacawSharePointSkinner HttpModule:

    :<httpModules>
      <clear />
      <add name="OutputCache" type="System.Web.Caching.OutputCacheModule" />
      <add name="WindowsAuthentication" type="System.Web.Security.WindowsAuthenticationModule" />
      <!-- <add name="Session" type="System.Web.SessionState.SessionStateModule"/> -->
      <add name="MacawSharePointSkinner" type="Macaw.SharePoint.Skinner.Skin,Macaw.SharePoint.Skinner, Version=1.0.0.1, Culture=neutral, PublicKeyToken=efcf6ac388b9b555"/>
    </httpModules>:
     

    1.3      Final step

    The final step is to modify the MacawSharePointSkinner configuration file SkinConfig.xml.

    1.4      Alternative configurations

    This section describes some alternative configuration possibilities for the HttpModule dll, and for the used configuration files.

    1.4.1      HttpModule dll deployment

    The procedure described above deploys the Macaw.SharePoint.Skinner.dll to the global assembly cache. This deployment has the advantage that you only need one step to deploy the assembly and it is available in all virtual directories. Disadvantage is that an IISRESET is needed to activate the DLL.

     

    If you don’t want to deploy Macaw.SharePoint.Skinner.dll to the global assembly cache, you need to deploy it to the following bin directories:

     
    • C:\inetpub\wwwroot\bin (the path to the SharePoint virtual directory)
    • C:\Program Files\Common Files\Microsoft Shared\web server extensions\60\ISAPI\BIN (to keep FrontPage working, and have skinning support on the help pages)
    • C:\Program Files\Common Files\Microsoft Shared\web server extensions\60\TEMPLATE\LAYOUTS\BIN (to have skinning enabled on all pages in the ‘/_layouts/’ directory)

    1.4.2      Configuration files

    It is possible to specify different configuration files for the different virtual directories in their corresponding web.config files. This allows for specific skinning configurations for the SharePoint virtual directory pages and the /_layouts virtual directory pages.

     

    It is possible to specify a file pattern as a configuration file, instead of a single file. For example, if you specify c:\MacawSharePointSkinnerConfig\SkinnerSharePoint*.xml as configuration file in the web.config of the SharePoint virtual directory and c:\MacawSharePointSkinnerConfig\SkinnerLayout*.xml in the web.config of the /_layouts virtual directory, you can have multiple configuration files to define your skinning operations. This is used in large SharePoint modification projects where each subproject has its own configuration files. Note however that the configuration files are read in undefined order, so make the configuration files as independent of each other as possible. Especially overlapping URL redirections can lead to unpredictable behavior.

     

    If order of interpretation of configuration files is important, it is also possible to supply multiple configuration files separated by ‘;’ characters. For example: c:\MacawSharePointSkinnerConfig\mefirst.xml; c:\MacawSharePointSkinnerConfig\restoffiles*.xml
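    In the web.config appSettings section this could look like the following sketch (the key name is the one from the installation procedure; the paths are the example paths above):

     <add key="MacawSharePointSkinner-ConfigFile" value="c:\MacawSharePointSkinnerConfig\mefirst.xml;c:\MacawSharePointSkinnerConfig\restoffiles*.xml"/>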

    2         MacawSharePointSkinner configuration

    2.1      Introduction

    Configuration of the MacawSharePointSkinner is done in an XML file named SkinConfig.xml. This file can be found in the directory c:\MacawSharePointSkinnerConfig, or another directory as defined in step 2 of the installation procedure in section 1.2. This file can be edited in any text editor like Notepad, or in a special XML editor[2].

     

    Within the configuration file regular expressions[3] are used extensively to define match patterns.

    2.2      Structure of the configuration file

    The structure of the configuration file is unambiguously defined by the corresponding XSD schema SharePointSkinner.xsd.

     

    In this chapter some configuration examples are given.

    2.3      Skinning language

    This section describes the skinning elements that make up the skinning language. The elements and their hierarchy are given below; the occurrence count is specified between brackets.

     

    (1)      exactly once
    (0,1)    optional
    (0,n)    zero or more times
    (1,n)    one or more times

     

    skinner (1)
        default-uri-matchtype (0,1)
        cache-time (0,1)
        parameters (0,1)
            parameter (1,n)
        urlredirections (0,1)
            urlredirection (0,n)
        rules (0,1)
            rule (0,n)
                uris (1)
                    uri (1,n)
                        match (0,1)
                        parameters (0,1)
                            parameter (1,n)
                        texts (0,1)
                            text (0,n)
                                match (0,1)
                                parameters (0,1)
                                    parameter (1,n)
                blocks (1)
                    block (1,n)
                        selection (0,1)
                        replacements (1)
                            replacement (1,n)
                                find (1)
                                replace (1)

      

    Below is a detailed description of the available elements.

     
    Element Description

    Skinner

    Root element in the skinning configuration file.

    default-uri-matchtype

    Default way of matching for all match elements for uri.

     
    Type Name Req. Possible values / description
    Attribute matchtype No RegExp|WildCard|Exact; not specified -> RegExp
     

    Currently only the type RegExp is supported. This is always the initial default value.

    cache-time

    Time to cache the configuration file in seconds.

     
    Type Name Req. Possible values / description
    Attribute duration Yes 0, -1, n
     

    Currently time expiration is not supported. Only the following values are supported:

    0: the configuration is reread on every replacement (for testing purposes only)

    -1: the configuration is never reread. A new initialization happens on IISRESET

    parameters

    Group element for specifying parameters using the parameter element.

    parameter

    Parameter definition that can be used for replacements in other elements like match, find and replace.

     
    Type Name Req. Possible values / description
    Attribute name Yes Name of the parameter
    Value n.a. Yes Value of the parameter
     

    Parameters are replaced in the text of elements when the text {{parametername}} occurs.
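    As a minimal sketch (the parameter name and value are invented for illustration):

     <parameters>
       <parameter name="company">Macaw</parameter>
     </parameters>

    The text {{company}} can then be used inside match, find and replace texts.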

    urlredirections

    Group element for specifying url redirections using the urlredirection element.

    urlredirection

    Url redirection definition that specifies how to redirect a matching target url to a destination url.

     
    Type Name Req. Possible values / description
    Attribute name Yes Name of the redirection rule
    Attribute permanent No true|false. If true, redirections are done through an HTTP 301 response. This means an extra roundtrip to the server, and a complete URL (http://servername/...) must be specified for the destination. If false, the redirection is done within the same application domain (same virtual directory).
    Attribute enabled No true|false. If true this redirection is used; if not specified the redirection is used
     

    Parameters are replaced in the text of elements when the text {{parametername}} occurs.

    target

    Specifies the expression to match the target uri.

     
    Type Name Req. Possible values / description
    Attribute matchtype No RegExp|Exact; not specified -> RegExp
    Value n.a. Yes Expression to match. Regular expression match in CDATA section

    destination

    Replacement for the matched uri. May contain captures and parameters.

     
    Type Name Req. Possible values / description
    Value n.a. Yes Replacement text. Regular expression replacement in CDATA section
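    As a sketch (the paths are invented), a urlredirection that reuses part of the matched url through a regular expression capture could look like this:

     <urlredirection name="layouts-pages">
       <target><![CDATA[/_layouts/1033/(.*)\.aspx]]></target>
       <destination><![CDATA[/_layouts/my1033/$1.aspx]]></destination>
     </urlredirection>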

    rules

    Group element for specifying rules using the rule element.

    rule

    Skinning is implemented by execution of rules. More than one rule can be defined. When a rule matches, skinning can stop at this rule or it can continue to match next rules. A rule contains two elements:

    • uris specifies the match the requested page must make, on uri, text or both
    • blocks specifies the replacements to be executed on the page when it matches
     
    Type Name Req. Possible values / description
    Attribute enabled No true|false. If true this rule is used; if not specified the rule is used
    Attribute name Yes Name of the rule
    Attribute description No Description of the rule
    Attribute match-continue No true|false. If true, continue matching next rules if this rule already matched; if false, stop after a match

    uris

    Group element for specifying uri matches using the uri element. Within the uris element we specify which pages will match this rule, either on uri match or text match or both.

    uri

    Uri match. If no match element is specified all uris match. Parameters can be defined under the uri element that can be used in the block replacements.

     
    Type Name Req. Possible values / description
    Attribute enabled No true|false. If true this uri is used; if not specified the uri is used

    match (in uri)

    Specifies the expression to match the uri. If this element is missing, all uris match.

     
    Type Name Req. Possible values / description
    Attribute matchtype No RegExp|WildCard|Exact; not specified -> RegExp
    Value n.a. Yes Expression to match. Expression match in CDATA section

    texts

    Group element for specifying texts using the text element.

    text

    Text match. If no match element is specified the text always matches. Parameters can be defined under the text element that can be used in the block replacements.

     
    Type Name Req. Possible values / description
    Attribute enabled No true|false. If true this text is used; if not specified the text is used

    match (in text)

    Specifies the expression to match the text. If this element is missing, the text always matches.

     
    Type Name Req. Possible values / description
    Attribute matchtype No RegExp|Exact; not specified -> RegExp
    Value n.a. Yes Expression to match. Regular expression match in CDATA section

    blocks

    Group element for specifying blocks using the block element.

    block

    Block selection. If no selection element is specified the whole text is selected for replacements.

     
    Type Name Req. Possible values / description
    Attribute name No Name of the block
    Attribute description No Description of the block
    Attribute enabled No true|false. If true this block is used; if not specified the block is used

    selection

    Specifies a selection for a block to do replacements on. If this element is missing, replacements specified in the block are executed on the complete text of the requested page.

     
    Type Name Req. Possible values / description
    Value n.a. Yes Block selection. Regular expression in CDATA section

    replacements

    Group element for specifying replacements in the block using the replacement element.

    replacement

    A replacement to be executed. Consists of a find and a replace element.

     
    Type Name Req. Possible values / description
    Attribute name No Name of the replacement
    Attribute description No Description of the replacement
    Attribute count No n: the number of replacements to execute; if not specified, unlimited
    Attribute enabled No true|false. If true this replacement is used; if not specified the replacement is used

    find

    Regular expression for the selection of text that may contain captures. Find text may contain parameters.

     
    Type Name Req. Possible values / description
    Attribute matchtype No RegExp|Exact; not specified -> RegExp
    Value n.a. Yes Expression to find. Regular expression match in CDATA section

    replace

    Replacement for the selected text. May contain captures and parameters.

     
    Type Name Req. Possible values / description
    Value n.a. Yes Replacement text. Regular expression replacement in CDATA section

    Where it is indicated that text must be specified in a CDATA section to prevent invalid XML, use the following syntax: <![CDATA[text]]>
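    Putting the elements together, a complete configuration file could look like the sketch below. All names, patterns and replacement texts are invented for illustration; the element structure follows the hierarchy of section 2.3, and the count attribute is used to limit the replacement to a single occurrence:

     <?xml version="1.0" encoding="utf-8"?>
     <skinner>
       <default-uri-matchtype matchtype="RegExp"/>
       <cache-time duration="-1"/>
       <parameters>
         <parameter name="company">Macaw</parameter>
       </parameters>
       <urlredirections>
         <urlredirection name="example-redirect">
           <target><![CDATA[/_layouts/1033/aclinv\.aspx]]></target>
           <destination><![CDATA[/_layouts/my1033/aclinv.aspx]]></destination>
         </urlredirection>
       </urlredirections>
       <rules>
         <rule name="title-rule" description="Prefix the page title" match-continue="false">
           <uris>
             <uri>
               <match><![CDATA[/default\.aspx]]></match>
             </uri>
           </uris>
           <blocks>
             <block name="whole-page" description="Replace on the complete page">
               <replacements>
                 <replacement name="title" count="1">
                   <find><![CDATA[<title>]]></find>
                   <replace><![CDATA[<title>{{company}} - ]]></replace>
                 </replacement>
               </replacements>
             </block>
           </blocks>
         </rule>
       </rules>
     </skinner>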

    3         Advanced Skinner configurations

    Pages are skinned by the skinner if the following conditions are met:

    • The page request is in an ASP.NET virtual directory
    • The web.config file contains the Macaw.SharePoint.Skinner HTTP module
    • The page request returns content of type text/html
     

    If you have a page that returns for example XML (content type is text/xml) the page is NOT skinned.

     

    If you don’t want a page to be skinned (and no comments added to the top, even if there is no URL match), you can add skinnerskip=1 to the query string.

     

    Example: http://server/default.aspx?skinnerskip=1

      

    4         Regular expressions

    4.1      Introduction

    Matches, selections, finds and replacements are all done using regular expressions. There are multiple flavors available in regular expressions. MacawSharePointSkinner uses the .Net flavor. For more information on regular expressions have a look at:

     
    Description Url

    .Net regular expression documentation

    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpgenref/html/cpconregularexpressionslanguageelements.asp

    A small overview of commonly used language constructs

    http://www.regexlib.com/CheatSheet.htm

    4.2      Regular expression matching configuration

    All regular expression matches performed in MacawSharePointSkinner are done with the following options enabled:

     

    IgnoreCase              Specifies case-insensitive matching.

    Multiline                  Specifies multiline mode. Changes the meaning of ^ and $ so that they match at the beginning and end, respectively, of any line, not just the beginning and end of the whole string.

    CultureInvariant     Specifies that cultural differences in language are ignored.

     

    To increase the performance of matching, all regular expressions are compiled when the configuration file is read.

    4.3      Tools for regular expression construction

    When constructing regular expressions I always utilize a regular expression construction tool. These tools allow you to specify a source text (use the ‘view source’ text of the page you want to do replacements on), a regular expression (including captures) and a replacement. The tool visualizes the matches in the text and the resulting text after the replacement.

     

    See http://www.larkware.com/RegexTools.html for an overview of available tools. One of my favorites is “The Regulator” (http://regulator.sourceforge.net).

    4.4      Tips & tricks

    This section contains some tips and tricks in smart regular expressions to perform skinning tasks.

    4.4.1      Block selection of head

    In one situation we had to replace the stylesheets within the head; this took four replacements. To improve replacement speed, the replacements are done on a block that matches only the head section. The head can be matched as follows:

     <selection><![CDATA[<head>(?:.|\s)*?</head>]]></selection>
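    As a sketch (the stylesheet paths are examples and may differ in your installation), such a block could combine the head selection with a replacement that swaps a stylesheet reference:

     <block name="head-styles" description="Replace a stylesheet reference in the head">
       <selection><![CDATA[<head>(?:.|\s)*?</head>]]></selection>
       <replacements>
         <replacement name="swap-css" count="1">
           <find><![CDATA[/_layouts/1033/styles/ows.css]]></find>
           <replace><![CDATA[/styles/mycompany.css]]></replace>
         </replacement>
       </replacements>
     </block>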

    5         Using MacawSharePointSkinner

    There are many, many usages for the MacawSharePointSkinner. Some examples of the usage of the MacawSharePointSkinner are:

    • Apply different style sheets to different areas in the portal area tree.
    • Remove system account from the “Assigned to:” dropdown boxes in the new and edit pages of certain lists (issues, tasks).
    • Redirect access to certain pages in the /_layouts directory to your own, modified, versions of these pages.

    5.1      Url redirections

    Url redirections in SharePoint work differently from url redirections in normal ASP.NET applications. SharePoint uses special handling of URLs, because it uses a kind of “in context” page access. Examples are the pages in the /_layouts virtual directory. If you request the url http://servername/sites/mysite/_layouts/1033/aclinv.aspx, you actually access the page /_layouts/aclinv.aspx, but in the context of the SharePoint site mysite.

     

    Due to this special handling in SharePoint, we also have to take it into account when specifying the URL redirections.

     

    If you want to redirect the page /_layouts/1033/aclinv.aspx to /_layouts/my1033/aclinv.aspx, do the following:

     <urlredirection name="aclinv.aspx"><target>/_layouts/1033/aclinv.aspx</target>      <destination>/_layouts/my1033/aclinv.aspx</destination></urlredirection> 

    This redirection is performed “in context”, so in the destination page we are still in the same context.

     

    If you want to redirect all access to the “in context” page /_layouts/1033/aclinv.aspx (for example http://servername/sites/mysite/_layouts/1033/aclinv.aspx and http://servername/sites/othersite/_layouts/1033/aclinv.aspx) to a page NOT in /_layouts, the complete url of the destination page must be specified, and the permanent attribute must be set to true (if target is full url, permanent is automatically set to true).

     <urlredirection name="aclinv.aspx" permanent="true"><target>/_layouts/1033/aclinv.aspx</target>      <destination>http://www.disney.com</destination></urlredirection> 

    If you want to redirect only access to the page http://servername/sites/mysite/_layouts/1033/aclinv.aspx (so NOT access to /_layouts/1033/aclinv.aspx in any other context), a complete url of both the target and the destination page must be specified, and the permanent attribute must be set to true (if the target is a full url, permanent is automatically set to true).

     <urlredirection name="aclinv.aspx" permanent="true"><target>http://servername/sites/mysite/_layouts/1033/aclinv.aspx</target>      <destination>http://www.disney.com</destination></urlredirection> 

    6         Frequently Asked Questions

    Q: MacawSharePointSkinner works great in my SharePoint sites and in the portal, but not for the pages in the /_layouts virtual directory. It also does not work for the help pages of SharePoint.

    A: See section 1.2 for information on how to modify the web.config file to add the HttpModule; the procedure to add it to the /_layouts virtual directory is included in that section. For each virtual directory you want to skin you have to modify the corresponding web.config file, so besides the portal/WSS web.config this also means the web.config of the /_layouts virtual directory and of /_vti_bin (the help pages). See section 1.4.1 for the directories these virtual directories map to.



    [1] Copy/paste does not work on the assembly directory; dragging is needed for automatic installation of the DLL in the GAC. The assembly can also be installed using the gacutil tool; in that case execute the following command: gacutil /i Macaw.SharePoint.Skinner.dll

    This tool can be found in the directory C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322.

     

    [2] Because the corresponding XSD schema file is provided, syntax checking on the XML can be used in XML editors like XMLspy and Visual Studio .NET 2003.

    [3] If you don’t know what regular expressions are, go to Google, and in the search string type define:regular expression. See also chapter 4 for more information on regular expressions.

  • Try Ruby online, and learn it the interactive (Ajax) way

    Never heard of Ruby? It’s that new hot language that is at the tipping point of blowing away all existing languages ;-)

    Read about it at http://ruby-lang.org, and of course don’t forget the great Ruby on Rails framework at http://www.rubyonrails.org that lets you build a data driven, Ajax style web site in under 15 minutes, guaranteed. And the book “Agile web development with Rails” is absolutely a great read.

    To get a touch of the language without installing the whole shebang, and even get an interactive training (Ajax style), check out http://tryruby.hobix.com/. It is really cool! (Uhhh… needs Firefox by the way, sorry!) <== UPDATE: Problem is solved, works on IE and Safari now, but Opera seems to be a problem!! Great work!!

  • Using SharePoint web services from Flash: put a cross-domain policy file on your SharePoint server

    In a weblog posting on calling SharePoint web services from Flash I forgot to mention that you need to put a special file in the root of your SharePoint server to allow the Flash client to access the web services on the server. This file is called the cross-domain policy file.

    Russ asked a question on this in a comment on that posting. I forgot to answer this question, sorry Russ.

    For more information on the cross-domain policy file see for example this documentation page at Macromedia.

    I used FrontPage to drop the file crossdomain.xml in the root of the SharePoint site. The contents of the crossdomain.xml file are in my case:

    <cross-domain-policy>
       <allow-access-from domain="*" secure="false" />
    </cross-domain-policy>
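    If you don’t want to open the web services to Flash movies served from every domain, the policy can also be restricted to a specific host. A sketch (the hostname www.example.com is invented):

    <cross-domain-policy>
       <allow-access-from domain="www.example.com" secure="false" />
    </cross-domain-policy>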

  • Nokia 770 development options... Flash?

    For some time now I’ve been looking into building rich internet applications using Flash. I’m very impressed by Flex 2 and had some adventures developing with it. Flex 2 is still alpha and targets the new Flash version 8.5. Now I have a Nokia 770 device, and the Flash version on it is 6.0.82.0.

    I’m also looking into how applications can be developed for the Nokia 770 (yes, I’m a busy man without much focus ;-) ). I installed the C/C++ development environment, but I actually don’t want to go back to C/C++ development with all its difficulties. Other options are Mono and Python, which I’m also currently investigating.

    Today I was at the first day of the first European Flash conference, http://www.sparkeurope.com/. There was a very interesting session on the open source community for Flash. One of the sessions was by Edwin van Rijkom on Screenweaver, an open source project for building Rich Desktop Applications. He is wrapping Flash in a host application and provides access to the host OS and native code on the host OS through the Screenweaver host application. Because the Nokia 770 also has Flash installed I was dreaming about the possibility to create such a host application for Flash that provides access to all Nokia 770 features and the possibility to hook it into the Nokia 770 menu structure. I discussed this idea with Edwin and he thought that it would be a great idea.

    I searched for people who tried such an approach on Linux. I ended up on a blog post by Darron Shall who created a C# host for Flash, wasn’t happy with it and searched for a more cross platform approach. He looked into XulRunner to host the Flash application. I just looked into this approach, but I’m afraid that running XulRunner on the Nokia 770 would be too heavy. The runtime is big, and on top of that you get the additional Flash runtime which is something like 1.5 MB as well. Way too much for creating small apps for the 770.

    It would be great if a really small Flash hoster could be created for the Nokia 770. I think Flash is a great platform for creating applications. Maybe someone has already done something in this direction? If this is the case, please let us know!

  • Setting up the development environment for the Nokia 770

    The development environment for the Nokia 770 linux-based device has to run on... you guessed it: Linux.
    As a developer the best place to go is http://maemo.org. From here you can get all the information to get you started.
    Even if you don't have the device yet you can start developing for it. Setting up the development environment is NOT easy; it's not like developing for Windows CE on Windows, where you start an msi and you're done. You really need to dive into Linux, and I must say it has become a lot better but also more complex since the last time I used it (quite some years ago!).

    As stated in this tutorial, the Pre-requisites for developing applications for the Nokia 770 are:

    • Intel compatible x86 processor at 500 MHz or faster
    • 256 MB RAM or more
    • 2 GB free hard disk space
    • Linux OS (Debian or Ubuntu are recommended, but other fairly recent distributions should also work)
    Because the 770 is running Debian as well I decided to go for the Debian linux distribution.

    Because I don't have a spare computer to get this started I decided to set up my Debian Linux in a Virtual Server 2005 virtual machine.
    I downloaded a net-install version of the 3.1 stable release at http://www.debian.org/CD/netinst/. This is a 180 MB ISO image that you can mount as the CD-ROM of a Virtual Server 2005 virtual machine. After installing the core functionality, you can install the other needed parts from the internet.

    Installation using Virtual Server 2005 went ok, but I kept getting a kind of "timer value exceeded" error reported across all my screens. This was really irritating! The only solution I found to get rid of this error was recompiling the kernel, not something I was really waiting for. Another problem is the video card emulated by Virtual Server 2005: an Ati Trio64. This card is not supported by the X-Windows installed with Debian; the card is too old. I configured X-Windows with vesa as the graphics board (a standard implementation), but I didn't get a higher resolution than 800x600 and it was really slow.

    I decided to move over to VMware. Installation was completely painless, except that I couldn't create a hard disk larger than 8 GB. No problems with timer value exceeded errors, and VMware has a "vmware" graphics driver that installed without problems in X-Windows. I now have X-Windows running at 1600x1200x16 bits (16 bits is needed for correct emulation of the Nokia 770, I read somewhere).

    I tried to follow this tutorial for the installation; this cost me a lot of time, and it is NOT the best way to go. There is an Eclipse plugin available for 770 development called Laika, and the description of this plugin and the tools to install works better. Go to the tools page, and read about the tools that you need and how to install them.

    The first thing I did was install the Java SDK. The Laika page mentions Java(TM) 2 Runtime Environment, Standard Edition 1.4.2_08; I decided to download JDK 5.0 from http://java.sun.com/j2se/1.5.0/download.jsp, where I downloaded the Linux self-extracting file jdk-1_5_0_05-linux-i586.bin.

    I couldn't install this file directly on Debian, so I converted it to a Debian package using the information at http://www.debian-administration.org/articles/142. Now you can install the package using dpkg.

    Now you can download Eclipse. I installed it in the /usr directory. Unpack the downloaded file using tar xvfz filename while you are in the /usr directory. Run /usr/eclipse/eclipse to start Eclipse.

    If you want to see what you get for development tools have a look at these screenshots!

    I didn't get Xvnc working (in Eclipse see Windows -> Preferences -> Scratchbox Preferences, X-Environment), so I'm using Xephyr as described in this tutorial. Make sure that you use equal X-Windows screen numbers.

    In Eclipse I configured the Xephyr tool as follows:

    Syntax for starting X-server: /home/serge/start-xephyr.sh

    The ".. IP and number" field in my configuration is set to: DISPLAY=127.0.0.1:2
    The viewer is not used in my configuration, I think.

    Now that I have everything up and running, I can run and debug my applications in the Scratchbox emulation environment!

    Good luck with your installation, you will need it ;-)







  • The Nokia 770: an amazing device!

    Let's have this said first: I really don't want to advertise for Nokia, and I'm in no way affiliated with Nokia, I'm just a happy user!
    In a previous post I already described the Nokia 770 device. Within 24 hours of ordering UPS came at my door to deliver my "present".
    I'm not really a gadget guy, I just wanted a device to read books and browse the internet while moving around the house and lying in my bed. And I must say: it absolutely goes far beyond my expectations! Browsing is flawless, you can read PDF files perfectly, and the RSS news reader works well enough to follow the blogs I read. What more do you need? No, it's not a phone! Who cares? I have a Nokia phone to make my phone calls!

    The good thing is that you boot up the device in the morning and can keep it in "standby" all day. Within seconds you have access to your documents and the internet wherever you are (as long as you have Wifi access). And with a 800x480 resolution with 225 dpi the display is as sharp as paper!! What a difference with those lousy 320x200 resolution devices!

    I did not try the email functionality of the device yet, I just use outlook web access, works great!!

    Being a developer I wanted to know how I could develop for the Nokia 770 device. It is running a modified version of Debian Linux, and all development is also done on Linux. In a coming post on the Nokia 770 I will explain how I set up the development environment.


  • Nokia 770, linux-based tablet, 800x480, bluetooth, Wifi, USB, and.... .Net development using Mono!

    Nokia just came out with a great new device: the Nokia 770. It is a Linux based device running the Internet Tablet 2005 software edition. It has a small form factor and an 800x480 resolution at 225 pixels per inch in 65536 colors. It is a computer, not a Swiss army knife. So no phone, no camera. I think this is a plus… you don’t want such a large device against your ear… use the tool for what it is made for, use your telephone for calls! Make sure your phone has Bluetooth, so you can communicate with the 770 through your phone. Use Wifi when you are at home; that’s what this thing is made for: Bluetooth and Wifi.

    The price in Europe is 359 euro. Available NOW! (And I ordered it; delivery in two days, they say!! ;-) )

    Have a look at the specifications!!!!

    It uses a Texas Instruments OMAP1710 processor, which seems to be an ARM based processor.

    I’m looking for a device that can solve one of my biggest problems: getting to bed early and being able to browse the web and read a book without keeping my girlfriend out of her sleep, because she can’t sleep when the lights are on. I read the Da Vinci Code on my Cassiopeia, at 320x200 resolution. It did the job, but the text quality wasn’t good enough. Since then I’ve been thinking about what to do… buy a full blown tablet PC from Toshiba, or go for a phone/pda thingy with 640x480 VGA resolution like the MDA Pro (or one of its other-named incarnations). I think I found my solution!!!

    I’m a Microsoft development guy, so a Pocket PC based thing seems to be the best idea. But if I look at the applications coming out for PPC I’m not really impressed. Maybe it’s the 320x200 screen that is used everywhere that turns me off… I worked on Linux for years, and I know that getting your system up and running the way you want it takes a while, but as a technical guy I should manage. Maybe the 770 isn’t that bad for me...

    Have a look at this article to see what people think about it.

    Nokia created a site http://www.maemo.org/ for developers who want to target the new 770. Already quite some info on there. Especially the RSS aggregator page is interesting: have a look at http://planet.maemo.org/.

    The 770 runs on X-Windows, and there is a special widget set available for all the UI. It is called the Hildon widget set. See https://stage.maemo.org/svn/maemo/projects/tools/trunk/osso-ui-performances/ (login, password: guest, guest) to see some test code using these widgets.

    There is already a huge set of applications made available to the 770. Have a look at the ApplicationCatalog at the Maemo Wiki.

    Currently most development for the 770 is done in C/C++. But hey, as a C# guy I don’t want to go back there; I’m getting too old for that. Enter Mono for Maemo and the Nokia 770!!

    There is one thing I’m missing however: a microphone line-in! How to do Skype? Maybe a USB microphone can be a solution?

    Another thing I’m wondering: can the device run applications from RS-MMC? If this is the case it would be great to throw in a 1 GB RS-MMC card!

  • SharePoint, querying lists using DspSts, and consuming this information in Flash

    UPDATE: See this blog post on the crossdomain.xml configuration file you have to put on your SharePoint server to allow Flash to access the SharePoint server using web services.

    Summary

    This blog entry describes how to do complex SharePoint web service calls from a Flash client application to retrieve arbitrary information from SharePoint lists. The very powerful web service DspSts.asmx is used to accomplish this task. This web service allows you to dynamically query any field and item selection from a list and apply ordering on this selection. It is also possible to restrict the number of items to return. The query applied is in the CAML query format.

    Introduction

    In my search for a possible new implementation of the MacawDiscussionBoard for SharePoint I’m looking into Flash as a possible UI for SharePoint discussion lists. In this implementation it is key that all information sent to or retrieved from SharePoint goes through the SharePoint web services interfaces. I’m using the Flex 2.0 Alpha release to generate my Flash application (Flash 8.5), but the techniques shown here are for a large part also applicable to Flash 8 (the current release).

    The problem I had was that I wanted to get list items not through the simple SharePoint web service interface Lists.asmx, which provides a set of simple remote method calls, but through the web service DspSts.asmx, which requires a complete XML document to be passed and has special additional SOAP headers.

    Reason for using this more complex DspSts.asmx web service: the GetListItems(listName, viewName) call in Lists.asmx is too restrictive:

    • It can only return fields of list items as defined in a predefined list view
    • It returns all list items in the list view

    The DspSts.asmx web service is way more powerful:

    • Build queries dynamically in the CAML query format
    • Query any field from the list
    • Query a selection of items from the list
    • Apply ordering on this selection
    • Restrict the number of items to return
    • Get paged sets of items

    Using the DspSts.asmx web service with an XML document based on a complex schema defined in the WSDL and additional SOAP headers was way more complex than I thought it would be. I couldn’t find much information on this and was really struggling until I found the following article on the IBM site: Develop Web services clients with Macromedia Flex. It still took me a lot of time to solve the ‘puzzle’; that is why I thought I had better write this down for others trying to go the same route.

    I moved my code into a simple test application where you can specify the URL of a SharePoint site containing a discussion list, and the ID of this list (use SharePoint Explorer to get the ID). The application retrieves all list items from the discussion list and displays them in a grid. Only a small set of fields is retrieved… the set of fields needed to create a hierarchical structure of all discussion list items. See below for an example of this application.

    [Screenshot: the FlashListReader test application]

    The code below is in MXML (the Flex markup language), a really powerful language for defining Flash applications. The application executes the Query operation of the DspSts web service when the “Execute Query” button is clicked. Variables between ‘{’ and ‘}’ characters are data bindings. The result of the web service call is returned in XML, in the e4x format, the intrinsic XML format of ActionScript 3.

    <?xml version="1.0" encoding="utf-8"?>

    <mx:Application xmlns:mx="http://www.macromedia.com/2005/mxml" xmlns="*">

          <mx:Script>

          <![CDATA[

                import mx.rpc.events.*;

                import mx.rpc.soap.*;

                import mx.controls.Alert;

                import mx.controls.gridclasses.*;

                import mx.collections.*;

           

                [Bindable] public var siteURL:String="https://MySharePointPortalServer/personal/serge";

                [Bindable] public var listID :String="{020013c8-5efc-49a8-b28f-339d401c1046}";

                [Bindable] public var dataGridItems:IList = new ArrayCollection();

          ]]>

          </mx:Script>

     

     

          <mx:Canvas width="638" height="414">

                <mx:Label x="11" y="14" text="SharePoint Site URL:"/>

                <mx:Label x="12" y="44" text="List GUID (use \{..\}):"/>

                <mx:TextInput x="138" y="12" width="375" id="siteURL_textinput" text="{siteURL}"/>

                <mx:TextInput x="138" y="42" width="375" id="siteID_textinput" text="{listID}"/>

                <mx:Button x="521" y="43" label="Execute Query" click="wssDspStsService.Query.send()"/>

                <mx:DataGrid id="listRowsOutput" height="325" dataProvider="{dataGridItems}">

                      <mx:layoutConstraints>

                            <mx:EdgeAnchor left="13" right="13" bottom="13"/>

                      </mx:layoutConstraints>

                      <mx:columns>

                            <mx:DataGridColumn headerText="Ordering" columnName="Ordering" />

                            <mx:DataGridColumn headerText="ThreadID" columnName="ThreadID"/>

                      <mx:DataGridColumn headerText="ID" columnName="ID"/>

                      <mx:DataGridColumn headerText="Title" columnName="Title"/>

                      <mx:DataGridColumn headerText="Author" columnName="Author"/>

                      <mx:DataGridColumn headerText="Created" columnName="Created"/>

                      </mx:columns>

                </mx:DataGrid>

          </mx:Canvas>

         

     

    <!-- For more information on format: See the WSS SDK, and search for "DspSts". Select the topic "Query Method".

           Online documentation: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/spptsdk/html/soapmqueryRequest_SV01071735.asp

    -->

    <mx:WebService

        id="wssDspStsService"

        wsdl="{siteURL}/_vti_bin/DspSts.asmx?wsdl"

        service="StsAdapter"

        port="StsAdapterSoap"

        useProxy="false"

        showBusyCursor="true"

        fault="Alert.show('Failed to load the DWSL. Error: ' + event.fault.faultstring)" load="wssDspStsService_AddHeaders()">

            <mx:operation name="Query" concurrency="single" resultFormat="e4x" fault="wssDspStsService_fault(event)" result="wssDspStsService_result(event)">

                      <mx:request xmlns="http://schemas.microsoft.com/sharepoint/dsp">

                        <queryRequest>

                          <dsQuery select="/list[@id='{listID}']" resultContent="dataOnly" resultRoot="Rows" resultRow="Row" columnMapping="attribute">

                              <Query QueryType="DSPQ">

                                  <Fields>

                                      <Field Name="Ordering"/>

                                      <Field Name="ThreadID"/>

                                      <Field Name="ID"/>

                                      <Field Name="Title"/>

                                      <Field Name="Author"/>

                                      <Field Name="Created"/>

                                  </Fields>

                                  <OrderBy>

                                      <OrderField Name="Ordering" Type="xsd:string" Direction="DESC"/>

                                  </OrderBy>

                              </Query>

                          </dsQuery>

                        </queryRequest>

                      </mx:request>

            </mx:operation>

        </mx:WebService>

       

        <mx:Script>

          <![CDATA[

                // We need to generate the following SOAP headers:

                //

                // <soap:Header xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">

                //    <dsp:versions xmlns:dsp="http://schemas.microsoft.com/sharepoint/dsp">

                //        <dsp:version>1.0</dsp:version>

                //    </dsp:versions>

                //    <dsp:request xmlns:dsp="http://schemas.microsoft.com/sharepoint/dsp" service="DspSts" document="content" method="query">

                //    </dsp:request>

                // </soap:Header>   

                //

                // Only issue is that I couldn't set service, document and method as attributes of request, but only as child elements. But it seems to work!

                // On the other hand: it already works if only the request header is there, it does not matter about its attributes.

                private function wssDspStsService_AddHeaders()

                {

                      trace("Add HEADERS");

                      wssDspStsService.Query.addSimpleHeader("versions", "http://schemas.microsoft.com/sharepoint/dsp", "version", "1.0");

                      var qName: QName = new QName("http://schemas.microsoft.com/sharepoint/dsp", "request");

                      var requestHeader: SOAPHeader = new SOAPHeader(qName, {service:"DspSts",document:"content",method:"query"});

                      wssDspStsService.Query.addHeader(requestHeader);

                }

               

                      private function wssDspStsService_fault(event: FaultEvent)

                      {

                            Alert.show("Failed to execute the query. Error: " + event.fault.faultstring);

                      }

     

                      // Returned XML is in the following format

                      // <queryResponse xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.microsoft.com/sharepoint/dsp" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

                      //   <dsQueryResponse status="success">

                      //     <Rows>

                      //       <Row Created="2005-10-23T14:03:20" ThreadID="{20051023-1203-12E1-8E71-1F4426DE71D5}" ID="1" Title="Disc1" Ordering="20051023140320" Author="Serge van den Oever"/>

                      //       <Row Created="2005-10-23T14:03:32" ThreadID="{20051023-1203-2649-92C4-FA7641C9F5F7}" ID="2" Title="Disc2" Ordering="20051023140332" Author="Serge van den Oever"/>

                      //       <Row Created="2005-10-23T14:03:45" ThreadID="{20051023-1203-2649-92C4-FA7641C9F5F7}" ID="3" Title="Disc2" Ordering="2005102314033220051023140345" Author="Serge van den Oever"/>

                      //     </Rows>

                      //   </dsQueryResponse>

                      // </queryResponse>;

                     

                      private function wssDspStsService_result(event:ResultEvent)

                      {

                            var queryResultXML:XML = wssDspStsService.operations.Query.result[0];

                            var dsp:Namespace = queryResultXML.namespace();

                            queryResultXML.setNamespace(dsp);

                            var queryResultRows:XMLList = queryResultXML..Row;

                            trace("#rows=" + queryResultRows.length());

                           

                            dataGridItems.removeAll();

                            for each(var item:XML in queryResultRows)

                            {

                                  dataGridItems.addItem({

                                                           Created: item.@Created,

                                                           ThreadID: item.@ThreadID,

                                                           ID: item.@ID,

                                                           Ordering: item.@Ordering,

                                                           Title: item.@Title,

                                                           Author: item.@Author

                                                        });

                            }

                      }

          ]]>

        </mx:Script>

    </mx:Application>

     

    The format of data returned from the DspSts web service

    One of the nice and easy things about Flash is that you can have the result of a web service call returned as an Object. This object can have fields of any type, hierarchical structures with fields of type object, and arrays of any type (also of type object). The problem is that the converter from the SOAP result to the object does its own interpretation of types; it does not check the schema data that can be returned in a SOAP result. You see a small part of this schema below:

    <x:element name="Ordering" minOccurs="0" d:displayName="Ordering" type="x:string" />
    <x:element name="ThreadID" minOccurs="0" d:filterSupport="IsNull;IsNotNull;Eq;Neq;" d:displayName="Thread ID" type="x:string" />
    <x:element name="ID" minOccurs="0" d:filterSupport="IsNull;IsNotNull;Eq;Neq;Lt;Gt;Leq;Geq;" d:displayName="ID" type="x:int" />
    <x:element name="Title" d:filterSupport="IsNull;IsNotNull;Eq;Neq;Lt;Gt;Leq;Geq;Contains;BeginsWith;" d:displayName="Subject" type="x:string" />
    <x:element name="Author" minOccurs="0" d:filterSupport="IsNull;IsNotNull;Eq;Neq;Lt;Gt;Leq;Geq;" d:displayName="Posted By">

    I got into trouble with the Ordering field in a discussion list item. This field contains an ordering value in the format YYYYMMDDHHMMSS, for example 20051020140312. This field is interpreted as a field of type number in the conversion to an object. This is NOT what we want, because a reply on this list item gets an Ordering field with the original Ordering field value, with the timestamp of the reply appended. This string can become really long, does not fit in a number, and as a number is useless for ordering.

    I found that it is also possible to return the result as XML, or e4x, the intrinsic XML format of ActionScript 3. The downside is that I have to interpret all returned XML data manually (it is all seen as strings), but the upside is that we don’t need to return the schema data in the result, which can become really large, because all users are returned as a restriction enumeration for the Author field. This means that all users that ever visited the site containing the discussion list (and are therefore registered in the UserInfo table for this site) are returned in a list. No problem for a site only used by a few users, but a huge problem if you have a large user base.

    Things I couldn’t accomplish in the WebService implementation of Flash/Flex 2

    During my adventures with WebServices in Flash/Flex 2 I couldn’t accomplish the following things:

    Complex SOAP headers

    I needed to create SOAPheader with the following format:

    <dsp:request xmlns:dsp="http://schemas.microsoft.com/sharepoint/dsp" service="DspSts" document="content" method="query">

    Whatever I tried, the closest I could get was:

    <dsp:request xmlns:dsp="http://schemas.microsoft.com/sharepoint/dsp">
    <service>DspSts</service>
    <document>content</document>
    <method>query</method>
    </dsp:request>

    I will never know if this has the same effect, because the service call already worked when I added a header with the name ‘request’. See the code for the creation of the SOAP header.

    Access the SOAP result in case of a fault

    The SOAP envelope returned from a web service call can contain really detailed information on the exact problem that occurred. Flash only gives access to a very general error message through event.fault.faultstring. I couldn’t find a way to access the actual SOAP envelope returned by the web service call. If anyone has more information on how to accomplish this task, please let me know.

    Information on e4x

    The information on XML as a native datatype in ActionScript 3 is quite sparse. Luckily the e4x standard itself is well documented.

    Final words

    If I look back at the code above it looks really simple. It is actually really simple, if you know what to do. And that is where the problem lies. There are so many examples of using web services from Flex/Flash, so much information in the documentation, but not about the difficult nitty-gritty parts of using more complex web services. The Flex/Flash implementation proved to be very powerful and flexible. I’m really impressed. I hope that this blog post will help others in building Rich Internet Applications on top of SharePoint using the Flash platform. If you have any questions, let me know!

  • Macromedia Flex 1.5/2: development and pricing model...

    As Pau stated in a reaction to this blog entry, current development of applications using Flex is expensive. I have no idea how expensive, I never used the current Flex system, but if I may believe the many stories on the internet it is very expensive (everything is relative in this case however ;-)). I heard a price tag of $10,000, but hey, that’s quite normal for server software licenses. By the way, there is also a free license for non-commercial/non-institutional applications and for bloggers who want to showcase on their blog. Have a look at: http://www.macromedia.com/software/flex/productinfo/faq/#section-8

    How about the new Flex 2 system? If you read this story by David Wadhwani, VP of Product Development for Macromedia Flex, he writes:

    If you're familiar with Flex 1.0, it's very important to recognize that Flex 2 is far more than just a new release. It represents a major milestone in the evolution of the Flex technology and a continued evolution in Macromedia development processes. From a technical standpoint, Flex 2 introduces capabilities that enable developers to build an entirely new class of Rich Internet Applications, ushering in a new generation of RIAs. At the same time, we're opening up Flex development to a much broader group of developers by re-introducing Flex Builder, which has been built from the ground up on the Eclipse open-source IDE framework and now includes the Flex Framework and the compiler. That means that Flex applications can be deployed as a stand-alone option by placing a compiled SWF file on any web server or in conjunction with Flex Enterprise Services 2.

    This means that no server component is needed if you only need web service connections.

    There will be multiple products in the Flex product line. One of them is the Macromedia Flex Framework 2, a client-side framework that builds on top of the foundation of Flash 8.5. If you only utilize this, the price must be low. How low? Probably in the same range as VS.NET is my guess.

    Also with the current Flex system it is possible to build stand-alone Flash applications if I read this blog post correctly.

     

     

  • Macromedia, I'm impressed! Flex your way into RIA

    Macromedia (or should I say Adobe?) released an alpha version of their new Flex 2.0 platform and the Flash Player 8.5. As you can read in some of my previous blog posts, I was searching for the best toolset to build RIA applications. I think I found it! Although I have not had much hands-on time yet, I was really impressed. Not by the fancy authoring environment, which is just suboptimal for developers like me, but by a full-fledged IDE, based on Eclipse 3.1, complete with a forms designer to design your constraint-based interface.

    Flash 8.5 features ActionScript 3.0, the next advance in the JavaScript-based language used in Flash. Some of its most powerful features: completely type safe, also at runtime; powerful exception handling; delegates; regular expressions; XML as a native type. Finally a powerful, grown-up language like the one I'm already so used to in the .NET development space (C#).

    I will dive into this new technology, and keep you posted!

    Check it out @ http://labs.macromedia.com

  • WSS SP2 and SharePoint Portal Server using .Net 2.0 and SQL Server 2005?

    Bil Simser describes in this post that it is possible to use WSS SP2 with SharePoint Portal Server, although not all new features are supported in Portal Server.

    Patrick Tisseghem says in a comment on this post that there will be no SP2 for SPS.

    I’m wondering how you can get SharePoint Portal Server freshly installed on ASP.NET 2.0 and SQL Server 2005.

    I think it will be something like:

    • Install Windows Server 2003
    • Install SQL Server 2000 + SP4
    • Install SharePoint Portal Server
    • Install WSS SP1
    • Install SPS SP1
    • Install the .Net Framework 2.0
    • Install WSS SP2
    • Upgrade SQL Server 2000 to SQL Server 2005
    • Uninstall SQL Server 2000

    Or is there a faster route?

     

  • Discussion board and RIA technologies

    In a previous blog post I was writing about the possibilities for the implementation of a new version of the Macaw Discussion Board. I mentioned a few technologies that I was looking at. The reason I am looking into those technologies is not only for the implementation of the discussion board, but also to have a look at what the best way would be to build the next generation Rich Internet Applications. Besides good old DHTML with Ajax technologies, Flash as a runtime platform looks like one of the possible ways to go for the near future.

    A great overview of possibilities for developing for the Flash platform is given by Aral in this post. He also discusses NeoSwiff, a tool that allows you to write Flash applications in C# .NET using Visual Studio 2003, and Xamlon Pro Flash Edition, which lets you use XAML to create your UI declaratively. As a .NET developer I find those last two alternatives especially interesting, because for me it will be .NET/C# that is used on the server side.

    Darron also has a great post where he discusses the problem with developer tools for creating RIAs running on Flash. He mentions NeoSwiff as well.

    I dived into both Xamlon and NeoSwiff. The adventure with Xamlon was over soon. It integrates with both VS.NET 2005 and VS.NET 2003, but I tried the installation on multiple machines and kept getting a warning that my license key had expired. Even the Xamlon forums couldn't provide me with a solution. I was also discouraged by this news post stating "… Xamlon is backing away from XAML, saying developers aren't really ready."

    I moved over to NeoSwiff, and I must say I'm impressed. One of my most important requirements is the possibility to easily create an application that resizes its controls when the window is resized. All I got during my Flash adventures was a zooming application. NeoSwiff is really great in this respect. See this post by Darron for a great example!

    However, within the first two minutes of starting with NeoSwiff I ran into the three problems that are most important to me:

    • No direct support for web services 
    • No debugger
    • No UI designer

    For the first and third point Jesus describes a solution; see his post. Completely happy with these results I continued with NeoSwiff. Everything went great. I designed a small UI in a normal C# .NET forms project, and copied over the code. The next part was the web service. I didn't want to do an HTTP GET call (function and parameters on the URL) like Jesus does in his post; I wanted to call using either a SOAP message (using an HTTP POST) or through proxy code generated from the WSDL of the service. NeoSwiff does not provide a tool to generate the proxy code, and the code produced by Microsoft's wsdl.exe can't be used, so I had to go the manual way. But to no avail. NeoSwiff has two ways to do a request: XmlRequest (as used by Jesus) and WebRequest. With XmlRequest it is not possible to specify the body of the request; you can only specify a URL. WebRequest provides a way to specify the body of the request, but only through a StringDictionary, resulting in name-value pairs as used in a post back from an HTML form. I also couldn't change the content type: it remained application/x-www-form-urlencoded, the content type used on form post backs. I have been trying for hours and hours but I didn't get any further. Maybe I missed something, maybe it is something they are still working on. If someone knows: please tell me!!
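    To make it concrete what I was after: a hand-rolled SOAP call is nothing more than an HTTP POST with a text/xml content type, a SOAPAction header and the SOAP envelope as the body. A minimal sketch in plain .NET (so outside NeoSwiff), with the URL, SOAPAction and envelope left as placeholders:

    // Sketch only: what the web service call should look like on the wire.
    // The url, soapAction and soapEnvelope values are placeholders.
    using System.IO;
    using System.Net;
    using System.Text;

    class SoapPostSketch
    {
        static string CallWebService(string url, string soapAction, string soapEnvelope)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "POST";
            request.ContentType = "text/xml; charset=utf-8"; // exactly what I could not set in NeoSwiff
            request.Headers.Add("SOAPAction", soapAction);
            request.Credentials = CredentialCache.DefaultCredentials;

            byte[] body = Encoding.UTF8.GetBytes(soapEnvelope);
            using (Stream requestStream = request.GetRequestStream())
            {
                requestStream.Write(body, 0, body.Length);
            }

            using (WebResponse response = request.GetResponse())
            using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd(); // the raw SOAP response envelope
            }
        }
    }

    That is the level of control over the request body and the content type that I am missing in NeoSwiff's XmlRequest and WebRequest.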

    Of course I could work around the limitations by writing a set of web services that I can always call through URL syntax, but I want to reuse all the available SharePoint web services. That's my whole point: I want to build RIAs against web services.

    I think the NeoSwiff platform can become a great platform if they succeed in getting access to web services working. It is so nice to work in VS.NET using code completion and all the power available in this IDE!

     

  • SharePoint discussion board upgrade: which way to go?

    The Macaw Discussion Board, a reworked version of the SharePoint discussion board, works quite well, but it is not really the way I would like a SharePoint discussion board to be implemented. I don't like the technical implementation. It is built on top of the existing discussion board functionality: I converted all list views to data views and did extensive modifications to the XSLT code in the data views. Due to the restrictions of the XSLT in a data view web part there are absolutely no possibilities for code reuse or centralized localization. See the conclusion of my MSDN article for more information on those restrictions. Although the usability of the discussion board is good, it is not the way it could look in the new era of Rich Internet Applications. I want it to be a slick, smooth, single-page UI that is maintainable, configurable, localizable… a rewrite is dawning.

    And now my biggest issue: technology! Which technology should I use? Should it be Atlas, Ajax.Net, Backbase, Flash 8, OpenLaszlo, or one of the other RIA technologies that are available?

    One of the most important things is the availability of good controls (or widgets as some others call them). I don't want to be stuck with the standard HTML controls. Those are so '90s. Modern controls are needed, like at least a good tree control and resize controls.

    Flash and OpenLaszlo

    I’m not really a Flash developer. I really love their approach however: a complete, small and accepted platform running in any browser that provides a really rich user experience. I did some adventures with Flash MX2004 pro in the past. I wasn’t impressed by the quality and stability of the controls in their component framework 2.0. I don’t seem to be the only one as you can read at the ActionStep home page, and as you can see at  http://www.osflash.org where everyone seems to restart on a new control library to replace the Flash component framework. I have no idea yet if Flash 8 made any progression in this matter, I will dive into this soon. Another problem I have with the Flash development environment is that I’m a developer, and still can’t really get used to the Flash way of development. Maybe that is the reason I like the approach taken by OpenLaszlo. OpenLaszlo applications are written in XML and Javascript. The OpenLaszlo server then compiles the XML and Javascript into .swf files, which run in the Flash player. I’m currently downloading the system and will soon have a look at it. It looks really promising!

    Backbase

    Another interesting technology is Backbase. It is a JavaScript solution for Rich Internet Applications that runs on most browsers. They have a powerful set of controls available and it all looks really powerful and smooth. There is full support for single-page web-based user interfaces, and extra data can be retrieved on the fly using Ajax technologies. The biggest advantage is that it does not depend on any third-party plug-ins like Flash. One of my biggest problems with it is that although there is a community version available for non-commercial applications, an implementation of the new version of the discussion board running within an organization will require a commercial license. This will make it more difficult to get people to use the new version of the discussion board.

    Ajax.Net

    In the .NET world everyone is talking about Ajax.Net and Atlas. Ajax.Net is a powerful Ajax library written by Michael Schwarz that integrates ASP.NET development with client-side Ajax techniques. It is available today and it works with the .NET 1.1 framework. I must say that I don't have any experience with the library yet, but as far as I can see right now the library focuses on the client-server communication, not on providing a full-fledged widget set. Development in that direction is on its way in the community.

    Atlas

    Atlas is the code name for the Ajax library Microsoft is working on. We are a Microsoft house, so although other systems might be better or further along, we will eventually end up doing Atlas development. For now, however, Atlas does not seem to be the best approach for SharePoint development. With Service Pack 2 for WSS there is the possibility to do ASP.NET 2.0 development within the SharePoint context, and Jan Thielens is working on hosting ASP.NET 2.0 web parts in the current version of SharePoint through "Son of SmartPart". The problem will be that not many companies will move over to the .NET 2.0 framework on their SharePoint servers yet; the demands for getting the extra functionality will be too high. Another issue is that no information has come out yet on support of the .NET 2.0 framework for SharePoint Portal Server. I haven't seen a service pack for SPS yet.

    Conclusion

    Do I have a conclusion? Not yet, actually! There are many possibilities to investigate and I hope to get feedback from you on what you think would be the best direction to take. There are probably many things I overlooked. My current guesses, in order of best bet, are as follows:

    1. OpenLaszlo (Flash as runtime)
    2. Flash 8 (Flash as runtime)
    3. Backbase (Standard browser support)
    4. Ajax.Net (Standard browser support, use of ASP.NET 1.1 at server side)
    5. Atlas (Standard browser support, use of ASP.NET 2.0 at server side)

     

     

  • SharePoint 2003: Displaying ASP.NET Error Messages instead of the standard WSS error message

    From the online documentation @ http://msdn.microsoft.com/library/default.asp?url=/library/en-us/spptsdk/html/tsovOMGdlnsShowASPErrors_SV01108995.asp?frame=true:

    Displaying ASP.NET Error Messages

    You can disable Microsoft Windows SharePoint Services error messaging so that ASP.NET error messages are displayed instead.

    To display ASP.NET error messages, perform the following steps:

    • In the Local_Drive:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\60\TEMPLATE\LAYOUTS folder, change the name of the global.asax file to global.bak.
    • In the web.config file in the same folder, change customErrors = "On" to customErrors="Off".

    That is the stupid thing when you are working with SharePoint for so long: you forget to reread the documentation for new additions!

  • SharePoint v3: making your way through all bits and pieces

    The PDC is almost finished, and finally we get the first bits of information on the next version of SharePoint: SharePoint v3. I’m wondering if they actually have a code name for it!

    This is a list of resources I found up to now, it’s more for myself to keep track of it..

    First of all the downloadable PowerPoint presentations of the sessions at PDC2005: http://commnet1.microsoftpdc.com/content/downloads.aspx. Many interesting SharePoint sessions in there!

    Eli Robillard visited the PDC and has some nice weblog entries at: http://weblogs.asp.net/erobillard/. Have a look at:

    Also Bil Simser was at the PDC and has great entries at his weblog: http://weblogs.asp.net/bsimser.

    Check out entries:

    Angus Logan was also at the PDC, and has some short statements on the new version of SharePoint at: http://msmvps.com/anguslogan/. Most important ones:

    • SharePoint vNext has security trimming (ie. context based display)
    • SharePoint vNext will ship with XSD's
    • SharePoint vNext has a built in scheduler
    • SharePoint v3 to support ASP.NET 2.0 Web Parts

    Dustin Miller is another great name in the SharePoint world who also has a great blog at: http://www.sharepointblogs.com/dustin/. Have a look at his entry SharePoint "V3.0" Features (http://www.sharepointblogs.com/dustin/archive/2005/09/14/3503.aspx) for a great overview!

    Patrick Tisseghem was also at the PDC, see his weblog at: http://blog.u2u.info/DottextWeb/patrick/ for some nice info on SharePoint vNext.

    Also Mike Fitzmaurice has some info, see his weblog at: http://blogs.msdn.com/mikefitz/.

    Also check out Mart Muller’s weblog at: http://blogs.tamtam.nl/mart/default.aspx.

    PJ Hough, Group Program Manager for Windows SharePoint Services also started a blog: keep an eye on: http://blogs.msdn.com/pjhough/

    On Channel 9 a video on SharePoint vNext is available. Have a look at: http://channel9.msdn.com/showpost.aspx?postid=115383. Also http://channel9.msdn.com/Showpost.aspx?postid=115364 on the new workflow engine is interesting.

    UPDATE 2005-09-27:

    Some other great posts:

    http://jopx.blogspot.com/2005/09/enhancements-in-sharepoint-v3.html
     

     

  • Writing high-performance C# code

    Once in a while it is good to review your coding practices and check whether the approach you take to solving your problems is still "state of the art". One important thing is performance: in a language like C# it is so easy to do great things in just a few calls… it looks nice and clean, but it results, for example, in the creation of loads of objects. A good example: sometimes it is way better to just use a good old plain array like we used to do in the C days, instead of the fancy ArrayList collections.
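    To make that concrete, a small sketch (the counting and summing are just an illustration): ArrayList stores object references, so every int you add gets boxed into its own heap object, while a plain int[] stores the values directly.

    // Illustration only: the same sum done twice.
    using System.Collections;

    class ArrayVersusArrayList
    {
        // ArrayList: every Add boxes the int, one extra heap object per item,
        // and the foreach unboxes each value again.
        static int SumWithArrayList(int count)
        {
            ArrayList list = new ArrayList();
            for (int i = 0; i < count; i++)
            {
                list.Add(i);
            }
            int sum = 0;
            foreach (int value in list)
            {
                sum += value;
            }
            return sum;
        }

        // Plain array: one allocation for all items, no boxing at all.
        static int SumWithArray(int count)
        {
            int[] values = new int[count];
            for (int i = 0; i < count; i++)
            {
                values[i] = i;
            }
            int sum = 0;
            for (int i = 0; i < values.Length; i++)
            {
                sum += values[i];
            }
            return sum;
        }
    }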

    I stumbled across a great article by Jeff Varszegi on the performance topic that gives you some good ideas in a fast ten-minute read: http://dotnet.sys-con.com/read/46342.htm.

  • VS.NET 2005: Code coverage for signed assemblies

    I am currently working on an application using VS.NET 2005, and because all the TDD tools like unit testing and code coverage are available I started to use them.

    When I started code coverage on my signed application I got the following exception:

    Test method X threw exception: System.IO.FileLoadException: Could not load file or assembly 'Y, Version=1.0.0.0, Culture=neutral, PublicKeyToken=Z' or one of its dependencies. HRESULT: 0x8013141A Strong name validation failed. ---> System.Security.SecurityException: Exception from HRESULT: 0x8013141A Strong name validation failed at X.

    Not so strange if you think about it: the assembly is signed, and code coverage needs code instrumentation, which means the assembly gets modified, resulting in an assembly whose strong name validation fails.

    The solution is to re-sign the assembly after instrumentation.

    If you open the localtestrun.testrunconfig file (or something similar) in your solution items (double-click it), you can enable re-signing in the Code Coverage section. This solves the problem.

    I found this solution through the following bug post on the Microsoft Product Feedback Center: http://lab.msdn.microsoft.com/ProductFeedback/viewFeedback.aspx?feedbackid=5f59ce2a-65b1-487d-9f46-da8707179184

  • The Collective vs. the Individual... a possible first step? Or just Google...

    Bil Simser is wondering if it is possible to have a collective place to collect all information on SharePoint. See his post at http://weblogs.asp.net/bsimser/archive/2005/06/27/415834.aspx

    In my opinion one of the best ways of getting "knowledge" out is through weblogs. This is exactly what a lot of people are doing. In our company everyone has access to their own internal weblog, and this is the way information is best shared within our company.

    When I was setting up these weblogs I was also planning to create automatic categories you could assign your weblog posts to, based on the SharePoint topics structure and on the list of projects we are working on. It never got that far.

    I still think it is a good idea, and an "easy" way to collect one part of the collective knowledge on SharePoint. Can't we come up with a set of common post categories we assign our weblog entries to? Each blog engine supports categories and separate RSS feeds for them. We could collect those RSS feeds into one big RSS feed, and based on these feeds create a site that displays all collective knowledge, categorized into a fixed set of topics…
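    A minimal sketch of the aggregation part of this idea, using nothing but System.Xml; the category feed URLs are placeholders, and rendering the combined feed or site is left out:

    // Sketch: read the category feed of every participating weblog and collect
    // the <item> elements into one list, from which a combined feed or overview
    // page could be generated.
    using System.Collections;
    using System.Xml;

    class CategoryFeedAggregator
    {
        static ArrayList CollectItems(string[] categoryFeedUrls)
        {
            ArrayList allItems = new ArrayList();
            foreach (string feedUrl in categoryFeedUrls)
            {
                XmlDocument feed = new XmlDocument();
                feed.Load(feedUrl); // RSS 2.0 feed for one blog, one category

                foreach (XmlNode item in feed.SelectNodes("/rss/channel/item"))
                {
                    allItems.Add(item.OuterXml);
                }
            }
            return allItems;
        }
    }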

    But then the biggest issue: coming up with a set of common post categories... I'm afraid that part will bring us into taxonomy hell… to what level of detail do you define categories, how many categories are needed, etc., etc.

    Hmmm… maybe Google isn’t so bad;-)

    Talking about Google: How about http://www.google.com/Top/Computers/Software/Document_Management/Products/Microsoft_SharePoint

    We could link all our information in here..

  • SharePoint versioned document magic

    Bil Simser describes in post http://weblogs.asp.net/bsimser/archive/2005/06/22/414258.aspx a "bug" he encountered with versioned document libraries. When a document is saved to a versioned document library, the timestamp of the previously latest version of the document gets changed to one minute before the timestamp of the newly saved document.

    This problem is related to one of the biggest problems I have with document libraries. A document in a versioned document library becomes a "version" when a newer version of the document is saved.

    An example:

    I create a document called “VersionedDoc.doc” in the document library “http://myserver/personal/serge/VersionedDocumentLibrary”.

    This document is URL addressable at: “http://myserver/personal/serge/VersionedDocumentLibrary/VersionedDoc.doc”.

    As soon as I save a new version of the document, this version becomes “http://myserver/personal/serge/VersionedDocumentLibrary/VersionedDoc.doc”, and my first version of the document becomes: “http://myserver/personal/serge/_vti_history/1/VersionedDocumentLibrary/VersionedDoc.doc”. This means that at the moment of saving the new version of the document, the previous version of the document gets moved to a new location; it is not created at that location up front. This leads to the timestamp issue. By the way: the next version is saved as: “http://myserver/personal/serge/_vti_history/2/VersionedDocumentLibrary/VersionedDoc.doc”.

    Besides the fact that the version naming schema leads to weird URLs, it also means that you can't provide a URL to a version of your document up front.

    If I remember correctly, SharePoint Portal Server 2001 had the possibility to address the latest document as http://…/VersionedDoc.doc, and older versions as http://…/VersionedDoc.doc$versionnumber. If version 4 was the latest version, it was URL addressable as both http://…/VersionedDoc.doc and http://…/VersionedDoc.doc$4. It is a pity that Microsoft decided to abandon this naming schema.

     

     

  • About Site Definitions and making waves inside Microsoft walls;-)

    A great post by Ryan Rogers on the Site Definitions “challenge”: http://blogs.msdn.com/ryanrogers/archive/2005/06/04/425148.aspx

    He was the guy I cited in my post http://weblogs.asp.net/soever/archive/2005/05/26/408948.aspx, without knowing it;-)

    It is good to hear that weblog posts can make waves within Microsoft walls…

    It is good that it is possible to do the modifications through code, as Ryan stated. We have an escape! A costly one if you have MANY site instances, because the pages you modify will become unghosted (a copy is created) in the database… you remember, that issue you try to prevent by not modifying pages through FrontPage;-) But hey… storage is cheap these days!
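    For the record, this is roughly what that escape looks like; a sketch from memory of the WSS object model (SPSite/SPWeb/SPFile), so treat the exact members as assumptions, and the site URL, page name and texts are placeholders. Saving the page like this is exactly what unghosts it:

    // Sketch, not production code: load a page from a site, change it, save it back.
    // The object model members are used from memory; the parameters are placeholders.
    using System.Text;
    using Microsoft.SharePoint;

    class PageModifier
    {
        static void ReplaceInPage(string siteUrl, string pageName, string oldText, string newText)
        {
            using (SPSite site = new SPSite(siteUrl))
            using (SPWeb web = site.OpenWeb())
            {
                SPFile page = web.Files[pageName];                  // e.g. "default.aspx"
                string contents = Encoding.UTF8.GetString(page.OpenBinary());
                contents = contents.Replace(oldText, newText);
                page.SaveBinary(Encoding.UTF8.GetBytes(contents));  // the page is now unghosted
            }
        }
    }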

    Another issue I had is the MySite pages… you don't want to have to change 30,000 MySite pages if you need a small modification to the private or public view. The good thing is that the MySite public and private pages are handled differently from normal sites… if you modify the MySite homepage as an administrator in "Shared View" it changes for all users! These pages are in a special location:

    Private homepage MySite: http://servername/MySite/default.aspx

    Public homepage MySite: http://servername/MySite/Public.aspx

    Modifying these pages in the site definition is probably unsupported, but making modifications through the "Shared View" or using FrontPage is not (yet?), as far as I know.

  • More tunes in the "unsupported" blues... adding web parts to Forms pages unsupported!

    Adding web parts to forms pages is "banned"; there goes my MacawDiscussionBoard list template!! Welcome to the wonderful world of SharePoint!

    http://www.bluedoglimited.com/SharePointThoughts/ViewPost.aspx?ID=176

    But this one is even worse than the Site Definitions: http://weblogs.asp.net/soever/archive/2005/05/26/408948.aspx

    You may not even modify the template page before instantiating, or the instance of the page after creation…. This is in my opinion again a major problem!

    Time to get a list of what is unsupported, but will work…

  • SharePoint statistics: source processing

    In post http://weblogs.asp.net/soever/archive/2005/05/21/408207.aspx I did some investigations into the information logged by SharePoint in the IIS and STS log files. In this post I describe some decisions I’m going to make on processing these log files, based on the information that became available during my investigations. I’m writing these blog posts while doing these investigations, so if you have any comments on the decisions I make, please let me know!!

    The goal of this weblog post is to massage the available log data into a format that can easily be processed for importing into the "Stage Area – IN", a SQL Server (2005) database where we import all source data that will eventually end up in our data warehouse.

    STS logs

    First of all we need a tool to convert the STS binary log files to a format that can easily be processed. The article Usage Event Logging in Windows SharePoint Services contains the code for a C++ application to do this conversion. I also got my hands on a C# implementation through Steven Kassim, a colleague of mine. He got this code from a newsgroup, but I couldn’t find where it exactly came from, and who wrote it. I’m doing some modifications to the code to change the output format (so LogParser can handle it), and to improve the speed. I will publish the code as soon as I’m ready. [Update: tracked down the newsgroup: http://groups.yahoo.com/group/sharepointdiscussions/, and the author: Fred LaForest].

    IIS logs

    Although the IIS log files are already in a format that could be easily parsed, there are some good reasons to do a preprocessing parse to accomplish the following:

    • Handle the problem of the IIS log header appearing in the log file on each IIS-RESET
    • Filter out log entries we are not interested in:
      • Requests made by service accounts, like the full-text indexer account
      • Requests to assets in team sites that result in /_vti_bin/ entries
      • Requests for assets we are not interested in, like JavaScript files, CSS stylesheets, images, etc.
    • Filter out fields we are not interested in, like (in our case) the client IP address, because we base the location on the main location of a user in the company directory (this can also be done in IIS by only selecting the log properties we are interested in!)

    IIS supports multiple log formats, and multiple ways to log information. It is possible to do direct ODBC logging to a database, but this approach gives a heavier load on the web servers. The best format IIS can log in is the W3C Extended Log File Format. In this log format it is possible to select the fields we are interested in:

    (Screenshot: the W3C Extended Log File Format logging properties in IIS, where the fields to log can be selected.)

    Carefully selecting the properties we are interested in can greatly reduce the amount of data that will be logged.

    For more information on the W3C Extended Log File Format see:

    Processing the log files: the tool

    There are many good systems around to process log files. Two log file processors I would really like to mention are:

    I have selected LogParser, because of its following features:

    • It supports any log file format (handy for the STS log files)
    • It might even be possible to implement direct binary parsing of the STS log files through a custom component into LogParser (still investigating this)
    • It supports incremental input parsing through checkpoints, which simplifies incremental importing of log file data into our database
    • It has a powerful query syntax
    • It is very powerful in its supported output formats
    • There is extensive programmability support available

     For more information on LogParser see:

    For information on LogParser with respect to SharePoint, where direct reporting on the log files is done see:

    Back to the IIS log: what do we need?

    As stated in the previous post, in the STS log all successful requests to all pages and documents within WSS sites are logged. This includes WSS-site-based SPS things like MySite and Areas. All those requests are logged in the IIS log as well, and they are difficult to correlate due to time differences. It is also questionable whether it is interesting to correlate those log entries: the STS log contains all the information that we need… although… I have one issue: the bandwidth consumed by the request. I can't get the correct value out of the STS log (although it should be in there), while the IIS log contains the correct values (sc-bytes and cs-bytes). This would be the only reason to do the correlation. I'm still working on this issue (I'll post on this later), so let's assume that problem will be solved.

    So what do we need the IIS logs for:

    • Pages not found (404 errors)
    • Pages in the /_layouts folder, this is also the location where we store our custom web applications and our custom services
    • Unmanaged paths in the SharePoint virtual directory (paths excluded for the SharePoint render-engine “treatment”)
    • IIS logs of other web sites, not related to SharePoint, but part of our intranet

    Any requests for images, JavaScript files and stylesheet files in the IIS log can be skipped in our case, because those are static files supporting the SharePoint UI and our custom applications. We also filter out requests made by service accounts; we are not interested in those requests.

    In the STS log, requests for images are interesting, because these images are user-uploaded documents within the WSS sites. We filter out requests made by service accounts for the STS logs as well.

    Moving IIS log files into the database

    To move the IIS log files into the database we need a table definition for the IIS logs. I’m currently using the following table definition:

    CREATE TABLE [dbo].[IISlog] (
     [date] [datetime] NULL,
     [time] [datetime] NULL,
     [csUsername] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [sComputername] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [csMethod] [varchar](16) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [csUriStem] [varchar](2048) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [csUriQuery] [varchar](2048) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [scStatus] [smallint] NULL,
     [scSubstatus] [smallint] NULL,
     [scWin32Status] [int] NULL,
     [scBytes] [int] NULL,
     [csBytes] [int] NULL,
     [timeTaken] [int] NULL,
     [csHost] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [csUserAgent] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [csReferer] [varchar](2048) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [application] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
    ) ON [PRIMARY]

    And the following LogParser script to move the data from the log files to the database:

    "C:\Program Files\Log Parser 2.2\logparser.exe" "SELECT date, time, cs-username, s-computername, cs-method, cs-uri-stem, cs-uri-query, sc-status, sc-substatus, sc-win32-status, sc-bytes, cs-bytes, time-taken, cs-host, cs(User-Agent) as cs-User-Agent, cs(Referer) as cs-Referer, 'SharePointPortal' as application INTO IISlog FROM c:\projects\IISlog\*.log WHERE (cs-username IS NOT NULL) AND (TO_LOWERCASE(cs-username) NOT IN ('domain\serviceaccount'))" -i:IISW3C -o:SQL -server:localhost -database:SharePoint_SA_IN -clearTable:ON

    This is the first step, where I filter out all requests made by the system account used to index the SharePoint content. I have not yet filtered out the WSS site requests (we will use the STS log for those) or the unwanted files in the /_layouts/ directory; I'm moving one step at a time. So we now have all log files (collected into the directory c:\projects\IISlog) moved into the database.

    Moving STS log files into the database

    To move the STS log files into the database we need a table definition for the STS logs. I’m currently using the following table definition:

    CREATE TABLE [dbo].[STSlog](
     [application] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [date] [datetime] NULL,
     [time] [datetime] NULL,
     [username] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [computername] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [method] [varchar](16) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [siteURL] [varchar](2048) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [webURL] [varchar](2048) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [docName] [varchar](2048) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [bytes] [int] NULL,
     [queryString] [varchar](2048) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [userAgent] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [referer] [varchar](2048) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
     [bitFlags] [smallint] NULL,
     [status] [smallint] NULL,
     [siteGuid] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
    ) ON [PRIMARY]

    And the following script to move the data from the binary log files to the database:

    "C:\projects\STSLogParser\STSLogParser.exe" 2005-01-01 "c:\projects\STSlog\2005-01-01\00.log"  c:\projects\logparsertmp\stslog.csv
    "C:\Program Files\Log Parser 2.2\logparser.exe" "SELECT 'SharePointPortal' as application, TO_DATE(TO_UTCTIME(TO_TIMESTAMP(TO_TIMESTAMP(date, 'yyyy-MM-dd'), TO_TIMESTAMP(time, 'hh:mm:ss')))) AS date, TO_TIME( TO_UTCTIME( TO_TIMESTAMP(TO_TIMESTAMP(date, 'yyyy-MM-dd'), TO_TIMESTAMP(time, 'hh:mm:ss')))), UserName as username, 'SERVERNAME' as computername, 'GET' as method, SiteURL as siteURL, WebURL as webURL, DocName as docName, cBytes as bytes,  QueryString as queryString, UserAgent as userAgent, RefURL as referer, TO_INT(bitFlags) as bitFlags, TO_INT(HttpStatus) as status, TO_STRING(SiteGuid) as siteGuid INTO STSlog FROM c:\projects\logparsertmp\stslog.csv WHERE (username IS NOT NULL) AND (TO_LOWERCASE(username) NOT IN (domain\serviceaccount))" -i:CSV -headerRow:ON -o:SQL -server:localhost -database:SharePoint_SA_IN -clearTable:ON
     

    This script currently moves only one day, but you get the drift. As you can see we also set the date, computername and application fields in the log data; these are currently fixed values, and we will move this to a dynamic system later on. The date field is obvious: we want to record the date in the database for each log entry. We need the computername and application fields because we will have multiple servers, and multiple "applications" built on SharePoint, like for example 'SharePointPortal', 'TeamSites' (intranet) and 'ExternalTeamSites' (extranet).

    STSLogParser is an application that parses the STS log file from its binary format into a comma-separated ASCII log file. I will post the code for this converter in one of my next posts.

  • SharePoint custom site definitions... again...

    There were a lot of comments on my post: SharePoint custom site definitions... I’m lost…, where I described the problems I have with the following statement in a new knowledge base article by Microsoft:

    "Microsoft does not support modifying a custom site definition or a custom area definition after you create a new site or a new portal area by using that site definition or area definition. Additionally, Microsoft does not support modifying the .xml files or the .aspx files in the custom site definition or in the custom area definition after you deploy the custom site definition or the custom area definition."

    I also asked John Jansen from Microsoft for a comment, and through his connections within Microsoft he came back with the following reaction:

    "The statement that appears to be making the most waves on Serge's blog -- "You modify a custom site definition or a custom area definition after you deploy the custom site definition or the custom area definition" -- was already in place in the SDK (http://msdn.microsoft.com/library/en-us/spptsdk/html/tsovGuidelinesCustomTemplates_SV01018815.asp?frame=true) before this KB article was published; this KB article was, more or less, a reminder and summary of the "rules" that had already been defined in various places throughout the SDK."

    I don't know how long this has been in the documentation. I started with the documentation when SharePoint was still in beta, and it was really sparse; a lot has changed since then. Every new release it gets better and better, and bigger and bigger… so I must have missed this one while moving to the new documentation when a new version came out. The documentation does contain the following sentence:

    “Changing a site definition after it has already been deployed can break existing sites and is not supported. If you find that you must modify a site definition after it has already been deployed, keep in mind that adding features can cause fewer problems than changing or deleting them. Changing features often results in loss of data and deleting them often results in broken views.”

    So we have a problem, and we need a solution for it… An interesting post by Cornelius van Dyk describes such a solution (http://www.dtdn.com/blog/2005/05/microsoft-support-scenarios-for-custom_24.htm). Let's hope he is willing to provide more information on his tools…

    For more information on issues with changing site definitions after instantiating a site based on the site definition, see also: http://weblogs.asp.net/bsimser/archive/2005/05/17/407237.aspx

    I would like to thank everyone for their responses, good to know what we can do and should not do.

  • SharePoint statistics: the sources

    The first issue in SharePoint statistics is: where can we find the information to do statistics on?

    Normally for a web application I grab the IIS log files and start from there. SharePoint is another case: besides the IIS log files there are also the STS log files. The name STS dates back to SharePoint Team Services from the past.

    • IIS log files: used to log ALL activities on a web site
    • STS log files: used to log all activities on Windows SharePoint Services (WSS) sites, which are also the basis for SPS areas

    There is a good reason why for SharePoint you need logs in two different places: although all web access is logged in the IIS logs, many accesses to SharePoint go through the FrontPage Server Extensions. Yes, most of SharePoint is still running on FSE, and still implemented in COM. In these URL accesses there is no detailed information available on what exactly is requested. In the IIS logs you find entries like:

    2004-12-31 23:58:06 SRV-P-INTRA-3 10.10.4.15 POST /_vti_bin/_vti_aut/author.dll - 443 domain\username 10.10.4.102 HTTP/1.1 MSFrontPage/6.0 - - hostname 200 0 0 1061 614 0
    2005-01-01 00:08:08 SRV-P-INTRA-3 10.10.4.15 POST /_vti_bin/_vti_aut/author.dll - 443 domain\username 10.10.4.102 HTTP/1.1 MSFrontPage/6.0 - - hostname 200 0 0 1061 614 140

    (I removed the author, because no one should have to know this guy does not have a life: editing SharePoint pages when everyone in the world is celebrating the new year!!!)

    As you can see FrontPage does all page accesses through author.dll, but no information is available on which page is edited using FrontPage. Also access to documents in WSS goes through a FSE dll.

    In the following example we access the homepage and a document test.doc in the document library  docs  in the site test:

    IIS log (stripped down a bit to save space):

    2005-01-01 00:52:22 SRV-P-INTRA-3 GET /default.aspx - 443 domain\username
    2005-01-01 00:52:22 SRV-P-INTRA-3 10.10.4.15 GET /_layouts/1033/owsbrows.js - 443 domain\username
    2005-01-01 00:52:22 SRV-P-INTRA-3 10.10.4.15 GET /_layouts/1033/styles/ows.css - 443 domain\username
    2005-01-01 00:52:26 SRV-P-INTRA-3 10.10.4.15 GET /_layouts/images/logo_macaw.jpg - 443 domain\username

    : goes on and on and on for all stylesheets, javascript files and pictures
    2005-01-01 00:52:26 SRV-P-INTRA-3 10.10.4.15 GET /_vti_bin/owssvr.dll - 443 domain\username

    STS log (stripped down a bit to save space):

    01:52:22,1,200,2758144,1,0BAD41D9-D7D6-4892-A42F-61E4BB7AAEED,domain\username,https://servername,,default.aspx
    01:52:27,1,200,1670913,1,040D5AB9-3072-45E3-975F-40C6B28CF132,domain\username,https://servername/sites/test,,docs/test.doc

    So in the IIS log the access to the page and to all its embedded and linked content is logged, while in the STS log only the access to the page itself is logged.
    In the IIS log accessing a document is logged as /_vti_bin/owssvr.dll, while the STS log specifies exactly which document is loaded from which document library in which site.

    For more information on the STS log format, have a look at the MSDN article: Usage Event Logging in Windows SharePoint Services.

    Looking at the IIS and STS logs, there are some important observations to make (some directly visible, others from the literature):

    • IIS logs have a log timestamp in GMT time
    • STS logs have a log timestamp in local server time (honouring daylight saving time)
    • IIS log files don't look at daylight saving time
    • STS logs are in a binary format, and must be converted to a usable format before processing
    • IIS logs write "header lines" on each IISRESET, so special processing is needed
    • After each page access, information is written directly to the IIS log
    • STS uses caching when writing to the log file; do an IISRESET while investigating to make sure the cached log entries are written
    • The timestamp written to the IIS and STS logs can be different for the same page access (see the small conversion sketch after this list). Look at the last line in the example above for both the IIS log and the STS log: the IIS log entry is written at 00:52:26 (so at 26 seconds), while the STS log entry is written at 01:52:27 (so at 27 seconds)
    • In the STS log only successful requests are logged (information streamed back to the client)
    • In the IIS log ALL requests are logged: requests for the /_layouts "in site context" pages, but also requests for missing pages
    • The STS log only logs requests for pages and documents in sites, not information in, for example, the /_layouts directory
    • The STS log entries only have a time, no date. The date is given by the folder structure where the STS log files are stored
    • The available fields in STS log files are different from the available fields in the IIS log files
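    Because of those timestamp differences, correlating the two logs means bringing the STS timestamps to UTC first. A small sketch of the conversion I have in mind, assuming the date comes from the STS log folder structure and the time from the log entry itself:

    // Sketch: STS logs are in local server time (with daylight saving time),
    // IIS logs are in GMT, so convert the STS side to UTC before comparing.
    using System;

    class LogTimestamps
    {
        static DateTime StsEntryToUtc(DateTime logFolderDate, TimeSpan entryTime)
        {
            DateTime localTimestamp = logFolderDate.Date + entryTime;        // e.g. 2005-01-01 + 01:52:27
            return TimeZone.CurrentTimeZone.ToUniversalTime(localTimestamp); // comparable with the IIS log entry
        }
    }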

    Where to go from here? I save that for my next post!

     

  • SharePoint statistics: diving into SqlServer 2005 datawarehousing...

    I have got a new project to dive into: statistics and click-stream analysis on a SharePoint intranet for 30,000 users at one of our large customers.

    After years of development on a custom-built classic ASP based portal for this customer, our company (Macaw) did a new implementation of their intranet portal based on SharePoint Portal Server 2003. I was part of this development team and created most of the tooling around the automated build process. We are currently code complete on the new implementation.

    Statistics are important in a large intranet. One part of our company is Macaw Business Solutions (MBS), specialised in Business Intelligence. MBS got the project to implement the statistics and click-stream analysis part of the new intranet.

    Due to my knowledge of SharePoint I am now part of the project team, and I'm diving into the new world of Business Intelligence. I already got a crash course in BI from Jack Klaassen (Director of MBS) and Ralf van Gellekom, and it sounds like fun stuff!

    In their wisdom, Jack and Ralf, together with the customer, decided to go for SQL Server 2005 and all the BI functionality it has available, instead of starting on SQL Server 2000 now and facing a migration project later, when the project is up and running and SQL Server 2005 becomes available with much more powerful capabilities and tooling.

    In my blog I will try to report on some of the steps and issues we are encountering in this adventurous project. I will keep you posted!

     

  • SharePoint custom site definitions... I'm lost...

    Microsoft released a knowledge base article on which scenarios are and are not supported with respect to SharePoint site definitions: http://www.kbalertz.com/Feedback_898631.aspx

    One of the things that makes me really sad is the following statement:

    "Microsoft does not support modifying a custom site definition or a custom area definition after you create a new site or a new portal area by using that site definition or area definition. Additionally, Microsoft does not support modifying the .xml files or the .aspx files in the custom site definition or in the custom area definition after you deploy the custom site definition or the custom area definition."

    Besides ghosting I thought that exactly this point was the powerful thing about site definitions!! Back to the simple site templates... if you may not make any modifications afterwards so all instances of your custom site definition instantly reflect those changes, site definitions are useless!!!

  • NAnt task xmllist, way more powerful than xmlpeek (source provided)

    UPDATE: See http://weblogs.asp.net/soever/archive/2006/12/01/nant-xmllist-command-updated.aspx for an updated version of the NAnt xmllist task.

    I have a love-hate relationship with the <xmlpeek> command in NAnt.

    The problems I have with it are:

    • It reports an error when the XPath expression does not resolve to a node; there is NO WAY to test whether a node or attribute exists (to my knowledge)
    • Its logging level is set to Level.Info, so there is always output. This should have been Level.Verbose; I don't want output for every xmlpeek I perform
    • It is not possible to return the contents of multiple nodes selected in the XPath expression

    Especially the problem that I can't test for the existence of a node or attribute bothers me. I can set failonerror to false and test afterwards whether the property exists, but that means there is still an error reported in my build server report, while it is expected behaviour!

    Based on an implementation by Richard Case I wrote my own version of his <xmllist> task, but a bit more powerful and using the standard naming for the attributes. Using this task you can extract text from an XML file at the locations specified by an XPath expression and return those texts separated by a delimiter string. If the XPath expression matches multiple nodes, the nodes are separated by the delimiter string; if no nodes are matched, an empty string is returned.

    See the comments in the code for an extensive example.

    I will try to post this code to the NAnt developers mailing list, but it's here to get you started if you need this kind of functionality.

    // NAnt - A .NET build tool
    // Copyright (C) 2001-2003 Gerry Shaw
    //
    // This program is free software; you can redistribute it and/or modify
    // it under the terms of the GNU General Public License as published by
    // the Free Software Foundation; either version 2 of the License, or
    // (at your option) any later version.
    //
    // This program is distributed in the hope that it will be useful,
    // but WITHOUT ANY WARRANTY; without even the implied warranty of
    // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    // GNU General Public License for more details.
    //
    // You should have received a copy of the GNU General Public License
    // along with this program; if not, write to the Free Software
    // Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
    //
    // Serge van den Oever (serge@macaw.nl)
    // Based on idea from weblog entry: http://blogs.geekdojo.net/rcase/archive/2005/01/06/5971.aspx combined with the code of xmlpeek.
    

    using System;
    using System.Globalization;
    using System.IO;
    using System.Text;
    using System.Xml;
    using System.Collections.Specialized;

    using NAnt.Core;
    using NAnt.Core.Attributes;
    using NAnt.Core.Types;

    namespace Macaw.MGDE
    {
    /// <summary>
    /// Extracts text from an XML file at the locations specified by an XPath
    /// expression, and return those texts separated by a delimiter string.
    /// </summary>
    /// <remarks>
    /// <para>
    /// If the XPath expression specifies multiple nodes the nodes are separated
    /// by the delimiter string, if no nodes are matched, an empty string is returned.
    /// </para>
    /// </remarks>
    /// <example>
    /// <para>
    /// The example provided assumes that the following XML file (xmllisttest.xml)
    /// exists in the current build directory.
    /// </para>
    /// <code>
    /// <![CDATA[
    /// <?xml version="1.0" encoding="utf-8" ?>
    /// <xmllisttest>
    /// <firstnode attrib="attrib1">node1</firstnode>
    /// <secondnode attrib="attrib2">
    /// <subnode attrib="attribone">one</subnode>
    /// <subnode attrib="attribtwo">two</subnode>
    /// <subnode attrib="attribthree">three</subnode>
    /// <subnode attrib="attribtwo">two</subnode>
    /// </secondnode>
    /// </xmllisttest>
    /// ]]>
    /// </code>
    /// </example>
    /// <example>
    /// <para>
    /// The example reads numerous values from this file:
    /// </para>
    /// <code>
    /// <![CDATA[
    /// <?xml version="1.0" encoding="utf-8" ?>
    /// <project name="tests.build" default="test" basedir=".">
    /// <target name="test">
    /// <!-- TEST1: node exists, is single node, get value -->
    /// <xmllist file="xmllisttest.xml" property="prop1" delim="," xpath="/xmllisttest/firstnode"/>
    /// <echo message="prop1=${prop1}"/>
    /// <fail message="TEST1: Expected: prop1=node1" unless="${prop1 == 'node1'}"/>
    ///
    /// <!-- TEST2: node does not exist -->
    /// <xmllist file="xmllisttest.xml" property="prop2" delim="," xpath="/xmllisttest/nonexistantnode" />
    /// <echo message="prop2='${prop2}'"/>
    /// <fail message="TEST2: Expected: prop2=<empty>" unless="${prop2 == ''}"/>
    ///
    /// <!-- TEST3: node exists, get attribute value -->
    /// <xmllist file="xmllisttest.xml" property="prop3" delim="," xpath="/xmllisttest/firstnode/@attrib" />
    /// <echo message="prop3=${prop3}"/>
    /// <fail message="TEST3: Expected: prop3=attrib1" unless="${prop3 == 'attrib1'}"/>
    ///
    /// <!-- TEST4: nodes exists, get multiple values -->
    /// <xmllist file="xmllisttest.xml" property="prop5" delim="," xpath="/xmllisttest/secondnode/subnode" />
    /// <echo message="prop5=${prop5}"/>
    /// <fail message="TEST4: Expected: prop5=one,two,three,two" unless="${prop5 == 'one,two,three,two'}"/>
    ///
    /// <!-- TEST5: nodes exists, get multiple attribute values -->
    /// <xmllist file="xmllisttest.xml" property="prop5" delim="," xpath="/xmllisttest/secondnode/subnode/@attrib" />
    /// <echo message="prop5=${prop5}"/>
    /// <fail message="TEST5: Expected: prop5=attribone,attribtwo,attribthree,attribtwo" unless="${prop5 == 'attribone,attribtwo,attribthree,attribtwo'}"/>
    ///
    /// <!-- TEST6: nodes exists, get multiple values, but only unique values -->
    /// <xmllist file="xmllisttest.xml" property="prop6" delim="," xpath="/xmllisttest/secondnode/subnode" unique="true"/>
    /// <echo message="prop6=${prop6}"/>
    /// <fail message="TEST4: Expected: prop6=one,two,three" unless="${prop6 == 'one,two,three'}"/>
    ///
    /// <!-- TEST7: nodes exists, get multiple attribute values -->
    /// <xmllist file="xmllisttest.xml" property="prop7" delim="," xpath="/xmllisttest/secondnode/subnode/@attrib" unique="true"/>
    /// <echo message="prop7=${prop7}"/>
    /// <fail message="TEST7: Expected: prop7=attribone,attribtwo,attribthree" unless="${prop7 == 'attribone,attribtwo,attribthree'}"/>
    ///
    /// <!-- TEST8: node exists, is single node, has namespace http://thirdnodenamespace, get value -->
    /// <xmllist file="xmllisttest.xml" property="prop8" delim="," xpath="/xmllisttest/x:thirdnode">
    /// <namespaces>
    /// <namespace prefix="x" uri="http://thirdnodenamespace" />
    /// </namespaces>
    /// </xmllist>
    /// <echo message="prop8=${prop8}"/>
    /// <fail message="TEST8: Expected: prop8=namespacednode" unless="${prop8 == 'namespacednode'}"/>
    /// </target>
    /// </project>
    /// ]]>
    /// </code>
    /// Result when you run this code:
    /// <code>
    /// <![CDATA[
    /// test:
    ///
    /// [echo] prop1="node1"
    /// [echo] prop2="''"
    /// [echo] prop3="attrib1"
    /// [echo] prop5="one,two,three,two"
    /// [echo] prop5="attribone,attribtwo,attribthree,attribtwo"
    /// [echo] prop6="one,two,three"
    /// [echo] prop7="attribone,attribtwo,attribthree"
    /// [echo] prop8="namespacednode"
    ///
    /// BUILD SUCCEEDED
    /// ]]>
    /// </code>
    /// </example>
    [TaskName ("xmllist")]
    public class XmlListTask : Task
    {
    	#region Private Instance Fields

    	private FileInfo _xmlFile;
    	private string _xPath;
    	private string _property;
    	private string _delimiter = ",";
    	private bool _unique = false; // assume we return all values
    	private XmlNamespaceCollection _namespaces = new XmlNamespaceCollection();
    
    	#endregion Private Instance Fields
    
    	#region Public Instance Properties
    	/// <summary>
    	/// The name of the file that contains the XML document
    	/// that is going to be interrogated.
    	/// </summary>
    	[TaskAttribute("file", Required=true)]
    	public FileInfo XmlFile 
    	{
    		get
    		{
    			return _xmlFile;
    		}
    		set
    		{
    			_xmlFile = value;
    		}
    	}
    
    	/// <summary>
    	/// The XPath expression used to select which nodes to read.
    	/// </summary>
    	[TaskAttribute ("xpath", Required = true)]
    	[StringValidator (AllowEmpty = false)]
    	public string XPath
    	{
    		get
    		{
    			return _xPath;
    		}
    		set
    		{
    			_xPath = value;
    		}
    	}
    
    	/// <summary>
    	/// The property that receives the text representation of the XML inside 
    	/// the nodes returned from the XPath expression, separated by the specified delimiter.
    	/// </summary>
    	[TaskAttribute ("property", Required = true)]
    	[StringValidator (AllowEmpty = false)]
    	public string Property
    	{
    		get
    		{
    			return _property;
    		}
    		set
    		{
    			_property = value;
    		}
    	}
    
    	/// <summary>
    	/// The delimiter string.
    	/// </summary>
    	[TaskAttribute ("delim", Required = false)]
    	[StringValidator (AllowEmpty = false)]
    	public string Delimiter
    	{
    		get
    		{
    			return _delimiter;
    		}
    		set
    		{
    			_delimiter = value;
    		}
    	}
    
    	/// <summary>
    	/// If unique, no duplicate values are returned. By default unique is false and all values are returned.
    	/// </summary>
    	[TaskAttribute ("unique", Required = false)]
    	[BooleanValidator()]
    	public bool Unique
    	{
    		get
    		{
    			return _unique;
    		}
    		set
    		{
    			_unique = value;
    		}
    	}
    
    	/// <summary>
    	/// Namespace definitions to resolve prefixes in the XPath expression.
    	/// </summary>
    	[BuildElementCollection("namespaces", "namespace")]
    	public XmlNamespaceCollection Namespaces 
    	{
    		get
    		{
    			return _namespaces;
    		}
    		set
    		{
    			_namespaces = value;
    		}
    	}
    
    	#endregion Public Instance Properties
    
    	#region Override implementation of Task
    
    	/// <summary>
    	/// Executes the XML reading task.
    	/// </summary>
    	protected override void ExecuteTask() 
    	{
    		Log(Level.Verbose, "Looking at '{0}' with XPath expression '{1}'.", 
    			XmlFile.FullName,  XPath);
    
    		// ensure the specified xml file exists
    		if (!XmlFile.Exists) 
    		{
    			throw new BuildException(string.Format(CultureInfo.InvariantCulture, 
    				"The XML file '{0}' does not exist.", XmlFile.FullName), Location);
    		}
    		try 
    		{
    			XmlDocument document = LoadDocument(XmlFile.FullName);
    			Properties[Property] = GetNodeContents(XPath, document);
    		} 
    		catch (BuildException ex) 
    		{
    			throw ex; // Just re-throw the build exceptions.
    		} 
    		catch (Exception ex) 
    		{
    			throw new BuildException(string.Format(CultureInfo.InvariantCulture,
    				"Retrieving the information from '{0}' failed.", XmlFile.FullName), 
    				Location, ex);
    		}
    	}
        
    	#endregion Override implementation of Task
        
    	#region private Instance Methods
    
    	/// <summary>
    	/// Loads an XML document from a file on disk.
    	/// </summary>
    	/// <param name="fileName">The file name of the file to load the XML document from.</param>
    	/// <returns>
    	/// A <see cref="XmlDocument">document</see> containing
    	/// the document object representing the file.
    	/// </returns>
    	private XmlDocument LoadDocument(string fileName)  
    	{
    		XmlDocument document = null;
    
    		try 
    		{
    			document = new XmlDocument();
    			document.Load(fileName);
    			return document;
    		} 
    		catch (Exception ex) 
    		{
    			throw new BuildException(string.Format(CultureInfo.InvariantCulture,
    				"Can't load XML file '{0}'.", fileName), Location, 
    				ex);
    		}
    	}
    
    	/// <summary>
    	/// Gets the contents of the list of nodes specified by the XPath expression.
    	/// </summary>
    	/// <param name="xpath">The XPath expression used to determine the nodes.</param>
    	/// <param name="document">The XML document to select the nodes from.</param>
    	/// <returns>
    	/// The contents of the nodes specified by the XPath expression, delimited by 
    	/// the delimiter string.
    	/// </returns>
    	private string GetNodeContents(string xpath, XmlDocument document) 
    	{
    		XmlNodeList nodes;
    
    		try 
    		{
    			XmlNamespaceManager nsMgr = new XmlNamespaceManager(document.NameTable);
    			foreach (XmlNamespace xmlNamespace in Namespaces) 
    			{
    				if (xmlNamespace.IfDefined && !xmlNamespace.UnlessDefined) 
    				{
    					nsMgr.AddNamespace(xmlNamespace.Prefix, xmlNamespace.Uri);
    				}
    			}
    			nodes = document.SelectNodes(xpath, nsMgr);
    		} 
    		catch (Exception ex) 
    		{
    			throw new BuildException(string.Format(CultureInfo.InvariantCulture,
    				"Failed to execute the xpath expression {0}.", xpath), 
    				Location, ex);
    		}
    
    		Log(Level.Verbose, "Found '{0}' nodes with the XPath expression '{1}'.",
    			nodes.Count, xpath);
    
    		// collect all strings in a string collection, skip duplications if Unique is true
    		StringCollection texts = new StringCollection();
    		foreach (XmlNode node in nodes)
    		{
    			string text = node.InnerText;
    			if (!Unique || !texts.Contains(text))
    			{
    				texts.Add(text);
    			}
    		}
    		
    		// Concatenate the strings in the string collection to a single string, delimited by Delimiter
    		StringBuilder builder = new StringBuilder();
    		foreach (string text in texts)
    		{
    			if (builder.Length > 0)
    			{
    				builder.Append(Delimiter);
    			}
    			builder.Append(text);
    		}
    
    		return builder.ToString();
    	}
    	#endregion private Instance Methods
    }
    

    }

  • Macaw Discussion Board for Russian SharePoint

    Denis Fayruzov (dfayruzov [at] gmail.com) wanted to use the Macaw Discussion Board, but the standard provided template only works on English versions of SharePoint (LCID 1033). He used the provided zip file (MacawDiscussionBoard1.0r18Pages.zip) with aspx pages (used for the views and edit pages) and applied those to a standard discussion board. It worked! He created a list template from it, and the Russian Macaw Discussion Board (LCID 1049) is now available on http://spsutil.sourceforge.net. I could not try it out myself, I don’t have a Russian SharePoint installation;-) For any information on the Russian version contact Denis.

    If people want to make list templates available for other languages, mail them to me and I will put them on http://spsutil.sourceforge.net.

    If someone knows how to make a list template that works on any language version of SharePoint, let me know! That would make updates a lot easier!