
27.1. Main Data Structures





















To understand the code for the neighboring infrastructure, we first need to describe a few data structures used heavily in the neighboring subsystem, and see how they interact with each other.


Most of the definitions for these structures can be found in the file include/net/neighbour.h. Note that the Linux kernel code uses the British spelling neighbour for data structures and functions related to this subsystem. When speaking generically of neighbors, this book sticks to the American spelling, which is the spelling found in RFCs and other official documents.




struct neighbour


Stores information about a neighbor, such as the L2 and L3 addresses, the NUD state, the device through which the neighbor can be reached, etc. Note that a neighbour entry is associated not with a host, but with an L3 address. There can be more than one L3 address for a host. For example, routers, among other systems, have multiple interfaces and therefore multiple L3 addresses.



struct neigh_table


Describes a neighboring protocol's parameters and functions. There is one instance of this structure for each neighboring protocol. All of the structures are inserted into a global list pointed to by the static variable neigh_tables and protected by the lock neigh_tbl_lock. This lock protects the integrity of the list, but not the content of each entry.



struct neigh_parms


A set of parameters that can be used to tune the behavior of a neighboring protocol on a per-device basis. Since more than one protocol can be enabled on most interfaces (for instance, IPv4 and IPv6), more than one neigh_parms structure can be associated with a net_device structure.



struct neigh_ops


A set of functions that represents the interface between the L3 protocols, such as IP, and dev_queue_xmit, the API introduced in Chapter 11 and described briefly in the upcoming section "Common Interface Between L3 Protocols and Neighboring Protocols." The virtual functions can change based on the context in which they are used (that is, on the status of the neighbor, as described in Chapter 26).



struct hh_cache


Caches link layer headers to speed up transmission. It is faster to copy a cached header into a buffer in one shot than to fill in its fields one by one. Not all device drivers implement header caching. See the section "L2 Header Caching."



struct rtable




struct dst_entry


When a host needs to route a packet, it first consults its cache and then, in the case of a cache miss, it queries the routing table. Every time the host queries the routing table, the result is saved into the cache. The IPv4 routing cache is composed of rtable structures. Each instance is associated with a different destination IP address. Among the fields of the rtable structure are the destination address, the next hop (router), and a structure of type dst_entry that is used to store the protocol-independent information. dst_entry includes a pointer to the neighbour structure associated with the next hop. I cover the dst_entry data structure in detail in Chapter 36. In the rest of this chapter, I will often refer to dst_entry structures as elements of the routing table cache, even though dst_entry is actually only a field of the rtable structure.


Figure 27-1 shows how dst_entry structures are linked to hh_cache and neighbour structures.


The neighboring code also uses some other small data structures. For instance, struct pneigh_entry is used by destination-based proxying, and struct neigh_statistics is used to collect statistics about neighboring protocols. The first structure is described in the section "Acting As a Proxy," and the second one is described in the section "Statistics" in Chapter 29. Figure 27-2 also includes the following data structure types, described in greater detail in Chapters 22 and 23:



Figure 27-1. Relationship among dst_entry, neighbour, and hh_cache structures





in_device, inet6_dev


Used to store the IPv4 and IPv6 configurations of a device, respectively.



net_device


There is one net_device structure for each network device recognized by the kernel. See Chapter 8.


Figure 27-2 shows the relationships between the most important data structures. Right now it might seem a big mess, but it will make much more sense by the end of this chapter.


Here are the main points shown in Figure 27-2:


  • In the central part of the figure, you can see that each network device has a pointer to a data structure that holds the configuration for each L3 protocol configured on the device. In the example shown in the figure, IPv6 is configured on one device and IPv4 is configured on both. Both the in_device structure (IPv4 configuration) and inet6_dev structure (IPv6 configuration) include a pointer to the configuration used by their neighboring protocols, respectively ARP and ND.

    All of the neigh_parms structures used by any given protocol are linked together in a unidirectional list whose root is stored in the protocol's neigh_table structure.

  • The top and bottom of the figure show that each protocol keeps two hash tables. The first one, hash_buckets, caches the L3-to-L2 mappings resolved by the protocol or statically configured. The second one, phash_bucket, stores those IP addresses that are proxied, as described in the section "Per-Device Proxying and Per-Destination Proxying." Note that phash_bucket is not a cache, so its elements do not expire and don't need confirmation. Each pneigh_entry structure includes a pointer (not depicted in Figure 27-2) to its associated net_device structure. Figure 27-6 gives more detail on the structure of the cache hash_buckets.

    Figure 27-2. Data structures' relationships

  • Each neighbour instance is associated with one or more hh_cache structures, if the device supports header caching. The section "L2 Header Caching," and Figures 27-1 and 27-10, give more details about the relationship between neighbour and hh_cache structures.












10.2 Conditional Compilation





One problem programmers have is
writing code that can work on many different machines. In theory, C++
code is portable; in practice, many machines have little quirks that
must be accounted for. For example, this book covers Unix, MS-DOS,
and Windows compilers. Although they are almost the same, there are
some differences.



Through the use of conditional
compilation, the preprocessor allows you great
flexibility in changing the way code is generated. Suppose you want
to put debugging code in the program while you are working on it and
then remove the debugging code in the production version. You could
do this by including the code in an #ifdef-#endif section, like this:



#ifdef DEBUG 
std::cout << "In compute_hash, value " << value << " hash " << hash << "\n";
#endif /* DEBUG */









You do not have to put the /* DEBUG */ after the
#endif, but it is very useful as a
comment.




If the beginning of the program contains the following directive, the
std::cout is included:



#define DEBUG       /* Turn debugging on */ 


If the program contains the following directive, the
std::cout is omitted:



#undef DEBUG        /* Turn debugging off */ 


Strictly speaking, the #undef DEBUG is
unnecessary. If there is no #define DEBUG
statement, DEBUG is undefined. The #undef
DEBUG
statement is used to indicate explicitly to anyone
reading the code that DEBUG is used for
conditional compilation and is now turned off.



The directive #ifndef causes the
code to be compiled if the symbol is not
defined:



#ifndef STACK_SIZE /* Is stack size defined? */
#define STACK_SIZE 100 /* It's not defined, so define it here */
#endif /* STACK_SIZE */


#else reverses the sense of the
conditional. For example:



#ifdef DEBUG 
std::cout << "Test version. Debugging is on\n";
#else /* DEBUG */
std::cout << "Production version\n";
#endif /* DEBUG */


A programmer may wish to temporarily remove a section of code. A
common method of doing this is to comment out the code by enclosing
it in /* */. This can cause problems, as shown by the following
example:



/***** Comment out this section 
section_report( );
/* Handle the end-of-section stuff */
dump_table( );
**** End of commented out section */


This generates a syntax error for the fifth line. Why? Because the */
on the third line ends the comment that started on the first line,
and the fifth line:



**** End of commented out section */ 


is not a legal C++ statement.



A better method is to use the #ifdef
construct to remove the code.



#ifdef UNDEF 
section_report( );
/* Handle the end-of-section stuff */
dump_table( );
#endif /* UNDEF */


(Of course the code will be included if anyone defines the symbol
UNDEF; however, anyone who does so should be
shot.)



The compiler switch
-Dsymbol allows symbols to be defined on the
command line. For example, the command:



CC -DDEBUG -g -o prog prog.cc 


compiles the program prog.cc and includes all the
code in #ifdef DEBUG/#endif /* DEBUG */ pairs,
even though there is no #define DEBUG in the
program. The Borland-C++ equivalent is:



bcc32 -DDEBUG -g -N -eprog.exe prog.cc 


The general form of the option is -Dsymbol or
-Dsymbol=value. For
example, the following sets MAX to 10:



CC -DMAX=10 -o prog prog.cc 


Most C++ compilers automatically define some system-dependent
symbols. For example, Borland-C++ defines the symbol __BORLANDC__,
and Windows-based compilers define _WIN32. ANSI standard C
compilers define the symbol __STDC__. C++ compilers define the
symbol __cplusplus. Most Unix compilers define a name for the
system (e.g., Sun, VAX, Linux, etc.); however, these symbols are
rarely documented. The symbol unix is always defined for all Unix
machines.










Command-line options specify the initial value of a symbol only. Any
#define and #undef directives
in the program can change the symbol's value. For
example, the directive #undef DEBUG results in
DEBUG being undefined whether or not you use
-DDEBUG .










    Chapter 8. Process Control




      Section 8.1. 
      Introduction


      Section 8.2. 
      Process Identifiers


      Section 8.3. 
      fork Function


      Section 8.4. 
      vfork Function


      Section 8.5. 
      exit Functions


      Section 8.6. 
      wait and waitpid Functions


      Section 8.7. 
      waitid Function


      Section 8.8. 
      wait3 and wait4 Functions


      Section 8.9. 
      Race Conditions


      Section 8.10. 
      exec Functions


      Section 8.11. 
      Changing User IDs and Group IDs


      Section 8.12. 
      Interpreter Files


      Section 8.13. 
      system Function


      Section 8.14. 
      Process Accounting


      Section 8.15. 
      User Identification


      Section 8.16. 
      Process Times


      Section 8.17. 
      Summary

      Exercises








    26.11. Data Areas


    Applications often need to read or store data. Depending on the use case, this data may be stored in one of many locations. Consider preferences as an example.


    Typical products use at least some preferences. The preferences themselves may or may not be defined in the product's plug-ins. For example, if you are reusing plug-ins from different products, it is more convenient to manage the preferences outside the plug-in.


    In addition, applications often allow users to change preference values or use preferences to store recently opened files, recent chat partners, and so on. These values might be stored uniquely for each user or shared among users. In scenarios where applications operate on distinct datasets, some of the preferences may even relate to the particular data and should be stored or associated with that data.


    Preferences are just one example, but they illustrate the various scopes and lifecycles that applications have for the data they read and write. Eclipse defines four data areas that capture these characteristics and allows application writers to properly control the scope of their data:


    Install The install area is where Eclipse itself is installed. The install area is generally read-only. The data in the install area is available to all instances of all configurations of Eclipse running on the install. See also Platform.getInstallLocation() and osgi.install.area.

    Configuration The configuration area is where the running configuration of Eclipse is defined. Configuration areas are generally writable. The data in a configuration area is available to all instances of the configuration. Chapter 25, "The Last Mile," contains additional detail on the configuration area. See also Platform.getConfigurationLocation() and osgi.configuration.area.

    Instance The instance area is the default location for user-defined data (e.g., a workspace). The instance area is typically writable. Applications may allow multiple sessions to have concurrent access to the instance area, but must take care to prevent lost updates, etc. See also Platform.getInstanceLocation() and osgi.instance.area.

    Note



    The Eclipse IDE's workspace is an example of the use of instance locations. The Resources plug-in implementers chose to make the default location for projects be in the instance area defined by the Runtime. Eclipse IDE users commonly think they are setting the Resource's workspace location, but actually they are setting the Runtime's instance location.



    User The user area is where Eclipse manages data specific to a user, but independent of the configuration or instance. The user area is typically based on the Java user.home system property and the initial value of the osgi.user.area system property. See also Platform.getUserLocation() and osgi.user.area.


    In addition to these Eclipse-wide areas, the Runtime defines two locations specifically for each installed plug-in:


    State location This is a location within the instance area's metadata. See Plugin.getStateLocation().

    Data location This is a location within the configuration's metadata. See Bundle.getDataFile().


    Each of these locations is controlled by setting the system properties described before Eclipse starts (e.g., in the config.ini). Locations are URLs. For simplicity, file paths are also accepted and automatically converted to file: URLs. For better control and convenience, there are also a number of predefined symbolic locations that can be used. Note that not all combinations of location type and symbolic value are valid. Table 26-1 details which combinations are possible.


    Table 26-1. Location Compatibilities

    Location/Value   Supports default?   File/URL   @none    @noDefault   @user.home   @user.dir
    Install          No                  Yes        No       No           Yes          Yes
    Configuration    Yes                 Yes        Yes[*]   Yes[*]       Yes          Yes
    Instance         Yes                 Yes        Yes      Yes          Yes          Yes (default)
    User             Yes                 Yes        Yes      Yes          Yes          Yes


    [*] Indicates that this setup is technically possible, but pragmatically quite difficult to manage. In particular, without a configuration location, the Eclipse Runtime may only get as far as starting the OSGi framework.


    @none Indicates that the corresponding location should never be set either explicitly or to its default value. For example, an RCP-style application that has no instance data may use osgi.instance.area=@none to prevent extraneous files being written to disk. @none must not be followed by any path segments.

    @noDefault Forces a location to be undefined or explicitly defined (i.e., Eclipse does not automatically compute a default value). This is useful when you want to allow for data in the corresponding location, but the Eclipse default value is not appropriate. @noDefault must not be followed by any path segments.

    @user.home Directs Eclipse to compute a location value relative to the user's home directory. @user.home can be followed by path segments. In all cases, the string "@user.home" is replaced with the value of the Java user.home system property. For example, setting

    osgi.instance.area=@user.home/myWorkspace

    results in a value of

    file:/users/fred/myWorkspace

    @user.dir Directs Eclipse to compute a location value relative to the current working directory. @user.dir can be followed by path segments. In all cases, the string "@user.dir" is replaced with the value of the Java user.dir system property. For example, setting

    osgi.instance.area=@user.dir/myWorkspace

    results in a value of

    file:/usr/local/eclipse/myWorkspace


    Since the default case is for all locations to be set, valid, and writable, some plug-ins may fail in other setups, even if they are listed as possible. For example, it is unreasonable to expect a plug-in focused on instance data, such as the Resources plug-in, to do much if the instance area is not defined. It is up to plug-in developers to choose the setups they support and design their functions accordingly.


    Note that each of the locations can be statically marked as read-only by setting the corresponding property osgi.AAA.area.readonly=true, where "AAA" is one of the area names.












    Recipe 11.11. Configuring Actions to Require SSL




    Problem



    You want to control whether HTTPS is required on a page-by-page basis.





    Solution



    Use the SSLEXT Struts extension.





    Discussion



    The Struts SSL Extension (SSLEXT), an open source Struts plug-in,
    enables you to indicate if an action requires the secure
    (https) protocol. Steve Ditlinger created and
    maintains this project (with others), hosted at
    http://sslext.sourceforge.net.



    SSLEXT enables fine-grained secure protocol control by
    providing:



    • The ability to specify in the struts-config.xml file
      if an action should require a secure protocol.
      This feature essentially allows your application to switch actions
      and JSP pages from http to
      https.

    • Extensions of the Struts JSP tags that generate URLs that include the
      https protocol.


    The SSLEXT distribution consists of a plug-in class for
    initialization (SecurePlugIn), a custom request
    processor (SecureRequestProcessor), and a custom
    action mapping class (SecureActionMapping).





    If you have been using custom RequestProcessor or
    ActionMapping classes and you want to use SSLEXT,
    you will need to change these classes to extend the corresponding
    classes provided by SSLEXT.





    For JSP pages, SSLEXT provides custom extensions of Struts tags for
    generating protocol-specific URLs. A custom JSP tag allows you to
    indicate if a JSP page requires https. SSLEXT
    depends on the Java Secure Socket Extension (JSSE). JSSE is included
    with JDK 1.4 or later. If you're using an older JDK,
    you can download JSSE from Sun's Java site. Finally,
    you'll need to enable SSL for your application
    server. For Tomcat, this can be found in the Tomcat SSL
    How-To
    documentation.



    SSLEXT works by intercepting the request in its
    SecureRequestProcessor. If the request is directed
    toward an action that is marked as secure, the
    SecureRequestProcessor will generate a redirect.
    The redirect will change the protocol to https and
    the port to a secure port (e.g., 443 or 8443). Switching protocols
    sounds simple; however, a request in a Struts application usually
    contains request attributes, and these attributes are lost on a
    redirect. SSLEXT solves this problem by temporarily storing the
    request attributes in the session.



    You can download the SSLEXT distribution from the project web site.
    SSLEXT doesn't include a lot of documentation, but
    it comes with sample applications that demonstrate its use and
    features. If all your requests go through Struts actions, you can
    apply SSLEXT without modifying any Java code or JSP pages.
    Here's how you would apply SSLEXT to a Struts
    application:



    1. Copy the sslext.jar file into your
      application's WEB-INF/lib
      folder.

    2. If you need to use the custom JSP tags, copy the sslext.tld
      file into the WEB-INF/lib folder.


    Make the following changes to the
    struts-config.xml file:



    1. Add the type attribute to the
      action-mappings element to specify the custom
      secure action mapping class:

      <action-mappings type="org.apache.struts.config.SecureActionConfig">


    2. Add the controller element for the secure request
      processor:

      <controller processorClass="org.apache.struts.action.SecureRequestProcessor" />


    3. Add the plug-in declaration to load the SSLEXT
      code:

      <plug-in className="org.apache.struts.action.SecurePlugIn">
      <set-property property="httpPort" value="80"/>
      <set-property property="httpsPort" value="443"/>
      <set-property property="enable" value="true"/>
      <set-property property="addSession" value="true"/>
      </plug-in>


    4. Set the secure property to true
      for any action you want to be accessed using
      https:

      <action    path="/reg/Main"
      type="com.oreilly.strutsckbk.ch11.ssl.MainMenuAction">
      <!-- Force this action to run secured -->
      <set-property property="secure" value="true"/>
      <forward name="success" path="/reg/main.jsp"/>
      </action>


    5. Set the secure property to
      false for any action that you
      only want to run under an unsecured protocol
      (http):

      <action    path="/Welcome"
      type="com.oreilly.strutsckbk.ch11.ssl.WelcomeAction">
      <!-- Force this action to run unsecured -->
      <set-property property="secure" value="false"/>
      <forward name="success" path="/welcome.jsp"/>
      </action>



    If you have accessible JSP pages you want to specify as secured (or
    unsecured), use the SSLEXT pageScheme custom JSP
    tag:



    <%@ taglib uri="http://www.ebuilt.com/taglib" prefix="sslext"%>
    <sslext:pageScheme secure="true"/>




    Now rebuild and deploy the application. When you click on a link to a
    secured action, the protocol will switch to
    https and the port to the
    secure port (e.g., 8443 or
    443). If you go to an action
    marked as unsecured, the protocol and port should switch back to
    http and the port to the standard port (e.g.,
    8080 or
    80). If you access an action without a specified
    value for the secure property or the value is set to
    any, then the protocol won't
    switch when you access the action. If you're under
    http, the protocol will remain
    http; if you're under
    https, the protocol will remain
    https.





    Be careful if you switch from a secured to unsecured protocol (https
    to http). Critical user-specific data, such as the current session
    ID, can be snooped by a hacker. The hacker could use this data to
    hijack the session and impersonate the user. Here is a good rule to
    follow: Once you switch to https, stay in https.





    You can use SSLEXT alongside container-managed
    security mechanisms for specifying secure transport. The
    container-managed security approach works well when you want to
    secure entire portions of your application:



    <security-constraint>
    <web-resource-collection>
    <web-resource-name>AdminPages</web-resource-name>
    <description>Administrative pages</description>
    <url-pattern>/admin/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
    <role-name>jscAdmin</role-name>
    </auth-constraint>
    <!-- Switch to HTTPS for the admin pages -->
    <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
    </security-constraint>




    You can then use SSLEXT for fine-grained control of the protocol at
    the action level.





    See Also



    Enabling an application server to support https
    varies. Tomcat provides a how-to for this. For Tomcat 5.0, the
    relevant documentation can be found at http://jakarta.apache.org/tomcat/tomcat-5.0-doc/ssl-howto.html.



    SSLEXT is hosted on SourceForge at http://sslext.sourceforge.net.



    Craig McClanahan presents a good argument against switching back to
    http from https. His comments
    can be found in a struts-user mailing list
    thread archived at http://www.mail-archive.com/struts-user@jakarta.apache.org/msg81889.html.



    Recipe 11.9 shows how you can specify the
    protocol in the web.xml file. This approach,
    presented as part of the J2EE tutorial, can be found at http://java.sun.com/j2ee/1.4/docs/tutorial/doc/Security4.html.












      3.5. Prototype

      Every object is linked to a prototype object from which it can inherit properties. All objects created from object literals are linked to Object.prototype, an object that comes standard with JavaScript.

      When you make a new object, you can select the object that should be its prototype. The mechanism that JavaScript provides to do this is messy and complex, but it can be significantly simplified. We will add a beget method to the Object function. The beget method creates a new object that uses an old object as its prototype. There will be much more about functions in the next chapter.

      if (typeof Object.beget !== 'function') {
      Object.beget = function (o) {
      var F = function () {};
      F.prototype = o;
      return new F();
      };
      }
      var another_stooge = Object.beget(stooge);


      The prototype link has no effect on updating. When we make changes to an object, the object's prototype is not touched:

      another_stooge['first-name'] = 'Harry';
      another_stooge['middle-name'] = 'Moses';
      another_stooge.nickname = 'Moe';


      The prototype link is used only in retrieval. If we try to retrieve a property value from an object, and if the object lacks the property name, then JavaScript attempts to retrieve the property value from the prototype object. And if that object is lacking the property, then it goes to its prototype, and so on until the process finally bottoms out with Object.prototype. If the desired property exists nowhere in the prototype chain, then the result is the undefined value. This is called delegation.

      The prototype relationship is a dynamic relationship. If we add a new property to a prototype, that property will immediately be visible in all of the objects that are based on that prototype:

      stooge.profession = 'actor';
      another_stooge.profession // 'actor'


      We will see more about the prototype chain in Chapter 6.








      22.10 Virtual methods


      The keyword virtual in front of a parent class method tells the C++ compiler that a child class may provide a method with the same name that acts differently. If (a) a method is declared as virtual, and (b) the object that calls the method is referred to via a pointer, then (c) the compiled program will, even while it is running, be able to decide which implementation of the virtual method to use. It is worth stressing that this 'runtime binding' only works if you, the programmer, fulfill both conditions: (a) you use virtual in your method declaration, and (b) you use a pointer to your object.


      Except in the case of a destructor, corresponding virtual functions have the same name. You don't put the word virtual
      in front of the actual function implementation code in the *.cpp. You can put virtual
      in front of the child class function declaration in the child class's header file or not, as you like. In other words, to start with, you really only need to put virtual
      in one place: in front of the parent class's declaration of the function.


      But, in order to make our code more readable, when we derive off child classes from a parent class with a virtual function, we usually do put virtual
      in front of the child method declaration as well as in the parent method declaration. The child does need to have a declaration for the method in order to override it in any case.


      One slightly weird thing is that a parent class destructor like ~cProgrammer() needs to be declared virtual even though a child class destructor like ~cTeacherProgrammer() seems to have a different name. But you don't in fact call the destructor by name. In a program where we have a cProgrammer *_ptextbookauthor, we might be calling either the cProgrammer or the cTeacherProgrammer destructor with a line like delete _ptextbookauthor;. The thing is, it's possible that _ptextbookauthor got initialized as new cTeacherProgrammer, so we just don't know.


      The delete operator calls the destructor without referring to the destructor method by name. So the compiled code needs to actually look at the type of the _ptextbookauthor pointer to find out whether it's really a cProgrammer* or a cTeacherProgrammer*, so it knows which destructor to use. And unless you fulfilled the 'virtual condition' by making the destructor virtual, the code won't know to do runtime binding and choose between using the parent or the child method as appropriate. The destructors are different here because the cTeacherProgrammer has more stuff to destroy, in particular, the cGollywog *_pimaginaryfriend.



      Slogan for a class: If your child is richer than you, you need a virtual destructor.



      One final point should be mentioned here. Ordinarily, when you have a method virtual void somemethod()
      in a base class called, say, cParentClass, then when you override the method in a child class called, say, cChildClass, the child class somemethod()
      will call the parent class method if we explicitly ask it to with code like this.



      void cChildClass::somemethod()
      {
      cParentClass::somemethod();
      //Your extra childclass code goes here....
      }

      But in the case of a virtual
      destructor, the parent class's destructor method will be automatically called when the object is deleted. This is in accord with the standard C++ execution order of constructors and destructors mentioned in the last subsection.



      cChildClass::~cChildClass()
      {
      //Your extra child class destructor code goes here...
      ...
      /* The cParentClass::~cParentClass destructor will be
      automatically called
      here at the end of the ~cChildClass destructor call. */
      }





        [ Team LiB ]



        Hour 20








        1:

        While your application is running and a connection has been made to a data source, what happens if you delete the data source from the file system while you are navigating records? What about inserting or deleting records?


        A1:

        Nothing happens. The database file is not locked, so you can delete it. You can add and delete records because you are just interfacing with a DataSet. However, once you attempt to update the data source with the new DataSet, an exception will be thrown because the file cannot be found.


        2:

        Do you need a database to create datasets, tables, and records within the .NET Framework?


        A2:

        No, you can create an entire database during runtime using the .NET Framework classes.


        3:

        What is a table relationship and how is it handled within a DataSet?


        A3:

        A table relationship is an association of two tables in a parent-child relationship. Table relationships are contained within the DataRelationCollection object in the DataSet.













         

         






        14.2 Bitmap Fonts






        Glyphs in a font may be represented using different methods. The simplest is to use a bitmap to represent the pixels making up each glyph; such fonts are called bitmap fonts. Another way is to use straight lines to represent the outline of each glyph, which gives vector fonts. Most of the fonts we use in Windows today are TrueType or OpenType fonts, which use a much more sophisticated method to represent glyph outlines and to control how those outlines are rendered. We will discuss bitmap fonts in this section, vector fonts in Section 14.3, and TrueType fonts in Section 14.4.



        Bitmap fonts have a long history in computer display. In the old DOS days, the BIOS ROM contained several bitmap fonts for different display resolutions. When an application issued a software interrupt to display a character in graphics mode, the BIOS fetched the glyph data and displayed it at the specified position. In the initial Windows operating systems before Windows 3.1, bitmap fonts were the only font type supported. Even today, bitmap fonts are still used as stock fonts, heavily employed in user interface displays such as menus, dialog boxes, and tool-tip messages, not to mention DOS boxes.



        Even the latest Windows operating systems still use dozens of bitmap fonts, with a different set for each display resolution. For example, sserife.fon is the MS Sans Serif font for the 96-dpi display mode with 100% aspect ratio, while sseriff.fon is for the 120-dpi display mode. When you switch the display mode from small fonts (96 dpi) to large fonts (120 dpi), sseriff.fon is enabled instead of sserife.fon. A system font change affects the base units used to convert dialog box design-time coordinates to screen coordinates, so all your carefully crafted dialog boxes can be messed up by this simple font change. Some bitmap fonts are so critical to system operation that they are marked as hidden files to avoid accidental deletion.



        A bitmap font file usually has the .FON file extension. It uses the 16-bit NE executable file format originally used in 16-bit Windows. Within a FON file, an embedded text string describes the font's characteristics. For example, the description for courf.fon is "FONTRES 100,120,120 : Courier 10,12,15 (8514/a res)", which contains the font name, the design aspect ratio (100), the DPI (120x120), and the point sizes supported (10, 12, 15).



        For each point size supported by a bitmap font, there is one raster font resource, usually stored in a file with the .FNT file extension. Multiple raster font resources can be added as FONT-type resources to the final bitmap font file. The Platform SDK provides a FONTEDIT utility, with full source code, to modify an existing font resource file.



        Although old-fashioned, the bitmap font resource is still quite an interesting place to learn how fonts are designed and used. There are two versions of raster font resources: version 2.00, used since Windows 2.0, and version 3.00, originally designed for Windows 3.00. You may not believe it, but even Windows 2000 still uses the version 2.00 raster font format; the fancy features provided by version 3.00 are well covered by TrueType fonts.



        Each font resource starts with a fixed-size header, which contains version, size, copyright, resolution, character set, and font metrics information. For version 2.00 fonts, the Version field is 0x200. For raster fonts, the LSB of the Type field is 0. Each font resource is designed for one nominal point size at a certain DPI resolution. Modern display monitors normally use a square resolution, for example, 96 dpi by 96 dpi. A 10-point font for a 96-dpi display will be roughly 13 (10*96/72) pixels in height. A bitmap font resource supports only one single-byte character set. It contains glyphs for all the characters within the range specified by FirstChar and LastChar. Each font resource defines a default character, which is used to display characters outside the provided range; BreakChar is the word-break character.





        typedef struct
        {
            WORD  Version;        // 0x200 for version 2.00, 0x300 for version 3.00
            DWORD Size;           // Size of whole resource
            CHAR  Copyright[60];
            WORD  Type;           // Raster font if (Type & 1) == 0
            WORD  Points;         // Nominal point size
            WORD  VertRes;        // Nominal vertical resolution
            WORD  HorizRes;       // Nominal horizontal resolution
            WORD  Ascent;
            WORD  IntLeading;
            WORD  ExtLeading;
            BYTE  Italic;
            BYTE  Underline;
            BYTE  StrikeOut;
            WORD  Weight;
            BYTE  CharSet;
            WORD  PixWidth;       // 0 for variable width
            WORD  PixHeight;
            BYTE  Family;         // Pitch and family
            WORD  AvgWidth;       // Width of character 'x'
            WORD  MaxWidth;       // Maximum width
            BYTE  FirstChar;      // First character defined in font
            BYTE  LastChar;       // Last character defined in font
            BYTE  DefaultChar;    // Sub. for out-of-range chars.
            BYTE  BreakChar;      // Word break character
            WORD  WidthBytes;     // No. bytes/row of bitmap
            DWORD Device;         // Offset to device name string
            DWORD Face;           // Offset to face name string
            DWORD BitsPointer;    // Loaded bitmap address
            DWORD BitsOffset;     // Bitmap offset
            BYTE  Reserved;       // 1 byte, not used
        } FontHeader20;
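The nominal-size arithmetic mentioned above (a 10-point font on a 96-dpi display is roughly 10*96/72 = 13 pixels tall) can be sketched as a small helper. The function name is illustrative, not part of the font format:

```cpp
// One typographic point is 1/72 inch, so a glyph's rough pixel height
// is points * dpi / 72 (truncated, as in the 10*96/72 example above).
int PointsToPixels(int points, int dpi) {
    return points * dpi / 72;
}
```

For instance, the same 10-point font rendered at 120 dpi (the "large fonts" mode) would be roughly 16 pixels tall, which is why each display mode needs its own set of font resources.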


        The character table, or rather the glyph table, comes after the font resource header. For the version 2.00 raster font, the character table entry for each character in the supported range contains two 16-bit integers: one for the glyph width and one for the offset to the glyph data. Here you can see the serious design limitation of the version 2.00 font resource: each font resource is limited to 64 KB in size because of the 16-bit offset. The character table contains (LastChar-FirstChar+2) entries; the extra entry is guaranteed to be blank.





        typedef struct
        {
            SHORT GIwidth;
            SHORT GIoffset;
        } GLYPHINFO_20;


        Version 2.00 supports only monochrome glyphs. Although version 3.00 was designed to support 16-color, 256-color, or even true-color glyphs, no such fonts seem to exist in the real world. For monochrome glyphs, each pixel needs only one bit, but the order of the bits is nothing like the bitmap formats we have encountered. The first byte of the glyph holds the first 8 pixels of the first scan line, the second byte the first 8 pixels of the second scan line, and so on until the first 8-pixel column of the glyph is completed. This is followed by the second 8-pixel column, the third 8-pixel column, and so on, until the width of the glyph is fully covered. This design was a common optimization technique to speed up character display.



        Here is a routine to display a single glyph as a bitmap. The routine locates the GLYPHINFO table after the header, calculates the glyph's index in the glyph table, and then converts the glyph to a monochrome DIB, which is displayed using a DIB function.





        int CharOut(HDC hDC, int x, int y, int ch, FontHeader20 * pH,
                    int sx=1, int sy=1)
        {
            // Glyph table follows the header: skip BitsOffset (4 bytes)
            // plus Reserved (1 byte)
            GLYPHINFO_20 * pGlyph = (GLYPHINFO_20 *) ((BYTE *) & pH->BitsOffset + 5);

            if ( (ch < pH->FirstChar) || (ch > pH->LastChar) )
                ch = pH->DefaultChar;

            ch -= pH->FirstChar;

            int width  = pGlyph[ch].GIwidth;
            int height = pH->PixHeight;

            struct { BITMAPINFOHEADER bmiHeader; RGBQUAD bmiColors[2]; } dib =
            {
                { sizeof(BITMAPINFOHEADER), width, -height, 1, 1, BI_RGB },
                { { 0xFF, 0xFF, 0xFF, 0 }, { 0, 0, 0, 0 } }
            };

            int bpl = ( width + 31 ) / 32 * 4;   // bytes per DIB scan line
            BYTE data[64/8*64];                  // enough for 64x64
            const BYTE * pPixel = (const BYTE *) pH + pGlyph[ch].GIoffset;

            // Convert column-major glyph bytes to row-major DIB scan lines
            for (int i=0; i<(width+7)/8; i++)
                for (int j=0; j<height; j++)
                    data[bpl * j + i] = * pPixel ++;

            StretchDIBits(hDC, x, y, width * sx, height * sy, 0, 0, width, height,
                          data, (BITMAPINFO *) & dib, DIB_RGB_COLORS, SRCCOPY);

            return width * sx;
        }


        If we can convert the font resource into GDI-supported bitmaps, we can display characters ourselves without GDI's text functions. Figure 14-10 shows all the glyphs in the 8-point and 10-point MS Serif raster font resources at 96 dpi.




        Figure 14-10. Glyphs in a bitmap font.


        Using the raster font as an example, we can see what a font really is. A raster font, as defined by the Windows version 2.00 font format, is a set of font resources designed at different point sizes; each font resource is a set of monochrome bitmap glyphs that map one-to-one to the characters of a single-byte character set. The raster font supports a simple mapping from a character in the character set to an index in a glyph table, through a range of supported characters. The glyphs can easily be converted to GDI-supported bitmap formats and displayed on a graphics device. Bitmap fonts also provide simple text metrics information.



        Bitmap fonts are great for displaying small characters on the screen, in terms of both quality and performance, which is the main reason they survive today. For different point sizes, bitmap fonts must provide different sets of font resources; for example, a bitmap font used in Windows today normally provides 8-, 10-, 12-, 14-, 18-, and 24-point font resources. For other point sizes, or for a device with a different resolution, glyphs need to be scaled to the required size. Bitmap scaling is always a problem, especially upscaling, which requires new pixels to be generated. Figure 14-11 illustrates the result of scaling a glyph from a 24-point bitmap font using a simple duplication method.




        Figure 14-11. Scaling a raster font glyph.


        Figure 14-11 shows the result of integer-ratio scaling in both directions, where each pixel in the glyph is duplicated the same number of times; the rough edges are clearly visible. If the scaling ratio is not an integer, the displayed character can have strokes of uneven thickness, as some pixels are scaled n times and others n + 1 times. Clearly, scaling raster fonts does not provide good enough display/print quality; we have to find other ways to encode fonts for continuous and smooth scaling.
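The uneven strokes described above can be reproduced with a minimal nearest-neighbor row scaler; this is a simplified stand-in for the duplication method, not code from the book:

```cpp
#include <vector>

// Scale one monochrome scanline from its source width to dstW pixels by
// nearest-neighbor sampling. With an integer ratio every source pixel is
// duplicated the same number of times; with a noninteger ratio some pixels
// are duplicated n times and others n+1 times, giving uneven stroke widths.
std::vector<int> ScaleRow(const std::vector<int>& row, int dstW) {
    const int srcW = static_cast<int>(row.size());
    std::vector<int> out(dstW);
    for (int x = 0; x < dstW; ++x)
        out[x] = row[x * srcW / dstW];  // map destination pixel back to source
    return out;
}
```

Scaling the row {1,0,1} to six pixels gives {1,1,0,0,1,1}, with every pixel doubled; scaling it to five gives {1,1,0,0,1}, where the final one-pixel stroke ends up only half as wide as the first.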





















        19.1. Presentations


        The Workbench uses the term presentation to define the set of Workbench classes responsible for managing and displaying editors and views. Presentations do more than paint widgets; they are not just skins for the application. They also provide behavior for widgets. Presentations control the look of tabs (indeed, the very fact that tabs are used at all) as well as toolbars, menus, and how parts are dragged from place to place.


        Presentations manage stacks of presentable parts such as views and editors. They allow collections of like parts to be stacked together and control the presentation and behavior of the stack. The Workbench may instantiate several presentations for a given page depending on the perspective layout. In essence, each hole that you define in your perspective is filled with a presentation that stacks views or editors in the hole.


        Figure 19-1 shows what Hyperbola would look like if you could remove all presentations from the Workbench. This isn't a mock-up; it uses a presentation that does very little. The look resembles a perspective in which all views and editors are standalone. The most obvious quirk is that the chat editors and Console views no longer show their tabs. From the example, you can see that presentations play an important role in the Workbench and in defining the overall look and feel of your application.



        Figure 19-1. Hyperbola without a presentation




























        14.9. Related Modules


        In Table 14.9 you will find a list of modules other than os and sys that relate to the execution environment theme of this chapter.


        Table 14.9. Execution Environment Related Modules

        Module          Description
        atexit[a]       Registers handlers to execute when the Python interpreter exits
        popen2          Provides additional functionality on top of os.popen(): the ability to communicate via standard files with the other process (use subprocess for Python 2.4 and newer)
        commands        Provides additional functionality on top of os.system(): saves all program output in a string, which is returned, as opposed to just dumping output to the screen (use subprocess for Python 2.4 and newer)
        getopt          Processes options and command-line arguments
        site            Processes site-specific modules or packages
        platform[b]     Attributes of the underlying platform and architecture
        subprocess[c]   Subprocess management (intended to replace older functions and modules such as os.system(), os.spawn*(), os.popen*(), popen2.*, commands.*)

        [a] New in Python 2.0.

        [b] New in Python 2.3.

        [c] New in Python 2.4.













        Dreamweaver MX-PHP Web Development



        Gareth Downes-Powell

        Tim Green

        Bruno Mairlot








        Published by

        Wiley Publishing, Inc.

        10475 Crosspoint Boulevard
        Indianapolis, IN 46256

        www.wiley.com





        Published simultaneously in Canada


        Library of Congress Card Number: 2003107089



        ISBN: 0-7645-4387-3



        Manufactured in the United States of America


        10 9 8 7 6 5 4 3 2 1


        1M/QU/QW/QT/IN


        No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8700. Requests to the Publisher for permission should be addressed to the Legal Department, Wiley Publishing, Inc., 10475 Crosspoint Blvd., Indianapolis, IN 46256, (317) 572-3447, fax (317) 572-4447, E-mail: permcoordinator@wiley.com.



        LIMIT OF LIABILITY/DISCLAIMER OF WARRANTY: WHILE THE PUBLISHER AND AUTHOR HAVE USED THEIR BEST EFFORTS IN PREPARING THIS BOOK, THEY MAKE NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE ACCURACY OR COMPLETENESS OF THE CONTENTS OF THIS BOOK AND SPECIFICALLY DISCLAIM ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. NO WARRANTY MAY BE CREATED OR EXTENDED BY SALES REPRESENTATIVES OR WRITTEN SALES MATERIALS. THE ADVICE AND STRATEGIES CONTAINED HEREIN MAY NOT BE SUITABLE FOR YOUR SITUATION. YOU SHOULD CONSULT WITH A PROFESSIONAL WHERE APPROPRIATE. NEITHER THE PUBLISHER NOR AUTHOR SHALL BE LIABLE FOR ANY LOSS OF PROFIT OR ANY OTHER COMMERCIAL DAMAGES, INCLUDING BUT NOT LIMITED TO SPECIAL, INCIDENTAL, CONSEQUENTIAL, OR OTHER DAMAGES.


        For general information on our other products and services or to obtain technical support, please contact our Customer Care Department within the U.S. at (800) 762-2974, outside the U.S. at (317) 572-3993 or fax (317) 572-4002.


        Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.



        Trademarks: Wiley, the Wiley Publishing logo, Wrox, the Wrox logo, the Wrox Programmer to Programmer logo and related trade dress are trademarks or registered trademarks of Wiley in the United States and other countries, and may not be used without written permission. Dreamweaver is a registered trademark of Macromedia, Inc. All other trademarks are the property of their respective owners. Wiley Publishing, Inc., is not associated with any product or vendor mentioned in this book.



        Trademark Acknowledgements


        glasshaus has endeavored to provide trademark information about all the companies and products mentioned in this book by the appropriate use of capitals. However, glasshaus cannot guarantee the accuracy of this information.



        Credits



        Authors


        Gareth Downes-Powell
        Tim Green
        Bruno Mairlot



        Technical Reviewers


        Aaron Brown
        Allan Kent
        Martina Kosloff
        Jason Lotito
        Dan Maharry
        Aaron Richmond
        Wendy Robb
        Murray Summers



        Commissioning Editor


        Simon Mackie



        Technical Editors


        Amanda Kay
        Matt Machell
        Simon Mackie
        Dan Walker



        Managing Editor


        Liz Toy



        Project Manager


        Sophie Edwards



        Production Coordinators


        Rachel Taylor
        Pip Wonson



        Cover


        Dawn Chellingworth



        Indexer


        Adrian Axinte



        Proofreader


        Agnes Wiggers



        The cover image for this book was created by Don Synstelien of http://www.synfonts.com, co-author of the glasshaus book, "Usability: The Site Speaks For Itself". You can find more of Don's illustration work online at http://www.synstelien.com.




        About the Authors



        Gareth Downes-Powell





        Gareth Downes-Powell has been working in the computer industry for the last twelve years, primarily building and repairing PCs, and writing custom databases. He branched out onto the Internet five years ago, and started creating web sites and custom web applications. This is now his main area of expertise, and he uses a variety of languages including ASP and PHP, with SQL Server or MySQL backend databases.


        A partner in Buzz inet, http://www.buzzinet.co.uk/, an Internet company specializing in web design and hosting, he uses a wide range of Macromedia products, from Dreamweaver MX through to Flash and Director, for custom multimedia applications. Gareth maintains http://ultradev.buzzinet.co.uk/ as a way of providing support for the whole Macromedia UltraDev and MX Community. There, he regularly adds new tutorials and custom-written extensions to this rapidly expanding site.


        Gareth enjoys keeping up with the latest developments, and has been providing support to many users to help them use UltraDev and Dreamweaver MX with ASP or PHP on both Linux and Windows servers. Rarely offline, Gareth can always be found in the Macromedia forums (news://forums.macromedia.com), where he helps to answer many users' questions on a daily basis.



        Tim Green





        Tim Green is a full-time IT Manager and an eBusiness/B2B Advisor based in the North West of England. Beginning his working life as a COBOL and Assembly Language programmer, he moved into web application development in 1996, after dabbling in numerous other careers, from acting to being a chef.


        A contributing developer to PHAkT, an implementation of PHP for UltraDev 4, he was contracted by PHAkT's creators to work on their other PHP implementation, ImpAKT, and the NeXTensio toolkit, and became the first developer to release additional extensions for UltraDev PHP, including a shopping cart management system called IntelliCART.



        Writing this book has been both an honor and a great experience, but it really wouldn't have been possible without the help and support of a number of key people. I would like to thank (in no particular order) Bruno Mairlot, Gareth Downes-Powell, Simon Mackie, Matt Machell, the whole of the glasshaus team, Massimo Foti, Jag Sidhu, Tom Muck, Waldo Smeets, George Petrov, the UDZone.com team, and the Dreamweaver Extension Development Community as a whole.



        A very special thanks goes to my wife Becky. I don't know how I would have done it without her; she's my best friend and my rock. Thanks babe.




        Bruno Mairlot





        Bruno Mairlot works for a network security and Internet solutions company based in Luxembourg. He specialises in developing implementations of network and Internet protocols with PHP and MySQL. He began his working life as the founder of a web site development and network services company four years ago, then moved on to work with other companies, but always working mainly as a web site developer and security consultant for the Web.


        Along with Tim and Gareth, Bruno is a contributor to the Dreamweaver and PHP community, and is part of the management team of the community site http://www.udzone.com. He is the author of a project that aims to give users a powerful MySQL administration console in Dreamweaver as an extension.



        Writing this book has been a tremendous and exciting experience, but it wouldn't have been possible without the help of many people. First and foremost, I would like to thank my friend Tim Green. His support and enthusiasm on this project have helped me more than I can say. Thanks, Tim. Next, my thanks go to my soul mate, Pascale. I couldn't have written any of this book without her at my side, encouraging and supporting me. I would also like to thank Simon Mackie from glasshaus, who did a tremendously good job of managing the project.



        Special thanks go to my colleague and friend, Con Dorgan, who helped me during my working hours and gave me a lot of suggestions for the book.





























        3.9 Declarative Security Policies



















        Security policies associated with URIs and enterprise beans include the following:



        • Login configurations associated with URIs: for example, use of form-based login

        • Authorization policies associated with URIs and enterprise beans based on J2EE security roles

        • Principal-delegation policies that apply to Web applications and enterprise beans

        • Connection policies associated with JCA connectors that dictate how applications access EIS in a secure manner



        Such authorization and delegation policies can be specified declaratively within the relevant deployment descriptors.



        3.9.1 Login-Configuration Policy



        Authentication is the process of proving the identity of an entity. Authentication generally is performed in two steps: (1) acquiring the authentication data of a principal and (2) verifying the authentication data against a user (principal) registry.



        J2EE security authenticates a principal on the basis of the authentication policy associated with the resource the principal has requested. When a user requests a protected resource from a Web application server, the server authenticates the user. J2EE servers use authentication mechanisms based on validating credentials, such as digital certificates (see Section 10.3.4 on page 372), and user ID and password pairs. Credentials are verified against a user registry that supports the requested authentication scheme. For example, authentication based on user ID and password can be performed against an LDAP user registry, where authentication is performed using an LDAP bind request.



        A Web server is responsible for servicing HTTP requests. In a typical J2EE environment, a Web server is a component of a J2EE WAS. In this case, the WAS hosts servlets, JSP files, and enterprise beans. The login, or authentication, configuration is managed by the WAS, which drives the authentication challenges and performs the authentication. Similarly, if the Web server is independent of the WAS and acts as the front end for the WAS, the Web server acts as a proxy for J2EE requests. Again, the authentication is typically performed by the WAS.



        The authentication policy for performing authentication among a user, a Web server, and a WAS can be specified in terms of the J2EE login configuration elements of a Web application's deployment descriptor. The authentication policy can specify the requirement for a secure channel and the authentication method. The requirement to use a secure channel when accessing a URI is specified through the user-data-constraint descriptor.



        The authentication method is specified through the auth-method element in the login-config descriptor. There are three types of authentication methods:





        1. HTTP authentication method.

          The credentials that the client must submit to authenticate are user ID and password, sent to the server as part of an HTTP header and typically retrieved through a browser's dialog window. The two modes of HTTP authentication are basic and digest. In both cases, the user ID is sent as cleartext.[2] In basic authentication, the password is transmitted in cleartext as well; in digest authentication, only a hash value of the password is transmitted to the server (see Section 10.2.2.4 on page 356).

          [2] More precisely, the cleartext is encoded in base64 format, a commonly used Internet standard. Binary data can be encoded in base64 format by rearranging the bits of the data stream in such a way that only the six least significant bits are used in every byte. Encoding a string in base64 format does not add security; the algorithm to encode and decode is fairly simple, and tools to perform encoding and decoding are publicly available on the Internet. Therefore, a string encoded in base64 format is still considered to be in cleartext.



        2. Form-based authentication method.

          The credentials that the client must submit to authenticate are user ID and password, which are retrieved through an HTML form.



        3. Certificate-based authentication method.

          The credential that the client must submit is the client's digital certificate, transmitted over an HTTPS connection.



        3.9.1.1 Authentication Method in Login Configuration


        The auth-method element in the login-config element specifies how a server challenges and retrieves authentication data from a user. As noted previously, there are three possible authentication methods: HTTP (user ID and password), form based (user ID and password), and certificate based (X.509 certificate).



        With the HTTP authentication method, the credentials provided by the user consist of a user ID and password pair, transmitted as part of an HTTP header. When HTTP authentication is specified, a user at a Web client machine is challenged for a user ID and password pair. The challenge usually occurs in the following way:













        1. A WAS issues an HTTP unauthorized client error code (401) and a WWW-Authenticate HTTP header.

        2. The Web browser pops up a dialog window.

        3. The user enters a user ID and password pair in this dialog window.

        4. The information is sent to the Web server.

        5. The WAS extracts the information and authenticates the user, using the authentication mechanism with which it has been configured.



        With HTTP authentication, a realm name also needs to be specified. Realms are used to determine the scope of security data and to provide a mechanism for protecting Web application resources. For example, a user defined as bob in one realm is treated as different from bob in a second realm, even if these two IDs represent the same human user, Bob Smith.



        Once specified, the realm name is used in the HTTP 401 challenge to help the Web server inform the end user of the name of the application domain. For example, if the realm is SampleAppRealm, the dialog window prompting the user for a user ID and password pair during authentication will include that the user ID and password are to be supplied for the SampleAppRealm realm.



        HTTP authentication can be either basic or digest. In basic authentication, the credentials requested of the user are user ID and password, and both are transmitted as cleartext. In order for the authentication method to be basic, the auth-method element in the login-config descriptor must be set to BASIC. Listing 3.4 is a deployment descriptor fragment showing an example of login configuration requiring basic authentication.



        Listing 3.4. Login Configuration for Basic Authentication






        <login-config>

        <auth-method>BASIC</auth-method>

        <realm-name>SampleAppRealm</realm-name>

        </login-config>



        This scheme is not considered to be a secure method of user authentication, unless used in conjunction with some external secure systems, such as SSL.



        In digest authentication, the user ID and a hash value of the password are transmitted to the server as part of an HTTP header. Therefore, the password does not appear in cleartext, which is the biggest weakness of basic authentication.



        When digest authentication is specified, the Web server responds to the client's request by requiring digest authentication. A one-way hash of the password (see Section 10.2.2.4 on page 356), as specified by Request for Comments (RFC) 2617,[3] is computed by the client, based on a random number, called a nonce, uniquely generated by the server each time a 401 response is made. The hash value of the password is sent to the server, which computes the digest of the password for the user ID and compares the resulting hash value with the one submitted by the client. The requesting user is considered authenticated if the hash values are identical.

        [3] See http://www.ietf.org/rfc/rfc2617.txt.



        This mode of authentication assumes that the server has access to the user's password in cleartext, a necessary requirement for the server to compute the hash of the password. However, this is rarely the case in most enterprise environments, as the password in cleartext is not retrievable from a user repository containing the user ID and password information. Rather, the server typically delegates to the user repository the responsibility of validating a user's password. Therefore, digest authentication is not widely adopted in enterprise environments and hence is not required to be supported by a J2EE container.



        J2EE servers that do support digest authentication can be configured to issue a digest authentication challenge by setting the value of the auth-method element in the login-config descriptor to DIGEST. Listing 3.5 is a deployment descriptor fragment illustrating how a J2EE server can be configured to require digest authentication.



        Listing 3.5. Login Configuration for Digest Authentication






        <login-config>

        <auth-method>DIGEST</auth-method>

        <realm-name>SampleAppRealm</realm-name>

        </login-config>



        The second authentication method is form based. With this method, the auth-method element in the login-config element must be set to FORM. The form-based authentication method assumes that the server is configured to send the client an HTML form to retrieve the user ID and password from the Web user, as opposed to sending a 401 HTTP unauthorized client error code as in the basic challenge type.



        The configuration information for a form-based authentication method is specified through the form-login-config element in the login-config element. This element contains two subelements: form-login-page and form-error-page.



        • The Web address to which a user requesting the resource is redirected is specified by the form-login-page subelement in the Web module's deployment descriptor. When the form-based authentication mode is specified, the user will be redirected to the specified form-login-page URL. An HTML form on this page will request a user ID and password.

        • If the authentication fails, the user is redirected to the page specified by the form-error-page subelement.



        Listing 3.6 is a sample HTML page for the login form.



        Listing 3.6. Login Page Contents






        <HTML>

        <HEAD>

        <TITLE>Sample Login page.</TITLE>

        </HEAD>

        <BODY>

        <TR><TD>

        <HR><B>Please log in!</B><BR><BR>

        </TD></TR>

        <CENTER>

        Please enter the following information:<BR>

        <FORM METHOD=POST ACTION="j_security_check">

        Account <INPUT TYPE=text NAME="j_username"

        SIZE=20><BR>

        Password <INPUT TYPE=password

        NAME="j_password" SIZE=20><BR>

        <INPUT TYPE=submit NAME=action

        VALUE="Submit Login">

        </FORM><HR>

        </CENTER>

        </BODY>

        </HTML>



        Listing 3.7 is a deployment descriptor fragment showing an example of login configuration that requires form-based authentication.



        Listing 3.7. Login Configuration for Form-Based Authentication






        <login-config>

        <auth-method>FORM</auth-method>

        <form-login-config>

        <form-login-page>/login.html</form-login-page>

        <form-error-page>

        /login-failed.html

        </form-error-page>

        </form-login-config>

        </login-config>



        The third type of authentication method is certificate based (X.509 certificate). In order for the authentication method to be certificate based, the auth-method element in the login-config descriptor must be set to CLIENT-CERT. The certificate-based authentication method implies that the Web server is configured to perform mutual authentication over SSL. The client is required to present a certificate to establish the connection. When the CLIENT-CERT mode is specified, the client will be required to submit the request over an HTTPS connection. If the request is not already over HTTPS, the J2EE product will redirect the client over an HTTPS connection. Successful establishment of an SSL connection implies that the client has presented its own certificate and not anyone else's. The details of how the server ensures that the client certificate really belongs to the client are explained in Section 10.3.4 on page 372 and Section 13.1.2 on page 452. The certificate used by the client is then mapped to an identity in the user registry the J2EE product is configured to use.



        Listing 3.8 is a deployment descriptor fragment showing an example of login configuration that requires certificate-based authentication.



        Listing 3.8. Login Configuration for Certificate-Based Authentication






        <login-config>

        <auth-method>CLIENT-CERT</auth-method>

        </login-config>



        Note that the user registry is not specified in this XML deployment descriptor fragment because it is not part of the J2EE specification.



        3.9.1.2 Secure-Channel Constraint


        Establishing an HTTPS session between the client and the Web server is often a necessary requirement to provide data confidentiality and integrity for the information flowing between the HTTP client and the server. In a J2EE environment, the security policy can require the use of a secure channel, specified through the user-data-constraint deployment descriptor element. When the requirement for a secure channel is specified, the request to the URI resource should be initiated over an HTTPS connection. If access is not already via an HTTPS session, the request is redirected over an HTTPS connection.



        Specifying INTEGRAL or CONFIDENTIAL as the value for the transport-guarantee element in the user-data-constraint descriptor will be treated as a requirement for the HTTP request to be over SSL. This requirement can be specified as part of the user-data-constraint element in a Web application's login configuration. In theory, INTEGRAL should enforce communication integrity, whereas CONFIDENTIAL should enforce communication confidentiality, and it could be possible to select different cipher suites to satisfy these requirements. In practice, however, a J2EE server typically does not differentiate between INTEGRAL and CONFIDENTIAL: it treats both values simply as a requirement for an SSL connection, and the cipher suite used is not selected based on which of the two values was specified.



        Listing 3.9 is a deployment descriptor fragment showing an example of login configuration that contains the user-data-constraint element. More details are provided in Section 4.6.6 on page 132.



        Listing 3.9. Specifying the Requirement for a Secure Channel






        <user-data-constraint>

        <transport-guarantee>CONFIDENTIAL</transport-guarantee>

        </user-data-constraint>



        3.9.2 Authorization Policy



        The role-permission interpretation of the J2EE security model treats a security role as a set of permissions. The security role uses the role-name label defined in the method-permission element of an EJB module's deployment descriptor and in the security-constraint element of a Web module's deployment descriptor as the name of the set of permissions. The set of permissions defines a number of resources (the enterprise beans and the Web resources to which the method-permission and security-constraint elements refer, respectively) and a set of actions (the methods listed by the method-permission and the security-constraint descriptors). For example, in Listing 3.2 on page 74, the security role Teller is associated with the permissions to invoke the getBalance() and getDetails() methods on the AccountBean enterprise bean. Similarly, in Listing 3.3 on page 75, the security role Teller is associated with the permission to perform a GET invocation over HTTP to the /finance/account/ URI. If multiple method-permission and security-constraint descriptors refer to the same security role, they are all taken to contribute to the same role permission. In other words, the sets of permissions associated with that security role are merged to form a single set.
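        This merging of permissions into a single per-role set can be sketched in Java. The class and the action strings below are illustrative only, not part of any J2EE API; the data mirrors the Teller role from Listings 3.2 and 3.3.

```java
import java.util.*;

public class RolePermissions {
    // role name -> merged set of permitted actions
    private final Map<String, Set<String>> permissions = new HashMap<>();

    // Each method-permission or security-constraint descriptor that names
    // a role contributes its actions to that role's single merged set.
    public void grant(String role, String... actions) {
        permissions.computeIfAbsent(role, r -> new HashSet<>())
                   .addAll(Arrays.asList(actions));
    }

    public Set<String> permissionsOf(String role) {
        return permissions.getOrDefault(role, Collections.emptySet());
    }

    public static void main(String[] args) {
        RolePermissions rp = new RolePermissions();
        // From the EJB module's method-permission descriptor (Listing 3.2)
        rp.grant("Teller", "AccountBean.getBalance", "AccountBean.getDetails");
        // From the Web module's security-constraint descriptor (Listing 3.3)
        rp.grant("Teller", "GET /finance/account/");
        System.out.println(rp.permissionsOf("Teller").size()); // 3 merged permissions
    }
}
```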



        This model has the advantage of dramatically reducing the number of objects in a security object space: a set of pairs (subject, <target, operation>), where the subject is an entity requesting to perform a security-sensitive operation on a given target. The Deployer and the System Administrator can define authorization policies, associated with EJB or URI targets and the operations of enterprise bean methods and HTTP methods, respectively, for the security roles in their applications. Then, they associate subjects to security roles; by extension, those subjects are granted the permissions to perform the operations permitted by the security roles.



        Based on the J2EE security model, a protected action can be performed by a subject who has been granted at least one of the security roles associated with the action. The security roles associated with a protected action are the required security roles: the permissions necessary to perform the action itself. The roles associated with a subject are the granted security roles: the permissions that have been given to that subject. This means that the subject will be allowed to perform an action if the subject's granted security roles contain at least one of the required security roles to perform that action. For example, if the action consisting of accessing the EJB method getDetails() on the AccountBean enterprise bean can be performed only by the security roles Teller and Supervisor and if subject Bob has been granted the security role of Teller, Bob will be allowed to perform that action, even if Bob has not been granted the security role of Supervisor.
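        The granted-versus-required check described above can be expressed compactly. This is an illustrative sketch, not container code; the role names come from the getDetails() example.

```java
import java.util.*;

public class AccessCheck {
    /** A subject may perform an action if at least one of its granted
        roles appears among the action's required roles. */
    static boolean isAllowed(Set<String> grantedRoles, Set<String> requiredRoles) {
        return grantedRoles.stream().anyMatch(requiredRoles::contains);
    }

    public static void main(String[] args) {
        // getDetails() on AccountBean requires Teller or Supervisor
        Set<String> required = Set.of("Teller", "Supervisor");
        // Bob has been granted only Teller, which is sufficient
        Set<String> bobsRoles = Set.of("Teller");
        System.out.println(isAllowed(bobsRoles, required)); // true
    }
}
```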



        The table that represents the association of security roles to sets of permissions is called the method-permission table. A method-permission table (see Table 3.1) can be used to deduce the set of required security roles. The rows in the table represent security roles; the columns represent protected actions.



        It can be inferred from Table 3.1 that in order to access the getBalance() method on AccountBean, the required security roles are Teller and Supervisor. In order to access any URI that matches the pattern /public/*, a PublicRole is required.



        Table 3.1. Example of Method-Permission Table

                      /finance/      /finance/      /public/*  AccountBean.    AccountBean.
                      account GET    account PUT               getBalance()    getDetails()
        Teller        Yes            No             No         Yes             Yes
        Supervisor    Yes            Yes            No         Yes             Yes
        PublicRole    No             No             Yes        No              No



        The table that represents the association of roles to subjects is called the authorization table, or protection matrix. In such a table, the security role is defined as the security object, and users and groups are defined as security subjects. An authorization table (see Table 3.2) can be used to deduce the set of granted security roles. The rows in the table refer to the users and user groups that are security subjects in the protection matrix; the columns represent the J2EE security roles that are security objects in the protection matrix.



        Table 3.2. Example of Authorization Table

                      Teller  Supervisor  PublicRole
        TellerGroup   Yes     No          No
        ManagerGroup  No      Yes         No
        Everyone      No      No          Yes
        Bob           Yes     No          No



        The method-permission table and the protection matrix reflect the configuration specified in the deployment descriptors. For example, the first row in Table 3.1 reflects the deployment descriptor obtained from the deployment descriptor fragments of Listing 3.2 on page 74 and Listing 3.3 on page 75. It can be inferred from Table 3.2 that user Bob and group TellerGroup are granted the security role of Teller, everyone is granted the PublicRole, and only users in the ManagerGroup are granted the security role of Supervisor.



        Combining Table 3.1 and Table 3.2, it follows that Bob can access the getBalance() and getDetails() methods on the AccountBean enterprise bean and can issue an HTTP GET request on the /finance/account/ URI. Bob cannot, however, issue an HTTP PUT request on the /finance/account/ URI. Note that Bob will be able to access any URI that matches /public/*, as everyone has been granted the role PublicRole, which is the role necessary to get access to /public/*.
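        Combining the two tables amounts to a two-step lookup: resolve the subject's granted roles from the authorization table, then intersect them with the action's required roles from the method-permission table. The sketch below hard-codes the data from Tables 3.1 and 3.2; the class name and action strings are illustrative.

```java
import java.util.*;

public class AuthorizationDecision {
    // Method-permission table: action -> required roles (Table 3.1)
    static final Map<String, Set<String>> REQUIRED = Map.of(
        "GET /finance/account/",  Set.of("Teller", "Supervisor"),
        "PUT /finance/account/",  Set.of("Supervisor"),
        "GET /public/*",          Set.of("PublicRole"),
        "AccountBean.getBalance", Set.of("Teller", "Supervisor"),
        "AccountBean.getDetails", Set.of("Teller", "Supervisor"));

    // Authorization table: subject -> granted roles (Table 3.2).
    // Every subject is implicitly in Everyone, which holds PublicRole.
    static Set<String> grantedRoles(String subject) {
        Set<String> roles = new HashSet<>(Set.of("PublicRole"));
        if (subject.equals("Bob")) roles.add("Teller");
        return roles;
    }

    static boolean canPerform(String subject, String action) {
        Set<String> granted = grantedRoles(subject);
        return REQUIRED.getOrDefault(action, Set.of())
                       .stream().anyMatch(granted::contains);
    }

    public static void main(String[] args) {
        System.out.println(canPerform("Bob", "GET /finance/account/")); // true
        System.out.println(canPerform("Bob", "PUT /finance/account/")); // false
        System.out.println(canPerform("Bob", "GET /public/*"));         // true
    }
}
```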



        In the J2EE security model, the Application Assembler defines the initial mapping of actions on the protected resources to the set of the required security roles (see Section 3.7.2 on page 67). This can be done using the application assembly tool. Subsequently, the Deployer will refine the policies specified by the Application Assembler when installing the application into a J2EE environment (see Section 3.7.3 on page 70). The Deployer also can use the application assembly tool to redefine the security policies, when necessary, and then install the application into the J2EE container. The method-permission table is formed as a result of the required security roles getting specified through the process of application assembly and refinement during deployment.



        Authorization policies can be broadly categorized into application policies, which are specified in deployment descriptors and map J2EE resources to roles, and authorization bindings, which reflect role to user or group mapping. As discussed in Section 3.7.2 on page 67, a set of security roles is associated with actions on J2EE protected resources. These associations are defined in the J2EE deployment descriptors when an application is assembled and deployed. The security roles specified in this way are the required security roles: the sets of permissions that users must be granted in order to be able to perform actions on protected resources. Pragmatically, before a user is allowed to perform an action on a protected resource, either that same user or one of the groups that user is a member of should be granted at least one of the required security roles associated with that protected resource. The authorization table that relates the application-scoped required security roles to users and user groups is managed within the J2EE Product Provider using the J2EE Product Provider configuration tools.



        3.9.3 Delegation Policy



        Earlier in this chapter, we defined delegation as the process of forwarding a principal's credentials with the cascaded downstream requests. Enforcement of delegation policies affects the identity under which the intermediary will perform the downstream invocations on other components. By default, the intermediary will impersonate the requesting client when making the downstream calls. The downstream resources do not know about the real identity, prior to impersonation, of the intermediary. Alternatively, the intermediary may perform the downstream invocations using a different identity. In either case, the access decisions on the downstream objects are based on the identity at the outbound call from the intermediary. To summarize, in a J2EE environment, the identity under which the intermediary will perform a task can be either





        • The client's identity: the identity under which the client is making the request to the intermediary

        • A specified identity: an identity, expressed in terms of a role, indicated via deployment descriptor configuration



        The application deployment environment determines whether the client or a specified identity is appropriate.



        The Application Assembler can use the security-identity element to define a delegation identity for an enterprise bean's method in the deployment descriptor. Consider an example in which a user, Bob, invokes methods on a SavingsAccountBean enterprise bean. SavingsAccountBean exposes three methods, getBalance(), setBalance(), and transferToOtherBank(), and its delegation policy is defined as in Table 3.3. Figure 3.5 shows a possible scenario based on the delegation policy specified in Table 3.3.



        Figure 3.5. Delegation Policy Scenario




        The method setBalance() will execute under the client's identity because the delegation mode is set to use-caller-identity. The method getBalance() will execute under the client's identity as well because no delegation mode is specified, and the default is use-caller-identity. Therefore, if Bob invokes the method getBalance() on SavingsAccountBean, the method will execute under Bob's identity, bob. Suppose that the getBalance() method invokes a lookup() method on another enterprise bean. This invocation will still be executed under Bob's identity and will succeed only if Bob has been granted the permission to invoke lookup() on that bean.
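        The effect of the two delegation modes on the identity used for downstream calls can be modeled as follows. This is an illustrative sketch only; in a real container, the run-as role is resolved to a mapped principal (alice below stands for the user mapped to the Supervisor role in the text's example).

```java
import java.util.Optional;

public class DelegationPolicy {
    enum Mode { USE_CALLER_IDENTITY, RUN_AS }

    /** Identity under which a bean method performs its downstream calls:
        the caller's identity, unless run-as supplies a delegation identity. */
    static String effectiveIdentity(String caller, Mode mode, Optional<String> runAsUser) {
        return (mode == Mode.RUN_AS) ? runAsUser.orElseThrow() : caller;
    }

    public static void main(String[] args) {
        // setBalance(): use-caller-identity, so downstream calls run as bob
        System.out.println(
            effectiveIdentity("bob", Mode.USE_CALLER_IDENTITY, Optional.empty()));
        // transferToOtherBank(): run-as Supervisor, mapped to user alice
        System.out.println(
            effectiveIdentity("bob", Mode.RUN_AS, Optional.of("alice")));
    }
}
```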



        Table 3.3. SavingsAccountBean Enterprise Bean's Delegation Policy

        Method                 Delegation Mode      Specified Role
        getBalance()           (not specified)
        setBalance()           use-caller-identity
        transferToOtherBank()  run-as               Supervisor



        Any downstream call from transferToOtherBank() will perform method calls on a TransferBean enterprise bean. These invocations will need to execute under a principal that has been granted the Supervisor role. The Deployer or the System Administrator needs to map the run-as Supervisor role to a principal that has been granted that role. This can be done by specifying a valid user ID and password pair corresponding to a user who has been granted the role. For example, if user Alice has been granted the Supervisor role and if the user ID and password pair for Alice is associated with the Supervisor role, the calls to transferToOtherBank() will occur under Alice's identity.



        3.9.4 Connection Policy



        Information in any EIS must be protected from unauthorized access. An EIS system is likely to have its own authorization model. At a minimum, most of these systems have facilities to accept some form of authentication data representing an identity connecting to the EIS. The JCA is designed to extend the end-to-end security model for J2EE-based applications to include integration with EISs. A WAS and an EIS collaborate to ensure the proper authentication of a resource principal when establishing a connection to a target EIS. As discussed in Section 3.4 on page 61, the JCA allows for two ways to sign on to an EIS: container-managed sign-on and component-managed sign-on.



        With container-managed sign-on, the connection to an EIS is obtained through declarative security. In order for a connection to be container managed, the deployment descriptor will indicate that the res-auth element associated with a resource definition is declared as Container. If the connection is obtained by passing the identity information programmatically, the value for res-auth should be set to Application. Details of component-managed sign-on are discussed in Section 3.10.3 on page 96.



        A deployment descriptor fragment that declares that the authentication facilitated by the resource adapter should be set to be Container is shown in Listing 3.10.



        Listing 3.10. An XML res-auth Element in a Deployment Descriptor






        <resource-ref>

        <description>Connection to myConnection</description>

        <res-ref-name>eis/myConnection</res-ref-name>

        <res-type>javax.resource.cci.ConnectionFactory</res-type>

        <res-auth>Container</res-auth>

        </resource-ref>



        The container is responsible for obtaining appropriate user authentication information needed to access the EIS. The connection to the EIS is facilitated by the specified resource adapter. The JCA allows specifying the authentication mechanism. The authentication-mechanism-type element in the deployment descriptor is used to specify whether a resource adapter supports a specific authentication mechanism. This XML element is a subelement of the authentication-mechanism element. The JCA specification supports the following authentication mechanisms:





        • Basic authentication.

          The authentication mechanism is based on user ID and password. In this case, the authentication-mechanism-type XML element in the deployment descriptor is set to BasicPassword.



        • Kerberos V5.

          The authentication mechanism is based on Kerberos V5. In this case, the authentication-mechanism-type element in the deployment descriptor is set to Kerbv5.



        Other authentication mechanisms are outside the scope of the JCA specification.



        In a secure environment, it is likely that a J2EE application component, such as an enterprise bean, and the EIS system that is accessed through the component are secured under different security domains, where a security domain is a scope within which certain common security mechanisms and policies are established. In such cases, the identity under which the J2EE component is accessed should be mapped to an identity under which the EIS is to be accessed. Figure 3.6 depicts a possible scenario.



        Figure 3.6. Credential Mapping when Accessing an EIS from a J2EE Container




        In this scenario, an enterprise bean in a J2EE container is accessed by a user, Bob Smith. The enterprise bean is protected so that only users from a specified LDAP directory can access it. Therefore, the identity under which Bob Smith will access the enterprise bean must be registered in that LDAP directory. Bob Smith uses the identity of bsmith when he accesses the enterprise bean.



        In a simplistic case, where the run-as policy of the enterprise bean is set to be the caller identity, the connections to the EIS will be obtained on behalf of Bob Smith. If the connections are obtained through user ID and password, when the enterprise bean obtains a connection to a back-end system, such as a CICS system, the J2EE container will retrieve a user ID and password to act on behalf of user bsmith. The application invokes the getConnection() method on the javax.resource.cci.ConnectionFactory instance (see Listing 3.10 on page 89) with no security-related parameters, as shown in Listing 3.11, a fragment of Java code.



        The application relies on the container to manage the sign-on to the EIS instance. This is possible in simple deployment scenarios in which the identity under which the EIS system is accessed is specified by the Deployer. This effectively means that all identities accessing the application are mapped to a single identity to access the EIS system: a many-to-one identity mapping.



        Listing 3.11. Getting a Connection to an EIS with Container-Managed Sign-On






        // Construct the InitialContext

        Context initctx = new InitialContext();



        // Perform a JNDI lookup to obtain a ConnectionFactory

        javax.resource.cci.ConnectionFactory cxf =

        (javax.resource.cci.ConnectionFactory) initctx.lookup

        ("java:comp/env/eis/MyEIS");



        // Invoke the ConnectionFactory to obtain a connection.

        // The security information is not passed to the

        // getConnection() method

        javax.resource.cci.Connection cx = cxf.getConnection();



        In more sophisticated deployment scenarios, a many-to-one identity mapping may not be sufficient for security policy reasons. For example, it may be necessary for the EIS system to log all the identities that accessed it. For this logging facility to be useful, the identities accessing a J2EE application must not all be mapped to the same identity on the EIS system. A one-to-one or many-to-many identity mapping is recommended in this case. In particular, the container may use a credential mapping facility whereby bsmith is mapped to user ID bobsmith and password db2foobar, as shown in Figure 3.6.
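        A one-to-one credential mapping of this kind can be sketched as a simple lookup table. The class names are illustrative, and real mapping facilities are server specific; the data mirrors the bsmith example from Figure 3.6.

```java
import java.util.*;

public class CredentialMapper {
    record Credential(String userId, String password) {}

    // One-to-one mapping from J2EE identities to EIS credentials
    private final Map<String, Credential> table = new HashMap<>();

    void map(String j2eeIdentity, Credential eisCredential) {
        table.put(j2eeIdentity, eisCredential);
    }

    Optional<Credential> lookup(String j2eeIdentity) {
        return Optional.ofNullable(table.get(j2eeIdentity));
    }

    public static void main(String[] args) {
        CredentialMapper mapper = new CredentialMapper();
        // As in Figure 3.6: bsmith maps to EIS user bobsmith/db2foobar
        mapper.map("bsmith", new Credential("bobsmith", "db2foobar"));
        System.out.println(mapper.lookup("bsmith").get().userId()); // bobsmith
    }
}
```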



        If connections require Kerberos credentials or other generic credentials to be passed, the mapping facility is responsible for mapping one form of the credential to another that can be used by the target security domain. The manner in which these mappings happen and the level of sophistication in mapping available in J2EE application servers are server specific and not dictated by the J2EE specification.



        In enterprise environments consisting of multiple departments, organizations, and even acquired companies, it is typical for systems to be interconnected and the applications shared. When J2EE applications are deployed in such environments, it is a good architectural approach to design the application integration so that applications use the JCA to obtain connections to other applications and follow the declarative approach to define connection sign-on, as explained in this section. The use of the JCA shields applications from the details of crossing security domains when accessing non-J2EE systems, and the use of declarative security enhances application flexibility and portability. The JCA with declarative security also helps keep the mapping of credentials and identities outside the application, enforced and facilitated by the enterprise-level mapping infrastructure.












