Saturday, November 7, 2009

Management documents














10.1 Management documents


Every software development project is going to be managed in some way. A plan will be prepared, in one form or another, that lays out the expected schedule and resources. Effort will be expended to review and test the software, at least at the end before delivery, and the components of the software will be identified so that the delivered software and its components are known.


The following management documents are common to all software development projects:





  • Software development plan;




  • SQS plan;




  • CM plan.




These documents are the overall software development process control documents. Their size and depth of detail will vary with the size and complexity of the system being developed. They may even be merged into a single document for small projects. Even so, the content, describing how the development project will be managed and controlled, must be present for each software development project. It should not be surprising that the more formal the planning and its documentation, the more complete and effective it will be. Thus, the creation of the software development plan (SDP), the software quality system plan (SQSP), and the configuration management plan (CMP) is a necessary part of each software development project.


The plans for a project of 500,000 or more lines of code would probably cost more to prepare than an entire 500-line project. Therefore, the level of detail and control included in each plan must reflect the size and complexity of the project at hand.




10.1.1 Software development plan


The SDP is the document that lays out the management approach to the software project. In its most basic form, the SDP will include the schedule and resource needs for the project. The milestones for tracking the progress of the project will be specified, probably as a pictorial of the SDLC. The personnel loading will also be shown so that the required expertise and skills can be available when they are needed. The SDP should also specify hardware and environmental needs such as computer time, special test facilities, compilers and linkers, other systems, and the like.


For simple systems, the material covering the SQS and CM may also be included as separate sections in the SDP. As system complexity grows, so does the SDP. More and more detail is required to cover the larger scale of the software development activity. Schedules must contain intermediate checkpoints or milestones, and personnel loading will become more varied and complicated. Test support facilities will become more elaborate. The SDP will also begin to address software quality and CM to a level of detail that precludes their inclusion as SDP sections.


The more elaborate the software system, the more it probably interfaces with other systems and the outside world. While these interfaces are presented in requirements documentation, provision for their involvement in testing must be ensured and scheduled in the SDP.


Larger systems may require enough people or facilities to justify special offices or test laboratories. If so, these must be presented in the SDP so that their availability is ensured.


Budget control becomes more important as the size of the system grows. The SDP is the appropriate place to present budget considerations and to specify control mechanisms in support of the normal, companywide cost-accounting system.


While the software quality practitioner obviously does not generate the SDP, the practitioner has the responsibility for reviewing it against SDP standards and ensuring that all appropriate information is present. Deficiencies detected by the software quality practitioner, or by any other review of the SDP, should be corrected before the project is permitted to commence.


The software quality practitioner also monitors the software development activities against the SDP. Deviations are reported so that corrective action may be taken by management.



Most corrective action will be to correct the software development process where it has strayed from the plan. Some corrections will be made to the SDP to keep it current with changes in the project. The software quality practitioner will review these changes to ensure that contracted requirements are not being violated and that the plan still complies with the standards for it.


See Appendix A for a sample SDP outline.






10.1.2 SQS plan


The SQSP addresses the activities to be performed on the project in support of the quest for quality software. Being careful not to exceed the requirements of the customer, company standards, or the SDP, the SQSP will discuss all of the activities to be performed on all of the various SLC products. A sample format for an SQSP is shown in Appendix B.


Remember that a software quality group is not necessary for the SQS functions to be performed. Thus, the various software quality functions will be assigned, through the SQSP, to the organizational entities that will perform them. All activities to be accomplished in the software quality area should receive the same personnel, resource, and schedule discussion as in the overall SDP, and any special tools and methodologies should be discussed. The SQSP may be combined with the CMP (see Section 10.1.3) for medium-sized efforts.


Whatever the format of the SQSP, it is important that the document (or its information if in another document) be complete and approved by management and the producers. The SQSP becomes the charter for the SQS functions for the particular project when approved by management. It lays out the entire SQS and how it will be implemented.


Without the involvement of and approval by the software developers, the SQSP can be a recipe for ineffectiveness and frustration on the part of the software quality practitioners. Without the cooperation of the developers, software quality practitioners can be severely hampered in their attempts to conduct the review and monitoring activities for which they are responsible. Involving the development organizations in the generation and approval of the SQSP can encourage their cooperation as the project progresses.


The software quality practitioners must also monitor their own plan and their activities according to the plan. Any deviation from the plan or any indication that it is inadequate must be corrected. The software quality practitioner will monitor all the software management and development activities. It is certain that management and the developers will be watching the software quality practitioners to be sure they perform according to the SQS plan, the whole plan, and nothing but the plan.






10.1.3 CM plan


CM, as discussed in Chapter 6, is a threefold discipline. Each of the three activities should be discussed in its own section of the CMP. The methods, requirements levied on the producers, contracted requirements, and tools to be used for software CM all should be spelled out. (See Appendix C for a sample format for a CM plan.)


If the project is small, the necessary information may be included in the SDP. On medium-sized projects, it may be appropriate to combine the CMP information with the SQS plan in a single, dual-purpose document.


While some of the information may be in the personnel and resource sections of the SDP, CM-specific information must be presented in the CMP. Schedules for baselining, major reviews, and auditing should be shown either on the overall project schedule or on the CM schedule.


Any special tools or resources needed to support CM must be called out in the CMP. Another topic that may appear in the CMP is the operation of the software development library, which is the depository of all software product master copies. If not discussed elsewhere (e.g., SDP or SQSP), the library, its responsibilities, functions, and so forth should be presented in the CMP.


As with the SDP, the software quality practitioner has the responsibility to review the CMP before its release and adoption. The software quality practitioner should ensure that the CMP is complete and appropriate for the project and that it meets any specified format and content standards.


Software quality practitioners will also review the CM activities on an ongoing basis. The reviews will ascertain whether the activities described in the plan are being performed and if they are still appropriate for the project.






10.1.4 Additional plans


As software becomes an increasingly critical part of our lives, additional plans may be required for some software system development efforts. Such plans might include the software safety plan (Appendix K) and the risk management plan (Appendix L). These plans are certainly not required for all development projects. It is the responsibility of the quality practitioner to evaluate their necessity for each new project and to recommend their preparation when appropriate.



























1.9. Signals


Signals are a technique used to notify a process that some condition has occurred. For example, if a process divides by zero, the signal whose name is SIGFPE (floating-point exception) is sent to the process. The process has three choices for dealing with the signal.


  1. Ignore the signal. This option isn't recommended for signals that denote a hardware exception, such as dividing by zero or referencing memory outside the address space of the process, as the results are undefined.

  2. Let the default action occur. For a divide-by-zero condition, the default is to terminate the process.

  3. Provide a function that is called when the signal occurs (this is called "catching" the signal). By providing a function of our own, we'll know when the signal occurs and we can handle it as we wish.


Many conditions generate signals. Two terminal keys, called the interrupt key (often the DELETE key or Control-C) and the quit key (often Control-backslash), are used to interrupt the currently running process. Another way to generate a signal is by calling the kill function. We can call this function from a process to send a signal to another process. Naturally, there are limitations: we have to be the owner of the other process (or the superuser) to be able to send it a signal.
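
As a minimal sketch of the kill mechanism (the target PID is hypothetical, and error handling is reduced to perror rather than the err_sys routine used elsewhere in this book), one process can signal another like this:

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

int
main(void)
{
    pid_t target = 12345;           /* hypothetical PID of the other process */

    if (kill(target, SIGTERM) < 0)  /* fails unless we own the process (or are superuser) */
        perror("kill");
    return 0;
}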



Example

Recall the bare-bones shell example (Figure 1.7). If we invoke this program and press the interrupt key, the process terminates because the default action for this signal, named SIGINT, is to terminate the process. The process hasn't told the kernel to do anything other than the default with this signal, so the process terminates.


To catch this signal, the program needs to call the signal function, specifying the name of the function to call when the SIGINT signal is generated. The function is named sig_int; when it's called, it just prints a message and a new prompt. Adding 11 lines to the program in Figure 1.7 gives us the version in Figure 1.10. (The 11 new lines are indicated with a plus sign at the beginning of the line.)


In Chapter 10, we'll take a long look at signals, as most nontrivial applications deal with them.




Figure 1.10. Read commands from standard input and execute them



#include "apue.h"
#include <sys/wait.h>

+ static void sig_int(int); /* our signal-catching function */
+
int
main(void)
{
char buf[MAXLINE]; /* from apue.h */
pid_t pid;
int status;

+ if (signal(SIGINT, sig_int) == SIG_ERR)
+ err_sys("signal error");
+
printf("%% "); /* print prompt (printf requires %% to print %) */
while (fgets(buf, MAXLINE, stdin) != NULL) {
if (buf[strlen(buf) - 1] == "\n")
buf[strlen(buf) - 1] = 0; /* replace newline with null */

if ((pid = fork()) < 0) {
err_sys("fork error");
} else if (pid == 0) { /* child */
execlp(buf, buf, (char *)0);
err_ret("couldn't execute: %s", buf);
exit(127);
}

/* parent */
if ((pid = waitpid(pid, &status, 0)) < 0)
err_sys("waitpid error");
printf("%% ");
}
exit(0);
}
+
+ void
+ sig_int(int signo)
+ {
+ printf("interrupt\n%% ");
+ }


































    7.2 FREQUENCY DIVISION MULTIPLEXING


    In frequency division multiplexing (FDM), the signals are translated into different frequency bands and sent over the medium. The communication channel is divided into different frequency bands, and each band carries the signal corresponding to one source.


    Consider three data sources that produce three signals as shown in Figure 7.2. Signal #1 is translated to frequency band #1, signal #2 is translated into frequency band #2, and so on. At the receiving end, the signals can be demultiplexed using filters. Signal #1 can be obtained by passing the multiplexed signal through a filter that passes only frequency band #1.






    Figure 7.2: Frequency division multiplexing.


    FDM is used in cable TV transmission, where signals corresponding to different TV channels are multiplexed and sent through the cable. At the TV receiver, by applying the filter, a particular channel's signal can be viewed. Radio and TV transmission are also done using FDM, where each broadcasting station is given a small band in the frequency spectrum. The center frequency of this band is known as the carrier frequency.



    Figure 7.3 shows how multiple voice channels can be combined using FDM.






    Figure 7.3: FDM of voice channels.

    Each voice channel occupies a bandwidth of 3.4 kHz. However, each channel is assigned a bandwidth of 4 kHz. The second voice channel is frequency-translated to the band 4–8 kHz. Similarly, the third voice channel is translated to 8–12 kHz, and so on. Slightly more bandwidth is assigned (4 kHz instead of 3.4 kHz) mainly because it is very difficult to design filters of high accuracy. The extra bandwidth, known as a guard band, separates two successive channels.
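
    As a quick sketch of this arithmetic (the 4 kHz slot width and 3.4 kHz voice bandwidth come from the text; the program itself is only illustrative), the band edges and guard band can be computed as follows:

    #include <stdio.h>

    /* Illustrative sketch: band assignments for FDM voice channels,
       assuming each 3.4 kHz voice signal is allotted a 4 kHz slot. */
    int main(void) {
        const double slot_khz = 4.0;    /* allocated bandwidth per channel */
        const double voice_khz = 3.4;   /* occupied voice bandwidth */
        for (int ch = 1; ch <= 3; ch++) {
            double lo = (ch - 1) * slot_khz;    /* lower band edge */
            double hi = ch * slot_khz;          /* upper band edge */
            printf("channel %d: %.1f-%.1f kHz (guard band %.1f kHz)\n",
                   ch, lo, hi, slot_khz - voice_khz);
        }
        return 0;
    }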










    In FDM, the signals from different sources are translated into different frequency bands at the transmitting side and sent over the transmission medium. In cable TV, FDM is used to distribute programs of different channels on different frequency bands. FDM is also used in audio/video broadcasting.















    FDM systems are used extensively in analog communication systems. The telecommunication systems used in telephone networks, broadcasting systems, etc. are based on FDM.






























    Polishing the Badge


    After a professional has obtained initial education, gained some experience, and, possibly, received a license, most professions impose a continuing education requirement. The specific requirements for each profession vary from state to state. In Washington State, certified public accountants are required to earn 80 continuing professional education (CPE) credits during the two years preceding renewal of their certificates.[11] Attorneys must obtain 15 continuing legal education (CLE) credits each year. Physicians in New Mexico must obtain 150 hours of continuing education every three years. Engineers in Washington State do not have any continuing education requirements; engineers in some other states do.[12]


    Continuing education helps to ensure that professionals stay current in their fields, which is especially important in fields with rapidly changing knowledge such as medicine and software engineering. If professionals stop learning after completing their initial education, time will render their education less and less meaningful.


    Continuing-education requirements can be focused so that professionals are required to learn about important developments in their fields. If software engineering ever does discover a new silver bullet, continuing-education requirements can ensure that all licensed or certified software engineers learn about it.
















      The Last Great Frontier


      For a typical business-investment decision, the desirability of the investment is determined by weighing the return on investment against the cost of capital. An investment that produces a return greater than the cost of capital (all things considered) will be a good investment.[17] (This is a simplified explanation. See the citations for more complete explanations.)


      Cost of capital is typically around 10 percent. In many business contexts, an investment with a return of 15 percent or 20 percent would be considered compelling. Improved software practices, however, do not offer returns of 15 percent or 20 percent. According to the examples in Table 13-2 (as well as studies cited at the beginning of the chapter), improved software practices provide returns ranging from 300 percent to 1,900 percent and average about 500 percent. Investments with these levels of returns are extraordinary, virtually unprecedented in business. These returns are higher than Internet stocks in the late 1990s. They're higher than successful speculation in the commodities markets. They're almost as good as winning the lottery, and they represent an unrivaled opportunity for any business that isn't already using these practices.


      The reason for these exceptionally high returns is tied directly to the discussions in Chapters 1 and 2: improved practices have been available for decades, but most organizations aren't taking advantage of them. The risk of adopting these practices is low; the payoff is high. All that's needed is the organizational resolve to use them.





















        Refactor to Simplify Code


        Refactoring is the art of reworking your code into a simpler or more efficient form in a disciplined way. Refactoring is an iterative process:








        1. Write correct, well-commented code that works.



        2. Get it debugged.



        3. Streamline and refine by refactoring the code to replace complex sections with shorter, more efficient code.



        4. Mix well, and repeat.



        Refactoring clarifies, refines, and in many cases speeds up your code. Here's a simple example that replaces an assignment with an initialization. So instead of this:



        function foo() {
            var i;
            // ....
            i = 5;
        }

        Do this:



        function foo() {
            var i = 5;
            // ....
        }


        For More Information


        Refactoring is a discipline unto itself. In fact, entire books have been written on the subject. See Martin Fowler's book, Refactoring: Improving the Design of Existing Code (Addison-Wesley, 1999). See also his catalog of refactorings at http://www.refactoring.com/.





















          6.1. Storing Metadata


          Properties are associated with a file or directory by using the svn propset command. The simplest way to set a property is by passing to the svn propset command the property key and value, along with the file to set the property on.



          $ svn propset property_key "property value" repos/trunk/foo.h


          The property key is a string of your choosing, which will be used later for retrieving the associated data from the file. Property keys are handled internally by Subversion as XML, and they are therefore restricted to valid XML names, which is basically any string that contains letters, digits, ., -, and _. For a more formal definition, see the XML standard, available from the World Wide Web Consortium (www.w3.org).


          When choosing property names, it is a good idea to use some sort of naming convention. The naming scheme used by the built-in Subversion properties is to begin each property key with svn:. This might seem like a good convention to adopt for your own property names, but alas, the colon isn't really a valid character in property names; it can be used reliably only in the svn: prefix. It is, however, a sound idea to use prefixes to categorize your properties. You just can't categorize them with a colon. Instead, I suggest using a period to separate a category prefix from a property name. This lets you group properties into categories and name them accordingly, making each property's broad purpose easy to identify and its specific meaning easier to discern and remember. You can also selectively search for all of the properties in a given category using the svn proplist command, as I will discuss later in Section 6.2.1, "Listing Properties."


          As an example, let's say that you use Subversion properties to store automated tests and file ownership information. You can then standardize on two property categories, named test and ownership. A property containing a unit-testing script could be named test.unit, and properties containing the file's author and copyright could be named ownership.author and ownership.copyright.
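
          For instance (the property values here are hypothetical), those categories might be set like this:

          $ svn propset test.unit "run_unit_tests.sh" repos/trunk/foo.h
          $ svn propset ownership.author "J. Smith" repos/trunk/foo.h
          $ svn propset ownership.copyright "Copyright 2005" repos/trunk/foo.h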


          Subversion property values can be of any form, either text or binary. If the property is short, it is easy to provide it as a parameter on the command line (remember to enclose it with quotation marks if it has spaces though). If the property is long, or if it is a binary file, entering the property value on the command line is impractical. In that case, you can direct svn propset to read the property from a file using the --file (-F) option, which directs Subversion to read the property's value from a file, as in the following example.



          $ svn propset property_key --file ~/property_val.txt repos/trunk/foo.h



          6.1.1. Editing Properties


          Sometimes, you don't want to replace an entire property, but would rather make a small change to an existing one. In these cases, Subversion provides you with the svn propedit command, which opens the current property value in your configured editor.[1] After you have finished editing the property's value, save the file and quit. As soon as you quit, Subversion will apply the modified value to the file's or directory's property.

          [1] See Section 7.2.1, "The config File."


          Subversion does not require that a property exist prior to calling svn propedit. If you have a long property to add to a file or directory, it is often easier to call svn propedit instead of svn propset to add the initial property value. Just run the propedit command and type the property value into the new document that opens in the editor. When you're done, save and quit.




          6.1.2. Automatically Setting Properties


          If you have a property that needs to be set for every file of a certain type that's added, it's almost a guarantee that you will forget at least once if you need to set the property manually every time. Fortunately, if the value of the property is static for every file of a certain filename pattern, you can tell Subversion to set the value automatically. All you have to do is set up the Subversion configuration file with the appropriate patterns and values (see Section 7.2.1).
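
          As a sketch (the pattern and the custom property value are hypothetical), the relevant part of the Subversion configuration file might look like the following; the [auto-props] section maps filename patterns to properties, and enable-auto-props turns the feature on:

          [miscellany]
          enable-auto-props = yes

          [auto-props]
          *.sh = svn:executable
          *.c = ownership.copyright=Copyright 2005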




          6.1.3. Committing Properties


          When you run svn propset or svn propedit, Subversion sets the new property value in the working copy, but does not contact the repository. Instead, the property changes are scheduled to be committed to the repository on the next svn commit. You can tell which files and directories will have properties committed on the next svn commit by running svn status. The status command will show all files with modified properties by placing an M in the second column of its output.
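
          For example, a file whose properties (but not contents) have been modified shows up like this (the file name is hypothetical):

          $ svn status
           M     foo.h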


          When you commit a file or directory property to the repository, it is handled just like file data. It is applied to the new revision, but doesn't affect any previous revisions.




          6.1.4. Storing Revision Properties


          Revision properties are stored using the --revprop option to either svn propset or svn propedit. They must be set on a particular revision, so you also need to use the --revision (-r) option when setting or editing a revision property. Be careful when using svn propset, because changes are applied immediately and are not undoable. Any previous data in the revision property will be irretrievably lost. It is almost always better to use svn propedit when working with revision properties, as it is much harder to accidentally delete important data that way.


          As an example, the following command will invoke an editor to edit a property that stores which issue-tracking issue is fixed in the last revision that you committed.



          $ svn status --show-updates
          Status against revision: 2225
          $ svn propedit --revprop --revision 2225 issues.fixes


          You'll notice that I didn't run svn propedit with the HEAD revision label, but instead used svn status --show-updates to get the number of the HEAD revision. I do that to ensure that I am setting the revision property on the revision that I think I am. If another user were to commit a new revision while I was editing the property, HEAD would change to point to the new head of the repository, which is likely not the revision that I want to edit. It's always safer to get the revision number and then use it explicitly.































            Customizing the User Environment


            This section will discuss how to tailor the login shell configuration scripts on a Slackware Linux system. In Chapter 4, Red Hat's login script framework was discussed. Readers should consult that chapter for comparison, but to put it briefly, Red Hat Linux makes use of the config.d drop-in configuration file model. Actually, this section is going to be very simple, since Slackware's user shell scripts are substantially similar to Red Hat's.


            The basic model is the same as that used by Red Hat Linux. The main scripts (for both csh and sh shells) set some basic parameters and then look in the directory /etc/profile.d for additional "drop-in" files for the shell. The sh-based shells (usually bash, ash, ksh, and zsh on Slackware) read files whose names end with .sh; the csh shell (which is almost always tcsh on Linux systems) looks for files whose names end with .csh. This is the same mechanism used by Red Hat Linux, so readers should definitely consult Chapter 4.
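
            As an illustrative sketch (the file name and the PATH addition are hypothetical), a drop-in file for the sh-based shells might look like this:

            # /etc/profile.d/local.sh -- hypothetical drop-in read at login
            PATH="$PATH:/usr/local/bin"
            export PATH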


            However, there are some differences. First and probably most noticeably, Slackware's scripts are quite a bit simpler than Red Hat's, in keeping with the Slackware philosophy. Red Hat's, on the other hand, are arguably more sophisticated and do a bit more. It's the classic tradeoff of simplicity versus functionality, and we see yet again that Slackware leans toward simplicity.


            The other major shell configuration difference between Slackware Linux and Red Hat Linux is in the use of the files. Red Hat uses the files /etc/bashrc and /etc/profile to manage bash and other sh-based shells, respectively. By using a separate /etc/bashrc file, Red Hat is able to configure the environments of bash users to take advantage of extended bash features. Slackware, in contrast, has no /etc/bashrc and simply "reuses" /etc/profile for all sh-derived shells. Also, Slackware curiously places all csh configuration in /etc/csh.login; /etc/csh.cshrc is empty.



            Red Hat uses /etc/csh.cshrc for basic configuration and /etc/csh.login for configuration that's only appropriate for login shells (as opposed to shells that are started to run scripts).




            Changing a User's X Window Desktop Environment



            Chapter 4 mentioned the switchdesk program on Red Hat Linux that is used to change a user's preferred X Window desktop environment; this section will discuss how to accomplish the same task on Slackware Linux. Slackware doesn't have the switchdesk program (or any equivalent) and relies on a more traditional way of accomplishing the task.


            By default, Slackware Linux uses the desktop manager from the KDE project, called KDM. KDM allows the user to select one of the installed desktop environments, and will use that environment for the duration of the user's session.






            Note 

            A desktop manager is the graphical program into which you enter your username and password to log into X. The other alternative is to start X from the command line via the xinit command.



            When the user logs in, KDM executes the script /etc/X11/xdm/Xsession and gives it the name of the desktop the user selected as its first argument. The Xsession script then starts up the desktop environment corresponding to the user's choice. However, if the user has an executable program in her home directory called .xsession, that program (which can be a script) will be executed instead, bypassing any selection the user made in KDM.


            When the user selects a desktop from KDM, the selection is recorded in a file in the user's home directory called .wmrc. This file contains a single word indicating the user's most recently selected desktop. This file is used by KDM to "remember" the user's choice the next time he accesses KDM.


            All that's fine and well, but it doesn't really answer the question of how a user changes her desktop environment. Well, given the mechanism just described, there are two ways. First, the user can simply edit the file ~/.wmrc and change the word in that file to another environment. However, this is rather pointless since it's probably easier to simply select a new desktop from the pull-down menu in the desktop manager. The second way is to create a ~/.xsession file that contains (or more frequently points to, via a symbolic link) a program to start the user's environment. This setting will always take priority over anything the user might select in the desktop manager. For example, the following link command will designate GNOME as the user's desktop environment.



            ln -s /opt/gnome/bin/gnome-session ~/.xsession

            If the user selected default and has no ~/.xsession file, the Xsession script simply executes the default environment by invoking /etc/X11/xinit/xinitrc (which itself is a symbolic link to one of the other files in /etc/X11/xinit, each of which handles a different environment). It's probably becoming clear that the scripts that handle all this are fairly complicated. Other distributions, such as Red Hat Linux, are arguably even more complicated and elaborate. This complexity is why programs like Red Hat's switchdesk are written. Slackware keeps things comparatively simple, and users can change desktops manually by either creating a ~/.xsession file or selecting the desired environment from the desktop manager during the login process.






            Adding New Hardware


            One of the common complaints about Linux systems is that hardware support is occasionally spotty and sometimes difficult to configure. This section will discuss how to install and configure new hardware in Slackware Linux. In Chapter 4, Red Hat Linux's kudzu tool for automatically detecting and configuring new hardware was discussed; unfortunately, Slackware Linux has no equivalent tool, and the procedure is more manually intensive.


            By now, we've seen several examples of the Slackware philosophy of simplicity and self-sufficiency. Perhaps the most explicit example of this is in its support for hardware. Recall that Slackware ships with stock Linux kernels. There's no easy way around it: the process for adding support for new hardware to a Slackware system is simply to configure the Linux kernel itself to support the new hardware (typically by configuring support for the device driver in kernel module form) and then load the module into Slackware.


            In most cases, once you have the Linux kernel either compiled with the device driver built in or compiled as a module, the kernel will either detect the device on startup or else automatically load the driver on demand. Slackware relies on this behavior for setting up hardware, and generally it's pretty straightforward to get the driver loaded. The main difficulty lies in configuring the rest of the system to make use of the new device; for example, adding a USB compact-flash card reader involves not only compiling and loading the kernel modules, but also creating a mount point and modifying the /etc/fstab file appropriately.
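
            For instance (the device name and mount point are hypothetical), such a card reader might get a mount point and an /etc/fstab line like the following:

            mkdir /mnt/flash
            # /etc/fstab entry: let ordinary users mount the card reader
            /dev/sda1  /mnt/flash  vfat  noauto,user  0  0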


            Red Hat's kudzu tool automates most of these tasks. Unfortunately, there's no easy way to do this on Slackware, and so administrators have to roll up their sleeves and dig into the system configuration. If a mount point needs to be created, it'll have to be done manually; if a module needs to be loaded in a certain order, it'll have to be done from /etc/rc.d/rc.local or rc.M; if a new block device (i.e. a disk) is installed, it will need to be configured in the /etc/fstab file.


            This is all fine and well, of course, but it doesn't help the administrator much. There's no checklist that an administrator can go through to install new hardware; when it comes right down to it, there are just too many cases to consider to make a truly comprehensive checklist. What's an administrator to do, then? Hopefully this book will provide the tools and knowledge that are required to do this.






























            8.5. Copying Databases and Starting Replication

            If you're setting up replication with an existing server that already
            contains data, you will need to make an initial backup of the databases and copy the backup to the slave server. I'll list
            the recommended method first, followed by some alternatives and their
            limitations.

            To get a snapshot of the database in a consistent state, you need to
            shut down the server while you make a copy of the data, or at least
            prevent users from changing data. Considering that once you set up
            replication you may never have to shut down your master server for backups
            again, explain to management that it's worth inconveniencing the users
            this one time to get a clean, consistent backup. The following sections
            will explain how to lock the tables. Note that you can allow users to make
            changes as soon as your copy is made. If they make changes before
            replication starts, MySQL can easily recognize and incorporate those
            changes into the slave.

            8.5.1. Using mysqldump

            This utility, described in Chapter 16,
            creates a file of SQL statements that can later be executed to recreate
            databases and their contents. For the purposes of setting up
            replication, use the following options while running the utility from
            the command line on the master server:

            mysqldump --user=root --password=my_pwd \
            --extended-insert --all-databases \
            --ignore-table=mysql.user --master-data > /tmp/backup.sql


            The result is a text file (backup.sql)
            containing SQL statements to create all of the master's databases and
            tables and insert their data. Here is an explanation of some of the
            special options shown:


            --extended-insert

            This option creates multiple-row INSERT
            statements and thereby makes the resulting dump file smaller. It also
            allows the backup to run faster.


            --ignore-table

            This option is used here so that the usernames and passwords
            won't be copied. This is a good security precaution if the slave
            will have different users, and especially if it will be used only
            for backups of the master. Unfortunately, there is no easy way to
            exclude the entire mysql database containing
            user information. You could list all the tables in that database
            to be excluded, but they have to be listed separately, and that
            becomes cumbersome. The only table that contains passwords is the
            user table, so it may be the only one that
            matters. However, it depends on whether you set security on a
            database, table, or other basis, and therefore want to protect
            that user information.


            --master-data

            This option locks all of the tables during the dump to
            prevent data from being changed, but allows users to continue
            reading the tables. This option also adds a few lines like the
            following to the end of the dump file:

            --
            -- Position to start replication from
            --

            CHANGE MASTER TO MASTER_LOG_FILE='bin.000846', MASTER_LOG_POS=427;


            When the dump file is executed on the slave server, these
            lines will record the name of the master's binary log file and the
            position in the log at the time of the backup, while the tables
            were locked. When replication is started, these lines will provide
            this information to the master so it will know the point in the
            master's binary log to begin sending entries to the slave. This is
            meant to ensure that any data that changes while you set up the
            slave server isn't missed.

            To execute the dump file and thereby set up the databases and data
            on the slave server, copy the dump file generated by
            mysqldump to the slave server. The MySQL server needs
            to be running on the slave, but not replication. Run the mysql client through a command
            such as the following on the slave:

            mysql --user=root --password=my_pwd < /tmp/backup.sql


            This will execute all of the SQL statements in the dump file,
            creating a copy of the master's databases and data on the slave.

            8.5.2. Alternative Methods for Making Copies

            If you peruse MySQL documentation, you might get the idea
            that the [click here] statement is ideal for making a
            copy, but it is actually not very feasible. First, it works only on
            MyISAM tables. Second, because it performs a global read lock on the
            master while it is making a backup, it prevents the master from serving
            users for some time. Finally, it can be very slow and depends on good
            network connectivity (so it can time out while copying data). Basically,
            the statement is a nice idea, but it's not very practical or dependable
            in most situations. It has been deprecated by MySQL AB and will be
            removed from future releases.

            A better alternative is to drop down to the operating system level
            and copy the raw files containing your schemas and data. To leave the
            server up but prevent changes to data before you make a copy of the
            MySQL data directory, you could put a read-only lock on the tables by entering the following
            command:

            FLUSH TABLES WITH READ LOCK;


            This statement will commit any transactions that may be occurring
            on the server, so be careful and make sure the lock is actually in place
            before you continue. Then, without disconnecting the client that issued
            the statement, copy the data directory to an alternative directory. Once
            this is completed, issue an UNLOCK TABLES statement
            in the client that flushed and locked the tables. After that, the master
            responds to updates as usual, while you need only transfer the copy of
            the data directory to the slave server, putting it into the slave
            server's data directory. Be sure to change the ownership of all of the
            files and directories to mysql. In Linux, this is
            done by entering the following statement as
            root:

            chown -R mysql:mysql /path_to_data
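
            Putting the pieces together, a session might look like the following sketch (the paths are hypothetical; keep the client that holds the lock connected while the copy runs):

            mysql> FLUSH TABLES WITH READ LOCK;
            $ cp -Rp /var/lib/mysql /tmp/mysql-snapshot   # from a second shell
            mysql> UNLOCK TABLES;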


            You will run into a complication with this method of copying the
            data directory if you have InnoDB tables in your databases, because they
            are not stored in the data directory. Also, if you don't have
            administrative access to the filesystem to be able to manually copy the
            data directory, you won't be able to use this method. This is why
            mysqldump remains the recommended method for copying
            the master's data.











































            11.4 Siemens Four Views


            The Siemens approach uses four views to document an architecture. The four views and their associated design tasks are shown in Figure 11.1. The first task for each view is global analysis. The second and third groups of tasks are the central and final design tasks, which define the elements of the architecture view, the relationships among them, and important properties.


            Figure 11.1. The Siemens Four Views approach to software architecture (Adapted from Hofmeister, Nord, and Soni 2000, p. 20)



            11.4.1 Global Analysis


            In global analysis, you identify the factors that influence the architecture and analyze them to derive strategies for designing the architecture. This provides supporting documentation that captures the analysis of the factors that influence the architecture and the rationale for why the design decisions reflected in the view were made.


            11.4.2 Conceptual Architecture View


            The conceptual architecture view explains how the system's functionality is mapped to components and connectors. This view is closest to the application domain because it is the least constrained by the software and hardware platforms. Documenting the conceptual architecture view can be done by using the C&C viewtype. There is a close correspondence between the Siemens terminology and our terminology (see Table 11.7).























































            Table 11.7. Siemens Four Views conceptual architecture view

                        Siemens Four Views       Our Term
            Elements    CComponent               Component
                        CPort                    Port
                        CConnector               Connector
                        CRole                    Role
                        Protocol                 Protocol
            Relations   Composition              Decomposition
                        Cbinding                 Binding
                        Cconnection              Attachment
                        Obeys, obeys conjugate   Element property


            11.4.3 Module Architecture View


            The module architecture view explains how the components, connectors, ports, and roles are mapped to abstract modules and their interfaces. The system is decomposed into modules and subsystems. A module can also be assigned to a layer, which then constrains its dependencies on other modules.


            Documenting the module architecture view can be done by using the module viewtype. There is a close correspondence between the Siemens terminology and our terminology. To describe the relationships between elements of the conceptual view and the module view, the mapping, as discussed in Section 6.3, should be documented. See Table 11.8.




























































            Table 11.8. Siemens Four Views module architecture view

                        Siemens Four Views                       Our Term
            Elements    Module                                   Module
                        Interface                                Interface
                        Subsystem                                Subsystem
                        Layer                                    Layer
            Relations   Contain                                  Aggregation
                        Composition                              Decomposition
                        Use                                      Uses, allowed to use
                        Require, provide                         Element property
                        Implement (module: conceptual element)   Cross-view mapping
                        Assigned to (module: layer)              Property of a layer


            11.4.4 Execution Architecture View


            The execution architecture view explains how the system's functionality is mapped to runtime platform elements, such as processes and shared libraries. Platform elements consume platform resources that are assigned to a hardware resource.


            Documenting the execution architecture view can be done by using the communicating-processes style of the C&C viewtype and the deployment style of the allocation viewtype. To describe an execution configuration in the execution architecture view, start with the components in the communicating-processes style (task, process, thread) and connectors, based on the communication connector. Add or refine existing component types for runtime entities: queue, shared memory, DLL, socket, file, and shared library. The communication connector is extended to include a use-mechanism relation to possible communication mechanisms, such as IPC, RPC, or DCOM. Use the deployment style as a guide to describe the execution configuration mapped to hardware devices. To describe the relationships between elements of the module view and the execution view, the mapping, as discussed in Section 6.3, should be documented. See Table 11.9.



































            Table 11.9. Siemens Four Views execution architecture view

                        Siemens Four Views                     Our Term
            Elements    Runtime entity                         Concurrent units: task, process, thread
                        Communication path                     Communication: data exchange, control
            Relations   Use mechanism
                        Communicate over                       Attachment relation
                        Assigned to (module: runtime entity)   Cross-view mapping


            11.4.5 Code Architecture View


            The code architecture view explains how the software implementing the system is organized into source and deployment components. Documenting the code architecture view can be done by using the implementation style of the allocation viewtype. To describe the code architecture view, start with the packaging units, such as files and directories, in the implementation style to describe the source components and their allocation in the development environment. You will need to create new styles in the module and allocation viewtypes for describing the other elements for intermediate and deployment components, their relations, and how they are organized and packaged in the development environment. To describe the relationships between elements of the execution view and the executable elements in the code view, the mapping, as discussed in Section 6.3, should be documented.


            11.4.6 Summary


            If you wish to use views prescribed by the Siemens Four Views approach, you can do so as shown in the following list:

























            To Achieve This Siemens Four Views View   Use This Approach
            Conceptual architecture                   One or more styles in the C&C viewtype
            Module architecture                       One or more styles in the module viewtype
            Execution architecture                    Deployment style in the allocation viewtype; for processes, communicating-processes style in the C&C viewtype
            Code architecture                         Implementation style in the allocation viewtype


            Like RUP, the Siemens Four Views approach does not preclude additional information, and so you are free to (and should) consider what other views may be helpful to your project. And, as with RUP, these views form the kernel of the architecture only; you should complete the package by adding the supporting documentation for each view and the documentation beyond views, as discussed in Chapter 10.
























              11.6 Background Processes in ush


              The main operational properties of a background process are that the shell does not wait for it to complete and that it is not terminated by a SIGINT sent from the keyboard. A background process appears to run independently of the terminal. This section explores handling of signals for background processes. A correctly working shell must prevent terminal-generated signals and input from being delivered to a background process and must handle the problem of having a child divorced from its controlling terminal.


              Program 11.11 shows a modification of ush4 that allows a command to be executed in the background. An ampersand (&) at the end of a command line specifies that ush5 should run the command in the background. The program assumes that there is at most one & on the line and that, if present, it is at the end. The shell determines whether the command is to be executed in the background before forking the child, since the parent and the child must both know this information. If the command is executed in the background, the child calls setpgid so that it is no longer in the foreground process group of its session. The parent shell does not wait for background children.



              Program 11.11 ush5.c

              A shell that attempts to handle background processes by changing their process groups.



              #include <limits.h>
              #include <setjmp.h>
              #include <signal.h>
              #include <stdio.h>
              #include <string.h>
              #include <unistd.h>
              #include <sys/types.h>
              #include <sys/wait.h>
              #define BACK_SYMBOL '&'
              #define PROMPT_STRING "ush5>>"
              #define QUIT_STRING "q"

              void executecmd(char *incmd);
              int signalsetup(struct sigaction *def, sigset_t *mask, void (*handler)(int));

              static sigjmp_buf jumptoprompt;
              static volatile sig_atomic_t okaytojump = 0;

              /* ARGSUSED */
              static void jumphd(int signalnum) {
                  if (!okaytojump) return;
                  okaytojump = 0;
                  siglongjmp(jumptoprompt, 1);
              }

              int main(void) {
                  char *backp;
                  sigset_t blockmask;
                  pid_t childpid;
                  struct sigaction defhandler;
                  int inbackground;
                  char inbuf[MAX_CANON];
                  int len;

                  if (signalsetup(&defhandler, &blockmask, jumphd) == -1) {
                      perror("Failed to set up shell signal handling");
                      return 1;
                  }

                  for ( ; ; ) {
                      if ((sigsetjmp(jumptoprompt, 1)) &&    /* if return from signal, \n */
                          (fputs("\n", stdout) == EOF))
                          continue;
                      okaytojump = 1;
                      printf("%d", (int)getpid());
                      if (fputs(PROMPT_STRING, stdout) == EOF)
                          continue;
                      if (fgets(inbuf, MAX_CANON, stdin) == NULL)
                          continue;
                      len = strlen(inbuf);
                      if (inbuf[len - 1] == '\n')
                          inbuf[len - 1] = 0;
                      if (strcmp(inbuf, QUIT_STRING) == 0)
                          break;
                      if ((backp = strchr(inbuf, BACK_SYMBOL)) == NULL)
                          inbackground = 0;
                      else {
                          inbackground = 1;
                          *backp = 0;
                      }
                      if (sigprocmask(SIG_BLOCK, &blockmask, NULL) == -1)
                          perror("Failed to block signals");
                      if ((childpid = fork()) == -1)
                          perror("Failed to fork");
                      else if (childpid == 0) {              /* child */
                          if (inbackground && (setpgid(0, 0) == -1))
                              return 1;
                          if ((sigaction(SIGINT, &defhandler, NULL) == -1) ||
                              (sigaction(SIGQUIT, &defhandler, NULL) == -1) ||
                              (sigprocmask(SIG_UNBLOCK, &blockmask, NULL) == -1)) {
                              perror("Failed to set signal handling for command");
                              return 1;
                          }
                          executecmd(inbuf);
                          return 1;
                      }
                      if (sigprocmask(SIG_UNBLOCK, &blockmask, NULL) == -1)
                          perror("Failed to unblock signals");
                      if (!inbackground)    /* only wait for child not in background */
                          wait(NULL);
                  }
                  return 0;
              }



              Exercise 11.26

              Execute the command ls & several times under ush5. Then, execute ps -a (still under this shell). Observe that the previous ls processes still appear as <defunct>. Exit from the shell and execute ps -a again. Explain the status of these processes before and after the shell exits.


              Answer:


              Since no process has waited for them, the background processes become zombie processes. They stay in this state until the shell exits. At that time, init becomes the parent of these processes, and since init periodically waits for its children, the zombies eventually die.



              The shell in Program 11.12 fixes the problem of zombie or defunct processes. When a command is to be run in the background, the shell does an extra call to fork. The first child exits immediately, leaving the background process as an orphan that can then be adopted by init. The shell now waits for all children, including background processes, since the background children exit immediately and the grandchildren are adopted by init.




              Program 11.12 ush6.c

              A shell that cleans up zombie background processes.



              #include <errno.h>
              #include <limits.h>
              #include <setjmp.h>
              #include <signal.h>
              #include <stdio.h>
              #include <string.h>
              #include <unistd.h>
              #include <sys/types.h>
              #include <sys/wait.h>
              #define BACK_SYMBOL '&'
              #define PROMPT_STRING ">>"
              #define QUIT_STRING "q"

              void executecmd(char *incmd);
              int signalsetup(struct sigaction *def, struct sigaction *catch,
                              sigset_t *mask, void (*handler)(int));

              static sigjmp_buf jumptoprompt;
              static volatile sig_atomic_t okaytojump = 0;

              /* ARGSUSED */
              static void jumphd(int signalnum) {
                  if (!okaytojump) return;
                  okaytojump = 0;
                  siglongjmp(jumptoprompt, 1);
              }

              int main(void) {
                  char *backp;
                  sigset_t blockmask;
                  pid_t childpid;
                  struct sigaction defhandler, handler;
                  int inbackground;
                  char inbuf[MAX_CANON+1];

                  if (signalsetup(&defhandler, &handler, &blockmask, jumphd) == -1) {
                      perror("Failed to set up shell signal handling");
                      return 1;
                  }

                  for ( ; ; ) {
                      if ((sigsetjmp(jumptoprompt, 1)) &&    /* if return from signal, \n */
                          (fputs("\n", stdout) == EOF))
                          continue;
                      okaytojump = 1;
                      if (fputs(PROMPT_STRING, stdout) == EOF)
                          continue;
                      if (fgets(inbuf, MAX_CANON, stdin) == NULL)
                          continue;
                      if (*(inbuf + strlen(inbuf) - 1) == '\n')
                          *(inbuf + strlen(inbuf) - 1) = 0;
                      if (strcmp(inbuf, QUIT_STRING) == 0)
                          break;
                      if ((backp = strchr(inbuf, BACK_SYMBOL)) == NULL)
                          inbackground = 0;
                      else {
                          inbackground = 1;
                          *backp = 0;
                      }
                      if (sigprocmask(SIG_BLOCK, &blockmask, NULL) == -1)
                          perror("Failed to block signals");
                      if ((childpid = fork()) == -1) {
                          perror("Failed to fork child to execute command");
                          return 1;
                      } else if (childpid == 0) {            /* child */
                          if (inbackground) {
                              if (fork() != 0)               /* first child exits at once */
                                  return 0;
                              if (setpgid(0, 0) == -1)       /* grandchild leaves foreground group */
                                  return 1;
                          }
                          if ((sigaction(SIGINT, &defhandler, NULL) == -1) ||
                              (sigaction(SIGQUIT, &defhandler, NULL) == -1) ||
                              (sigprocmask(SIG_UNBLOCK, &blockmask, NULL) == -1)) {
                              perror("Failed to set signal handling for command");
                              return 1;
                          }
                          executecmd(inbuf);
                          perror("Failed to execute command");
                          return 1;
                      }
                      if (sigprocmask(SIG_UNBLOCK, &blockmask, NULL) == -1)
                          perror("Failed to unblock signals");
                      wait(NULL);
                  }
                  return 0;
              }



              Exercise 11.27

              Execute a long-running background process such as rusers & under the shell given in Program 11.12. What happens when you enter Ctrl-C?


              Answer:


              The background process is not interrupted because it is not part of the foreground process group. The parent shell catches SIGINT and jumps back to the main prompt.




              Exercise 11.28

              Use the showid function from Exercise 11.25 to determine which of the three processes in a pipeline becomes the process group leader and which are children of the shell in ush6. Do this for pipelines started both in the foreground and in the background.


              Answer:


              If the parent starts the pipeline in the foreground, all the processes have the same process group as the shell and the shell is the process group leader. The first process in the pipeline is a child of the shell and the others are grandchildren. If the shell starts the pipeline in the background, the first process in the pipeline is the process group leader. Its parent will eventually be init. The other processes are children or grandchildren of the first process in the pipeline.
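
              The showid function itself comes from Exercise 11.25, which is not reproduced here. A rough stand-in (the label parameter and message format are assumptions) might be:

              #include <stdio.h>
              #include <unistd.h>

              /* Print the process ID, parent process ID and process group ID,
                 preceded by a caller-supplied label. */
              static void showid(const char *label) {
                 fprintf(stderr, "%s: pid = %ld, ppid = %ld, pgid = %ld\n",
                         label, (long)getpid(), (long)getppid(), (long)getpgrp());
              }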



              The zombie child problem is more complicated if the shell does job control. In this case, the shell must be able to detect whether a background process has stopped because of a signal (e.g., SIGSTOP). The waitpid function has an option, WUNTRACED, for detecting children stopped by signals, but it reports only on direct children, not on grandchildren. The background process of Program 11.12 is a grandchild because of the extra fork call, so ush6 cannot detect it.
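
              As an illustration of what such detection looks like when background commands remain direct children, a shell could poll with waitpid as in the following sketch; the function name and messages are assumptions, not code from the text.

              #include <stdio.h>
              #include <sys/types.h>
              #include <sys/wait.h>

              /* Poll for background children that have terminated or been stopped
                 by a signal.  WUNTRACED reports stopped children; WNOHANG keeps
                 the call from blocking.  Only direct children are visible here. */
              static void pollbackground(void) {
                 pid_t pid;
                 int status;

                 while ((pid = waitpid(-1, &status, WNOHANG | WUNTRACED)) > 0) {
                    if (WIFSTOPPED(status))
                       printf("[%ld] stopped by signal %d\n",
                              (long)pid, WSTOPSIG(status));
                    else if (WIFSIGNALED(status))
                       printf("[%ld] terminated by signal %d\n",
                              (long)pid, WTERMSIG(status));
                    else
                       printf("[%ld] exited\n", (long)pid);
                 }
              }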


              Program 11.13 shows a direct approach, using waitpid, for handling zombies. Rather than forking an extra child, ush7 keeps background processes as direct children and cleans them up by calling waitpid with the WNOHANG option on each pass through the command loop. The -1 for the first argument of waitpid means wait for any child. Because the background processes are direct children, a shell that does job control could also detect when they are stopped by a signal. If the command is not a background command, ush7 explicitly waits for the corresponding child to complete.



              Program 11.13 ush7.c

              A shell that handles zombie background processes by using waitpid.



              #include <limits.h>
              #include <setjmp.h>
              #include <signal.h>
              #include <stdio.h>
              #include <string.h>
              #include <unistd.h>
              #include <sys/types.h>
              #include <sys/wait.h>
              #define BACK_SYMBOL '&'
              #define PROMPT_STRING "ush7>>"
              #define QUIT_STRING "q"

              void executecmd(char *incmd);
              int signalsetup(struct sigaction *def, sigset_t *mask,
                              void (*handler)(int));

              static sigjmp_buf jumptoprompt;
              static volatile sig_atomic_t okaytojump = 0;

              /* ARGSUSED */
              static void jumphd(int signalnum) {
                 if (!okaytojump)
                    return;
                 okaytojump = 0;
                 siglongjmp(jumptoprompt, 1);
              }

              int main(void) {
                 char *backp;
                 sigset_t blockmask;
                 pid_t childpid;
                 struct sigaction defhandler;
                 int inbackground;
                 char inbuf[MAX_CANON];
                 int len;

                 if (signalsetup(&defhandler, &blockmask, jumphd) == -1) {
                    perror("Failed to set up shell signal handling");
                    return 1;
                 }
                 for( ; ; ) {
                    if ((sigsetjmp(jumptoprompt, 1)) &&  /* if return from signal, newline */
                        (fputs("\n", stdout) == EOF))
                       continue;
                    okaytojump = 1;
                    printf("%d", (int)getpid());
                    if (fputs(PROMPT_STRING, stdout) == EOF)
                       continue;
                    if (fgets(inbuf, MAX_CANON, stdin) == NULL)
                       continue;
                    len = strlen(inbuf);
                    if (inbuf[len - 1] == '\n')
                       inbuf[len - 1] = 0;
                    if (strcmp(inbuf, QUIT_STRING) == 0)
                       break;
                    if ((backp = strchr(inbuf, BACK_SYMBOL)) == NULL)
                       inbackground = 0;
                    else {
                       inbackground = 1;
                       *backp = 0;
                    }
                    if (sigprocmask(SIG_BLOCK, &blockmask, NULL) == -1)
                       perror("Failed to block signals");
                    if ((childpid = fork()) == -1)
                       perror("Failed to fork");
                    else if (childpid == 0) {                /* child code */
                       if (inbackground && (setpgid(0, 0) == -1)) /* new process group */
                          return 1;
                       if ((sigaction(SIGINT, &defhandler, NULL) == -1) ||
                           (sigaction(SIGQUIT, &defhandler, NULL) == -1) ||
                           (sigprocmask(SIG_UNBLOCK, &blockmask, NULL) == -1)) {
                          perror("Failed to set signal handling for command ");
                          return 1;
                       }
                       executecmd(inbuf);
                       return 1;
                    }
                    if (sigprocmask(SIG_UNBLOCK, &blockmask, NULL) == -1)
                       perror("Failed to unblock signals");
                    if (!inbackground)    /* wait explicitly for foreground process */
                       waitpid(childpid, NULL, 0);
                    while (waitpid(-1, NULL, WNOHANG) > 0)
                       ;                  /* clean up completed background procs */
                 }
                 return 0;
              }



              Exercise 11.29

              Repeat Exercise 11.28 for Program 11.13.


              Answer:


              The results are the same as for Exercise 11.28 except that when started in the background, the first process in the pipeline is a child of the shell.




              Exercise 11.30

              Compare the behavior of ush6 and ush7 under the following scenario. Start a foreground process that ignores SIGINT. While that process is executing, enter Ctrl-C.


              Answer:


              The ush6 shell jumps back to the main loop before waiting for the process. If this shell then executes another long-running command and the first command terminates, the shell's wait picks up the wrong command, and the shell returns to the prompt before the second command completes. This difficulty does not arise in ush7, since ush7 waits for its foreground command by passing that specific child's PID to waitpid.

















                17.10 Measuring Process Compliance


                Organizations that are using the CMM or CMMI to guide their process improvement activities have typically established a Software Quality Assurance (SQA) group. SQA is responsible for verifying that project activities and work products conform to the project's designated processes, procedures, standards, and requirements. SQA conducts audits and reviews on various aspects of the project work and reports the results.


                SQA groups generally establish checklists of items to be verified. The checklist for verifying that a peer review was conducted properly may include items such as:



                • Was a qualified peer review moderator designated for the review?


                • Did the review team have the appropriate skills to review the work product adequately?


                • Was the work product to be reviewed published at least three days in advance of the review meeting?


                • Did the moderator verify that at least 80% of the invited reviewers participated in the review?


                • Did the scribe note the preparation time on the peer review data sheet?


                • Did the moderator verify that the reviewers were adequately prepared for the review?


                • Did the scribe note the defects, their location in the work product, and the responsible person on the peer review data sheet?


                • Was the peer review data entered into the peer review database?


                • Were the defects resolved?


                • Did the moderator (or designee) verify the resolution of the defects?



                Verifying and measuring compliance can identify:



                • Areas where compliance to the process is degrading


                • Process steps in which additional training or coaching is necessary


                • Process elements that may warrant additional tailoring guidelines (or replacement)


                • Elements of the process that are deemed administratively burdensome


                • Areas where tool support may be beneficial



                Project personnel are rarely noncompliant just to be belligerent. Unless the process element is relatively new and therefore unproven, noncompliance usually indicates that the work pattern has changed since the element was introduced. Monitoring process compliance trends can detect such shifts in project behavior and lead to timely corrective action.


                If the SEPG has established a process capability baseline, the results at each process step should be verified against these desired outcomes. Doing so keeps management informed and engaged on the basis of the expectations set when the project began, and it also helps maintain commitment to, and focus on, the project.
















                  File Locking


                  An important issue in any system with multiple processes is coordination and synchronization of access to shared objects, such as files.


                  Windows can lock files, in whole or in part, so that no other process (running program) can access the locked file region. File locks can be read-only (shared) or read-write (exclusive). Most important, the locks belong to the process. Any attempt to access part of a file (using ReadFile or WriteFile) in violation of an existing lock will fail because the locks are mandatory at the process level. Any attempt to obtain a conflicting lock will also fail even if the process already owns the lock. File locking is a limited form of synchronization between concurrent processes and threads; synchronization is covered in much more general terms starting in Chapter 8.


                  The most general function is LockFileEx. The less general function, LockFile, can be used on Windows 9x.


                  LockFileEx is a member of the extended I/O class of functions, so the overlapped structure, used earlier to specify file position to ReadFile and WriteFile, is required to specify the 64-bit file position and range of the file region that is to be locked.



                  BOOL LockFileEx (
                     HANDLE hFile,
                     DWORD dwFlags,
                     DWORD dwReserved,
                     DWORD nNumberOfBytesToLockLow,
                     DWORD nNumberOfBytesToLockHigh,
                     LPOVERLAPPED lpOverlapped)


                  LockFileEx locks a byte range in an open file for either shared (multiple readers) or exclusive (one reader-writer) access.


                  Parameters


                  hFile is the handle of an open file. The handle must have GENERIC_READ or both GENERIC_READ and GENERIC_WRITE file access.


                  dwFlags determines the lock mode and whether to wait for the lock to become available.


                  LOCKFILE_EXCLUSIVE_LOCK, if set, indicates a request for an exclusive, read-write lock. Otherwise, it requests a shared (read-only) lock.


                  LOCKFILE_FAIL_IMMEDIATELY, if set, specifies that the function should return immediately with FALSE if the lock cannot be acquired. Otherwise, the call blocks until the lock becomes available.


                  dwReserved must be 0. The two parameters with the length of the byte range are self-explanatory.


                  lpOverlapped points to an OVERLAPPED data structure containing the start of the byte range. The overlapped structure contains three data members that must be set (the others are ignored); the first two determine the start location for the locked region.


                  • DWORD Offset (this is the correct name; not OffsetLow).

                  • DWORD OffsetHigh.

                  • HANDLE hEvent should be set to 0.


                  A file lock is removed using a corresponding UnlockFileEx call; all the same parameters are used except dwFlags.



                  BOOL UnlockFileEx (
                     HANDLE hFile,
                     DWORD dwReserved,
                     DWORD nNumberOfBytesToUnlockLow,
                     DWORD nNumberOfBytesToUnlockHigh,
                     LPOVERLAPPED lpOverlapped)
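
                  As a usage illustration (not one of the book's numbered programs), the following sketch takes an exclusive lock on the first 100 bytes of a file and then releases it; the file name test.dat is an assumption.

                  #include <windows.h>
                  #include <stdio.h>

                  int main(void) {
                     HANDLE hFile;
                     OVERLAPPED ov = {0};  /* Offset, OffsetHigh, hEvent all zero */

                     hFile = CreateFileA("test.dat", GENERIC_READ | GENERIC_WRITE,
                                         FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                         OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
                     if (hFile == INVALID_HANDLE_VALUE) {
                        fprintf(stderr, "CreateFileA failed: %lu\n", GetLastError());
                        return 1;
                     }
                     /* Block until an exclusive lock on bytes 0..99 is granted. */
                     if (!LockFileEx(hFile, LOCKFILE_EXCLUSIVE_LOCK, 0, 100, 0, &ov)) {
                        fprintf(stderr, "LockFileEx failed: %lu\n", GetLastError());
                        CloseHandle(hFile);
                        return 1;
                     }
                     /* ... read or modify the locked region here ... */
                     if (!UnlockFileEx(hFile, 0, 100, 0, &ov)) /* same range as the lock */
                        fprintf(stderr, "UnlockFileEx failed: %lu\n", GetLastError());
                     CloseHandle(hFile);
                     return 0;
                  }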


                  You should consider several factors when using file locks.


                  • The unlock must use exactly the same range as a preceding lock. It is not possible, for example, to combine two previous lock ranges or unlock a portion of a locked range. An attempt to unlock a region that does not correspond exactly with an existing lock will fail; the function returns FALSE and the system error message indicates that the lock does not exist.

                  • Locks cannot overlap existing locked regions in a file if a conflict would result.

                  • It is possible to lock beyond the range of a file's length. This approach could be useful when a process or thread extends a file.

                  • Locks are not inherited by a newly created process.


                  Table 3-1 shows the lock logic when all or part of a range already has a lock. This logic applies even if the lock is owned by the same process that is making the new request.


                  Table 3-1. Lock Request Logic

                                                Requested Lock Type
                  Existing Lock                 Shared Lock      Exclusive Lock
                  None                          Granted          Granted
                  Shared lock (one or more)     Granted          Refused
                  Exclusive lock                Refused          Refused



                  Table 3-2 shows the logic when a process attempts a read or write operation on a file region with one or more locks, owned by a separate process, on all or part of the read-write region. A failed read or write may take the form of a partially completed operation if only a portion of the read or write record is locked.


                  Table 3-2. Locks and I/O Operation

                                                I/O Operation
                  Existing Lock                 Read                              Write
                  None                          Succeeds                          Succeeds
                  Shared lock (one or more)     Succeeds; the calling process     Fails
                                                need not own a lock on the
                                                file region
                  Exclusive lock                Succeeds if the calling process   Succeeds if the calling process
                                                owns the lock; fails otherwise    owns the lock; fails otherwise

                  Read and write operations are normally in the form of ReadFile and WriteFile calls or their extended versions, ReadFileEx and WriteFileEx. Diagnosing a read or write failure requires calling GetLastError.
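
                  For example, a lock conflict can be distinguished from other failures as in this sketch; the wrapper function is hypothetical.

                  #include <windows.h>
                  #include <stdio.h>

                  /* Attempt a read and report whether a lock conflict caused a
                     failure.  Returns the number of bytes read, or 0 on failure. */
                  static DWORD readwithdiagnosis(HANDLE hFile, void *buffer, DWORD nBytes) {
                     DWORD nRead;

                     if (ReadFile(hFile, buffer, nBytes, &nRead, NULL))
                        return nRead;
                     if (GetLastError() == ERROR_LOCK_VIOLATION)
                        fprintf(stderr, "Read failed: region locked by another process\n");
                     else
                        fprintf(stderr, "Read failed: error %lu\n", GetLastError());
                     return 0;
                  }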


                  Accessing memory that is mapped to a file is another form of file I/O, as will be discussed in Chapter 5. Lock conflicts are not detected at the time of memory reference; rather, they are detected at the time that the MapViewOfFile function is called. This function makes a part of the file available to the process, so the lock must be checked at that time.


                  The LockFile function is a limited, special case and is a form of advisory locking. It can be used on Windows 9x, which does not support LockFileEx. Only exclusive access is available, and LockFile returns immediately. That is, LockFile does not block. Test the return value to determine whether you obtained the lock.
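
                  A short sketch of that non-blocking pattern follows; the function name and the 100-byte region are assumptions.

                  #include <windows.h>
                  #include <stdio.h>

                  /* LockFile returns immediately; the return value, not blocking,
                     indicates whether the exclusive lock was obtained. */
                  static BOOL trylockregion(HANDLE hFile) {
                     if (!LockFile(hFile, 0, 0, 100, 0)) {  /* bytes 0..99 */
                        fprintf(stderr, "Region is already locked; try again later\n");
                        return FALSE;
                     }
                     return TRUE;                           /* release with UnlockFile */
                  }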


                  Releasing File Locks


                  Every successful LockFileEx call must be followed by a single matching call to UnlockFileEx (the same is true for LockFile and UnlockFile). If a program fails to release a lock or holds the lock longer than necessary, other programs may not be able to proceed, or, at the very least, their performance will be negatively impacted. Therefore, programs should be carefully designed and implemented so that locks are released as soon as possible, and logic that might cause the program to skip the unlock should be avoided.


                  Termination handlers (Chapter 4) are a useful way to ensure that the unlock is performed.
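
                  For example, a hypothetical update routine could bracket its critical section with __try/__finally so that the unlock runs on every exit path, even an early return or an exception.

                  #include <windows.h>

                  /* Lock a 100-byte region, update it, and guarantee the unlock with
                     a termination handler.  The region and routine name are assumptions. */
                  void updatelockedregion(HANDLE hFile, LPOVERLAPPED pOv) {
                     if (!LockFileEx(hFile, LOCKFILE_EXCLUSIVE_LOCK, 0, 100, 0, pOv))
                        return;
                     __try {
                        /* ... modify the locked region ... */
                     }
                     __finally {
                        UnlockFileEx(hFile, 0, 100, 0, pOv);  /* always executed */
                     }
                  }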


                  Lock Logic Consequences


                  Although the file lock logic shown in Tables 3-1 and 3-2 is natural, it has consequences that may be unexpected and cause unintended program defects. Here are some examples.


                  • Suppose that processes A and B periodically obtain shared locks on a file, and that process C blocks while attempting to gain an exclusive lock on the same file after A gets its shared lock. B can still gain a new shared lock even though C is blocked, and C remains blocked until every shared lock is released, including shared locks acquired after C blocked. In this scenario, C could be blocked forever, even though all the other processes manage their shared locks properly.

                  • Assume that process A has a shared lock on the file and that process B attempts to read the file without obtaining a shared lock first. The read will still succeed even though the reading process does not own any lock on the file because the read operation does not conflict with the existing shared lock.

                  • These statements apply both to entire files and to regions.

                  • A read or write may be able to complete a portion of its request before encountering a conflicting lock. The read or write will return FALSE, and the byte transfer count will be less than the number requested.


                  Using File Locks

                  File locking examples are deferred until Chapter 6, which covers process management. Programs 4-2, 6-4, 6-5, and 6-6 use locks to ensure that only one process at a time can modify a file.



                  UNIX has advisory file locking; an attempt to obtain a lock may fail (the logic is the same as in Table 3-1), but the process can still perform the I/O. Therefore, UNIX can achieve locking between cooperating processes, but any other process can violate the protocol.


                  To obtain an advisory lock, use options to the fcntl function. The commands (the second parameter) are F_SETLK, F_SETLKW (to wait), and F_GETLK. A flock structure, passed as the third parameter, specifies the lock type (one of F_RDLCK, F_WRLCK, or F_UNLCK) and the byte range.
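
                  A minimal sketch of acquiring an advisory write lock on the first 100 bytes of an open file (the function name is an assumption):

                  #include <fcntl.h>
                  #include <stdio.h>
                  #include <unistd.h>

                  /* Block (F_SETLKW) until a write lock on bytes 0..99 of fd is granted. */
                  static int lockregion(int fd) {
                     struct flock lock;

                     lock.l_type = F_WRLCK;     /* exclusive (write) lock */
                     lock.l_whence = SEEK_SET;  /* offset relative to start of file */
                     lock.l_start = 0;
                     lock.l_len = 100;          /* 0 would mean "to end of file" */
                     if (fcntl(fd, F_SETLKW, &lock) == -1) {
                        perror("Failed to lock region");
                        return -1;
                     }
                     return 0;
                  }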


                  Mandatory locking is also available in some UNIX systems; it is enabled by setting a file's set-group-ID bit while clearing its group-execute bit, both with chmod.


                  UNIX file locking behavior differs from Windows in many other ways. For example, locks are preserved through an exec call.


                  The standard C library does not support file locking, although Visual C++ does supply nonstandard extensions for locking.









