Thursday, November 12, 2009

Up Front



Before even considering burning a flash device, make
sure you’ve made adequate preparations for the development process. The
following sections discuss some preliminary measures. Many of these
measures might seem obvious, but they are nevertheless important enough
to mention. The tips discussed in the following sections include:




  • Get involved with the hardware design




  • Get to know the hardware and be nice to the designer




  • Have local copies of all data sheets




  • Make sure the hardware is working




  • Start slowly




  • Look at what you’ve created




It is also important to consider what tools you need
for the development process and to ensure that the design is compatible
with those tools. Several hardware-based development and debugging
tools are available to assist with the embedded systems development
process, including emulators, logic analyzers, JTAG and BDM interfaces,
memory emulators ("memulators"), and logic probes. Many of these tools attach directly to
the target device.










Get Involved with the Hardware Design


First of all, make sure the boot device is
conveniently reprogrammable. This point might seem obvious, but it’s
not unusual to find systems that have the boot device soldered to the
board with no reprogramming mechanism except to unsolder the device.
Such a design can be painful, especially for the person writing the
boot firmware. Understandably, some cost-sensitive projects must avoid
sockets and other expensive components. Even so, at least one or two
early versions of the board can be built with boot device sockets or a
JTAG-like interface so that the boot device can be reprogrammed without
the need for a soldering iron.


The design should also include some mechanism that
allows the boot firmware to easily communicate with the boot firmware
designer. Ideally, this communication would be via a serial port and a
few LEDs. If the application is extremely cost sensitive and these
extra parts are out of the question for the deliverable hardware,
consider the possibility of including some expansion connector that is
not populated on the final product. During development, the connector
can provide additional interfaces for debugging. The only unit cost is
a small increase in board size. Once again, this decision hinges on
cost restrictions and other factors, but providing some means of
connectivity can save a lot of time in the development process.


I mentioned JTAG in the preceding section. If the CPU
has some type of debug interface, then make sure that the associated
pins are accessible. These interfaces become quite useful, especially
if there is no other communication device tied to the processor. When
the hardware is laid out, find out what JTAG-like tools are available
for your CPU. Get the pinout for the tool you plan to use and make
certain the hardware has a connector for it.











Get to Know the Hardware and Be Nice to the Designer


Hey, I’m not kidding! A good mutual friendship
between the firmware and hardware folks can save a lot of frustration
and time over the lifetime of a project. Let me say from experience
that chances are, it’s not a hardware problem! A lot of small, sneaky
bugs might tempt you to be suspicious of the hardware but investigate
before you accuse! This advice amounts to common sense etiquette that
will improve relations in any development environment.


Getting to know the hardware doesn’t mean that you
should look over the shoulder of the hardware designers as they are
writing VHSIC Hardware Description Language (VHDL), but it certainly
does help if you are familiar with at least the CPU section of the
schematics. Take some time with the hardware designer and ask
questions. Establish a good working relationship with the designer and
the schematics. Get your own copy of the schematics and mark them up.
This is important for the target hardware as well as the target CPU
itself. You must also spend some time reading about the processor you
are trying to tame.











Have Local Copies of All Data Sheets


You must know more than just the schematic. Each
device on the schematic may come with a 200-page manual. As silicon
gets denser, more and more complexity is built into the devices. It is
vitally important for you, the firmware developer, to master the device
behavior. In this age of electronic paper, I still find it handy to
print the sections of the manual that I will be referring to the most.
Printing the manual also allows you to document errors or strange
behavior of a device.


This issue raises another point: make sure you
check with the device vendor to see if there are any errata
outstanding. It is not at all unusual to use a device that has bugs,
especially if your design uses some new device from a silicon
manufacturer. Worse than that, you may be the one that finds new
errata. This doesn’t happen often, so don’t be too quick to blame the
silicon, but it does happen.





Make Sure the Hardware Is Working


If the hardware design is brand new and the board
is fresh from the factory, make certain the designer has blessed it
before you start assuming it’s valid. Our first run-time step makes
the assumption that the connection from the CPU to the flash device is
correct. If you’re using the board for the first time, make sure you
know how to connect the power supply properly. This point may sound
silly, but you sure won’t get on the good side of the hardware designer
if you toast the board on the first day by connecting the power
incorrectly.





Start Slowly


I can’t emphasize this point enough… TAKE BABY
STEPS!!! Don’t even consider testing a large program until you have
tested several small versions of the boot code. Consider the things you
haven’t proven yet:




  • Is your program mapped to the correct memory space?




  • Do you really understand how this CPU deals with a reset/powerup?




  • Is your conversion of the executable file to binary done correctly?




  • Are you sure you configured the device programmer properly?




  • If your boot memory is wider than eight bits and
    it involves more than one device, are you sure you inserted the bytes
    into the correct device? Is the odd byte the most significant byte
    (MSB) or least significant byte (LSB)?




  • Does the hardware work?




A little humility here is likely to save you a lot of
extra loader passes. Search the CPU manufacturer’s website for example
boot code. In almost all cases, you will find something. Check out user
groups. Do some web hunting. If possible, get some hardware assistance.
If you don’t know how to use an oscilloscope or logic analyzer, then
get cushy with someone who does. These are priceless tools at this
stage of the game.






Note 

While a logic probe is somewhat limited in
capability, its convenience and price make it worth mentioning. A logic
probe is a very inexpensive, hand-held instrument that allows you to
read the logic level (high or low) at the probed connection. Most logic
probes also support the ability to detect a clock. A logic probe is a
simple pencil-like gizmo (usually with power and ground connections)
that you clip onto appropriate sources. You read a pin’s logic level by
touching the pin with the tip of this “pencil.” A readout (often just
LEDs) on the probe will then indicate whether the pin is logical high
or low, high impedance, or changing. A logic probe is very handy if you
have already verified that the hardware is stable and you are just
writing code to wiggle some PIO pins or a chip select.






Look at What You’ve Created


The build tools allow you to dump a memory map.
See if the memory map makes sense for your target. Look at the actual
S-record or binary file before you write it to the flash device. Does
it make sense? Even the file size can give you a clue. If your program
consists of only a very tight busy loop in assembly language, the final
binary file should be very small.
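
With a GNU toolchain, for instance, the linker can emit a map file and the
size utility summarizes the section sizes; the cross-compiler prefix and file
names below are only placeholders for whatever your build actually uses:

powerpc-elf-gcc crt0.S main.c -nostdlib -Wl,-Map=boot.map -o boot.elf
size boot.elf        # text/data/bss totals should match your expectations
less boot.map        # confirm sections land at the intended addresses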


Find some tool (I use elvis)
that allows you to visually display a binary file in some ASCII format.
You can use this tool to confirm certain aspects of the build process.
For example, to prove that flash-resident code is being placed
correctly, you can modify the source to insert some easily recognized
pattern at what should be the base of the flash memory (see Listing 2.8).
After converting the source to binary (using the normal build process),
use your dump tool to examine the file. You should find the marker
pattern at the offset corresponding to the beginning of your flash
memory.



Listing 2.8: “Marking” Code to Confirm Position.






coldstart:
    .byte 0x31, 0x32, 0x33, 0x34        # easily recognized marker pattern
    # ... remaining assembler code here ...














Listing 2.9 is a sample dump from the elvis vi clone that displays the
offset into the file, the data in ASCII-coded hexadecimal, and the data in
regular ASCII (if printable). In this file, offset 0x000000 corresponds to
the base of the flash memory.




Listing 2.9: Sample Dump






OFFSET     ASCII_CODED_HEX_DATA                                ASCII_DATA
000000: 31 32 33 34 ff fd 78 14 38 60 00 30 4b fc 00 0e 1234..x.8`.0K"..
000010: 38 80 00 00 38 a0 00 00 38 c0 00 00 3c e0 00 04 8C..8a..*l..<x..














The only thing you need to see in Listing 2.9 is that the first four bytes of the file are as you expected (0x31, 0x32, 0x33, and 0x34).
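
If you don't have a binary-capable editor handy, a stock hex dump utility
gives the same view; for example, with xxd (shipped with vim) and a
hypothetical image name:

xxd image.bin | head -2

The output shows the file offset, the bytes in hex, and their ASCII form,
much like Listing 2.9.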






Note 

This binary dump is also very useful if you have
to split your data into separate files so you can program multiple
devices that are in parallel in the hardware. Before the split, you
have what is shown in Listing 2.9,
and, after the split (assume a split into two files), one file is as
shown in the display labeled “Split A” and the other file is as shown
in “Split B”, clearly indicating that the single file was properly
split.



Split A



OFFSET   ASCII_CODED_HEX_DATA                              ASCII_DATA
000000: 31 33 ff 78 38 00 4b 00 38 00 38 00 38 00 3c 00 13.x8.K.8.8.*.<.


Split B



OFFSET   ASCII_CODED_HEX_DATA                              ASCII_DATA
000000: 32 34 fd 14 60 30 fc 0e 80 00 a0 00 c0 00 e0 04 24..`0".C.a.l.x.
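
If you have to produce such even/odd images yourself, a short helper program
can do the split. This is only a sketch, assuming a 16-bit-wide boot bank
built from two 8-bit devices and made-up file names; check which device holds
the even bytes on your board:

#include <stdio.h>
#include <stdlib.h>

/* split.c - copy even-offset bytes of image.bin to even.bin and
   odd-offset bytes to odd.bin, one byte per flash device. */
int main(void)
{
    FILE *in   = fopen("image.bin", "rb");
    FILE *even = fopen("even.bin", "wb");
    FILE *odd  = fopen("odd.bin", "wb");
    long offset = 0;
    int c;

    if (!in || !even || !odd) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    while ((c = fgetc(in)) != EOF)
        fputc(c, (offset++ & 1) ? odd : even);

    fclose(in);
    fclose(even);
    fclose(odd);
    return EXIT_SUCCESS;
}

Run against the file from Listing 2.9, even.bin would start with 0x31 0x33
and odd.bin with 0x32 0x34, matching Split A and Split B.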












Figure 2.4: Flash Relative vs. CPU Relative Address Space.


Since memory devices usually span only a portion of the
processor address space, the absolute addresses in an object format
(like S-records) might need to be adjusted to be interpreted correctly
by the device programmer. This is because the device programmer often
knows only about the memory device’s address space, not the
processor’s.


Referring to Figure 2.4,
in the case of Configuration 1, where the boot device resides at
location zero of CPU-relative memory, the S-record CPU-relative
addresses also correspond to flash device addresses, so all works well.


Configuration 2, however, does not work. Here the
CPU boots from some location other than zero, so offset zero within the
flash device no longer corresponds to physical address zero. Assume
that this CPU boots at 0x8F000000, so the S-record file has AAAA… fields starting at 0x8F000000
because that’s where the CPU sees the instructions. However, when I
step away from the hardware design and go program the flash device, I
must adjust the S-record address of 0x8F000000 to 0x00000000 because 0x8F000000 in CPU address space is the same as 0x00000000
in the flash device’s address space. This adjustment to the S-record
address can be performed in some post-processing step or in the
programmer if it supports the ability to adjust the base. My personal
preference is to avoid this complexity by using raw binary files
instead of S-records.





































User Stories


User stories have been popularized through the
agile software development methodology called Extreme Programming.
Agile methodologies advocate lightweight approaches to requirements
engineering, project management, and other aspects of software
development. Rather than developing a comprehensive set of use cases or
a detailed software requirements specification, the analyst (who is
often the developer) works with users to collect stories. Extreme
Programming defines a story as "one thing that the customer wants the
system to do" (Beck 2000). In Extreme Programming, users provide
stories that the analyst concisely writes on an index card in natural
language text using the business domain's terminology. The stories are
elaborated and modified throughout the project based on user input. The
entire story consists of this initial story card, plus all the
subsequent conversations that take place regarding that story among
project stakeholders and perhaps user acceptance tests.


The examples of user stories given in books on agile
development cover a wide range of requirement categories and
abstraction levels (for example, Beck and West 2004). They range from
individual functional requirements to scenarios, use cases, product
features, business objectives, constraints, quality attributes,
business rules, user interface issues, and desired characteristics of
the product. The analyst might break complex stories into multiple
smaller stories that can be understood better, estimated better, and
perhaps implemented independently.
But the examples of user stories I've seen for agile development don't
differentiate various types of requirements information. Anything the
customer "wants the system to do" constitutes a story.


I have a problem with this definition of the term story. The essence of user-centric and usage-centric requirements elicitation is to focus on what the user wants to do,
not what the user wants the system to do. Asking the latter question
takes us back to the shortcomings of the original system-focused
requirements exploration process. The "stories" generated in this
fashion can become near-random bits of information, all churned
together in the discussion between analyst and customers. They lack the
usage-centered organizing structure that use cases and scenarios
provide.


I think of stories somewhat differently. I consider a
story to be a specific, concrete instance of a user's interaction with
a system to achieve a goal. Stories lie at the low end of the
abstraction scale. Earlier in this chapter, Figure 9-2 illustrated a story for a package-shipping store's new software system:



Chris wants to send a 2.5-pound package by
second-day air from Clackamas, Oregon, to Elba, New York. She wants it
insured for $75 and she wants a return receipt. The package is marked
fragile.



During user requirements development, you can take a
top-down approach or a bottom-up approach. You can start at the high
abstraction level by having some users identify use cases and then
prioritizing them and elaborating them into further detail at the right
time. A story such as the previous one provides a good starting point
for a bottom-up strategy. You can say to the store's user
representative, "Please tell me about the last time someone came into
the store with a package to ship." The user might relate an experience
similar to the one about Chris. This is a very specific instance of how
a store employee might have to prepare a particular mailing label. If
you don't have access to real users who can tell you stories, consider
inventing stories for the user-substitute personas you've identified.
(See Chapter 6, "The Myth of the On-Site Customer.")


The analyst can abstract upward to generalize that one
story, or a set of similar stories or scenarios, to cover a variety of
mailing label possibilities within the same use case. If you were to
treat each of these specific user stories as a separate use case, you
would wind up with a vast number of use cases, many of which are
identical except for small variations, say in the package weight,
destination, or shipping method. Such a use case explosion provides a
clue that you need to climb farther up the abstraction scale.


Use cases, scenarios, and stories all provide
powerful ways to hear and understand the voice of the customer. Unless
you focus on the user's goals and vision, your team can easily
implement a stunning set of functionality that simply doesn't let users
get their jobs done in a way they find appealing. And that would be a
shame.






































Acknowledgments

I have been working on this book, in one form or another, for more than four years, and many people have helped and supported me along the way.

I thank those people who have read manuscripts and commented. This book would simply not have been possible without that feedback. A few have given their reviews especially generous attention. The Silicon Valley Patterns Group, led by Russ Rufer and Tracy Bialek, spent seven weeks scrutinizing the first complete draft of the book. The University of Illinois reading group led by Ralph Johnson also spent several weeks reviewing a later draft. Listening to the long, lively discussions of these groups had a profound effect. Kyle Brown and Martin Fowler contributed detailed feedback, valuable insights, and invaluable moral support (while sitting on a fish). Ward Cunningham's comments helped me shore up some important weak points. Alistair Cockburn encouraged me early on and helped me find my way through the publication process, as did Hilary Evans. David Siegel and Eugene Wallingford have helped me avoid embarrassing myself in the more technical parts. Vibhu Mohindra and Vladimir Gitlevich painstakingly checked all the code examples.

Rob Mee read some of my earliest explorations of the material, and brainstormed ideas with me when I was groping for some way to communicate this style of design. He then pored over a much later draft with me.

Josh Kerievsky is responsible for one of the major turning points in the book's development: He persuaded me to try out the "Alexandrian" pattern format, which became so central to the book's organization. He also helped me to bring together some of the material now in Part II into a coherent form for the first time, during the intensive "shepherding" process preceding the PLoP conference in 1999. This became a seed around which much of the rest of the book formed.

Also I thank Awad Faddoul for the hundreds of hours I sat writing in his wonderful café. That retreat, along with a lot of windsurfing, helped me keep going.

And I'm very grateful to Martine Jousset, Richard Paselk, and Ross Venables for creating some beautiful photographs to illustrate a few key concepts (see photo credits on page 517).

Before I could have conceived of this book, I had to form my view and understanding of software development. That formation owed a lot to the generosity of a few brilliant people who acted as informal mentors to me, as well as friends. David Siegel, Eric Gold, and Iseult White, each in a different way, helped me develop my way of thinking about software design. Meanwhile, Bruce Gordon, Richard Freyberg, and Judith Segal, also in very different ways, helped me find my way in the world of successful project work.

My own notions naturally grew out of a body of ideas in the air at that time. Some of those contributions will be clear in the main text and referenced where possible. Others are so fundamental that I don't even realize their influence on me.

My master's thesis advisor, Dr. Bala Subramanium, turned me on to mathematical modeling, which we applied to chemical reaction kinetics. Modeling is modeling, and that work was part of the path that led to this book.

Even before that, my way of thinking was shaped by my parents, Carol and Gary Evans. And a few special teachers awakened my interest or helped me lay foundations, especially Dale Currier (a high school math teacher), Mary Brown (a high school English composition teacher), and Josephine McGlamery (a sixth-grade science teacher).

Finally, I thank my friends and family, and Fernando De Leon, for their encouragement all along the way.





    #27 Adding a Local Dictionary to Spell

    Missing in both Script #25 and Script #26, and certainly missing in most spell-check implementations on stock Unix distributions, is the ability for a user to add words to a personal spelling dictionary so that they're not flagged over and over again. Fortunately, adding this feature is straightforward.




    The Code




    #!/bin/sh
    # spelldict - Uses the 'aspell' feature and some filtering to allow easy
    # command-line spell-checking of a given input file.

    # Inevitably you'll find that there are words it flags as wrong but
    # you think are fine. Simply save them in a file, one per line, and
    # ensure that the variable 'okaywords' points to that file.

    okaywords="$HOME/okaywords"
    tempout="/tmp/spell.tmp.$$"
    spell="aspell"              # tweak as needed

    trap "/bin/rm -f $tempout" EXIT

    if [ -z "$1" ] ; then
      echo "Usage: spelldict file [file ...]" >&2; exit 1
    elif [ ! -f $okaywords ] ; then
      echo "No personal dictionary found. Create one and rerun this command" >&2
      echo "Your dictionary file: $okaywords" >&2
      exit 1
    fi

    for filename
    do
      $spell -a < $filename | \
        grep -v '@(#)' | sed "s/\'//g" | \
        awk '{ if (length($0) > 15 && length($2) > 2) print $2 }' | \
        grep -vif $okaywords | \
        grep '[[:lower:]]' | grep -v '[[:digit:]]' | sort -u | \
        sed 's/^/ /' > $tempout

      if [ -s $tempout ] ; then
        sed "s/^/${filename}: /" $tempout
      fi
    done

    exit 0





    How It Works


    Following the model of the Microsoft Office spell-checking feature, this script not only supports a user-defined dictionary of correctly spelled words that the spell-checking program would otherwise think are wrong, it also ignores words that are in all uppercase (because they're probably acronyms) and words that contain a digit.


    This particular script is written to use aspell, which interprets the -a flag to mean that it's running in pass-through mode, in which it reads stdin for words, checks them, and outputs only those that it believes are misspelled. The ispell command also requires the -a flag, and many other spell-check commands are smart enough to automatically ascertain that stdin isn't the keyboard and therefore should be scanned. If you have a different spell-check utility on your system, read the man page to identify which flag or flags are necessary.





    Running the Script


    This script requires one or more filenames to be specified on the command line.





    The Results


    First off, with an empty personal dictionary and the excerpt from Alice in Wonderland seen previously, here's what happens:




    $ spelldict ragged.txt
    ragged.txt: herrself
    ragged.txt: teacups
    ragged.txt: Gryphon
    ragged.txt: clamour


    Two of those are not misspellings, so I'm going to add them to my personal spelling dictionary by using the echo command to append them to the okaywords file:




    $ echo "Gryphon" >> ~/.okaywords
    $ echo "teacups" >> ~/.okaywords


    Here are the results of checking the file with the expanded spelling dictionary:




    $ spelldict ragged.txt
    ragged.txt: herrself
    ragged.txt: clamour












    Legacy Code Migration


    The Windows Uniform Data Model is designed to minimize source code changes, but it is impossible to avoid modification altogether. For example, functions that deal directly with memory allocation and memory block sizes, such as HeapCreate and HeapAlloc (Chapter 5), must use either a 32-bit or 64-bit size field, depending on the model. Similarly, you need to examine code carefully to ensure that there are no hidden assumptions about the sizes of pointers and size fields.


    API changes, primarily to the memory management functions, are described first.


    API Changes


    The most significant API changes are in the memory management functions introduced in Chapter 5. The new definitions use the SIZE_T data type (see Table 16-2) in the count field. For example, the definition of HeapAlloc is:



    LPVOID HeapAlloc (
        HANDLE hHeap,
        DWORD  dwFlags,
        SIZE_T dwBytes);


    The third field, the number of bytes requested, is of type SIZE_T and is therefore either a 64-bit or 32-bit unsigned integer. Previously, this field was defined to be a DWORD (always 32 bits).


    SIZE_T is used as required in Chapter 5.


    Changes to Remove Assumptions about Data Item Size


    There are numerous potential problems based on assumptions about data size. Here are a few examples, followed by a short illustrative sketch.


    • A DWORD is no longer appropriate for a memory block size. Use SIZE_T or DWORD64 instead.

    • Communicating processes, whether on the same system or on different systems, must be careful about field lengths. For instance, the socket messages in Chapter 12 were defined with LONG32 length fields to ensure that a port to UNIX or Win64 would not result in a 64-bit field. Memory block sizes should be limited to 2GB during communication between Windows processes that use different models.

    • Use sizeof to compute data structure and data type lengths; these sizes will differ between Win32 and Win64 if the data structure contains pointers or SIZE_T data items. Literal size constants should be removed (this, of course, is always good advice).

    • Unions that mix pointers with arithmetic data types should be examined for any assumptions about data item size.

    • Any cast or other conversion between a pointer and an arithmetic type should be examined carefully. For instance, see the code fragments in the Example: Using Pointer Precision Data Types section.

    • In particular, be wary of implicit casts of 32-bit integers to 64-bit integers in function calls. There is no assurance that the high-order 32 bits will be cleared, and the function may receive a very large 64-bit integer value.

    • Pointers are aligned on 8-byte boundaries, and additional structure padding caused by alignment can increase data structure size more than necessary and even impact performance. Moving pointers to the beginning of a structure will minimize this bloat.

    • Use the format specifier %p rather than %x to print a pointer, and use a size-aware specifier such as %Iu (or %zu with newer compilers) when printing a platform-scaled type such as SIZE_T; %ld is not wide enough on Win64, where long remains 32 bits.

    • setjmp and longjmp should use the <setjmp.h> ANSI C header rather than assuming anything about jmp_buf, which must contain a pointer.
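
    The following is only a rough sketch pulling a few of these points together (SIZE_T for block sizes, sizeof for lengths, pointers placed first in a structure, and pointer-safe printing); the structure and variable names are invented for illustration:

    #include <windows.h>
    #include <stdio.h>

    typedef struct _NODE {
        struct _NODE *next;   /* pointers first reduces alignment padding */
        SIZE_T length;        /* 32 or 64 bits, depending on the model */
        DWORD flags;
    } NODE;

    int main(void)
    {
        SIZE_T nBytes = 1000 * sizeof(NODE);   /* SIZE_T, not DWORD */
        HANDLE hHeap = HeapCreate(0, 0, 0);
        NODE *nodes;

        if (hHeap == NULL)
            return 1;
        nodes = (NODE *)HeapAlloc(hHeap, HEAP_ZERO_MEMORY, nBytes);
        if (nodes == NULL)
            return 1;

        /* %p for pointers; %Iu is the Microsoft C run-time specifier for SIZE_T */
        printf("allocated %Iu bytes at %p (sizeof(NODE) = %Iu)\n",
               nBytes, (void *)nodes, sizeof(NODE));

        HeapFree(hHeap, 0, nodes);
        HeapDestroy(hHeap);
        return 0;
    }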









      6.8 For Further Reading


      As noted in the Prologue, the notion of multiple views as a way to partition descriptions of complex systems has been around for some time. Recently there has been considerable interest from the software engineering community in identifying mechanisms for combining those concerns in systematic ways. One branch of this subarea is sometimes referred to as "aspect-oriented programming" or "multi-dimensional separation of concerns." Work in this area is represented by [Kiczales+ 97]. A good source of current information is http://www.aosd.net.


      In a similar vein, Michael Jackson's book on problem frames has a good chapter on combining multiple problem frames [Jackson 01]. Although it is cast in terms of the problem space, rather than the solution space of architectures, many of the ideas carry over.


      A number of researchers have considered the question of how to define architectural styles formally. One of the first papers to address the issue is [PerryWolf 92]. Chapters 6 and 8 in [ShawGarlan 96] also tackle the problem, using formal specification languages like Z and CSP. For examples of defining architectural styles in an object-oriented framework, consider [Buschmann+ 96] and [Schmidt+ 00].












        Exercises


        9.1

        In what situation would an application programmer be most likely to use the sctp_peeloff function?

        9.2

        We say "the server side will automatically close as well" in our discussion of the one-to-many style; why is this true?

        9.3

        Why must the one-to-many style be used to cause data to be piggybacked on the third packet of the four-way handshake? (Hint: You must be able to send data at the time of association setup.)

        9.4

        In what scenario would you find data piggybacked on both the third and fourth packets of the four-way handshake?

        9.5

        Section 9.7 indicates that the local address set may be a proper subset of the bound addresses. In what circumstance would this occur?







          Acknowledgments

          The acknowledgments are perhaps the hardest thing to write in a book, because many people go into making a book happen. From an early age, I have wanted to write a book, and I want to thank the people who helped make it happen.


          Let me begin with the staff at New Riders, particularly Theresa and Stephanie for getting the ball rolling. Thanks to Deborah for keeping the project moving along and for organizing me. Also thanks to Chris for catching my less-than-great English and for being patient with a first-time author. Also, thanks to my technical editors, Zak Daniel, Graeme, and Torben, for all their help in answering my questions and help in improving this book. My uttermost thanks to you all.


          Thanks to all the people who along the way have given me a great deal of help and support. Thanks to Vernon Viehe, Billy Ray, Winson Cheung, Tim Slater, Colin Cherot, Wayne Smith, Chrissy Rey, Tim Goss, David Shumate, Elizabeth Cherry, Gordon Bell, and anyone else I might have forgotten. You know who you are.


          No PHP book is complete without giving thanks to the community that helps develop PHP. Many thanks to Rasmus Lerdorf and the other PHP developers who continue to create and build PHP into one of the most astounding programming languages ever developed. Further thanks to Andres Otto and Daniel Bealshausen of the php4win project for all the work they do. Also, thanks to John Lim for taking the time to write the foreword and for all the help and information he provides. Also, thanks to Jim for creating and developing the great ADODB extension for PHP.


          Writing a book tends to swallow your time whole, so my thanks go to all the people who suffered because I was writing. Thanks to my mum, dad (thanks for the office, Dad), and sisters (Suzannah and Hayley) for all the love and support they have given me. Also, my thanks to two of the greatest people I have ever met, Mary and Terry, for their warmth, love, and support (not to mention the coffee).


          Finally, to the one person who has to suffer the most through my endeavors. This book is what it's all about, and it's all for you, my sweet. I love you with all my heart, Emma.







            6.2 CATEGORIES OF ENCODING SCHEMES


            Encoding schemes can be divided into the following categories:




            • Unipolar encoding




            • Polar encoding




            • Bipolar encoding





            Unipolar encoding: In the unipolar encoding scheme, only one voltage level is used. Binary 1 is represented by positive voltage and binary 0 by an idle line. Because the signal will have a DC component, this scheme cannot be used if the transmission medium is radio. This encoding scheme does not work well in noisy conditions.



            Polar encoding: In polar encoding, two voltage levels are used: a positive voltage level and a negative voltage level. NRZ-I, NRZ-L, and Manchester encoding schemes, which we discuss in the following sections, are examples of this encoding scheme.



            Bipolar encoding: In bipolar encoding, three levels are used: a positive voltage, a negative voltage, and 0 voltage. AMI and HDB3 encoding schemes are examples of this encoding scheme.
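
            As a purely illustrative sketch (polarity conventions differ from one reference to another), the following program prints the level assigned to each bit of a sample pattern under one scheme from each category:

            #include <stdio.h>

            /* Print the level transmitted for each bit of a pattern under one
               example scheme from each category. Levels are relative (+V, -V, 0). */

            static void unipolar(const char *bits)      /* 1 -> +V, 0 -> idle line */
            {
                printf("unipolar:    ");
                for (; *bits; bits++)
                    printf("%s ", *bits == '1' ? "+V" : " 0");
                printf("\n");
            }

            static void polar_nrz_l(const char *bits)   /* one common NRZ-L convention */
            {
                printf("polar NRZ-L: ");
                for (; *bits; bits++)
                    printf("%s ", *bits == '1' ? "-V" : "+V");
                printf("\n");
            }

            static void bipolar_ami(const char *bits)   /* 0 -> 0 V, 1 -> alternating +V/-V */
            {
                int positive = 0;
                printf("bipolar AMI: ");
                for (; *bits; bits++) {
                    if (*bits == '1') {
                        positive = !positive;
                        printf("%s ", positive ? "+V" : "-V");
                    } else {
                        printf(" 0 ");
                    }
                }
                printf("\n");
            }

            int main(void)
            {
                const char *pattern = "101100";
                unipolar(pattern);
                polar_nrz_l(pattern);
                bipolar_ami(pattern);
                return 0;
            }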










            The encoding schemes are divided into three categories: (a) unipolar encoding; (b) polar encoding; and (c) bipolar encoding. In unipolar encoding, only one voltage level is used. In polar encoding, two voltage levels are used. In bipolar encoding, three voltage levels are used. Both polar encoding and bipolar encoding schemes are used in practical communication systems.


















            Note 

            The encoding scheme to be used in a particular communication system is generally standardized. You need to follow these standards when designing your system to achieve interoperability with the systems designed by other manufacturers.




















            1.7. References for More Information








            http://dwheeler.com/secure-programs/Secure-Programs-HOWTO.html is a HOWTO document for secure programming under Linux, but the concepts and techniques are applicable to other situations as well.



            http://www.cs.ucdavis.edu/~bishop/secprog.html contains more good secure programming resources from security expert Matt Bishop.



            http://www.homeport.org/~adam/review.html lists security code review guidelines by Adam Shostack.



            http://www.dnaco.net/~kragen/security-holes.html is a good paper on how to find security holes (especially in your own code) by Kragen Sitaker.



            http://www.shmoo.com/securecode/ offers an excellent collection of articles on how to write secure code.



            Perl CGI Problems, by Rain Forrest Puppy (Phrack Magazine, 1999) can be found online at http://www.insecure.org/news/P55-07.txt or from the Phrack archives at http://www.phrack.com/archive.html.



            Perl Cookbook, by Tom Christiansen and Nathan Torkington (O'Reilly, 1998) contains many good tips on coding securely.
















            Chapter 18. Component Object Model (COM)

            The discussion in Chapter 15 showed how COM components are being used in the new approach to Windows 2000 administration and management. The distributed program-to-program communications that we showed in Chapter 10 can be handled nicely with Distributed COM (DCOM). The Component Object Model (COM) is a component software architecture that allows applications and systems to be built from components supplied by different software vendors. As we pointed out in Chapter 15, it is expected that a great many of the snap-ins will be produced by ISVs.


            The most fundamental question addressed by COM is, how can a system be designed so that binary executables from different vendors, written in different parts of the world and at different times, are able to interoperate? To solve this problem, we have to find solutions to four specific problems:



            • Basic interoperability.

              How can developers create their own unique components, yet be assured that these components will interoperate with other components built by different developers?




            • Versioning.

              How can one system component be upgraded without forcing all the system components to be upgraded?




            • Language independence.

              How can components written in different languages communicate?




            • Transparent cross-process interoperability.

              How can we give developers the flexibility to write components to run in-process or cross-process (and eventually cross-network), using one simple programming model?




            Additionally, high performance is a requirement for component software architecture. While cross-process and cross-network transparency are laudable goals, it is critical for the commercial success of a binary-component marketplace that components interacting within the same address space be able to utilize each other's services without any undue "system" overhead. Otherwise, the components will not realistically be scalable down to very small, lightweight pieces of software equivalent to C++ classes or graphical user interface (GUI) controls.




            Proprietary Software Development Methods


            Each commercial software company has its own development method; some follow a classic waterfall model (Wikipedia 2002a), some use a spiral model (Wikipedia 2002b), some use the Capability Maturity Model, now referred to as Capability Maturity Model Integration (CMMI) (Carnegie Mellon 2000), some use Team Software Process (TSP) and the Personal Software Process (PSP) (Carnegie Mellon 2003), and others use Agile methods. There is no evidence whatsoever that any of these methods create more secure software than another internal development method, judging by the number of security bugs fixed by commercial software companies such as IBM, Oracle, Sun, and Symantec each year that require customers to apply patches or change configurations. In fact, many of these software development methods make no mention of the word "security" in their documentation. Some don't even mention the word "quality" very often, either.




            CMMI, TSP, and PSP


            The key difference between the SDL and CMMI/TSP/PSP processes is that SDL focuses solely on security and privacy, and CMMI/TSP/PSP is primarily concerned with improving the quality and consistency of development processes in general, with no specific provisions or accommodations for security. Although certainly a worthy goal, this implicitly adopts the logic of "if the bar is raised on quality overall, the bar is raised on security quality accordingly." Although this may or may not be true, we don't feel that sufficient commercial development case study evidence exists to confirm or refute this either way. Our collective experiences from SDL are that adopting processes and tools specifically focused on demonstrably reducing security and privacy vulnerabilities have provided consistent examples of case study evidence testifying to improved security quality. Although we feel the verdict is still out on how effective CMMI/TSP/PSP are in improving security quality in software as compared to SDL, we'd assert that SDL is, at a minimum, a more optimized approach to improving security quality.


            There is information about TSP and security (Over 2002), but it lacks specifics and offers no hard data showing software is more secure because of TSP.













            Chapter 23: Automating Analysis and Test



            Automation can improve the efficiency of some quality activities and is a necessity for implementing others. While a greater degree of automation can never substitute for a rational, well-organized quality process, considerations of what can and should be automated play an important part in devising and incrementally improving a process that makes the best use of human resources. This chapter discusses some of the ways that automation can be employed, as well as its costs and limitations, and the maturity of the required technology. The focus is not on choosing one particular set of "best" tools for all times and situations, but on a continuing rational process of identifying and deploying automation to best effect as the organization, process, and available technology evolve.




            Required Background





            • Chapter 20


              Some knowledge about planning and monitoring, though not strictly required, can be useful to understand the need for automated management support.





            • Chapter 17


              Some knowledge about execution and scaffolding is useful to appreciate the impact of tools for scaffolding generation and test execution.





            • Chapter 19


              Some knowledge about program analysis is useful to understand the need to automate analysis techniques.
















            Chapter 11. Oracle and Hardware Architecture




            In Chapter 2 we discussed the
            architecture of the Oracle database, and in Chapter 6 we described
            how Oracle uses hardware resources. Although Oracle operates in the
            same way on many hardware platforms, different hardware architectures
            can ultimately determine the specific scalability, performance
            tuning, management, and reliability options available to you. Over
            the years, Oracle has developed new features to address specific
            platforms and, with Oracle Database 10g,
            continues this process with a commitment to grid computing. This
            chapter discusses the various hardware architectures to provide a
            basis for understanding how Oracle leverages each of these platforms.



            This chapter explains the following hardware systems and how Oracle
            takes advantage of the features inherent in each of the platforms:



            • Uniprocessors

            • Symmetric Multiprocessing (SMP) systems

            • Clusters

            • Massively Parallel Processing (MPP) systems

            • Non-Uniform Memory Access (NUMA) systems

            • Grid computing


            We'll also discuss the use of different disk
            technologies and how to choose the hardware system
            that's most appropriate for your purposes.








              Inline MapServer Features

              Inline features refer to coordinates entered directly into the map file. They aren't a file or database format and don't require any DATA or CONNECTION parameters. Instead they use a FEATURE section to define the coordinates.

              Inline features can be used to define points, lines, and polygons as if taken from an external file; this requires direct entry of coordinate pairs in the map file using a particular syntax.



              Data access/connection method



              This is a native MapServer option that doesn't use any external libraries to support it.





              Map file example



              Each FEATURE..END section defines a feature.





              Points



              Multiple points can be defined in a FEATURE section. If multiple points are defined in the same layer, they have the same CLASS settings; for example, for colors and styles.



              Coordinates are entered in the units set in the layer's projection. In this case, it assumes the map file projection is using decimal degrees.





              LAYER
                NAME inline_stops
                TYPE POINT
                STATUS DEFAULT
                FEATURE
                  POINTS
                    72.36 33.82
                  END
                  TEXT "My House"
                END
                FEATURE
                  POINTS
                    69.43 35.15
                    71.21 37.95
                    72.02 38.60
                  END
                  TEXT "My Stores"
                END
                CLASS
                  COLOR 0 0 250
                  SYMBOL 'circle'
                  SIZE 6
                END
              END










              Lines



              Lines are simply a list of points strung together, but the layer must be TYPE LINE instead of TYPE POINT.





              LAYER
                NAME inline_track
                TYPE LINE
                STATUS DEFAULT
                MAXSCALE 10000000
                FEATURE
                  POINTS
                    72.36 33.82
                    70.85 34.32
                    69.43 35.15
                    70.82 36.08
                    70.90 37.05
                    71.21 37.95
                  END
                END
                CLASS
                  COLOR 255 10 0
                  SYMBOL 'circle'
                  SIZE 2
                END
              END










              Polygons



              Polygons are the same as the line example, just a list of points.



              They require the TYPE POLYGON parameter, and the final coordinate pair needs to be the same as the first, making it a closed polygon.
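
              A minimal example in the same style as the point and line layers (the layer name and coordinates are only illustrative):

              LAYER
                NAME inline_area
                TYPE POLYGON
                STATUS DEFAULT
                FEATURE
                  POINTS
                    72.36 33.82
                    69.43 35.15
                    71.21 37.95
                    72.36 33.82
                  END
                  TEXT "My Area"
                END
                CLASS
                  COLOR 200 255 200
                  OUTLINECOLOR 0 0 0
                END
              END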

















                Chapter 12. Building a Trouble-Ticket System




                In this chapter

                12.1 Trouble-Ticketing System (page 272)
                12.2 AJAX Reliance Scale (page 274)
                12.3 Creating the Back End (page 275)
                12.4 Exporting the Back End (page 282)
                12.5 Building the JavaScript Application (page 288)
                12.6 Login Component (page 299)
                12.7 User-Registration Component (page 305)
                12.8 Account-Editing Component (page 308)
                12.9 Ticket-Creation Component (page 310)
                12.10 Ticket-Editor Component (page 312)
                12.11 My-Tickets Component (page 318)
                12.12 Assign-Tickets Component (page 323)
                12.13 Security Considerations with AJAX Applications (page 328)
                12.14 Comparing Our AJAX-Driven Application against a Standard MVC Model (page 329)
                12.15 Summary (page 330)



                When developing with AJAX, the first decision you need to make is how much you're going to rely on it. It's possible to use it as an optional HTML enhancement, as an integral part of specific features, or as the driver for an entire site. In this chapter's use case, we build a small trouble-ticket system using a design that is 100 percent AJAX powered. Moving to this extreme can be problematic on public sites, but it's a great choice for an internal application like this. Using this case, we can see the differences that having AJAX and JavaScript as the driving force of our development can make. We'll also see a number of techniques and design decisions that can be used in any application, no matter how you're using AJAX.















                Enabling PF


                PF is enabled at system boot by the following two /etc/rc.conf variables:




                pf=YES
                pf_rules=/etc/pf.conf


                By changing the pf value to "NO," you disable the packet filter. Similarly, you can choose a different boot-time PF configuration file by changing the pf_rules variable. If something is wrong with your PF configuration file and it won't parse, the OpenBSD startup routine will install some basic PF rules that will block almost all traffic to the machine, with the exception of SSH. You'll be able to connect to the machine and correct your rules, but that's about it. (And, as anyone who administers firewalls remotely can tell you, this ability is enough to save a lot of pain.)


                If you want to forward packets between multiple interfaces (i.e., be a "firewall"), you need to tell OpenBSD to do this with the net.inet.ip.forwarding sysctl MIB. There's a commented-out entry for this in /etc/sysctl.conf.




                #net.inet.ip.forwarding=1


                Just remove the pound sign and reboot!


                If you want to stop and start packet forwarding without rebooting your system, you can do this easily with sysctl(8), as discussed in Chapter 11. Setting this MIB to 0 stops packet forwarding; setting the MIB to 1 enables it. If you want to perform some basic system maintenance that may interfere with your network in some way, you can stop packet forwarding, do your work, and restart forwarding.
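
                For example (older OpenBSD releases require the -w flag to sysctl):

                sysctl net.inet.ip.forwarding=0    # stop forwarding before maintenance
                sysctl net.inet.ip.forwarding=1    # turn it back on afterward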











                3.9 Data Loads


                Databases frequently have multiple information providers as shown in Figure 3-9.


                Figure 3-9. Database Information Providers.


                OLTP providers usually perform single-row inserts or updates; these transactions are usually just a few rows. Data can also be loaded in from other systems. These are batch data loads and occur when there is less OLTP activity.


                Data loads typically fall into two categories. One type is a schema initialization load. This process brings data into the database for the first time and coincides with the application development. The load may be from a legacy system that is being converted to Oracle. These loads require data "scrubbing" (e.g., the legacy data may require string conversions to load time/day fields into an Oracle DATE type). Constraints and indexes can be built after the data is verified and loaded.


                Other batch loads occur on a periodic base. A load can initiate from a user, an operating system-scheduled job, or possibly a PL/SQL procedure scheduled through the Oracle DBMS_JOB queue.


                SQL*Loader is an Oracle utility for loading fixed-format or delimited fields from an ASCII file into a database. It has a conventional and direct load option. The default option is conventional.


                The direct load method disables constraints before the data load and enables them afterward. This incurs some overhead. For large amounts of data, the direct method is much faster than the conventional method. Postload processing includes not just the enabling of constraints, but the rebuilding of indexes for primary key and unique constraints.
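
                As a purely illustrative sketch (the table, columns, and file names are invented), a control file for a comma-delimited load and a direct path invocation might look like the following:

                -- students.ctl
                LOAD DATA
                INFILE 'students.dat'
                APPEND
                INTO TABLE students
                FIELDS TERMINATED BY ','
                (id, last_name, first_name, enrolled DATE "YYYY-MM-DD")

                sqlldr userid=scott/tiger control=students.ctl log=students.log direct=true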


                If a direct load contains duplicates, the post process of enabling constraints and rebuilding of indexes fails. For a duplicate primary key or unique constraint, the failed state leaves the index in a "direct load" state.


                Log messages in the SQL*Loader log file will highlight this type of failure with the following:





                The following index(es) on table STUDENTS were processed:
                Index PK_STUDENTS was left in Direct Load State due to
                ORA-01452 cannot CREATE UNIQUE INDEX; duplicate keys found

                Following a direct load, you should check the SQL*Loader log file but also check the status of your indexes. A simple query for troubleshooting is the following:





                SELECT index_name, table_name
                FROM USER_INDEXES WHERE STATUS = 'DIRECT LOAD';

                If you have bogus data following a direct load, you need to remove all duplicates before you can enable constraints and rebuild the indexes.


                For a conventional SQL*Loader path, duplicate records are written to the SQL*Loader "bad" file with corresponding messages in the SQL*Loader "log" file. If no errors occur, then there is no "bad" file. SQL*Loader is a callable program and can be invoked in a client/server environment where the end user takes an action to load a file that is stored on the server. The mere existence of a bad file, following the load, will indicate errors during the load.


                You can use SQL*Loader as a callable program to implement daily loads using a conventional path. You can use this utility to load large files with millions of rows into a database with excellent results.


                Each SQL*Loader option (conventional and direct load) provides a mechanism to trap and resolve records that conflict with your primary key or any other constraint; however, direct load scenarios can be more time consuming.


                Alternatives to SQL*Loader are SQL*Plus scripts and PL/SQL. You can load the data with constraints on and capture failed records through exception handling. Bad records can be written to a file using the UTL_FILE package. Bad records can also be written to a temporary table that has no constraints.


                You also have the option to disable constraints, load data into a table, and then enable the constraint. If the data is bad you cannot enable the constraint. To resolve bad records, start with an EXCEPTIONS table. The exceptions table can have any name, but must have the following columns.





                CREATE TABLE EXCEPTIONS
                (row_id ROWID,
                 owner VARCHAR2(30),
                 table_name VARCHAR2(30),
                 constraint VARCHAR2(30));

                The SQL for this exceptions table is found in the ORACLE_HOME/RDBMS/ADMIN directory in the file utlexcpt.sql. The RDBMS/ADMIN directory, under ORACLE_HOME, is the standard repository for many scripts including the SQL scripts to build the data dictionary catalog, scripts to compile the SYS packages, and scripts like the exceptions table.


                We use the exceptions table to capture rows that violate a constraint. This capturing is done as we attempt to enable our constraint. The following TEMP table is created with a primary key.





                CREATE TABLE TEMP
                (id VARCHAR2(5) CONSTRAINT PK_TEMP PRIMARY KEY,
                no NUMBER);

                Insert some good data:





                INSERT INTO temp VALUES ('AAA', 1);
                INSERT INTO temp VALUES ('BBB', 2);

                The following disables the constraint. This is done here prior to inserting new data.





                ALTER TABLE temp DISABLE CONSTRAINT PK_TEMP;

                Now we insert some data; in this example, this is one row that we know to be duplicate row.





                INSERT INTO temp VALUES ('AAA', 3);

                The following shows the error when we enable the constraint with SQL*Plus.





                SQL> ALTER TABLE temp ENABLE CONSTRAINT pk_temp;

                ORA-00001 cannot enable constraint. Unique constraint pk_temp violated.

                SQL>

                What if we had started with a million rows in TEMP and loaded another million? The task of identifying the offending rows can be tedious. Use an exceptions table when enabling constraints. The exceptions table captures the ROWID of all offending rows.





                ALTER TABLE temp ENABLE CONSTRAINT pk_temp EXCEPTIONS INTO exceptions;

                The constraints are still off. All records are in TEMP, but you can identify the bad records.





                SELECT id, no
                FROM temp, exceptions
                WHERE exceptions.constraint='PK_TEMP'
                AND temp.rowid=exceptions.row_id;

                This works for all types of constraint violations. You may not be able to enable constraints after a load, but you can capture the ROWID and constraint type through an exceptions table.




