Chapter 13: Add-On Software



Overview





Software repair steps:
retry, reboot, reinstall?
No need for that here!



Most people don't use an operating system; they use software, which needs an underlying operating system. No matter how robust OpenBSD is, it's completely useless if applications can't run on it. OpenBSD supports a wide range of software and has several tools to make software management quick and effective. One surprising difference between OpenBSD and other operating systems is how much software is not included in the base system but is instead available as an add-on when you need it.



Many commercial operating systems include hundreds or thousands of small programs: from games, desktop toys, and fancy-looking clocks to disk scrubbers and Web browsers. Most users never touch most of the programs on their system, but they're there taking up disk space and possibly memory just the same. This makes it easy for the user, so long as everything works properly. Each program lugs along its own infrastructure, however, and this can cause problems. There's a reason that Windows became famous for "DLL Hell."


OpenBSD includes almost nothing. You get exactly what you need to provide the infrastructure for software and nothing more. While a traditional UNIX system includes compilers, games, and manual pages, you don't even have to install those when you install OpenBSD! Even a full OpenBSD install includes far less than a Windows, Macintosh, or commercial Linux install. That's because almost everything is considered an add-on package. OpenBSD makes it very easy to install additional software through the ports and packages system.


The advantage of this sparseness is that you know exactly what is on your system. This makes debugging problems simpler and helps ensure that some shared library or other chunk of code you've never heard of won't break your programs. The downside is that you may need to do a bit of thinking to decide exactly what you do need, and you'll have to install those programs yourself. OpenBSD makes installing software as easy as possible.











Embedded Linux Primer: A Practical, Real-World Approach
By Christopher Hallinan
Publisher: Prentice Hall
Pub Date: September 18, 2006
Print ISBN-10: 0-13-167984-8
Print ISBN-13: 978-0-13-167984-9
Pages: 576


Comprehensive Real-World Guidance for Every
Embedded Developer and Engineer

This book brings together indispensable
knowledge for building efficient, high-value, Linux-based embedded
products: information that has never been assembled in one place
before. Drawing on years of experience as an embedded Linux
consultant and field application engineer, Christopher Hallinan
offers solutions for the specific technical issues you're
most likely to face, demonstrates how to build an effective
embedded Linux environment, and shows how to use it as productively
as possible.

Hallinan begins by touring a typical
Linux-based embedded system, introducing key concepts and
components, and calling attention to differences between Linux and
traditional embedded environments. Writing from the embedded
developer's viewpoint, he thoroughly addresses issues ranging
from kernel building and initialization to bootloaders, device
drivers to file systems.

Hallinan thoroughly covers the increasingly
popular BusyBox utilities; presents a step-by-step walkthrough of
porting Linux to custom boards; and introduces real-time
configuration via CONFIG_RT--one of today's most exciting
developments in embedded Linux. You'll find especially
detailed coverage of using development tools to analyze and debug
embedded systems--including the art of kernel debugging.


  • Compare leading embedded Linux
    processors


  • Understand the details of the Linux kernel
    initialization process


  • Learn about the special role of
    bootloaders in embedded Linux systems, with specific emphasis on
    U-Boot


  • Use embedded Linux file systems, including
    JFFS2--with detailed guidelines for building Flash-resident file
    system images


  • Understand the Memory Technology Devices
    subsystem for flash (and other) memory devices


  • Master gdb, KGDB, and hardware JTAG
    debugging


  • Learn many tips and techniques for
    debugging within the Linux kernel


  • Maximize your productivity in
    cross-development environments


  • Prepare your entire development
    environment, including TFTP, DHCP, and NFS target
    servers


  • Configure, build, and initialize BusyBox
    to support your unique requirements

About the Author

Christopher Hallinan, field applications
engineer at MontaVista Software, has worked for more than 20 years
in assignments ranging from engineering and engineering management
to marketing and business development. He spent four years as an
independent development consultant in the embedded Linux
marketplace. His work has appeared in magazines, including
Telecommunications Magazine, Fiber Optics Magazine,
and Aviation Digest.








Dual-Boot Install Overview

Careful planning is essential when installing two operating systems on a single hard drive. Each operating system has restrictions on where it may lie on the disk, and you must satisfy those restrictions for every OS you install. For example, Windows 98 expects to be the first operating system on the disk, while OpenBSD's root partition must lie within the first 8GB. This can make life difficult. Consider the restrictions on each operating system, and figure out how you can meet them all while still getting both operating systems on one drive. Write down your partitioning plan before starting an install.


You then need to create MBR partitions for each operating system, using the appropriate tool for that OS. Once you know where these MBR partitions belong, you can start to install your operating systems. Operating systems should be installed in the order that they go on disk — if Windows XP is the first operating system on your disk, install that first. This allows you to use each operating system's native tools to create the MBR partition for that operating system. Not all operating systems work well within MBR partitions created by another operating system: For example, the Windows XP installer will see partitions created by OpenBSD, but may choke when attempting to put a file system on them.


Once you have all of your operating systems on the disk, install a boot manager to control the OS you want to start at boot time.






Note

Each additional operating system adds complexity to the installation and disk partitioning process. Be prepared to reinstall the various operating systems a few times until you have everything set up as you like. Do not load any data on your computer until you have every operating system installed and every partition formatted the way you want!












Installing Your Printer on Fedora


On Linux, files to be printed are first sent to a queue, where they await their turn to print on a specific printer. Before you can use your printer, you need to set up at least one queue for the printer. Most Linux distributions provide a utility with a GUI interface that makes it easy to set up your printer.


Before you start installing your printer, be sure it's connected to the computer and turned on, so that Linux can recognize it. You must use the root account.


On the Fedora main menu, select System Settings->Printing. The window in Figure 14-1 displays.


Figure 14-1. Printer configuration window.



When the configuration window opens, it displays any printers currently configured. In this figure, no printers are currently shown. Click New to start printer installation. In the print queue windows, you can click Forward to move to the next window or Back to return to a previous window.








1.
An Add Print Queue window opens.

2.
A window requests the name and description for the printer. Type a short name that you can remember. Add a description with more information if you have more than one print queue, so you can identify which is which.

3.
A Queue Type window opens. Select the appropriate item from the "Select a queue type" drop-down list at the top. If your printer is connected to your computer, select Locally-connected. Other choices might be Networked Windows (the printer is connected to a Windows computer on the same network as your computer) or Networked CUPS (the printer is connected directly to the network, rather than to a specific computer).


The list box shows the printer connections found for the selected queue type. For instance, if Locally-connected is chosen, you might see /dev/lp0.

4.
The next windows allow you to select your printer manufacturer and model. Generic is selected by default. Click Generic to see a drop-down list of manufacturers. Select your manufacturer and a list of models is provided, as shown in Figure 14-2. Find and select your model.


Figure 14-2. Printer model window.


5.
In the next window, click Finish to create the new print queue.

6.
You are asked whether you want to print a test page. It's best to click Yes. You are then asked whether the test page printed correctly. Wait until the page prints and click Yes or No. If you answer no, you are given information to help you determine and correct the problem, usually an incorrect manufacturer or model. If you answer yes, your new printer is added, as shown in Figure 14-3.


Figure 14-3. Installed printers.



Figure 14-3 shows two print queues available. The names and descriptions shown are the information typed in Step 2. Hpdj450 is a printer connected to a different computer on the same network. The printer was installed on the other computer. Laser is the default printer, meaning files are sent to this queue unless you specify the other queue. To set the default, highlight a print queue and click Default.









    Lab 1.1 Exercises


    1.1.1 Understand the Nature of Computer Programs and Programming Languages


    a)

    What is a program?



    For the next two questions, consider this scenario: You have been hired to work for the ABC Company. One of your responsibilities is to produce a daily report that contains complicated calculations.



    b)

    Without using a computer program to fulfill this responsibility, what potential problems do you foresee in generating this report every day?

    c)

    Based on your observations in question b, how do you think a computer program would make that task easier?

    d)

    What is a programming language?



    For the next question, consider the following code:





    0010 0000 1110 0110 0000 0001
    0000 0011 0000 0110 1000 0000
    1010 0001 1111 0110 0000 0001

    e)

    What type of programming language is this code written in?



    For the next question, consider the following code:





    MOV AX, [01E9]
    ADD AX, 0010
    MOV [01E6], AX

    f)

    What type of programming language is this code written in?



    For the next question, consider the following code:





    variable := 2 * variable - 10;

    g)

    What type of programming language is this code written in?



    1.1.2 Understand the Differences Between Interpreted and Compiled Languages


    a)

    What is an interpreted language?

    b)

    What is a compiled language?

    c)

    Which do you think will run quicker, an interpreted or a compiled program?







      JavaScript and Compression


      JavaScript files are highly compressible, in some cases by as much as 60 to 80 percent. Modern browsers can decompress JavaScripts either in external files or embedded within (X)HTML files. As Chapter 11, "Case Study: DHTML.com," shows, the difference in size and speed can be dramatic. You can compress JavaScript files in two different ways: proprietary and standards-based.


      Each browser has its own proprietary way of compressing JavaScripts, related to signed scripts, Java archives, or help file systems. In theory, you could create a sophisticated sniffer to load the appropriate file for the visiting browser, but you'd have to maintain four separate files. A cleaner way is to use standards-based gzip content encoding.


      Like HTML, external JavaScripts can be delivered compressed from the server and automatically decompressed by HTTP 1.1-compliant browsers. The only gotchas to watch out for are that external compressed JavaScript files must be referenced within the head element to be reliably decompressed by modern browsers, and Explorer 5 has a subtle onload bug with compressed scripts. You can work around both gotchas, however. You'll learn all the details in Chapter 18, "Compressing the Web."


      By grouping external JavaScripts and using compression, you can dramatically reduce their impact on page display speed and bandwidth usage.
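
      As a rough illustration of the savings involved (this sketch is not tied to any particular server setup, and the snippet of JavaScript is invented purely for the measurement), the following Python fragment compresses a small block of script in memory with gzip and reports the reduction. Real-world libraries, which contain far more repetition, often do even better.


import gzip

# A small, repetitive JavaScript snippet standing in for a real external .js file.
js_source = b"""
function showMenu(id) {
    var menu = document.getElementById(id);
    if (menu) { menu.style.display = 'block'; }
}
function hideMenu(id) {
    var menu = document.getElementById(id);
    if (menu) { menu.style.display = 'none'; }
}
""" * 20  # repeat the snippet to mimic a larger library

compressed = gzip.compress(js_source)

print("original:  %d bytes" % len(js_source))
print("gzipped:   %d bytes" % len(compressed))
print("reduction: %.0f%%" % (100.0 * (1 - len(compressed) / len(js_source))))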








        9.8 Best Practices



        Those experienced in business
        intelligence generally agree that the following are typical reasons
        why these projects fail:




        Failure to involve business users, IT representatives, sponsoring executives, and anyone else with a vested interest throughout the data warehousing process



        Not only do all of these groups provide valuable input for creating a
        data warehouse, but lack of support by any of them can cause a data
        warehouse to fail.




        Overlooking the key reasons for the data warehouse existence



        During the planning stages, data warehouse designers can lose sight
        of the forces driving the creation of the warehouse.




        Overlooked details and incorrect assumptions



        A less-than-rigorous examination of the environment for a data
        warehouse can doom the project to failure.




        Unrealistic time frames and scope



        As with all projects, starting the creation of a data warehouse with
        too short a time frame and too aggressive a scope will force the data
        warehouse team to cut corners, resulting in the mistakes previously
        mentioned.




        Failure to manage expectations



        Data warehouses, like all technologies, are not a panacea. You must
        make sure that all members of the team, as well as the eventual users
        of the data warehouse, have an appropriate set of expectations.




        Tactical decision-making at the expense of long-term strategy



        Although it may seem overly time-consuming at the start, you must keep in mind the long-term goals of your project, and your organization, throughout the design and implementation process. Failing to do so has two results: it delays the onset of problems, and it increases the likelihood and severity of those problems.




        Failure to leverage the experience of others



        There's nothing like learning from those who have
        succeeded on similar projects. It's almost as good
        to gain from the experience of others who have failed at similar
        tasks; at least you can avoid the mistakes that led to their
        failures.





        Successful business intelligence projects require the continuous
        involvement of business analysts and users, sponsoring executives,
        and IT. Ignoring this often-repeated piece of advice is probably the
        single biggest cause of many of the most spectacular failures.
        Establishing a warehouse has to produce a clear business benefit and
        return on investment (ROI). Executives are key throughout the process
        because data warehouse and data mart coordination often crosses
        departmental boundaries, and funding likely comes from high levels.



        Your business intelligence project should provide answers to business
        problems that are linked to key business initiatives. Ruthlessly
        eliminate any developments that take projects in another direction.
        The motivation behind the technology implementation schedule should
        be the desire to answer critical business questions. Positive ROI
        from the project should be demonstrated during the incremental
        building process.




        9.8.1 Common Misconceptions



        Having too simplistic a view
        during any part of the building process (a view that overlooks
        details) can lead to many problems. Here are just a few of the
        typical (and usually incorrect) assumptions people make in the
        process of implementing a business intelligence solution:



        • Sources of data are clean and consistent.

        • Someone in the organization understands what is in the source
          databases, the quality of the data, and where to find items of
          business interest.

        • Extractions from operational sources can be built and discarded as
          needed, with no records left behind.

        • Summary data is going to be adequate, and detailed data can be left
          out.

        • IT has all the skills available to manage and develop all the
          necessary extraction routines, tune the database(s), maintain the
          systems and the network, and perform backups and recoveries in a
          reasonable time frame.

        • Development is possible without continuous feedback and periodic
          prototyping involving analysts and possibly sponsoring executives.

        • The warehouse won't change over time, so
          "versioning" won't
          be an issue.

        • Analysts will have all the skills needed to make full use of the
          warehouse or the warehouse tools.

        • IT can control what tools the analysts select and use.

        • The number of users is known and predictable.

        • The kinds of queries are known and predictable.

        • Computer hardware is infinitely scalable, regardless of design.

        • If a business area builds a data mart independently, IT
          won't be asked to support it later.

        • Consultants will be readily available in a pinch to solve last-minute
          problems.

        • Metadata is not important, and planning for it can be delayed.




        9.8.2 Effective Strategy



        Most software and implementation projects have difficulty meeting
        schedules. Because of the complexity in business intelligence
        projects, they frequently take much longer than the initial schedule,
        which is exactly what an executive who needs the information to make
        vital strategic decisions doesn't want to hear! If
        you build in increments implementing working prototypes along the
        way, the project can begin showing positive ROI, and changes in the
        subsequent schedule can be linked back to real business requirements,
        not just to technical issues (which executives don't
        ordinarily understand).



        You must manage scope creep and expectations throughout the project.
        When you receive recommended changes or additions from the business
        side, you must confirm that these changes provide an adequate return
        on investment or you will find yourself working long and hard on
        facets of the warehouse without any real payoff. The business
        reasoning must be part of the prioritization process; you must
        understand why trade-offs are made. If you run into departmental
        "turf wars" over the ownership of
        data, you'll need to involve key executives for
        mediation and guidance.



        The pressure of limited time and skills and immediate business needs
        sometimes leads to making tactical decisions in establishing a data
        warehouse at the expense of a long-term strategy. In spite of the
        pressures, you should create a long-term strategy at the beginning of
        the project and stick to it, or at least be aware of the consequences
        of modifying it. There should be just enough detail to prevent wasted
        efforts along the way, and the strategy should be flexible enough to
        take into account business acquisitions, mergers, and so on.



        Your long-term strategy must embrace emerging trends in warehousing
        such as web deployment and the need for high-availability solutions.
        The rate of change and volume of products being introduced sometimes
        makes it difficult to sort through what is real and what is hype.
        Most companies struggle with keeping up with the knowledge curve.
        Traditional sources of information include vendors, consultants, and
        data-processing industry consultants, each of which usually has a
        vested interest in selling something. The vendors want to sell
        products; the consultants want to sell skills they have
        "on the bench"; and IT industry
        analysts may be reselling their favorable reviews of vendors and
        consultants to those same vendors and consultants. Any single source
        can lead to wrong conclusions, but by talking to multiple sources,
        some consensus should emerge and provide answers to your questions.



        The best way to gain insight is by discussing business intelligence projects with similar companies (at least at the working prototype stage) at conferences. Finding workable solutions and establishing a set of contacts to network with in the future can make attendance at these conferences well worth the price (and may even be more valuable than the topics presented).










          1.6 Behind Tables


          What is a table? Is it a file, a block, or a stream of bytes? Here we look at tables logically and physically.


          1.6.1 Application Tablespaces


          All table data is ultimately stored in host operating system files, but the insertion of rows never specifically identifies a host file.


          The first step is to create an intermediate logical layer called a tablespace with a CREATE TABLESPACE statement. This statement includes the host pathnames of one or more host files that are to be created. The CREATE TABLESPACE statement creates the files mentioned in the statement, formats the files, and stores information in the Oracle data dictionary. The data dictionary information tracks the fact that a tablespace is made up of specific files.


          Once the tablespace is created, the CREATE TABLE statement can reference the tablespace name in the create statement. From this point on, Oracle will use the files of that tablespace for row storage. Figure 1-3 illustrates this architecture showing that tables and tablespaces are logical entities whereas the datafiles are the ultimate physical component.


          Figure 1-3. Tables in a Tablespace.


          To replicate the environment in Figure 1-3, create the tablespace, then the table. The following creates a tablespace STUDENT_DATA and allocates 10M of disk space. The presumption is that this file does not exist; in fact, this statement will fail immediately if the file already exists.





          SQL> CREATE TABLESPACE student_data DATAFILE
          2 'D:\student_data.dbf' size 10M;

          Tablespace created.

          To create a STUDENTS table in the STUDENT_DATA tablespace:





          SQL> CREATE TABLE students
          2 (student_id VARCHAR2(10),
          3 student_name VARCHAR2(30),
          4 college_major VARCHAR2(15),
          5 status VARCHAR2(20)) TABLESPACE student_data;

          Table created.

          Other tables can be added to the STUDENT_DATA tablespace. The student demo is described in Chapter 4. All the demo tables are created in a STUDENT_DATA tablespace.


          A single application usually has all tables in one tablespace. There are circumstances where multiple tablespaces are used. Multiple tablespaces are driven by a variety of issues including highly demanding physical storage requirements and partitioning. The following summarizes some remaining topics on tablespaces.


          • There is a standard, known as the Optimal Flexible Architecture (OFA). The OFA standard recommends that database files fit into a directory structure where the parent directory name is the same name as the database name, plus other reasonable considerations. The aforementioned example violates this convention only to simplify the example.

          • The datafile D:\student_data.dbf did not exist prior to the CREATE TABLESPACE statement. This file is created during the execution of the CREATE TABLESPACE statement. It is possible to create a tablespace on an existing datafile; this requires a REUSE clause in the syntax.

          • A tablespace can consist of multiple files. For example, if you need 20M you can have two 10M files.

          • The datafiles in the CREATE TABLESPACE statement are formatted by Oracle. You'll notice that a CREATE TABLESPACE statement on a 2G datafile takes relatively longer than one on a 2M datafile. This is because Oracle formats the datafile using its own internal block structure.

          • The aforementioned example is simple and may imply a strict architecture, such as dealing with space when you fill up 10M of data. The tablespace model is highly flexible. You can add files to an existing tablespace, resize a datafile, move a datafile to another drive and resize it, or allow datafiles to auto-extend, all without taking down the database. The physical layout of an Oracle database is highly flexible.

          • A datafile can serve one and only one tablespace. You will never, and cannot possibly, have conditions where a datafile is "tied" to more than one tablespace.

          • An Oracle user always has a DEFAULT tablespace. So, if you do not specify a tablespace name, that table is created in your default tablespace. You can get your default tablespace name by querying the data dictionary view USER_USERS.




            SQL> SELECT default_tablespace FROM user_users;

            DEFAULT_TABLESPACE
            ------------------------------
            USERS
          • A table is created in a single tablespace. Exceptions to this are partitioned tables where individual partitions are created in separate tablespaces.

          • While a table is created in one tablespace, the indexes for that table are often in a separate tablespace. The DDL for the data model demo in Chapter 4 creates all indexes in the tablespace STUDENT_INDEX.


          1.6.2 Data Dictionary


          The execution of a CREATE TABLE statement causes information to be stored in the data dictionary. The data dictionary is the term used to describe tables and views that exist in the SYSTEM tablespace. The data dictionary is essentially a repository for Oracle to track information about all objects created in the database. The information tracked includes: the table name, who owns the table, when it was created, column names and datatypes, and the tablespace name to which a table belongs. All PL/SQL stored procedure source and compiled code is stored in the data dictionary. The data dictionary tables and views of the SYSTEM tablespace are illustrated in Figure 1-4.


          Figure 1-4. Data Dictionary and System Tablespace.


          The data dictionary consists of Oracle tables and views that are constructed from SELECT statements against the base tables. The data dictionary views provide the attributes of any object created. For example, the view USER_TAB_COLUMNS can be queried to determine the column names of a table; it is the view to query for the STUDENTS column definitions shown next.


          The SYSTEM tablespace is created when the database is first created. The SYSTEM tablespace and datafiles are generated as part of the CREATE DATABASE statement. Application tablespaces, such as STUDENT_DATA, can be added to the database at any time.


          The following SQL*Plus session creates a STUDENTS table. A query of the data dictionary view USER_TAB_COLUMNS shows the column name and column type of all columns in the STUDENTS table.





          SQL> CREATE TABLE students
          2 (student_id VARCHAR2(10),
          3 student_name VARCHAR2(30),
          4 college_major VARCHAR2(15),
          5 status VARCHAR2(20)) TABLESPACE student_Data;

          Table created.

          SQL> SELECT table_name, column_name, data_type
          2 FROM user_tab_columns
          3 WHERE table_name='STUDENTS';

          TABLE_NAME       COLUMN_NAME                    DATA_TYPE
          ---------------- ------------------------------ ----------
          STUDENTS         STUDENT_ID                     VARCHAR2
          STUDENTS         STUDENT_NAME                   VARCHAR2
          STUDENTS         COLLEGE_MAJOR                  VARCHAR2
          STUDENTS         STATUS                         VARCHAR2

          To see the tablespace in which the STUDENTS table exists, use





          SQL> SELECT tablespace_name
          2 FROM user_tables
          3 WHERE table_name='STUDENTS';

          TABLESPACE_NAME
          ------------------------------
          STUDENT_DATA

          The following shows the datafiles and file sizes associated with the STUDENT_DATA tablespace. This query selects from the DBA_DATA_FILES view and requires that you have the DBA role or SELECT_CATALOG_ROLE role.





          column file_name format a50
          SQL> SELECT file_name, bytes
          2 FROM dba_data_files
          3 WHERE tablespace_name='STUDENT_DATA';

          FILE_NAME                                      BYTES
          ------------------------------------------ --------
          E:\ORACLE\ORADATA\ORA10\STUDENT_DATA01.DBF  5242880




            Testing the Software


            After you've successfully installed PHP, either as an Apache module or a standalone binary, you should test it to ensure that all required extensions have been successfully compiled in.


            You can do this by creating a PHP script containing the following lines:



            <?php
            // name this file "verify.php"
            phpinfo();
            ?>

            Then, depending on how you chose to compile PHP, do one of the following:



            • If you compiled PHP as an Apache module, copy this file to your web server's document root (in this example, /usr/local/apache/htdocs/) and then access it by pointing your web browser to http://your_web_server/verify.php.


            • If you compiled PHP as a standalone binary, execute this script from the command line:


              $ /usr/local/bin/php verify.php


            In either case, the output should be an HTML page that looks like Figure A.1.



            Figure A.1. The output of a phpinfo() call.





            Examine this output to ensure that all the extensions you need are active.
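
            If you would rather script that check than eyeball the page, the following Python sketch fetches verify.php and looks for each required extension name in the phpinfo() output. The host name and the extension list here are placeholders; substitute your own server and the extensions you actually compiled in.


import urllib.request

# Placeholders: substitute your own web server and the extensions you built PHP with.
URL = "http://your_web_server/verify.php"
REQUIRED_EXTENSIONS = ["mysqli", "gd", "zlib"]

with urllib.request.urlopen(URL) as response:
    page = response.read().decode("utf-8", errors="replace")

for ext in REQUIRED_EXTENSIONS:
    status = "found" if ext in page else "MISSING"
    print("%-10s %s" % (ext, status))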







              B.5. switch Fall Through

              The switch statement was modeled after the FORTRAN IV computed go to statement. Each case falls through into the next case unless you explicitly disrupt the flow.

              Someone wrote to me once suggesting that JSLint should give a warning when a case falls through into another case. He pointed out that this is a very common source of errors, and it is a difficult error to see in the code. I answered that that was all true, but that the benefit of compactness obtained by falling through more than compensated for the chance of error.

              The next day, he reported that there was an error in JSLint. It was misidentifying an error. I investigated, and it turned out that I had a case that was falling through. In that moment, I achieved enlightenment. I no longer use intentional fall throughs. That discipline makes it much easier to find the unintentional fall throughs.

              The worst features of a language aren't the features that are obviously dangerous or useless. Those are easily avoided. The worst features are the attractive nuisances, the features that are both useful and dangerous.








              IDEA




              with Bryan Dollery






              I've asked Bryan to talk about the IDEA IDE from IntelliJ. Bryan is a well-known consultant in New Zealand and an outspoken IDEA user and advocate.





              IDEA[URL 33] has many features that will help
              with practicing TDD; in this section we will look at several of
              them.



              If I use a class that hasn't yet been imported,
              IDEA will use a tool-tip to tell me, and offer to import it for me
              — very simple, very fast. Once the class is imported its
              methods and attributes are available to me for code completion.



              If I use a class that doesn't exist, something
              that we all do at first with TDD, then IDEA puts a lightbulb in the
              gutter which, if I click on it, offers me a number of code
              generation options, including the option to create the class for
              me.




              If I have an object and attempt to call a
              method that doesn't exist, IDEA will use the lightbulb again to
              tell me that it can help out. Clicking on it gives me the option to
              create the method. Here is where IDEA starts to show its real
              intelligence. When generating a method, IDEA has to make certain
              assumptions: the return type, the parameter types, and their
              names.



              If I have started with:




              fragment.doSomething("name");






              within fragment's class, IDEA can generate:




              void doSomething(String s) {
              }






              It will then put a red box (a live template
              query) around void, with my cursor in it. It's telling me that void
              was an assumption, and that I need to either accept it by pressing
              enter or tab, or change it. Once I'm happy with the return type, I
              can press enter to move to the next assumption, the type for the
              parameter. The final assumption here is the name for the parameter,
              which I don't like, so I can change it. Of course, if I provide it
              with more information, say, by assigning the return type to a
              variable, then IDEA will make better assumptions.



              To run the tests I have a few choices available
              to me. I can compile, test, run, or debug, at a class level or a
              method level. If I right-click on a method then it assumes that I'm
              interested in that method and offers choices based on the method,
              but if I right-click on the class name (or a gap between methods)
              then I'll be offered these choices for the whole class (which is
              what I usually want). I can also choose to run all the tests within
              a given package.



              To run or debug I don't need a main method, only
              the test methods. Being able to debug at a test-method level is
              very useful. I don't have to play around getting to the method I
              really want to test; it's all done for me.



              The integration with JUnit is very tight. If the
              GUI runner shows a stack trace I can double-click on a line of the
              trace and be taken straight to that line in the editor. Fix the
              error, recompile, and alt-tab back to the GUI runner to rerun the
              tests. I can also choose to run the text-runner, the output of
              which appears in IDEA's messages window.



              However, refactoring is the jewel in the crown
              for IDEA. Look at your current editor right now, and open the
              refactoring menu. If it's not at least half the size of your screen
              then you're really missing out.



              There are 22 refactorings listed on its menu,
              but some of those are multirefactorings. Take, for example, the
              rename refactoring. It works on variables of any scope, methods,
              classes, and packages — that makes it four similar
              refactorings in one. When it renames a class, it also renames the
              file it's in, and when it renames a package it'll rename the
              directory and ensure that the change is recorded in CVS — this
              is a very bright tool, nothing is left unfinished.



              One of my favorites is Change Signature — I
              can use it to add, remove, reorder, and rename parameters to a
              method — all at once. If I change a parameter's name it'll do
              a rename refactoring automatically for me, before it does the rest
              of the changes. If I add a parameter it asks for a default value.
              If I reorder the parameters it'll ensure that the method is called
              correctly throughout my project.



              IDEA attempts to transparently automate
              repetitive and common tasks. It leaves nothing undone, and asks for
              clarification when it's guessing. It's highly polished, looks
              great, and will probably speed up your coding significantly.















































              On the Horizon: JPEG2000 and Vector-Based Graphics


              Two types of graphic formats on the horizon look promising: JPEG2000 and vector-based graphics.


              The new JPEG2000 format is designed to be a superior replacement for the popular JPEG format. The JPEG2000 format uses wavelet technology to achieve higher compression ratios with radically reduced artifacts. The JPEG2000 format can compress images to 100:1 ratios or higher with much less image degradation than JPEGs. JPEG2000 also has a lossless compression option that typically achieves 3:1 to 4:1 compression. The JPEG2000 specification has been approved, but we won't see widespread use of this image format until browsers embed a JPEG2000 decompressor within their code.
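
              To put those ratios in perspective, here is a short Python sketch (plain arithmetic; the image dimensions are an assumed example, not a figure from this chapter) showing what 100:1 lossy and 3:1 to 4:1 lossless compression mean for a typical 24-bit photograph:


# Hypothetical 1600 x 1200 photograph at 24 bits (3 bytes) per pixel.
width, height, bytes_per_pixel = 1600, 1200, 3
uncompressed = width * height * bytes_per_pixel  # 5,760,000 bytes

print("uncompressed:            %5.2f MB" % (uncompressed / 1e6))
for label, ratio in (("JPEG2000 lossy, 100:1", 100),
                     ("JPEG2000 lossless, 3:1", 3),
                     ("JPEG2000 lossless, 4:1", 4)):
    print("%-24s %6.0f KB" % (label, uncompressed / ratio / 1000))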


              Vector-based formats are much more efficient for displaying graphics on the web. Flash is ubiquitous, and it typically creates animations 10 times smaller than animated GIFs. Scalable Vector Graphics is the W3C's standards-based answer to Flash. These vector-based graphical formats can help reduce the footprint of images on the web, but can require plug-ins. For Flash optimization techniques, see Chapter 13, "Minimizing Multimedia," and Chapter 10, "Optimizing JavaScript for Execution Speed."



              Graphics Tools


              There are too many tools to list them all here. The following list includes only tools with PNG support. Most of them support GIF and JPEG as well.



              • Viewers: http://www.libpng.org/pub/png/pngapvw.html


              • Editors: http://www.libpng.org/pub/png/pngaped.html


              • Converters: http://www.libpng.org/pub/png/pngapcv.html



              The most popular of these are listed here:



              • ACDSEE: http://www.acdsystems.com


              • Macromedia Fireworks: http://www.macromedia.com


              • Adobe Photoshop and ImageReady: http://www.adobe.com


              • IrfanView: http://www.irfanview.com/


              • JPEG Cruncher and GIF Cruncher from Spinwave: http://www.spinwave.com/


              • JPEG Wizard etc. from Pegasus Imaging: http://www.jpg.com


              • ProJPEG, PhotoGIF, GIFMation, SuperGIF, and other tools from BoxTop Software: http://www.boxtopsoft.com/


              • Web Image Guru (GIF, JPEG, and PNG optimization) from VIMAS: http://www.vimas.com/










                Sample Tables



                This study guide uses several different database and table names in examples. However, one set of tables occurs repeatedly: the tables in a database named world. This section discusses the structure of these tables. Throughout this study guide, you're assumed to be familiar with them.



                The world database contains three tables, Country, City, and CountryLanguage:



                • The Country table contains a row of information for each country in the database:








                  mysql> DESCRIBE Country;
                  +----------------+-------------------+------+-----+---------+-------+
                  | Field          | Type              | Null | Key | Default | Extra |
                  +----------------+-------------------+------+-----+---------+-------+
                  | Code           | char(3)           |      | PRI |         |       |
                  | Name           | char(52)          |      |     |         |       |
                  | Continent      | enum('Asia', ...) |      |     | Asia    |       |
                  | Region         | char(26)          |      |     |         |       |
                  | SurfaceArea    | float(10,2)       |      |     | 0.00    |       |
                  | IndepYear      | smallint(6)       | YES  |     | NULL    |       |
                  | Population     | int(11)           |      |     | 0       |       |
                  | LifeExpectancy | float(3,1)        | YES  |     | NULL    |       |
                  | GNP            | float(10,2)       | YES  |     | NULL    |       |
                  | GNPOld         | float(10,2)       | YES  |     | NULL    |       |
                  | LocalName      | char(45)          |      |     |         |       |
                  | GovernmentForm | char(45)          |      |     |         |       |
                  | HeadOfState    | char(60)          | YES  |     | NULL    |       |
                  | Capital        | int(11)           | YES  |     | NULL    |       |
                  | Code2          | char(2)           |      |     |         |       |
                  +----------------+-------------------+------+-----+---------+-------+


                  The entire output of the DESCRIBE statement is too wide to display on the page, so the Type value for the Continent line has been shortened. The value enum('Asia', …) as shown actually stands for enum('Asia', 'Europe', 'North America', 'Africa', 'Oceania', 'Antarctica', 'South America').

                • The City table contains rows about cities located in countries listed in the Country table:








                  mysql> DESCRIBE City;
                  +-------------+----------+------+-----+---------+----------------+
                  | Field       | Type     | Null | Key | Default | Extra          |
                  +-------------+----------+------+-----+---------+----------------+
                  | ID          | int(11)  |      | PRI | NULL    | auto_increment |
                  | Name        | char(35) |      |     |         |                |
                  | CountryCode | char(3)  |      |     |         |                |
                  | District    | char(20) |      |     |         |                |
                  | Population  | int(11)  |      |     | 0       |                |
                  +-------------+----------+------+-----+---------+----------------+


                • The CountryLanguage table describes languages spoken in countries listed in the Country table:








                  mysql> DESCRIBE CountryLanguage;
                  +-------------+---------------+------+-----+---------+-------+
                  | Field       | Type          | Null | Key | Default | Extra |
                  +-------------+---------------+------+-----+---------+-------+
                  | CountryCode | char(3)       |      | PRI |         |       |
                  | Language    | char(30)      |      | PRI |         |       |
                  | IsOfficial  | enum('T','F') |      |     | F       |       |
                  | Percentage  | float(3,1)    |      |     | 0.0     |       |
                  +-------------+---------------+------+-----+---------+-------+




                The Name column in the Country table contains full country names. Each country also has a three-letter country code stored in the Code column. The City and CountryLanguage tables each have a column that contains country codes as well, though the column is named CountryCode in those tables.



                In the CountryLanguage table, note that each country may have multiple languages. For example, Finnish, Swedish, and several other languages are spoken in Finland. For this reason, CountryLanguage has a composite (multiple-column) index consisting of both the Country and Language columns.













                  Multimedia Basics


                  This section introduces you to some fundamental multimedia concepts. Learning these concepts is a necessary starting point from which to build. You can think of multimedia as a collection of many different forms of data types (file formats) that are all used together to give the audience a richer, potentially interactive experience.


                  Web Multimedia Datatypes


                  Before you can understand the process and procedures behind efficiently delivering your multimedia, you first need to understand the strengths and weaknesses of the various multimedia file types you will encounter on the web. Let's first take a look at audio files.


                  Audio Data Types

                  Effective audio can help deliver your message with impact. Audio is as important as visuals in your presentation. Audio can convey a sense of emotion much better than a photo. You should learn as much about audio and audio compression as possible so that you can make an accurate assessment of the audio needs of your project.


                  Audio can be contained in and delivered with different types of files. For example, you might digitize your audio and save the file in Sound Designer II format on your hard drive, but when it comes time to deliver that same audio file, you'll want to compress the data into something like the MP3 format. Compression saves storage space and speeds delivery.


                  Most of the audio files you'll find on the Internet are in one of the formats discussed in the next sections.


                  MP3

                  MP3 is a compressed audio format that sounds excellent. MP3 is designed for delivering music but can be used for voice as well. MP3 uses perceptual encoding, where the algorithm "listens" to the sound and removes frequencies you cannot hear. The MP3 format lets you choose a target bit-rate setting to encode to. A setting of 128K sounds much like a normal audio CD but is about one tenth the size. A setting of 40K to 60K is perfect for a simple voiceover track. MP3 can also be used for archiving CDs or other audio samples. Because CD-quality audio is sampled at 16 bits and 44.1kHz, it takes up about 88.2K of disk space for every second of mono sound (176.4K for stereo), so a normal CD track takes about 10.5MB of hard disk space per minute. Stored as a 128K MP3, on the other hand, the same minute of audio is only about 1MB and sounds nearly identical. MP3 compression is lossy (see http://www.mpeg.org for more information).
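
                  As a rough back-of-the-envelope check on those figures, the following Python sketch (plain arithmetic, no audio libraries) works out the per-second and per-minute size of uncompressed CD-quality audio and the size of one minute encoded as a 128K MP3:


# Uncompressed CD-quality audio: 44,100 samples per second, 16-bit (2-byte) samples.
SAMPLE_RATE = 44100
BYTES_PER_SAMPLE = 2

mono_per_sec = SAMPLE_RATE * BYTES_PER_SAMPLE      # about 88.2 KB
stereo_per_sec = mono_per_sec * 2                  # about 176.4 KB
stereo_per_min = stereo_per_sec * 60               # about 10.5 MB

mp3_bits_per_sec = 128000                          # a 128K MP3 setting
mp3_per_min = mp3_bits_per_sec / 8 * 60            # about 0.96 MB

print("mono, 1 second:     %6.1f KB" % (mono_per_sec / 1000))
print("stereo, 1 second:   %6.1f KB" % (stereo_per_sec / 1000))
print("stereo, 1 minute:   %6.1f MB" % (stereo_per_min / 1e6))
print("128K MP3, 1 minute: %6.2f MB" % (mp3_per_min / 1e6))
print("ratio:              about %.0f:1" % (stereo_per_min / mp3_per_min))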



                  Qdesign Music

                  Qdesign Music is a compressed audio format that comes in standard and pro versions. The psycho-acoustic parametric coding algorithm is designed for low data rate applications. This codec allows you to save compressed audio in different sampling rates (11, 22, or 44kHz), at different bit depths (8 or 16-bit), and in mono or stereo. When used with Cleaner, this codec allows you to adjust the track volume, change the dynamic range, add reverb, and apply other effects normally seen only on external hardware devices such as compressors.


                  The Qdesign Music codec also comes in a professional version (for more money, of course). It allows control over targeting a specific bit rate, has a setting so you can tell it to lean more toward "quality" or "speed" (size), and features two-pass variable bit-rate encoding for more precise data-rate targeting. See http://www.qdesign.com for more information.



                  Qualcomm PureVoice

                  Qualcomm PureVoice is a codec designed for encoding voice signals. It has some amazing compression capabilities that can be helpful when you are simply using voice as your source material. The codec allows you to change the sample rate, the bit rate, the number of channels, and the setting that determines whether to use the "full rate" or "half rate" mode. The full rate setting can compress your audio at a 9:1 ratio, and the half rate setting can achieve a 19:1 compression ratio. This is pretty heavy compression, but nonetheless the audio is more than acceptable; it sounds like a long-distance phone call.



                  IMA 4:1

                  IMA 4:1 is a compressed delivery format from the Interactive Multimedia Association. As its name implies, IMA compresses audio by 75 percent with high-quality sound. Eight- and 16-bit IMA has the same quality and file size. The limitation of IMA is that you cannot IMA-compress an AIFF file for playback on a Windows system. This codec was originally designed for audio that needs to be played off a CD-ROM in real time, and it does the job well. It is one of the oldest codecs, but it is still used today for CD-ROM work and backward compatibility.



                  MPEG-4 Audio

                  MP4 is the new audio standard positioned to replace MP3. MP4 supports bit rate, sample rate, depth, channel, hinted streaming tracks, and more. What makes this the codec of choice is that its compression algorithms are better. MP4 uses the Advanced Audio Coding (AAC) method to produce smaller files at higher quality. In fact, AAC-compressed sound can rival CD audio in perceived quality. A 20-second 16-bit, 22.5kHz audio file of 900K compresses to 128K as an MP3 at a 40K setting and sounds muddy. The same file compressed with MP4 is 108K and sounds nearly identical to the original. The trouble is that because this codec is so new, most people don't have the means to decode it (another thing to worry about!). See http://www.apple.com/mpeg4/ and http://mpeg.telecomitalialab.com/ for more information.



                  WAVE

                  This format is used extensively in Microsoft Windows to store and deliver audio, and is also used in wavetable synthesis, such as E-mu's SoundFont. Conversion tools can convert .wav files to other operating systems. See http://www.microsoft.com/ for more information.



                  QuickTime Audio

                  QuickTime is both a file container and a delivery format; that is, you can use QuickTime to embed MP3, AIFF, MP4, and video in your web pages and CD-ROMs. The QuickTime format is cross-platform for people with Macs and PCs. You can use QuickTime files to edit your audio and then compress them for distribution, making this a very versatile format.


                  The next section explains video formats for the web and their strengths and weaknesses.




                  Video Data Types

                  When video is digitized, it becomes "data" that needs to be contained inside some type of file format, such as QuickTime. Once video is stored in an electronic file, it can be edited (rearranged in a new order other than that in which it was shot), and then the video can be compressed and delivered to your target device, such as the web or a CD-ROM. Here are the different types of video formats you will encounter:



                  • QuickTime (QT)
                    QT is the king of video formats. Over 15 years old, QT is cross-platform, supports both progressive downloading and real-time streaming, and can get through firewalls. The professional version is for media creators, offering additional file translation and sampling capabilities that are well worth the cost. Nearly every video application supports QT. It can handle VR 3D fly-throughs, and supports scripting for interactive presentations. QuickTime requires a plug-in, which most modern browsers bundle, or you can download it from http://www.apple.com/quicktime.



                  • RealMedia
                    This is RealNetworks' cross-platform audio and video streaming delivery technology. It allows different target bandwidths and desktop or web playback. Unlike QuickTime, RealVideo files are only stored compressed, so there's no way to revert to raw video or audio for editing.



                  • Windows Media Player
                    Microsoft's answer to QuickTime. The Windows Media Player will play back either compressed or streaming WMP files in real time to all target bandwidths, such as a DSL or a 56Kbps modem. WMP can deliver compressed, downloadable files or stream a live signal. See http://www.microsoft.com for more information.




                  Because compression for QuickTime, RealPlayer, and Windows Media Player are all very similar, I explain the concepts further in the "Codecs" section, later in this chapter.



                  Animation

                  Animation on the Internet comes in two primary forms: the traditional 2D animation and the more recent 3D animation. 2D animation is the equivalent of cartoon-style frame-by-frame animation. Modern computers have made this process much easier by way of keyframes. Now artists need only to draw keyframes of the animation and let the computer interpret the motion between keyframes, a process with the delightful name of tweening. Applications such as Flash from Macromedia work this way.


                  There are currently three main data types for web-based animation:



                  • Flash
                    The most popular animation format on the web. Because the format is vector based, files are generally small and bandwidth friendly. Flash files are compressed into the SWF file format and delivered with Macromedia Shockwave technology. Flash is nearly ubiquitous on the web; 95 percent of all browsers are able to display Flash animations. For all its benefits, Flash has a clumsy professional development environment. Visit http://www.macromedia.com for more information.



                  • Shockwave and Shockwave 3D
                    Director allows the creation of vector or bitmapped-based content compressed into DCR (Director Compressed Resource) files that are played back on the web using Shockwave. Macromedia and Intel recently introduced Shockwave 3D, which also supports 3D data from applications like 3ds max or Maya for 3D experiences. Your audience can fly through online stores and examine items with HTTP links. However, creating 3D worlds with textures, lighting, and Lingo code takes a lot of skill. DCR files and the plug-in are much larger than their Flash counterparts. For more information, visit the Macromedia web site at http://www.macromedia.com.



                  • Cult3d
                    Cult3D tries to blend the simplicity of Flash with the capabilities of Director for web 3D. Cycore has a large installed base of Cult3D player downloads. Cult3D is designed with 3D e-commerce in mind, with XML database interaction and shopping cart functions built-in. For more information, visit their web site at http://www.cult3d.com.






                  Limitations to Multimedia


                  You didn't think there weren't going to be any limitations to multimedia, did you? Multimedia work requires cutting-edge technology. When you are developing multimedia, you are working in the latest video, audio, and image-editing applications, and there are always limitations and issues you face when you are trying to stay current with technology.


                  In short, when you're a multimedia developer, you always have to have the latest technology to impress your clients. That means constant upgrading of software, and thus operating systems. Unlike accountants, who can get away with using one version of Excel for years, multimedia developers are constantly upgrading their hardware and software.


                  Once you get used to continuously upgrading, your next limitation will be the raw speed of your connection to the Internet. You generally need the fastest connection you can get, because you will not only be transferring large files but will also need to test video and audio playback on the web regularly.


                  Processing Speed

Get the fastest processor you can afford, because compression is often time-consuming and editing video and/or audio is CPU-intensive. However, the real limitation here is going to be your customer's CPU speed. You will need to set a CPU cut-off point for some of your projects just to protect yourself. For full-blown multimedia projects with audio, video, text, and photos, you'll have to test your finished piece on many different computers to find the ones on which it plays well, those on which it plays adequately, and the ones on which it just doesn't work at all. Then you will either have to go back to your project and more tightly compress the audio and video or print a CPU requirement on your web site or CD-ROM package.



                  Bandwidth

The speed of your connection will always feel like a limiting factor, regardless of how fast it is. When I started using the Internet, 1200- and 2400-baud modems were the fastest you could use, but they felt slow to me. A few years later, everyone had 56Kbps modems, but that speed felt fast only temporarily, because larger sites soon made even that speed feel slow. Today I'm sitting at the opposite end of a 6Mb DSL (6,000,000 bits/second) connection at home, and it still feels slow to me. Remember:



                  • It's not your connection speed but your customers' speed that's important.


                  • Regardless of your speed, you need to learn to compress your files as much as possible.


                  • Considering that most people don't upgrade, you also need to make versions of your files backward compatible.





                  Codecs


When you have files on your hard drive and you plan on delivering them over the Internet, via CD-ROM, or even by email, they need to be compressed. Compression actually has two parts: compression and decompression, which is where the term codec comes from. Compression is what you do to a video or sound file to make it smaller, and decompression is how the file is played back. You have control over both sides of the process by choosing different codecs.


                  You can actually purchase codecs for video or audio that are professional versions designed for multimedia professionals. For example, the Sorenson Pro Developer Codec is used by Industrial Light & Magic (ILM) to compress all the Star Wars trailers you see on the Internet (http://www.apple.com/trailers/).


                  Why Codecs Are Necessary

                  Without the use of compression, video and audio could not effectively be distributed over the Internet. Consider the Star Wars trailer, for example. The high-speed broadband version of that video clip on the Internet is about 25MB in size. That same clip uncompressed could easily be 250 to 500MB in size; this would take up the better part of an entire CD-ROM.


In addition to delivery, codecs make video playback possible. Video is a CPU-intensive medium simply because there is so much data involved. One second of NTSC video captured from the miniDV format takes about 3.5MB of disk space. This means that every 60 seconds of video is 210MB of data, and every five minutes is a little over 1GB of data (or 9,000 frames of images). Even the latest computers can struggle to process 1GB of data fast enough to avoid jerky playback and pauses. This is why codecs are needed. You can easily reduce the size of your video 10:1, 20:1, 50:1, or even 100:1 using modern codecs. In order for your computer to play one minute of video (210MB prior to compression), all it has to handle is about 2MB of data. Because 2MB is a much smaller amount of data, older and slower computers can now play back video and see your creations.
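To make the arithmetic concrete, here is a small back-of-the-envelope sketch in Python using the rough 3.5MB-per-second miniDV figure quoted above; the numbers are approximations for illustration, not measurements.

MB_PER_SECOND = 3.5          # roughly 3.5MB of captured miniDV per second
FPS = 30                     # NTSC is about 30 frames per second

def raw_size_mb(seconds):
    """Uncompressed size in MB for a clip of the given length."""
    return seconds * MB_PER_SECOND

def compressed_size_mb(seconds, ratio):
    """Approximate delivered size after an N:1 codec."""
    return raw_size_mb(seconds) / ratio

print(raw_size_mb(60))          # 210.0 MB for one minute (1800 frames)
print(raw_size_mb(5 * 60))      # 1050.0 MB, a bit over 1GB, for five minutes
for ratio in (10, 20, 50, 100):
    print(ratio, round(compressed_size_mb(60, ratio), 1))   # 21.0, 10.5, 4.2, 2.1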



                  How Codecs Work

Compression is an extremely complex operation that in some cases can take days for your computer to calculate. But remember that the compression you do prior to delivery creates a smaller finished video, which plays more smoothly on more machines.


                  Compressing video works by removing redundant data in two ways: on a frame-per-frame (spatial) basis, and on an over-time (temporal) basis.


With spatial compression, the compression application looks at every frame of your video and groups the pixels in each frame based on how close their colors are to one another. The algorithm then sets similar RGB color values to the same value. Instead of assigning every pixel three values between 0 and 255, you end up with large groups of pixels sharing the same value, resulting in at least a 2:1 savings in compression. This is similar to how static image compression works, applied to each frame in the sequence.
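The following toy Python sketch illustrates the general idea (snapping nearby color values together so runs of identical pixels can be stored once). The bucket size and run-length scheme are arbitrary choices for illustration, not how any particular codec actually works.

def quantize(value, bucket=16):
    """Collapse a 0-255 channel value onto the center of its bucket."""
    return (value // bucket) * bucket + bucket // 2

def run_length_encode(row):
    """Store (value, count) pairs instead of one value per pixel."""
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return runs

# One scan line of a single color channel: slightly noisy sky pixels.
row = [200, 201, 199, 202, 200, 198, 130, 131, 129, 130]
snapped = [quantize(v) for v in row]
print(run_length_encode(snapped))   # [[200, 6], [136, 4]] -- two runs instead of ten pixels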


In addition to spatial (pixel) compression, codecs can do something called temporal compression. Temporal compression is an over-time technique over which you really don't have much control. The Sorenson codec, for example, creates natural keyframes when the video changes dramatically from one scene to the next. These keyframes are compressed more lightly than the rest of your video and are used by the codec as references for how to draw the frames around them.


                  Temporal compression smoothes pixels over time and tries to remove video noise. Temporal settings also can help you remove one-frame artifacts.
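Here is a similarly toy sketch of the temporal idea: store one keyframe in full and then only the pixels that differ in the frames that follow. Real codecs such as Sorenson are far more sophisticated; this only shows why the frames around a keyframe can be stored so cheaply.

def encode_sequence(frames):
    """Keep the first frame whole; keep only changed pixels for the rest."""
    keyframe = frames[0]
    deltas = []
    for frame in frames[1:]:
        changed = {i: v for i, (v, prev) in enumerate(zip(frame, keyframe)) if v != prev}
        deltas.append(changed)
    return keyframe, deltas

def decode_sequence(keyframe, deltas):
    """Rebuild every frame from the keyframe plus its recorded changes."""
    frames = [list(keyframe)]
    for changed in deltas:
        frame = list(keyframe)
        for i, v in changed.items():
            frame[i] = v
        frames.append(frame)
    return frames

# Three tiny "frames" where only a couple of pixels move between frames.
frames = [
    [10, 10, 10, 10],
    [10, 10, 90, 10],
    [10, 10, 90, 95],
]
key, deltas = encode_sequence(frames)
assert decode_sequence(key, deltas) == frames
print(deltas)   # [{2: 90}, {2: 90, 3: 95}] -- far less data than storing full frames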




                  Streaming Media


                  Streaming content is multimedia content that is played on the user's computer while it is being downloaded from a server. Streaming media can protect content by preventing users from saving data directly on their hard drives. It also has the advantage of being able to stream very long pieces of data. Streams of 90 minutes or more are possible, provided your users have a stable connection. The stream quality, or bits per second, can depend on the user's connection speed.


                  QuickTime

                  Apple's QuickTime is the best solution for video delivery on the Macintosh and some would say on the PC as well. The streaming aspect of QuickTime is no less than awesome with its feature set, performance, and ease of use for both the developer and the client.


                  Apple provides two types of streaming technologies:



                  • Real-Time, for streaming video over the Internet during a live event.


                  • Progressive, for delivering video over the Internet with prerecorded material.



                  With real-time, you can hook up your video camera to your computer, which is running a copy of QuickTime's Broadcaster. This allows you to send live video over a streaming server out to thousands of viewers (see Figure 13.1). Broadcaster takes advantage of all of QuickTime's codecs such as MP3 streaming, MP4 streaming, and video streaming.


                  Figure 13.1. The QuickTime streaming server is powerful and simple to use.



                  NOTE



                  With the new ISO standards, MPEG4 is now starting to show up in cellphones and other PDA devices. The QuickTime streaming server will enable you to communicate with all of these devices.




In terms of compression for streaming QuickTime content, there is not much to do other than normal compression. Video compression is both spatial and temporal. With streaming content, you pick a target bandwidth, video size, and frame rate; enable "hinted tracks"; and let the file compress. Once it's done, you upload the file to your server and make a link to it on your web page. When a user clicks on this link, the video is requested, sent to the client, buffered for a moment, and then playback starts. Remember:



                  • Dial-up modems are almost useless for video.


                  • Your client should be able to handle a 256Kbps download or it's not worth it.


                  • Use QuickTime progressive for content below 256Kbps downloads.



                  QuickTime progressive downloads begin playing as soon as enough content is buffered to allow smooth playback for the duration of the clip.



                  For More Information


                  If you think you'll use QuickTime, go to http://www.apple.com/quicktime/tools_tips/tutorials/activex.html for instructions on making your QuickTime files viewable on all platforms and browsers. If you are using version 5.5 SP2 of Internet Explorer for Windows, you must view QuickTime as an ActiveX control because Microsoft has discontinued support for Netscape style plug-ins.



I've found that people are much more accepting of temporal (over-time) compression of the video than spatial (frame-per-frame) compression. By cutting the frame rate from 15fps to 10fps or even 8fps, each frame gets 50 percent more data for the same file size, which will increase the quality of the picture. The minimum size should be 320x240. If your clients have a fast Internet connection, 400x300 is better. Anything smaller than 320x240 is just too small. To maintain good quality, always increase the data rate in proportion to the image size. Also remember that doubling the image size (320x240 to 640x480) requires a 4X (not 2X) increase in data rate.


                  Use this data rate formula to help target your movie for the right delivery medium:


Data Rate = (frames per second) x (movie width) x (movie height) divided by 35,000


                  This translates to DR = FPS * W * H / 35000.


                  Here is an example: A 320x240 movie with 15 frames per second needs to be compressed to about 32.9K of data per second. Realistically, I would round this up to 35K.
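Written as a small Python helper, the rule of thumb looks like this; it is a direct transcription of the formula above, nothing more.

def target_data_rate(fps, width, height):
    """Rough delivery data rate (K per second) from the rule of thumb above."""
    return fps * width * height / 35000

# The example from the text: a 320x240 movie at 15 frames per second.
rate = target_data_rate(15, 320, 240)
print(f"{rate:.1f}K/s")                       # ~32.9K, rounded up to ~35K in practice

# Doubling both dimensions quadruples the required data rate.
print(target_data_rate(15, 640, 480) / rate)  # 4.0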


Gamma, or the relative brightness of computer displays, is another issue that you need to understand. Macs and PCs display images with different gamma levels: an image you create on a Mac and display on a PC will look too dark, and conversely, an image created on a PC and displayed on a Mac will look too bright. If you're working with a compression tool that supports gamma adjustment (Cleaner, for example), the cross-platform gamma adjustment is +25 to +30 when going from Mac to PC and -30 when going from PC to Mac. Positive numbers lighten the image, and negative numbers darken the image.
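As a rough illustration only, a gamma curve with an exponent above 1 lightens mid-tones and one below 1 darkens them. The exponents in this sketch are arbitrary; they are not how Cleaner maps its +25/-30 settings onto a curve.

def apply_gamma(value, gamma):
    """Remap a 0-255 channel value through a simple gamma curve."""
    return round(255 * (value / 255) ** (1 / gamma))

mid_gray = 128
print(apply_gamma(mid_gray, 1.2))   # ~144: lightened (Mac-authored image headed to a PC)
print(apply_gamma(mid_gray, 0.8))   # ~108: darkened (PC-authored image headed to a Mac)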



                  RealMedia

RealMedia is bigger on the Windows operating system than it is on the Macintosh, mainly because RealNetworks can't compete with a player, such as QuickTime, that comes embedded in the operating system.


There are three different types of streaming content distribution technologies:



                  • Unicast
Unicast streams are simple point-to-point streams, similar to a telephone call from the host server to an individual client computer. To reach many clients at once, the server must send many streams simultaneously, which is a less efficient use of bandwidth. However, viewers of a unicast stream can randomly access movies, playing only the parts they want to see. Typically, unicast is used to stream prerecorded movies that are stored on a host computer.



                  • Multicast
                    Multicast streams are sent directly to a group address, which can then be simultaneously accessed by many client computers.



                  • Reflected
                    Reflected multicast streams take live media from another source, such as a radio or TV broadcast, and stream it out to viewers as a series of unicasts.




                  Because the RealMedia compression tool(s) save compression attributes, picking the same compression setting for many clips is easy. Remember that video on the web really isn't effective unless you can target a 256K user. This means that the best settings for the RealEncoder are these:



                  • 256K DSL/cable modem


                  • 2-pass encoding


                  • Variable bit rate encoding




                  Windows Media

Microsoft (and some web reviews) claim that the new Media Player provides the best audio and video quality on the web. What they don't mention is that quality is ultimately limited by your users' connection speed, which determines how high-quality a streaming file their computers can handle. The latest version of the player has incorporated many new features, including these:



                  • Windows Media Audio 8 (WMA8) encoding


                  • Smart Transcode support for the best quality transfer to portable devices


                  • Windows Media Audio and Video 8 decoding


                  • A new enterprise deployment pack for larger scale (ISP-sized) streaming solutions



                  Windows Media Player is more than a streaming application; it's truly a complete "media player."


Windows Media Player has an impressive set of features and is integrated into Windows, unlike Real, which often has issues streaming video through firewalls. See www.WindowsMedia.com for more information.


                  Regardless of which technology you choose to use, the "Audio Compression and Optimization" and "Video Optimization" sections will give you a start on optimizing your audio and video for delivery over the Internet or other forms of media such as CD-ROMs.



                  Streaming versus Downloading

                  The three types of video delivery methods for playback are these:



                  • Streaming
                    Video that is played while it is being downloaded.



                  • Progressive downloading
                    The progressive format is a blending of streaming and simple downloading. With progressive video, you get the benefit of video starting right away (with a little buffer), and when the video has completely loaded, you have the ability to save it to your computer for offline playback.



                  • Just plain downloading
                    In order for downloaded video to work correctly, the entire video must download before it can start playing. The advantage, however, is that once the entire video has downloaded, it will play flawlessly, assuming the user's computer is up to the task.






                  Multimedia Production Tips


                  Media optimization projects of all sizes benefit from planning right from the beginning. To give your projects a better chance of success, keep these questions in mind while you are planning your projects:



                  • What are your goals, who defines them, and how will you know when you've met them?


                  • Who is your target audience, what kind of computers do they use, and how do they connect to the Internet?


                  • What are the limiting factors to delivering your media to your target audience? (This might be connection speed, CPU speed, server disk space, and so on.)


                  • What copyright restrictions apply to your source material if it's not original content?



                  Along with a project plan, you also may want to develop a storyboard and script. It is wise to consider, particularly with the bandwidth requirements of streamed audio and video, that your audience most likely will experience the end result in a small window on a computer screen.


















                    4.4 Formalizing Use Cases


                    The initial description of a use case is text. We can formalize the definition of a use case in terms of preconditions and postconditions. Preconditions state what must be true for the use case to execute. Postconditions state what must be true when the use case has completed.


                    The formulation of a use case in terms of pre- and postconditions is both precise and concise, in no way presupposes a particular design, and avoids the pitfall of specifying the use case in an overly procedural form.


                    4.4.1 Preconditions


                    Each use case may have zero or more preconditions.


                    Definition:
                    A use case precondition denotes a relevant, verifiable property of the system that is required to be true before the use case is performed.


To help identify preconditions, examine the use case's parameters. For example, Add Item to Order has the parameters Order Number, Book Number, and Quantity. This raises a question about each of these items: What must be true for it? In the case of Add Item to Order, we expect the following to be true:



                    • There is an unexpired Order not yet checked out.


                    • The item selected is a book carried by the store.


                    • The quantity selected is a number greater than zero and less than stock on hand.




                    4.4.2 Postconditions


                    Each use case has at least one postcondition.


                    Definition:
                    A use case postcondition represents what must be true when the use case has completed.


                    After Add Item to Order completes successfully, we expect the following to be true:



                    • The order is no longer empty.


                    • The order is not checked out.


                    • The book is included in the order with the given quantity.


                    • The total value of the order is increased by the unit price of the book times the quantity selected.



                    We can write a use case in terms of preconditions and postconditions, as seen in Figure 4.9.


                    Figure 4.9. Preconditions and Postconditions for Add Item to Order
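Figure 4.9 is not reproduced here, but the same pre- and postconditions can be sketched as assertions around a hypothetical Add Item to Order operation. The Order and Book classes and their field names below are invented for illustration; they are not the book's bookstore model.

from dataclasses import dataclass, field

@dataclass
class Book:
    book_number: str
    unit_price: float
    stock_on_hand: int

@dataclass
class Order:
    order_number: str
    checked_out: bool = False
    expired: bool = False
    lines: dict = field(default_factory=dict)   # book_number -> quantity

    def total(self, catalog):
        return sum(catalog[b].unit_price * q for b, q in self.lines.items())

def add_item_to_order(order, book, quantity, catalog):
    # Preconditions
    assert not order.expired and not order.checked_out   # unexpired, not yet checked out
    assert book.book_number in catalog                    # a book carried by the store
    assert 0 < quantity <= book.stock_on_hand             # positive and within stock on hand

    old_total = order.total(catalog)
    order.lines[book.book_number] = order.lines.get(book.book_number, 0) + quantity

    # Postconditions
    assert order.lines                                    # the order is no longer empty
    assert not order.checked_out                          # the order is not checked out
    assert order.lines[book.book_number] >= quantity      # the book is in the order
    assert abs(order.total(catalog) - (old_total + book.unit_price * quantity)) < 1e-9

catalog = {"BK-42": Book("BK-42", 25.0, 10)}
order = Order("ORD-1")
add_item_to_order(order, catalog["BK-42"], 2, catalog)
print(order.lines, order.total(catalog))   # {'BK-42': 2} 50.0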




                    4.4.3 Linked Use Cases


                    One use case's postcondition is often another's precondition. This is especially true when we have sequences of use cases characterized as activities, as you can see in Figure 4.10.


                    Figure 4.10. Connecting Preconditions and Postconditions



















                      8.1 Unique Instance Constraints


                      Objects may have sets of attributes that are required to be unique.



                      • a customer's login ID


                      • a publisher's ISBN code


                      • an order's order number





Object orientation endows each object with its own unique identity, the object handle. This handle uniquely identifies an object, but it is implicit and carries no semantic significance. An object's identifier, by contrast, carries semantics (it is a rule about the domain) and is made explicit.



                      These attributes constitute ways to identify an individual instance. We call each required-to-be-unique set of attributes an identifier.


                      Definition:
                      An identifier is a set of one or more attributes that uniquely distinguishes each instance of a class.


                      8.1.1 Single Attribute Identifiers


                      To capture this notion of uniqueness in the model, we establish uniqueness constraints. Examples of unique-instance constraints are:



                      • No two customers can have the same e-mail address.


                      • Each publisher has a unique ISBN prefix code.


                      • Each new order is assigned a unique number.




Uniqueness constraints formalize rules in the domain: both rules about the world (e.g., publishers are assigned unique ISBN prefix codes by the publishing industry) and rules that we make up (the customer's e-mail address is his login ID and therefore must be unique).


                      Any kind of attribute can be used to make up an identifier.




                      The notion of a naming attribute is different from that of an identifier, although naming attributes are frequently used in identifiers. For example, nothing prohibits two publishers from having the same company name if they were established in different jurisdictions, but the ISBN code is required, by policy, to be unique.



                      Constraints in OCL.

                      To define a unique-instance constraint formally, we may use the Object Constraint Language (OCL). Figure 8.1 depicts the constraint for the Customer.


                      Figure 8.1. Unique Instance Constraint in OCL



                      The first line defines the context of the constraint, namely the class for which the constraint applies, and the fact that the constraint is a definition of an invariant (inv).


The second line iterates over all instances of the class using two free variables, p1 and p2. allInstances is a predefined operator that finds all the instances of the associated class, and the arrow symbol (->) indicates that the following operation acts on a collection, in this case all the instances of Customer.


                      The third line introduces an implication. For two arbitrary instances of the class, p1 and p2, that are not equal, this fact implies something (on the following line).


                      The last line states the invariant, namely that the identifying attribute of the two instances must not be the same.
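The OCL in Figure 8.1 is not reproduced here, but the check it describes can be sketched procedurally. The Customer class and its e-mail attribute below are assumed from the surrounding discussion; the nested loop mirrors allInstances->forAll over pairs of distinct instances.

class Customer:
    all_instances = []                 # plays the role of OCL's allInstances

    def __init__(self, email):
        self.email = email
        Customer.all_instances.append(self)

def unique_email_invariant():
    """True if no two distinct customers share an e-mail address."""
    instances = Customer.all_instances
    return all(
        p1.email != p2.email           # distinct instances imply distinct identifiers
        for p1 in instances
        for p2 in instances
        if p1 is not p2
    )

Customer("ann@example.com")
Customer("bob@example.com")
print(unique_email_invariant())        # True
Customer("ann@example.com")            # a duplicate violates the constraint
print(unique_email_invariant())        # False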



                      Unique instance constraint idiom.

                      The example in Figure 8.2 forms an idiom.


                      Figure 8.2. Unique Instance Constraint Idiom



                      This idiom is the unique instance constraint. The unique instance constraint for the Publisher and the Order follow the same idiom.



OCL: The Object Constraint Language


                      The Object Constraint Language (OCL) is a fundamental part of the UML, described in Section 6 of the UML 1.4 Specification [1]. It is designated as the official way to express constraints on UML models. The semantics of UML themselves are defined using OCL.


                      The constraints shown in this chapter are written in OCL and in action language. These constraint expressions are written as boolean functions that return true if the constraint is satisfied and false if the constraint is violated.


                      We hope for and encourage a convergence between OCL and action languages, such that a developer can write a constraint using the same language used for actions and that model compilers will be able to check and to enforce these constraints.



                      Definition:
                      A constraint idiom is a general pattern for a commonly occurring type of constraint that can be represented by a predefined tag.



                      Constraints in action language.

                      The constraints in Figure 8.1 can also be written in action language, as shown in Figure 8.3.


                      Figure 8.3. Unique Instance Constraint in Action Language



                      Specifically, this is written as an instance function on the Customer class.



                      Graphical notation.

                      UML allows the definition of tags. A tag is a string that can be added to any model element, enclosed in braces {}. Figure 8.4 shows how we use the tag {I} on each identifying attribute to denote an identifier.


                      Figure 8.4. Identifiers on the Class Diagram



                      The presence of the tags is a shorthand for writing the unique instance constraint, so there is no need to explicitly write the OCL or action language.



                      Contrived identifiers unnecessary.

                      Shlaer-Mellor ([2] and [3]), a precursor to Executable UML, required that every class contain at least one identifier, even if that identifier was an attribute placed in the object solely for the purpose of being its identifier. This practice is not required in Executable UML.




                      8.1.2 Multiple Attribute Identifiers


                      An identifier may consist of multiple attributes. For example, publishers have code numbers assigned by "ISBN agencies." The code number is not unique among all publishers, but is unique among publishers in the same ISBN agency group.


                      Definition:
                      An identifying attribute is an attribute that forms part of at least one identifier.


                      The Publisher's identifier comprises two identifying attributes: Publisher.groupCode and Publisher.publisherCode.


The constraint is depicted in OCL in Figure 8.5 and in action language in Figure 8.6. The action language snippet asserts that the combination of the two attributes must be distinct.


                      Figure 8.5. Multiple-Attribute Identifier Constraint in OCL



                      Figure 8.6. Multiple-Attribute Identifier Constraint in Action Language
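Figures 8.5 and 8.6 are likewise not reproduced, but the idea of a multiple-attribute identifier can be sketched the same way: the pair (groupCode, publisherCode) must be distinct across all Publisher instances. The class and values below are invented for illustration; they are not the book's OCL or action language.

class Publisher:
    all_instances = []

    def __init__(self, group_code, publisher_code):
        self.group_code = group_code
        self.publisher_code = publisher_code
        Publisher.all_instances.append(self)

def unique_publisher_identifier():
    """True if no two publishers share the same (groupCode, publisherCode) pair."""
    seen = set()
    for p in Publisher.all_instances:
        key = (p.group_code, p.publisher_code)
        if key in seen:
            return False
        seen.add(key)
    return True

Publisher("0", "13")       # the same group code alone is fine...
Publisher("0", "201")
print(unique_publisher_identifier())   # True
Publisher("0", "13")       # ...but the full pair must be unique
print(unique_publisher_identifier())   # False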



                      This idiom is another unique instance constraint, this time with multiple identifying attributes. Constraints involving any number of attributes are possible. But since the unique instance constraint is a common idiom, we do not need to write OCL or action language for it; we simply tag the attributes as shown in Figure 8.7.


                      Figure 8.7. Multiple-Attribute Identifier on Class Diagram




                      8.1.3 Multiple Identifiers


                      A class may have several identifiers, each of which consists of one or more identifying attributes. In this case, we write several unique instance constraints for the same class.


                      When there is more than one identifier, use the tags {I}, {I2}, {I3}, and so on. Select a new tag for each new identifier, and tag every identifying attribute in a given identifier with that tag. Hence, each identifying tag ({I}, {I2}, {I3}, etc.) applies to all attributes of a single identifier that defines one of the unique instance constraints.


                      Furthermore, an identifying attribute may be a part of more than one identifier. For example, a car can be identified by:



                      • manufacturer + serialNumber


                      • state + titleNumber


                      • state + tagNumber



                      In this example, the state attribute is a part of two identifiers.


                      A single attribute may be a part of several identifiers, so it may be tagged several times. In our example depicted in Figure 8.8, the tags {I2, I3} are shown for the state attribute of Car.


                      Figure 8.8. Multiple Identifiers on Class Diagram



                      Search for identifiers.

                      When abstracting attributes of classes, pay special attention to finding identifying attributes. Look for situations in which no two instances may have the same value for an attribute or set of attributes. In some cases, the business may have unique numbering or identifying schemes for many of the things being modeled. These are the identifying attributes, and they constitute a rule in the domain.









