
Finger: A Simple Directory Service










Perl for System Administration









6.2. Finger: A Simple Directory Service








Finger and
WHOIS are good examples of simple directory services. Finger exists
primarily to provide read-only information about the users of a
machine (although we'll see some more creative uses shortly).
Later versions of Finger, like the GNU Finger server and its
derivatives, expanded upon this basic functionality by allowing you
to query one machine and receive information back from all of the
machines on your network.







Finger was one of the first widely deployed directory services. Once
upon a time, if you wanted to locate a user's email address at
another site, or even within your own, the finger
command was the best option. finger
harry@hogwarts.edu
would tell you whether Harry's
email address was harry,
hpotter, or something more obscure (along with
listing all of the other Harrys at that school). Though it is still
in use today, Finger's popularity has waned over time as web
home pages became prevalent and the practice of freely giving out
user information became problematic.








Using the Finger protocol from Perl
provides another good example of TMTOWTDI. When I first looked on
CPAN for something to perform Finger operations, there were no
modules available for this task. If you look now, you'll find
Dennis Taylor's Net::Finger module, which he
published six months or so after my initial search. We'll see
how to use it in a moment, but in the meantime, let's pretend
it doesn't exist and take advantage of this opportunity to
learn how to use a more generic module to talk a specific protocol
when the "perfect" module doesn't exist.







The Finger protocol itself is a very simple TCP/IP-based text
protocol. Defined in RFC1288, it calls for a standard TCP connect to
port 79. The client passes a simple CRLF-terminated[1] string over the connection. This string either requests
specific user information or, if empty, asks for information about
all users of that machine. The server responds with the requested
data and closes the connection at the end of the data stream. You can
see this in action by telnetting to the Finger
port directly on a remote machine:








[1]Carriage return + linefeed, i.e., ASCII 13 + ASCII 10.













$ telnet kantine.diku.dk 79
Trying 192.38.109.142 ...
Connected to kantine.diku.dk.
Escape character is '^]'.
cola<CR><LF>
Login: cola Name: RHS Linux User
Directory: /home/cola Shell: /bin/noshell
Never logged in.
No mail.
Plan:

Current state of the coke machine at DIKU
This file is updated every 5 seconds
At the moment, it's necessary to use correct change.
This has been the case the last 19 hours and 17 minutes

Column 1 is currently *empty*.
It's been 14 hours and 59 minutes since it became empty.
31 items were sold from this column before it became empty.
Column 2 contains some cokes.
It's been 2 days, 17 hours, and 43 minutes since it was filled.
Meanwhile, 30 items have been sold from this column.
Column 3 contains some cokes.
It's been 2 days, 17 hours, and 41 minutes since it was filled.
Meanwhile, 11 items have been sold from this column.
Column 4 contains some cokes.
It's been 5 days, 15 hours, and 28 minutes since it was filled.
Meanwhile, 26 items have been sold from this column.
Column 5 contains some cokes.
It's been 5 days, 15 hours, and 29 minutes since it was filled.
Meanwhile, 18 items have been sold from this column.
Column 6 contains some coke-lights.
It's been 5 days, 15 hours, and 30 minutes since it was filled.
Meanwhile, 16 items have been sold from this column.

Connection closed by foreign host.
$







In this example we've connected directly to kantine.diku.dk's Finger port. We
typed the user name "cola," and the server returned
information about that user.







I chose this particular host and user just to show you some of the
whimsy that accompanied the early days of the Internet. Finger
servers got pressed into service for all sorts of tasks. In this
case, anyone anywhere on the planet can see whether the soda machine
at the Department of Computer Science at the University of Copenhagen
is currently stocked. For more examples of strange devices hooked to
Finger servers, you may wish to check out Bennet Yee's
"Internet Accessible Coke Machines" and "Internet
Accessible Machines" pages; they are available online at
http://www.cs.ucsd.edu/~bsy/fun.html.







Let's take the network communication we just performed using a
telnet binary back to the world of Perl. With
Perl, we can also open up a network socket and communicate over it.
Instead of using lower-level socket commands, we'll use Jay
Roger's Net::Telnet module to introduce a
family of modules that handle generic network discussions. Other
modules in this family (some of which we use in other chapters)
include Eric Arnold's Comm.pl, Austin
Schutz's Expect.pm, and the venerable but
outdated and nonportable chat2.pl by Randal L.
Schwartz.








Net::Telnet will handle all of the connection
setup work for us and provides a clean interface for sending and
receiving data over this connection. Though we won't use them
in this example, Net::Telnet also provides some
handy pattern-scanning mechanisms that allow your program to watch
for specific responses from the other
server.







Here's a Net::Telnet version of a simple
Finger client. This code takes an argument of the form
user@finger_server. If the user name is omitted, a
list of all users considered active by the server will be returned.
If the hostname is omitted, we query the local host:








use Net::Telnet;

($username,$host) = split(/\@/,$ARGV[0]);
$host = $host ? $host : 'localhost';

# create a new connection
$cn = new Net::Telnet(Host => $host,
                      Port => 'finger');

# send the username down this connection
unless ($cn->print("$username")){ # could be "/W $username"
    $cn->close;
    die "Unable to send finger string: ".$cn->errmsg."\n";
}

# grab all of the data we receive, stopping when the
# connection is dropped
while (defined($ret = $cn->get)) {
    $data .= $ret;
}

# close the connection
$cn->close;

# display the data we collected
print $data;







RFC1288 specifies that a /W switch can be
prepended to the username sent to the server to request it to provide
"a higher level of verbosity in the user information
output," hence the /W comment above.







If you need to connect to another TCP-based text protocol besides
Finger, you'd use very similar code. For example, to connect to
a Daytime server (which shows the local time on a machine) the code
looks very similar:








use Net::Telnet;

$host = $ARGV[0] ? $ARGV[0] : 'localhost';

$cn = new Net::Telnet(Host => $host,
                      Port => 'daytime');

while (defined($ret = $cn->get)) {
    $data .= $ret;
}
$cn->close;

print $data;







Now you have a sense of how easy it is to create generic TCP-based
network clients. If someone has taken the time to write a module
specifically designed to handle a protocol, it can be even easier. In
the case of Finger, you can use Taylor's
Net::Finger to turn the whole task into a single
function call:








use Net::Finger;

# finger() takes a user@host string and returns the data received
print finger($ARGV[0]);







Just to present all of the options, there's also the fallback
position of calling another executable (if it exists on the machine)
like so:








($username,$host) = split('@',$ARGV[0]);
$host = $host ? $host : 'localhost';

# location of finger executable; MacOS users can't use this method
$fingerex = ($^O eq "MSWin32") ?
            $ENV{'SYSTEMROOT'}."\\System32\\finger" :
            "/usr/ucb/finger";  # (could also be /usr/bin/finger)

print `$fingerex ${username}\@${host}`;







Now you've seen three different methods for performing Finger
requests. The third method is probably the least ideal because it
requires spawning another process.
Net::Finger will handle simple
Finger requests; for everything else, Net::Telnet
or any of its kin should work well for you.















Copyright © 2001 O'Reilly & Associates. All rights reserved.

















































Semaphores, Events, Messages, and Timers


At this point, you have a basic understanding of
the core parts of the RTOS. I have mentioned system calls several times
but never really elaborated on what a system call is. Like threads and
tasks, the definition of a system call depends on the company you keep.
In general, a system call refers to an operating system facility that,
when invoked, causes a context switch into the kernel because the
resource with which the system call interfaces is only accessible in
kernel mode. With embedded systems, the term system call
usually just refers to one of the RTOS’s API functions, and that
definition is the one I’ve used in this book. Even if an RTOS has all of
the nice stuff I’ve talked about so far, it still can’t do much by itself.
Compare it to a farm tractor that has a heavy-duty motor and a lot of
power but doesn’t have any attachments. The attachments included with
the RTOS are the system calls that allow us to communicate between
tasks, to communicate between interrupt handlers and tasks, to
guarantee that only one task executes a certain function at a time, to
set up timers, and so forth. Every RTOS comes with some set of system
calls, but, in general, all RTOSs share a few basic functions.



Semaphores can be used to
synchronize access to a shared resource. A common form of
synchronization is mutual exclusion, meaning that tasks coordinate
their access to a resource so that only one task at a time is
manipulating the resource. For example, if a semaphore-protected
function has begun executing and a context switch transfers control to
another section of code that also tries to call the protected function,
the second call is blocked. Mutually exclusive access is needed
whenever more than one task (i.e., code in separate threads or
processes) is allowed to access the same resource.


For example, assume a system has some memory-mapped
register that contains eight bits, each of which controls an LED (1 =
on), and assume that I use the function in Listing B.1 to modify those bits.



Listing B.1: LedOn().







unsigned char
LedOn(unsigned char onbits)
{
    unsigned char current_bits;

    current_bits = *(unsigned char *)LED_PORT;
    current_bits |= onbits;
    *(unsigned char *)LED_PORT = current_bits;
    return(current_bits);
}













What would happen if (referring to Listing B.1), just after the variable current_bits is loaded from the LED_PORT
address, an asynchronous context switch occurred and some other task
called this function, passing it a different value? Quick answer: the
setting established by the second task would be lost. Let’s assume that
on the first call to LedOn(), the value of onbits is 0x01, and the current value in LED_PORT is 0x80. If no context switch occurred, the result would be that LED_PORT would be 0x81, meaning that two LEDs would be lit.


If, however, a context switch happens in just the right place, the result is different. On the first call to LedOn() after the line


current_bits = *(unsigned char *)LED_PORT; 

the value stored in current_bits is 0x80. Now assume that a context switch occurs after this line and that another task calls this function with 0x02 as the onbits value. The second invocation runs to completion leaving, for the moment, the LED_PORT value at 0x82. At some point in the future, context is restored to the original task. This instance of the function resumes at the line


current_bits |= onbits; 

with current_bits (in the original context) set to 0x80. When this value of current_bits is logically ORed with onbits (0x01), the result 0x81 is written to the LED port. The result is that the value established by the preempting task (0x82) is lost.


The solution to this problem is mutual exclusion. If these LED operations were properly wrapped with semaphore operations (Get at the top and Release
at the bottom), then when the context switch occurred, the preempting
task would not have been allowed to manipulate the LEDs until the task
that owned the semaphore was done.
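RTOS semaphore APIs vary by vendor, so here is only a sketch of the wrapped version of LedOn(): a POSIX mutex stands in for the semaphore Get/Release calls, and a plain variable stands in for the memory-mapped LED_PORT register.

```c
#include <pthread.h>

/* Simulated LED register; on real hardware this would be the
   memory-mapped LED_PORT address. */
static unsigned char led_port;

/* Stand-in for the RTOS semaphore; real code would use the
   vendor's Get()/Release() calls. */
static pthread_mutex_t led_sem = PTHREAD_MUTEX_INITIALIZER;

unsigned char
LedOn(unsigned char onbits)
{
    unsigned char current_bits;

    pthread_mutex_lock(&led_sem);    /* Get the semaphore      */
    current_bits = led_port;         /* the read-modify-write  */
    current_bits |= onbits;          /* sequence is now atomic */
    led_port = current_bits;         /* with respect to other  */
    pthread_mutex_unlock(&led_sem);  /* tasks; Release         */
    return current_bits;
}
```

With the lock held across the read-modify-write, the preempting task in the scenario above would block at the Get until the first task's update had reached LED_PORT, so neither task's bits are lost.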



Events are OS-provided
flags that, when set by one task or interrupt handler, cause some other
task to wake up (i.e., to enter the ready-to-run state). Typically,
signaling with events involves two system calls: one to post an event
and one to block waiting for an event. Events are typically not queued.
If the same event is posted several times prior to the acknowledgment
of the event by the task that is blocked while waiting for it, those
additional postings are lost. Because events are a simple flag, after
the flag is set, setting it again has no significance. Usually events
have lower overhead than other types of interprocess communication and
thus are commonly used for communication between an interrupt handler
and a task.
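To see why posted events are not queued, consider a minimal event flag in C. The names event_post() and event_take() are invented for illustration, and a real RTOS would block the waiting task rather than return false:

```c
#include <stdbool.h>

/* A minimal event "flag": posting an already-posted event has no
   additional effect, which is why events are not queued. */
static bool event_flag = false;

void event_post(void) { event_flag = true; }

/* Returns true and consumes the event if it was posted.
   A real RTOS call would block the task instead of returning false. */
bool event_take(void)
{
    if (event_flag) {
        event_flag = false;
        return true;
    }
    return false;
}
```

Posting twice before the waiter runs still produces exactly one wakeup, which is the "lost postings" behavior described above.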



Messages provide a
mechanism that allows a task (or interrupt handler) to send some data
to another task. Unlike events, messages are queued. When multiple
messages are sent to some task that is blocked while waiting for the
message, each message is queued by the OS for later consumption by the
receiver of the message.


Most RTOSs support posting a message to the tail of the queue and also
posting it to the head of the queue. In some situations, it can be very
handy to expedite a message by posting it to the head of the queue, but
this feature must
not be abused. When a message is placed in a queue, it is usually on a
first-in first-out (FIFO) basis. If the message is put at the top of
the queue, then it becomes a last-in first-out (LIFO) basis, which is
OK, but you’d better be aware of the difference.


Messages can be passed from task-to-task or from interrupt-to-task, but a message usually imposes more overhead than an event.
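The queued, FIFO-by-default behavior (with an expedite path that posts to the head) can be illustrated with a toy message queue in C. All names and the fixed depth are invented for illustration, and a real RTOS receive call would block on an empty queue rather than return -1:

```c
/* A toy message queue: FIFO by default, with an "expedite" path
   that posts to the head (making that delivery LIFO). */
#define QDEPTH 8
static int q[QDEPTH];
static int q_head;               /* index of next message out */
static int q_count;

int msg_send(int msg)            /* post to tail (normal)     */
{
    if (q_count == QDEPTH) return -1;
    q[(q_head + q_count) % QDEPTH] = msg;
    q_count++;
    return 0;
}

int msg_send_front(int msg)      /* post to head (expedite)   */
{
    if (q_count == QDEPTH) return -1;
    q_head = (q_head + QDEPTH - 1) % QDEPTH;
    q[q_head] = msg;
    q_count++;
    return 0;
}

int msg_recv(int *msg)           /* a real RTOS would block   */
{
    if (q_count == 0) return -1;
    *msg = q[q_head];
    q_head = (q_head + 1) % QDEPTH;
    q_count--;
    return 0;
}
```

An expedited message jumps ahead of everything already queued, which is exactly the reordering the text warns you to be aware of.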



Timers are probably the
most heavily used facility within an RTOS. Not only is there typically
some set of system calls specifically for timers, but many of the other
system calls usually provide some time-out mechanism as well. The most
common timer function is the ability to put a task to sleep for some
period of time or to wake a task at some time of day in the future. The
simplest example would be a task that blinks an LED (see Listing B.2).



Listing B.2: task_BLINKER().






void
task_BLINKER(void)
{
    while(1)
    {
        Turn_On_Led();
        GoToSleep(1000);
        Turn_Off_Led();
        GoToSleep(1000);
    }
}













The GoToSleep() system call is the timer function that allows the task to wake up periodically. Aside from GoToSleep(),
other system calls intrinsically use timers to support the ability to
time-out. For example, a task might want to wait for an incoming event,
but, if the event does not occur for 30 seconds, some other action must
occur. In this case, the system call is an event mechanism, but, under
the hood, the event mechanism is using timers.

















































10.5 Default Auditing


As we
mentioned earlier in this chapter, some actions will be stored to
operating system files whether auditing is enabled or not. These actions
are:




  • Database startup



  • Database shutdown



  • Connection to the database from a privileged account



  • Structural changes made to the database, like adding a tablespace
    datafile, etc.


When the database is started up, a record is written automatically to
an operating system file. If the database was started with either
sys or internal, the user information will not be recorded.
The information recorded is the operating system username of the process
starting the database, the terminal identifier, the timestamp (date and
time) when the database was started, and whether or not auditing was
enabled. The purpose of writing this information is to create a record of
anyone attempting to start the database and disable auditing in order to
hide their actions. At the time of database startup, the database audit
trail is not yet available, so the startup information is always written
to an operating system audit file.


In all of the auditing situations listed above, the information is
recorded to an operating system log. If the operating system does not
enable Oracle to access its audit facility, Oracle will record the
information in a log in the same directory in which the background
processes record their activities.


10.5.1 Auditing During Database Startup


The first type of default
auditing occurs during database startup. An example of operating system
audit entries stored automatically for a Windows NT
system running Oracle version 8.0.4 is shown here:

Audit trail: ACTION : 'startup' OS_AUTHENT_PREFIX : OPS$. (3:55:41 a.m.)
Audit trail: ACTION : 'startup' AUDIT_TRAIL : none. (3:55:40 a.m.)
Audit trail: ACTION : 'connect INTERNAL' OSPRIV : OPER CLIENT USER: SYSTEM CLIENT TERMINAL: MLT-PC. (3:55:31 a.m.)

These three entries were found in the Event Viewer, reached via the
Start → Programs → Administrative Tools → Event Viewer menu option on a
Windows NT system. There are three event logs (System, Security, and
Application) into which anyone can insert an event. Oracle will log
events in both the System and the Application event logs. The time
notations in parentheses were added by us to show you more clearly
the sequence of events.


The first entry in the sequence above is actually the one listed last.
This entry shows the initial connection made to the database in order to
start it. Of special interest in the third entry is the notation of the
client terminal from which the database was started — MLT-PC — and the
system privilege used — OSPRIV. The three audit notations were present in
the Windows NT Administrative Tools Event Viewer, in the Application log,
after the database was started. As each of the individual detached
processes (PMON, SMON, DBWR, LGWR, CKPT, RECO) was started, an individual
entry was inserted in the event log. There was also an entry for the time
at which the SGA was initialized.


10.5.2 Auditing During Database Shutdown


The second form of default auditing that may occur
is at the time of database shutdown. Each time the database is shut down,
a record may be written to the audit trail indicating the operating system
username, the user's terminal identifier, and the date and timestamp when
the action occurred. The use of the words "may be" in the last sentence is
intentional. Depending on the operating system involved, if a privileged
user, like SYSDBA or SYSOPER, shuts the database down, the
event might not be registered in the System event log. On a Windows NT
version 4.0 system running Oracle8 version 8.0.4, if the database is shut
down using the command:

net stop OracleStart<db_name>

no record of the database shutdown is made either to the Windows NT
Application event log or to the database alert log.


If your operating system/database automatically
records the shutdown attempts performed by non-privileged users, you will
find this information very valuable if you are investigating why your
database unexpectedly shut down. The absence of an event entry for the
shutdown could help you eliminate the fear that your database had been
intentionally shut down by an outsider.


10.5.3 Auditing During Database Connection with Privileges


The third default action
recorded to the operating system audit trail occurs when a user connects
to the database with administrative privileges. The operating system user
information is recorded. This information is very valuable in helping you
detect whether someone has managed to acquire privileges they should not
have.


10.5.4 Auditing During Database Structure Modification


When a command is issued from
the database to modify the structure of the database, the command and its
outcome are captured to the alert log for that database. This is the
fourth default action. Some examples of commands that will be captured in
the alert log follow:

CREATE TABLESPACE <tablespace_name>
ALTER TABLESPACE <tablespace_name> ADD DATAFILE
ALTER TABLESPACE <tablespace_name> OFFLINE
DROP TABLESPACE <tablespace_name> INCLUDING CONTENTS
CREATE ROLLBACK SEGMENT <rollback_segment_name>

In all of these commands, the successful completion of the command will
in some way alter the structure of the database.

























34.2. Routing Table Initialization


Routing tables are initialized with fib_hash_init, defined in net/ipv4/fib_hash.c. It is called by ip_fib_init, which initializes the IP routing subsystem, to create the ip_fib_main_table and ip_fib_local_table tables (see the section "Routing Subsystem Initialization" in Chapter 32).


The first time fib_hash_init is called, it creates the memory pool fn_hash_kmem that will be used to allocate fib_node data structures.


fib_hash_init first allocates a fib_table data structure and then initializes its virtual functions to the routines shown in Table 34-1. The function also clears the content of the bottom part of the structure (fn_hash), which, as shown in Figure 34-1, is used to distribute the routing entries on different hash tables based on their netmask lengths.


Table 34-1. Initialization of the fib_table's virtual functions

Method               Routine used
tb_lookup            fn_hash_lookup
tb_insert            fn_hash_insert
tb_delete            fn_hash_delete
tb_flush             fn_hash_flush
tb_select_default    fn_hash_select_default
tb_dump              fn_hash_dump
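The pattern behind Table 34-1 is an ordinary C structure of function pointers. Here is a much-simplified sketch of that pattern; the pared-down types and the dummy routine bodies are invented for illustration, and the real definitions live in net/ipv4/fib_hash.c:

```c
#include <string.h>

/* Simplified sketch of how fib_hash_init wires a fib_table's
   virtual functions to the fn_hash_* routines. */
struct fib_table {
    int (*tb_lookup)(const char *key);
    int (*tb_insert)(const char *key);
    /* ... tb_delete, tb_flush, tb_select_default, tb_dump ... */
};

/* Dummy stand-ins for the real fn_hash_* routines. */
static int fn_hash_lookup(const char *key) { return key ? 1 : 0; }
static int fn_hash_insert(const char *key) { return key ? 0 : -1; }

struct fib_table *fib_hash_init_sketch(struct fib_table *tb)
{
    memset(tb, 0, sizeof(*tb));      /* clear, as the real code clears
                                        the fn_hash part               */
    tb->tb_lookup = fn_hash_lookup;  /* the Table 34-1 assignments     */
    tb->tb_insert = fn_hash_insert;
    return tb;
}
```

Callers then invoke operations only through the pointers (tb->tb_lookup(...)), which is what lets several table implementations share one interface.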



































5.9 Combined Observer-Controller


The next design step is to combine the observer and controller to form the compensator of Figure 5.1. Equation 5.5 shows the complete computation of the state estimate and the plant actuator commands u using the reference input r and sensor measurements y as inputs to the algorithm.








(5.5) 
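In the standard pole-placement formulation, an observer-controller of the kind described here typically computes the following (a sketch, not the book's exact rendering: L is the observer gain matrix, K the state-feedback gain, and N the feedforward gain; the book's notation may differ):

```latex
\dot{\hat{x}} = A\hat{x} + Bu + L\,(y - C\hat{x} - Du), \qquad
u = Nr - K\hat{x}
```

The first equation propagates the state estimate and corrects it with the measurement residual; the second turns the estimate and the reference input into the actuator command.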


Note that in the configuration of Figure 5.1, the plant and the observer both receive the actuator commands directly. As a result, assuming the linear plant model ideally represents the real plant, the observer tracks the plant response to changes in the reference input r with zero error.


During controller startup, the state estimate vector should be initialized as accurately as possible. This will minimize transient estimator errors during the initial period of system operation.



Equation 5.5 requires the plant input vector u and the measured plant outputs y as inputs. In MATLAB state-space models, only the outputs (y vector elements) of a state-space component are available as inputs to a subsequent component. The input vector elements are not available to use as outputs, which creates a small problem.



The solution is to form an augmented plant output vector by appending the plant inputs to the output vector. This requires modification of the C and D matrices of the plant model. Equation 5.6 shows the format of the modified matrices.








(5.6) 
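Consistent with the dimensions given in the Note that follows (the 0 block is r × n and the I block is r × r), a plausible form of the augmented output equation is:

```latex
y_{aug} = \begin{bmatrix} y \\ u \end{bmatrix}
        = \begin{bmatrix} C \\ 0 \end{bmatrix} x
        + \begin{bmatrix} D \\ I \end{bmatrix} u
```

That is, the plant inputs u are passed straight through as extra outputs, so a downstream state-space component can consume them.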


With the result of Eq. 5.6, the closed-loop system equations consisting of the plant model and the observer-controller are written as follows.








(5.7) 






Note 

In Eqs. 5.6 and 5.7, 0 represents a zero matrix and I is an identity matrix. For a system with r inputs and n states, the 0 matrix in these equations has r rows and n columns and the I matrix has r rows and r columns.




Equation 5.7 is the mathematical formulation of the closed-loop system shown in Figure 5.1. The top two lines of Eq. 5.7 represent the behavior of the plant. The third line implements the observer, and the fourth line computes the actuator commands. These last two lines form the observer-controller.


The companion CD-ROM contains the MATLAB function ss_design(), which develops an observer-controller and feedforward gain for a SISO plant using the pole placement method. The inputs to this command are the plant model and constraints for the locations of the closed-loop system and the observer poles. The form of ss_design() is shown here.




>> [n, ssobsctrl, sscl] = ss_design(ssplant, t_settle, ...
                                    damp_ratio, obs_pole_mult)


The input arguments are ssplant, the state-space plant model; t_settle, the closed-loop settling time requirement in seconds; damp_ratio, the damping ratio requirement; and obs_pole_mult, the multiplier used to determine the observer pole locations given the closed-loop pole locations. The select_poles() function is used internally to determine the closed-loop pole locations from the given specifications.


The outputs of ss_design() are the scalar feedforward gain n, the state-space observer-controller ssobsctrl, and a state-space model of the closed-loop system sscl. The ssobsctrl system requires the augmented plant output vector as its input and produces the term - as its output. The actuator command must be computed as shown in the last line of Eq. 5.7.



You can use plot_poles() (included on the CD-ROM) to display the pole locations of the closed-loop system sscl and verify that they satisfy the specifications.




>> plot_poles(sscl, t_settle, damp_ratio)





































Summary



The main question that organization executives and project managers ask is, "But, does it really work?" Documented case studies of software process improvement indicate that significant improvements in both quality and productivity are a result of the improvement effort.[7] When the organizational investment is made, the return on investment is typically between 5:1 and 8:1 for successful process improvement efforts.[8] Continuous process improvement begins with an awareness of the process maturity within the project development organization. Although continuous improvement could be done by a single project, the benefit is not realized until the next project. Continuous improvement is an organization-wide initiative. It must be supported by the entire organization and must extend over all projects. For an organization like AEC, in which the average billable rate is $165 per hour, a 5:1 improvement would be a net $825 for each process improvement hour invested.



Success as a software project manager is judged by delivering quality products on time and with the resources budgeted. Quality is determined by the customer and comes from improving the product development process. Continuous process improvement is the mechanism that an organization uses to ensure that products are less costly, more capable of meeting customer requirements, and more reliable. This mechanism also reduces cost and eliminates waste within the existing development processes, thus allowing project execution on time and within resources.



This chapter looked at continuous process improvement as a process in and of itself. It is not a collection of statistical tools. It is a process of analyzing the current process and picking the highest-payback improvement targets to add quality and eliminate waste.
























    20.4 dg_cli Function Using Broadcasting


    We modify our dg_cli function one more time, this time allowing it to broadcast to the standard UDP daytime server (Figure 2.18) and printing all replies. The only change we make to the main function (Figure 8.7) is to change the destination port number to 13.





    servaddr.sin_port = htons(13);


    We first compile this modified main function with the unmodified dg_cli function from Figure 8.8 and run it on the host freebsd.





    freebsd % udpcli01 192.168.42.255
    hi
    sendto error: Permission denied


    The command-line argument is the subnet-directed broadcast address for the secondary Ethernet. We type a line of input, the program calls sendto, and the error EACCES is returned. The reason we receive the error is that we are not allowed to send a datagram to a broadcast destination address unless we explicitly tell the kernel that we will be broadcasting. We do this by setting the SO_BROADCAST socket option (Section 7.5).


    Berkeley-derived implementations implement this sanity check. Solaris 2.5, on the other hand, accepts the datagram destined for the broadcast address even if we do not specify the socket option. The POSIX specification requires the SO_BROADCAST socket option to be set to send a broadcast packet.

    Broadcasting was a privileged operation with 4.2BSD and the SO_BROADCAST socket option did not exist. This option was added to 4.3BSD and any process was allowed to set the option.


    We now modify our dg_cli function as shown in Figure 20.5. This version sets the SO_BROADCAST socket option and prints all the replies received within five seconds.



    Allocate room for server's address, set socket option


    11–13 malloc allocates room for the server's address to be returned by recvfrom. The SO_BROADCAST socket option is set and a signal handler is installed for SIGALRM.




    Read line, send to socket, read all replies


    14–24 The next two steps, fgets and sendto, are similar to previous versions of this function. But since we are sending a broadcast datagram, we can receive multiple replies. We call recvfrom in a loop and print all the replies received within five seconds. After five seconds, SIGALRM is generated, our signal handler is called, and recvfrom returns the error EINTR.




    Print each received reply


    25–29 For each reply received, we call sock_ntop_host, which in the case of IPv4 returns a string containing the dotted-decimal IP address of the server. This is printed along with the server's reply.


    If we run the program specifying the subnet-directed broadcast address of 192.168.42.255, we see the following:





    freebsd % udpcli01 192.168.42.255
    hi
    from 192.168.42.2: Sat Aug 2 16:42:45 2003
    from 192.168.42.1: Sat Aug 2 14:42:45 2003
    from 192.168.42.3: Sat Aug 2 14:42:45 2003
    hello
    from 192.168.42.3: Sat Aug 2 14:42:57 2003
    from 192.168.42.2: Sat Aug 2 16:42:57 2003
    from 192.168.42.1: Sat Aug 2 14:42:57 2003


    Each time we must type a line of input to generate the output UDP datagram. Each time we receive three replies, and this includes the sending host. As we said earlier, the destination of a broadcast datagram is all the hosts on the attached network, including the sender. Each reply is unicast because the source address of the request, which is used by each server as the destination address of the reply, is a unicast address.


    All the systems report the same time because all run NTP.



    Figure 20.5 dg_cli function that broadcasts.

    bcast/dgclibcast1.c




     1 #include    "unp.h"

     2 static void recvfrom_alarm(int);

     3 void
     4 dg_cli(FILE *fp, int sockfd, const SA *pservaddr, socklen_t servlen)
     5 {
     6     int     n;
     7     const int on = 1;
     8     char    sendline[MAXLINE], recvline[MAXLINE + 1];
     9     socklen_t len;
    10     struct sockaddr *preply_addr;

    11     preply_addr = Malloc(servlen);

    12     Setsockopt(sockfd, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));

    13     Signal(SIGALRM, recvfrom_alarm);

    14     while (Fgets(sendline, MAXLINE, fp) != NULL) {

    15         Sendto(sockfd, sendline, strlen(sendline), 0, pservaddr, servlen);

    16         alarm(5);
    17         for ( ; ; ) {
    18             len = servlen;
    19             n = recvfrom(sockfd, recvline, MAXLINE, 0, preply_addr, &len);
    20             if (n < 0) {
    21                 if (errno == EINTR)
    22                     break;          /* waited long enough for replies */
    23                 else
    24                     err_sys("recvfrom error");
    25             } else {
    26                 recvline[n] = 0;    /* null terminate */
    27                 printf("from %s: %s",
    28                        Sock_ntop_host(preply_addr, len), recvline);
    29             }
    30         }
    31     }
    32     free(preply_addr);
    33 }

    34 static void
    35 recvfrom_alarm(int signo)
    36 {
    37     return;     /* just interrupt the recvfrom() */
    38 }




    IP Fragmentation and Broadcasts


    Berkeley-derived kernels do not allow a broadcast datagram to be fragmented. If the size of an IP datagram that is being sent to a broadcast address exceeds the outgoing interface MTU, EMSGSIZE is returned (pp. 233–234 of TCPv2). This is a policy decision that has existed since 4.2BSD. There is nothing that prevents a kernel from fragmenting a broadcast datagram, but the feeling is that broadcasting puts enough load on the network as it is, so there is no need to multiply this load by the number of fragments.


    We can see this scenario with our program in Figure 20.5. We redirect standard input from a file containing a 2,000-byte line, which will require fragmentation on an Ethernet.





    freebsd % udpcli01 192.168.42.255 < 2000line
    sendto error: Message too long


    AIX, FreeBSD, and MacOS implement this limitation. Linux, Solaris, and HP-UX fragment datagrams sent to a broadcast address. For portability, however, an application that needs to broadcast should determine the MTU of the outgoing interface using the SIOCGIFMTU ioctl, and then subtract the IP and transport header lengths to determine the maximum payload size. Alternatively, it can pick a common MTU, such as Ethernet's 1,500 bytes, and use it as a constant.








      19.5 Measurement and the Future


      Measurement is becoming more important and gaining acceptance in software development. In this modern-day quality era, customers demand complex software solutions of high quality. To ensure effective development, software development organizations must gain control over the entire development process. Measurement is the key to achieving such control and to making software development a true engineering discipline. Without effective use of measurements, progress in the tasks of planning and controlling software development will remain slow and will not be systematic.


      Various software engineering techniques have emerged in the past decades: CASE tools, formal methods, software fault tolerance, object technology, new development processes, and the like. Software developers are faced with an enormous choice of methods, tools, and standards to improve productivity and quality. There is relatively little quantitative data and objective evaluation of the various methods in software engineering, however, so there is an urgent need for proper measurements to quantify the benefits and costs of these competing technologies. Such evaluations will help the software engineering discipline grow and mature: progress will be made by adopting innovations that work well, and discarding or improving those that do not. Likewise, proposed process improvement practices must be tested and then substantiated or refuted via empirical studies. Software project assessments and process assessments ought to gather quantitative data on quality and productivity parameters and evaluate the link between process practices and measurable improvements.


      The "state of the art" in measurements needs to be continually refined and improved, including all kinds of metrics and models that are discussed: software reliability models, quality management models and metrics, complexity metrics and models, and customer-oriented metrics and measurement. Good measurements must be based on sound theoretical underpinnings and empirical validity. Empirical validation is the key for natural selection and for these measurements to improve and mature. It may be the common ground for the different types of metrics and models that are developed by different groups of professionals.


      To make their metrics program successful, development organizations ought to place strong focus on the data tracking system, the data quality, and the training and experience of the personnel involved. The quality of measurement practice plays a pivotal role in determining whether software measurement will become engrained in the state of practice in software engineering.


      There are certainly encouraging signs on all these fronts.








        The include Directive


        Many programmers who come from the C and C++ worlds are disappointed by the lack of an include mechanism in Java. The include directive in JSP performs the same service that the C #include preprocessor directive does: it includes a file at compile time, as opposed to runtime.


        The nice thing about including a file at compile time is that it requires less overhead than a file included at runtime. The included file doesn't need to be a servlet or JSP, either. When the JSP compiler sees an include directive, it reads the included file as if it were part of the JSP that's being compiled.


        You might have a standard HTML header that you want to put on all your files. For example, to include a file named header.html, your include directive would look like this:





        <%@ include file="header.html" %>

        The filename in the include directive is actually a relative URL. If you just specify a filename with no associated path, the JSP compiler assumes that the file is in the same directory as your JSP, as in the example with header.html.
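        For example (the paths here are illustrative), a filename that begins with a slash is resolved relative to the web application's root rather than relative to the including JSP's own directory:

```
<%@ include file="header.html" %>           <%-- same directory as this JSP --%>
<%@ include file="includes/header.html" %>  <%-- subdirectory of this JSP's directory --%>
<%@ include file="/common/header.html" %>   <%-- relative to the web application root --%>
```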






          12.1. Collections Overview


          We will start with a description of the different types of collections and a number of examples to get you started.



          12.1.1. Types of Collections


          Oracle supports three different types of collections. While these different types have much in common, they also each have their own particular characteristics. Many of the terms mentioned in the definitions below are further explained in the "Collections Concepts and Terminology" section, immediately following.



          Associative arrays


          These are single-dimensional, unbounded, sparse collections of homogeneous elements that are available only in PL/SQL. They were called PL/SQL tables in PL/SQL 2 and index-by tables in Oracle8 Database and Oracle8i Database (because when you declare such a collection, you explicitly state that it is "indexed by" the row number). In Oracle9i Database Release 1, the name was changed to associative arrays. The motivation for the name change was that starting with that release, the INDEX BY syntax could be used to "associate" or index contents by VARCHAR2 or PLS_INTEGER.


          Nested tables


          These are also single-dimensional, unbounded collections of homogeneous elements. They are initially dense but can become sparse through deletions. Nested tables can be defined in both PL/SQL and the database (for example, as a column in a table). Nested tables are multisets, which means that there is no inherent order to the elements in a nested table.


          VARRAYs




          Like the other two collection types, VARRAYs (variable-sized arrays) are also single-dimensional collections of homogeneous elements. However, they are always bounded and never sparse. When you define a type of VARRAY, you must also specify the maximum number of elements it can contain. Like nested tables, they can be used in PL/SQL and in the database. Unlike nested tables, when you store and retrieve a VARRAY, its element order is preserved.




          12.1.2. Collections Concepts and Terminology













          The following explanations will help you understand collections and more rapidly establish a comfort level with these data structures.



          Collection type


          Each collection variable in your program must be declared based on a pre-defined collection type. As I mentioned earlier, there are, very generally, three types of collections: associative arrays, nested tables, and VARRAYs. Within those generic types, there are specific types that you define with a TYPE statement in a block's declaration section. You can then declare and use instances of those types in your programs.


          Collection or collection instance


          The term "collection" may refer to any of the following:


          • A PL/SQL variable of type associative array, nested table, or VARRAY

          • A table column of type nested table or VARRAY


          Regardless of the particular type or usage, however, a collection is at its core a single-dimensional list of homogeneous elements.


          A collection instance is an instance of a particular type of collection.


          Partly due to the syntax and names Oracle has chosen to support collections, you will also find them referred to as arrays and tables.


          Homogeneous elements


          The datatype of each row in a collection is the same; thus, its elements are homogeneous. This datatype is defined by the type of collection used to declare the collection itself. This datatype can, however, be a composite or complex datatype itself; you can declare a table of records, for example. And starting in Oracle9i Database Release 1, you can even define multilevel collections, in which the datatype of one collection is itself a collection type, or a record or object whose attribute contains a collection.


          One-dimensional or single-dimensional


          A PL/SQL collection always has just a single column of information in each row, and is in this way similar to a one-dimensional array. You cannot define a collection so that it can be referenced as follows:



          my_collection (10, 44)



          This is a two-dimensional structure, which is not currently supported with that traditional syntax. Instead, you can create multidimensional arrays by declaring collections of collections, in which case the syntax you use will be something like this:



          my_collection (44) (10)



          Unbounded versus bounded


          A collection is said to be bounded if there are predetermined limits to the possible values for row numbers in that collection. It is unbounded if there are no upper or lower limits on those row numbers. VARRAYs or variable-sized arrays are always bounded; when you define them, you specify the maximum number of rows allowed in that collection (the first row number is always 1). Nested tables and associative arrays are only theoretically bounded. We describe them as unbounded, because from a theoretical standpoint, there is no limit to the number of rows you can define in them.


          Sparse versus dense


          A collection (or array or list) is called dense if all rows between the first and last row are defined and given a value (including NULL). A collection is sparse if rows are not defined and populated sequentially; instead, there are gaps between defined rows, as demonstrated in the associative array example in the next section. VARRAYs are always dense. Nested tables always start as dense collections but can be made sparse. Associative arrays can be sparse or dense, depending on how you fill the collection.


          Sparseness is a very valuable feature, as it gives you the flexibility to populate rows in a collection using a primary key or other intelligent key data as the row number. By doing so, you can define an order on the data in a collection or greatly enhance the performance of lookups.


          Indexed by integers


          All collections support the ability to reference a row via the row number, an integer value. The associative array TYPE declaration makes that explicit with its INDEX BY clause, but the same rule holds true for the other collection types.


          Indexed by strings


          Starting with Oracle9i Database Release 2, it is possible to index an associative array by string values (currently up to 32K in length) instead of by numeric row numbers. This feature is not available for nested tables or VARRAYs.


          Outer table


          This refers to the enclosing table in which you have used a nested table or VARRAY as a column's datatype.


          Inner table


          This is the enclosed collection that is implemented as a column in a table; it is also known as a nested table column.


          Store table


          This is the physical table that Oracle creates to hold values of the inner table (a nested table column).




          12.1.3. Collection Examples










          This section provides relatively simple examples of each different type of collection, with explanations of the major characteristics.



          12.1.3.1 Using an associative array

          In the following example, I declare an associative array type and then a collection based on that type. I populate it with four rows of data and then iterate through the collection, displaying the strings in the collection. A more thorough explanation appears after the code.



           1  DECLARE
           2     TYPE list_of_names_t IS TABLE OF person.first_name%TYPE
           3        INDEX BY PLS_INTEGER;
           4     happyfamily   list_of_names_t;
           5     l_row         PLS_INTEGER;
           6  BEGIN
           7     happyfamily (2020202020) := 'Eli';
           8     happyfamily (-15070) := 'Steven';
           9     happyfamily (-90900) := 'Chris';
          10     happyfamily (88) := 'Veva';
          11
          12     l_row := happyfamily.FIRST;
          13
          14     WHILE (l_row IS NOT NULL)
          15     LOOP
          16        DBMS_OUTPUT.put_line (happyfamily (l_row));
          17        l_row := happyfamily.NEXT (l_row);
          18     END LOOP;
          19* END;
          SQL> /
          Chris
          Steven
          Veva
          Eli



          Line(s)

          Description

          2-3

          Declare the associative array TYPE, with its distinctive INDEX BY clause. A collection based on this type contains a list of strings, each of which can be as long as the first_name column in the person table.

          4

          Declare the happyfamily collection from the list_of_names_t type.

          7-10

          Populate the collection with four names. Notice that I can use virtually any integer value that I like. The row numbers don't have to be sequential in an associative array; they can even be negative!

          12

          Call the FIRST method (a function that is "attached" to the collection) to get the first or lowest defined row number in the collection.

          14-18

          Use a WHILE loop to iterate through the contents of the collection, displaying each row. Line 17 shows the NEXT method, which is used to move from the current defined row to the next defined row, "skipping over" any gaps.





          12.1.3.2 Using a nested table


          In the following example, I first declare a nested table type as a schema-level type. In my PL/SQL block, I declare three nested tables based on that type. I put the names of everyone in my family into the happyfamily nested table. I put the names of my children in the children nested table. I then use the Oracle Database 10g set operator, MULTISET EXCEPT, to extract just the parents from the happyfamily nested table; finally, I display the names of the parents. A more thorough explanation appears after the code.



          REM Section A
          SQL> CREATE TYPE list_of_names_t IS TABLE OF VARCHAR2 (100);
            2  /
          Type created.
           
          REM Section B
          SQL>
           1  DECLARE
           2     happyfamily   list_of_names_t := list_of_names_t ( );
           3     children      list_of_names_t := list_of_names_t ( );
           4     parents       list_of_names_t := list_of_names_t ( );
           5  BEGIN
           6     happyfamily.EXTEND (4);
           7     happyfamily (1) := 'Eli';
           8     happyfamily (2) := 'Steven';
           9     happyfamily (3) := 'Chris';
          10     happyfamily (4) := 'Veva';
          11
          12     children.EXTEND;
          13     children (1) := 'Chris';
          14     children.EXTEND;
          15     children (2) := 'Eli';
          16
          17     parents := happyfamily MULTISET EXCEPT children;
          18
          19     FOR l_row IN parents.FIRST .. parents.LAST
          20     LOOP
          21        DBMS_OUTPUT.put_line (parents (l_row));
          22     END LOOP;
          23* END;
          SQL> /
          Steven
          Veva



          Line(s)

          Description

          Section A

          The CREATE TYPE statement creates a nested table type in the database itself. By taking this approach, I can declare nested tables in any PL/SQL block that has EXECUTE authority on the type. I can also declare columns in relational tables of this type.

          2-4

          Declare three different nested tables based on the schema-level type. Notice that in each case I also call a constructor function to initialize the nested table. This function always has the same name as the type and is created for us by Oracle. You must initialize a nested table before it can be used.

          6

          Call the EXTEND method to "make room" in my nested table for the members of my family. Here, in contrast to associative arrays, I must explicitly ask for a row in a nested table before I can place a value in that row.

          7-10

          Populate the happyfamily collection with our names.

          12-15

          Populate the children collection. In this case, I extend a single row at a time.

          17

          To obtain the parents in this family, I simply take the children out of the happyfamily. This is transparently easy to do in releases from Oracle Database 10g onwards, where we have high-level set operators like MULTISET EXCEPT (very similar to the SQL MINUS).

          19-22

          Because I know that my parents collection is densely filled from the MULTISET EXCEPT operation, I can use the numeric FOR loop to iterate through the contents of the collection. This construct will raise a NO_DATA_FOUND exception if used with a sparse collection.





          12.1.3.3 Using a VARRAY



          In the following example, I demonstrate the use of VARRAYs as columns in a relational table. First, I declare two different schema-level VARRAY types. I then create a relational table, family, that has two VARRAY columns. Finally, in my PL/SQL code, I populate two local collections and then use them in an INSERT into the family table. A more thorough explanation appears after the code.



          REM Section A
          SQL> CREATE TYPE first_names_t IS VARRAY (2) OF VARCHAR2 (100);
            2  /
          Type created.
           
          SQL> CREATE TYPE child_names_t IS VARRAY (1) OF VARCHAR2 (100);
            2  /
          Type created.
           
          REM Section B
          SQL> CREATE TABLE family (
            2     surname          VARCHAR2(1000)
            3   , parent_names    first_names_t
            4   , children_names  child_names_t
            5  );
           
          Table created.
           
          REM Section C
          SQL>
           1  DECLARE
           2     parents    first_names_t := first_names_t ( );
           3     children   child_names_t := child_names_t ( );
           4  BEGIN
           5     parents.EXTEND (2);
           6     parents (1) := 'Samuel';
           7     parents (2) := 'Charina';
           8     --
           9     children.EXTEND;
          10     children (1) := 'Feather';
          11
          12     --
          13     INSERT INTO family
          14        (surname, parent_names, children_names
          15        )
          16     VALUES ('Assurty', parents, children
          17        );
          18  END;
          SQL> /
           
          PL/SQL procedure successfully completed.
           
          SQL> SELECT * FROM family
            2  /
           
          SURNAME
          PARENT_NAMES
          CHILDREN_NAMES
          --------------------------------------------
          Assurty
          FIRST_NAMES_T('Samuel', 'Charina')
          CHILD_NAMES_T('Feather')



          Line(s)

          Description

          Section A

          Use CREATE TYPE statements to declare two different VARRAY types. Notice that with a VARRAY, I must specify the maximum length of the collection. Thus, my declarations in essence dictate a form of social policy: you can have at most two parents and at most one child.

          Section B

          Create a relational table, with three columns: a VARCHAR2 column for the surname of the family and two VARRAY columns, one for the parents and another for the children.

          Section C, lines 2-3

          Declare two local VARRAYs based on the schema-level type. As with nested tables (and unlike with associative arrays), I must call the constructor function of the same name as the TYPE to initialize the structures.

          5-10

          Extend and populate the collections with the names of parents and then the single child. If I try to extend to a second row, Oracle will raise the ORA-06532: Subscript outside of limit error.

          13-17

          Insert a row into the family table, simply providing the VARRAYs in the list of values for the table. Oracle certainly makes it easy for us to insert collections into a relational table!






          12.1.4. Where You Can Use Collections





          The following sections describe the different places in your code where a collection can be declared and used. Because a collection type can be defined in the database itself (nested tables and VARRAYs only), you can find collections not only in PL/SQL programs but also inside tables and object types.



          12.1.4.1 Collections as components of a record

          Using a collection type in a record is similar to using any other type. You can use associative arrays, nested tables, VARRAYs, or any combination thereof in RECORD datatypes. For example:



          DECLARE
          TYPE toy_rec_t IS RECORD (
          manufacturer INTEGER,
          shipping_weight_kg NUMBER,
          domestic_colors Color_array_t,
          international_colors Color_tab_t
          );





          12.1.4.2 Collections as program parameters

          Collections can also serve as parameters in functions and procedures. The format for the parameter declaration is the same as with any other:



          parameter_name [ IN | IN OUT | OUT ] parameter_type
          [ DEFAULT | := <default_value> ]



          PL/SQL does not offer any predefined collection types. This means that before you can pass a collection as an argument, you must have already defined the collection type that will serve as the parameter type. You can do this by:


          • Defining a schema-level type with CREATE TYPE

          • Declaring the collection type in a package specification

          • Declaring that type in an outer scope from the definition of the module


          Here is an example of using a schema-level type:



          CREATE TYPE yes_no_t IS TABLE OF CHAR(1);
          /
          CREATE OR REPLACE PROCEDURE act_on_flags (flags_in IN yes_no_t)
          IS
          BEGIN
          ...
          END act_on_flags;
          /



          Here is an example of using a collection type defined in a package specification. There is only one way to declare an associative array of Booleans (or of any other base datatype), so why not define it once in a package specification and reference it throughout my application?



          /* File on web: aa_types.pks */
          CREATE OR REPLACE PACKAGE aa_types
          IS
          TYPE boolean_aat IS TABLE OF BOOLEAN
          INDEX BY BINARY_INTEGER;
          ...
          END aa_types;
          /



          Notice that when I reference the collection type in my parameter list, I must qualify it with the package name:



          CREATE OR REPLACE PROCEDURE act_on_flags (
          flags_in IN aa_types.boolean_aat)
          IS
          BEGIN
          ...
          END act_on_flags;
          /



          Finally, here is an example of declaring a collection type in an outer block and then using it in an inner block:



          DECLARE
          TYPE birthdates_aat IS VARRAY (10) OF DATE;
          l_dates birthdates_aat := birthdates_aat ( );
          BEGIN
          l_dates.EXTEND (1);
          l_dates (1) := SYSDATE;
           
          DECLARE
          FUNCTION earliest_birthdate (list_in IN birthdates_aat)
          RETURN DATE
          IS
          BEGIN
          ...
          END earliest_birthdate;
          BEGIN
          DBMS_OUTPUT.put_line (earliest_birthdate (l_dates));
          END;
          END;





          12.1.4.3 Collections as datatypes of a function's return value


          In the next example, we have defined Color_tab_t as the type of a function return value, and also used it as the datatype of a local variable. The same restriction about scope applies to this usage: types must be declared outside the module's scope.



          CREATE FUNCTION true_colors (whose_id IN NUMBER) RETURN Color_tab_t
          AS
          l_colors Color_tab_t;
          BEGIN
          SELECT favorite_colors INTO l_colors
          FROM personality_inventory
          WHERE person_id = whose_id;
          RETURN l_colors;
          EXCEPTION
          WHEN NO_DATA_FOUND
          THEN
          RETURN NULL;
          END;



          How would you use this function in a PL/SQL program? Because it acts in the place of a variable of type Color_tab_t, you can do one of two things with the returned data:


          1. Assign the entire result to a collection variable.

          2. Assign a single element of the result to a variable (as long as the variable is of a type compatible with the collection's elements).


          Option #1 is easy. Notice, by the way, that this is another circumstance where you don't have to initialize the collection variable explicitly:



          DECLARE
          color_array Color_tab_t;
          BEGIN
          color_array := true_colors (8041);
          END;



          With Option #2, we actually give the function call a subscript. The general form is:



          variable_of_element_type := function() (subscript);



          Or, in the case of the true_colors function:



          DECLARE
          one_of_my_favorite_colors VARCHAR2(30);
          BEGIN
          one_of_my_favorite_colors := true_colors (whose_id=>8041) (1);
          END;



          Note that this code has a small problem: if there is no record in the database table where person_id is 8041, the attempt to read its first element will raise a COLLECTION_IS_NULL exception. We should trap and deal with this exception in a way that makes sense to the application.


          In the previous example, I've used named parameter notation (whose_id=>) for readability, although it is not strictly required. (See Chapter 17 for more details.)




          12.1.4.4 Collection as "columns" in a database table


          Using a nested table or VARRAY, you can store and retrieve nonatomic data in a single column of a table. For example, the employee table used by the HR department could store the date of birth for each employee's dependents in a single column, as shown in Table 12-1.


          Table 12-1. Storing a column of dependents as a collection in a table of employees

          Id (NUMBER)   Name (VARCHAR2)        Dependents_ages (Dependent_birthdate_t)
          10010         Zaphod Beeblebrox      12-JAN-1763
                                               4-JUL-1977
                                               22-MAR-2021
          10020         Molly Squiggly         15-NOV-1968
                                               15-NOV-1968
          10030         Joseph Josephs
          10040         Cepheus Usrbin         27-JUN-1995
                                               9-AUG-1996
                                               19-JUN-1997
          10050         Deirdre Quattlebaum    21-SEP-1997



          It's not terribly difficult to create such a table. First we define the collection type:



          CREATE TYPE Dependent_birthdate_t AS VARRAY(10) OF DATE;



          Now we can use it in the table definition:



          CREATE TABLE employees (
          id NUMBER,
          name VARCHAR2(50),
          ...other columns...,
          Dependents_ages Dependent_birthdate_t
          );



          We can populate this table using the following INSERT syntax, which relies on the type's default constructor to transform a list of dates into values of the proper datatype:



          INSERT INTO employees VALUES (42, 'Zaphod Beeblebrox', ...,
             Dependent_birthdate_t( '12-JAN-1763', '4-JUL-1977', '22-MAR-2021'));



          Now let's look at an example of a nested table datatype as a column. When we create the outer table personality_inventory, we must tell Oracle what we want to call the "store table."



          CREATE TABLE personality_inventory (
          person_id NUMBER,
          favorite_colors Color_tab_t,
          date_tested DATE,
          test_results BLOB)
          NESTED TABLE favorite_colors STORE AS favorite_colors_st;



          The NESTED TABLE ... STORE AS clause tells Oracle that we want the store table for the favorite_colors column to be called favorite_colors_st. The store table is located "out of line" (separate from the rest of that row's data) to accommodate growth, and there is no preset limit on how large it can grow.


          You cannot directly manipulate data in the store table, and any attempt to retrieve or store data directly into favorite_colors_st will generate an error. The only path by which you can read or write the store table's attributes is via the outer table. (See the discussion of collection pseudo-functions in the later section, "Working with Collections in SQL," for a few examples of doing so.) You cannot even specify storage parameters for the store table; it inherits the physical attributes of its outermost table.


          One chief difference between nested tables and VARRAYs surfaces when we use them as column datatypes. Although using a VARRAY as a column's datatype can achieve much the same result as a nested table, VARRAY data must be predeclared to be of a maximum size, and is actually stored "inline" with the rest of the table's data. For this reason, Oracle Corporation says that VARRAY columns are intended for "small" arrays, and that nested tables are appropriate for "large" arrays.




          12.1.4.5 Collections as attributes of an object type



          In this example, we are modeling automobile specifications. Each Auto_spec_t object will include a list of manufacturer's colors in which you can purchase the vehicle.



          CREATE TYPE Auto_spec_t AS OBJECT (
          make VARCHAR2(30),
          model VARCHAR2(30),
          available_colors Color_tab_t
          );



          Because there is no data storage required for the object type, it is not necessary to designate a name for the companion table at the time we issue the CREATE TYPE ... AS OBJECT statement.


          When the time comes to implement the type as, say, an object table, you could do this:



          CREATE TABLE auto_specs OF Auto_spec_t
          NESTED TABLE available_colors STORE AS available_colors_st;



          This statement requires a bit of explanation. When you create a "table of objects," Oracle looks at the object type definition to determine what columns you want. When it discovers that one of the object type's attributes, available_colors, is in fact a nested table, Oracle treats this table as it did in earlier examples; in other words, it wants to know what to name the store table. So the phrase:



          ...NESTED TABLE available_colors STORE AS available_colors_st



          says that you want the available_colors column to have a store table named available_colors_st.


          See Chapter 25 for more information about Oracle object types.





          12.1.5. Choosing a Collection Type



          Which collection type makes sense for your application? In some cases, the choice is obvious. In others, there may be several acceptable choices. This section provides some guidance. Table 12-2 illustrates many of the differences between associative arrays, nested tables, and VARRAYs.


          As a PL/SQL developer, I find myself leaning toward using associative arrays as a first instinct. Why is this? They involve the least amount of coding. You don't have to initialize or extend them. They have historically been the most efficient collection type (although this distinction will probably fade over time). However, if you want to store your collection within a database table, you cannot use an associative array. The question then becomes: nested table or VARRAY?


          The following guidelines will help you make your choice; we recommend, however, that you read the rest of the chapter first if you are not very familiar with collections already.


          • If you need sparse collections (for example, for "data-smart" storage), your only practical option is an associative array. True, you could allocate and then delete elements of a nested table variable (as illustrated in the later section on NEXT and PRIOR methods), but it is inefficient to do so for anything but the smallest collections.

          • If your PL/SQL application requires negative subscripts, you also have to use associative arrays.

          • If you are running Oracle Database 10g and would find it useful to perform high-level set operations on your collections, choose nested tables over associative arrays.

          • If you want to enforce a limit to the number of rows stored in a collection, use VARRAYs.

          • If you intend to store large amounts of persistent data in a column collection, your only option is a nested table. Oracle will then use a separate table behind the scenes to hold the collection data, so you can allow for almost limitless growth.

          • If you want to preserve the order of elements stored in the collection column and if your dataset will be small, use a VARRAY. What is "small"? I tend to think in terms of how much data you can fit into a single database block; if you span blocks, you get row chaining, which decreases performance. The database block size is established at database creation time and is typically 2K, 4K, or 8K.

          • Here are some other indications that a VARRAY would be appropriate: you don't want to worry about deletions occurring in the middle of the data set; your data has an intrinsic upper bound; or you expect, in general, to retrieve the entire collection simultaneously.
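          To make these tradeoffs concrete, here is a minimal sketch declaring one collection of each kind (all type and variable names are hypothetical):

          DECLARE
             -- Associative array: ready to use as declared; sparse; any subscript
             TYPE aa_t IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
             aa aa_t;

             -- Nested table: must be initialized and EXTENDed; no declared maximum
             TYPE nt_t IS TABLE OF NUMBER;
             nt nt_t := nt_t();

             -- VARRAY: must be initialized; can never grow past five elements
             TYPE va_t IS VARRAY(5) OF NUMBER;
             va va_t := va_t();
          BEGIN
             aa(-100) := 1;            -- negative subscripts allowed only here
             nt.EXTEND;  nt(1) := 1;   -- grows on demand, without limit
             va.EXTEND;  va(1) := 1;   -- grows on demand, but only up to 5
          END;

          Note that only the nested table and VARRAY could later be used as column datatypes; the associative array exists purely within PL/SQL.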


          Table 12-2. Comparing Oracle collection types

          Dimensionality
             Associative array: Single
             Nested table:      Single
             VARRAY:            Single

          Usable in SQL?
             Associative array: No
             Nested table:      Yes
             VARRAY:            Yes

          Usable as column datatype in a table?
             Associative array: No
             Nested table:      Yes; data stored "out of line" (in separate table)
             VARRAY:            Yes; data stored "in line" (in same table)

          Uninitialized state
             Associative array: Empty (cannot be null); elements undefined
             Nested table:      Atomically null; illegal to reference elements
             VARRAY:            Atomically null; illegal to reference elements

          Initialization
             Associative array: Automatic, when declared
             Nested table:      Via constructor, fetch, assignment
             VARRAY:            Via constructor, fetch, assignment

          In PL/SQL, elements referenced via
             Associative array: BINARY_INTEGER (-2,147,483,647 .. 2,147,483,647) or VARCHAR2 (Oracle9i Database Release 2 and above)
             Nested table:      Positive integer between 1 and 2,147,483,647
             VARRAY:            Positive integer between 1 and 2,147,483,647

          Sparse?
             Associative array: Yes
             Nested table:      Initially, no; after deletions, yes
             VARRAY:            No

          Bounded?
             Associative array: No
             Nested table:      Can be extended
             VARRAY:            Yes

          Can assign value to any element at any time?
             Associative array: Yes
             Nested table:      No; may need to EXTEND first
             VARRAY:            No; may need to EXTEND first, and cannot EXTEND past upper bound

          Means of extending
             Associative array: Assign value to element with a new subscript
             Nested table:      Use built-in EXTEND procedure (or TRIM to condense), with no predefined maximum
             VARRAY:            EXTEND (or TRIM), but only up to declared maximum size

          Can be compared for equality?
             Associative array: No
             Nested table:      Yes, in Oracle Database 10g
             VARRAY:            No

          Can be manipulated with set operators?
             Associative array: No
             Nested table:      Yes, in Oracle Database 10g
             VARRAY:            No

          Retains ordering and subscripts when stored in and retrieved from database?
             Associative array: N/A
             Nested table:      No
             VARRAY:            Yes
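          The Oracle Database 10g equality test and set operators for nested tables can be sketched as follows (type and variable names are hypothetical):

          DECLARE
             TYPE num_list_t IS TABLE OF NUMBER;   -- a nested table type
             a num_list_t := num_list_t(1, 2, 3);
             b num_list_t := num_list_t(3, 4);
             c num_list_t;
          BEGIN
             -- Set operators on nested tables, Oracle Database 10g and later
             c := a MULTISET UNION DISTINCT b;     -- c holds 1, 2, 3, 4

             -- Direct equality comparison, also 10g and later
             IF a = num_list_t(1, 2, 3) THEN
                DBMS_OUTPUT.PUT_LINE('a matches');
             END IF;
          END;

          Neither operation compiles if a or b is declared as an associative array or a VARRAY, which is one reason to favor nested tables when set-oriented logic matters.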










