Let T be a set called the index set (thought of as time). A collection, or family, of random variables {X(t), t ∈ T} is then called a stochastic process.

If T is a denumerably infinite sequence, then {X(t)} is called a stochastic process with a discrete parameter. If T is a finite or infinite interval, then {X(t)} is called a stochastic process with a continuous parameter. In the definition above, T is the time interval involved and X(t) is the observation at time t.

The theory of stochastic processes has developed very rapidly and has found application in a large number of fields: for example, the study of fluctuations and noise in physical systems, the information theory of communication and control, operations research, biology, and astronomy. No attempt has been made to investigate all applications in this report, as we are especially interested in the theory of stochastic processes as applied to operations research. Since the theory of stochastic processes provides a method of quantitative study through mathematical models, it plays an important role in the modern discipline of operations research. The waiting-line, or queueing, problem is the part of operations research in which the theory of stochastic processes is applied most often. A brief description follows.

The waiting-line problem arises when a flow of customers requires service and there is some restriction on the service that can be provided. The group waiting to receive service is called a queue. Examples include patients arriving at a clinic to see a doctor; students waiting at a window for registration packages; persons waiting in a Greyhound bus station to buy tickets; numerous problems connected with telephone exchanges; and machines which stop from time to time and require attention by an operator before restarting, the operator being able to attend to only one machine at a time. All of these form queues. In order to study the nature of the waiting-line problem, the following three aspects should be specified.

1. The *input process or arrival pattern* is the probability law describing both the average arrival rate of customers and the statistical pattern of the arrivals. One of the most common arrival patterns is Poisson input.

2. The *service mechanism* describes when service is available, how many customers can be served at a time, and how long service takes; for example, the exponential service time distribution, the constant service time distribution, etc.

3. The *queue discipline* is the manner in which a customer is selected for service out of all those awaiting service. One of the possible ways is "first come, first served."

Queueing theory is concerned with the effect that each of these three aspects has on various quantities of interest, such as the length of the queue, the service time distributions, and the average waiting time.
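As a rough illustration of how these three aspects interact, the simplest case (Poisson input, a single exponential server, "first come, first served") can be simulated directly. The rates, sample size, and the `mm1_waiting_times` helper below are illustrative assumptions, not part of the original report:

```python
import random

def mm1_waiting_times(lam, mu, n_customers, seed=0):
    """Simulate an M/M/1 queue: Poisson arrivals (rate lam),
    exponential service (rate mu), first come, first served."""
    rng = random.Random(seed)
    t_arrive = 0.0
    t_free = 0.0          # time the single server next becomes free
    waits = []
    for _ in range(n_customers):
        t_arrive += rng.expovariate(lam)        # next arrival
        start = max(t_arrive, t_free)           # wait if server is busy
        waits.append(start - t_arrive)
        t_free = start + rng.expovariate(mu)    # service completion
    return waits

waits = mm1_waiting_times(lam=1.0, mu=2.0, n_customers=100_000)
avg_wait = sum(waits) / len(waits)
print(round(avg_wait, 2))
```

For this M/M/1 case the expected wait in the queue is lam / (mu * (mu - lam)) = 0.5, so the simulated average should land near that value.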

We shall first deal with the theoretical developments. The applications will then follow.

This paper presents the most important results on infinite abelian groups, following the exposition given by J. Rotman in his book, Theory of Groups: An Introduction. Some of the exercises given by J. Rotman are also presented in this paper. In order to facilitate our study, two classifications of infinite abelian groups are used. The first reduces the study of abelian groups to the study of torsion groups, torsion-free groups, and an extension problem. The second reduces the study to that of divisible and reduced groups. Following this is a study of free abelian groups, which are, in a certain sense, dual to the divisible groups; the basis and fundamental theorems of finitely generated abelian groups are proved. Finally, torsion groups and torsion-free groups of rank 1 are studied.

It is assumed that the reader is familiar with elementary group theory and finite abelian groups. Zorn's lemma is applied several times as well as some results of vector spaces.

Migration to the United States from Mexico mushroomed after 1910, when a tumultuous revolution and poor economic conditions in Mexico encouraged Hispanics to look to the U.S. for jobs and stability. Mining industries, railroad companies, and farms in the United States demanded large numbers of unskilled and semi-skilled workers, and Hispanics proved a readily available labor source. Over time, Mexicans and Chicanos clustered in low paying, low-prestige jobs where they composed a large but obscure segment of the United States population.

Cache Valley, which stretched from northern Utah into southeastern Idaho, served as an example of a region where Hispanics played a vital economic role. Area farmers relied on sugar beets as an important cash crop, but successful beet cultivation required periods of such intense drudgery that farmers had to find wage-laborers to expedite the work. Farmers and sugar companies actively recruited Mexicans and Chicanos, and the region drew many migrants.

Cache Valley's Anglo communities depended on Hispanic farm labor, but migrants remained disconnected from the dominant Anglo population in important ways. Differences in class, race, and culture segregated the two groups and rendered the migrants largely invisible to many of Cache Valley's Anglo residents.

Recorded history demonstrated the workers' shrouded status. In spite of the long-term presence of Hispanic laborers in Cache Valley, written local histories dealt almost exclusively with the region's white citizenry. The rise to prominence of Chicano studies as a historical field underscored a pronounced need for research into the lives of Cache Valley's Hispanic population.

ζ(x) - λ∫_{a}^{b} K(x, s) ζ(s) ds = f(x),    a ≤ x ≤ b,

is called a Fredholm equation. By the method of successive approximations a solution can be obtained if the parameter λ is sufficiently small. If the kernel K(x, s) is degenerate, then a solution can be obtained by reducing the equation to a system of linear algebraic equations. In the general case the kernel is represented as an infinite Fourier series. With this representation the solution is obtained by combining the two methods mentioned: it is built from the solutions of two integral equations, one of which is solvable by successive approximations while the other has a degenerate kernel. The conditions for solvability of the Fredholm equations will be proven.
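The method of successive approximations can be sketched numerically by iterating ζ_{k+1}(x) = f(x) + λ∫ K(x, s) ζ_k(s) ds on a grid. The kernel, grid size, and quadrature rule below are illustrative choices, not the ones used in the paper:

```python
import numpy as np

def solve_fredholm(f, K, lam, a, b, n=201, iters=50):
    """Successive approximations (Neumann series) for
    z(x) = f(x) + lam * integral_a^b K(x, s) z(s) ds,
    discretized on a uniform grid with the trapezoidal rule."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))          # trapezoid weights
    w[0] = w[-1] = (b - a) / (2 * (n - 1))
    Kmat = K(x[:, None], x[None, :])           # matrix of K(x_i, s_j)
    z = f(x)                                   # zeroth approximation
    for _ in range(iters):
        z = f(x) + lam * Kmat @ (w * z)
    return x, z

# Degenerate kernel K(x, s) = x*s with lam = 0.5, f(x) = x on [0, 1]:
# the exact solution is z(x) = x / (1 - lam/3) = 1.2 * x.
x, z = solve_fredholm(lambda t: t, lambda u, s: u * s, 0.5, 0.0, 1.0)
print(round(float(z[-1]), 3))   # value at x = 1
```

Because this kernel is degenerate, the same answer also drops out of a single linear algebraic equation, which makes it a convenient check on the iteration.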

Two factors which greatly influence market research and advertising programs are the attitude of the consumer toward a given product and the relationship of that attitude to the degree of actual milk consumption. To avoid unnecessary waste and the decreased profits resulting from production in excess of consumer demand, a method of measuring consumer preference and consumption could conceivably benefit the dairy producer, processor, and marketer.

Given an element a of a well-ordered set B, the set of all elements of B which precede a is called a segment of B. Every uncountable well-ordered set, all of whose segments are either finite or countable, is said to have power ℵ_{1}. The Continuum Hypothesis is the hypothesis that the power of the continuum is ℵ_{1}; that is, 2^{ℵ_{0}} = ℵ_{1}. In the sequel, this equality will be called hypothesis H.

Factor analysis is being used in many fields. A few of the fields are sociology, meteorology, political science, medicine, geography, business, economics, ecology, soil science, and geology. The following are three specific examples.

In meteorology, White (1958) found that factor analysis could reduce considerably the number of variables in his study of sea-level pressure forecasting. In this study, there were 42 original variables. With 5, 10, and 20 underlying variables, White was able to account for 75.51, 90.70, and 97.37 per cent of the original variance, respectively.

In ecology, Orloci (1966) found that factor analysis could be used to reduce the number of variables in his study of vegetation on Newborough Warren, Anglesey. In this study, there were 101 original variables. With three underlying variables, Orloci was able to account for 43.98 per cent of the original variance.

In the study by White, names were not given for the new variables that were found, whereas in the study by Orloci, meaningful names were obtained for the first three factors. The fact that a factor accounts for a large portion of the variance does not imply that a meaningful name can readily be applied to it.

In soil science, Lombard (1965) found that factor analysis could be used in his study of citrus irrigations to reduce the number of variables from 12 to 3. These three underlying variables accounted for 99.5 per cent of the variance. Meaningful names were obtained for the three underlying variables. If meaningful names cannot be obtained for the roots or new variables, then factor analysis is not of much value as a statistical tool.
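The variance-accounting step common to all three studies can be sketched as follows: the eigenvalues of the correlation matrix give the variance carried by each principal component, and their cumulative sum gives the per cent of variance accounted for. The synthetic two-factor data below is an invented stand-in for the real data sets:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a multi-variable study: 10 observed variables
# driven mostly by 2 latent factors plus noise.
n, p = 500, 10
factors = rng.normal(size=(n, 2))
loadings = rng.normal(size=(2, p))
data = factors @ loadings + 0.3 * rng.normal(size=(n, p))

# Principal components come from the eigenvalues of the correlation matrix.
corr = np.corrcoef(data, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # descending order
explained = np.cumsum(eigvals) / eigvals.sum()      # cumulative fraction
print([round(float(v), 2) for v in explained[:3]])
```

With two strong latent factors, the first two cumulative entries should already account for most of the variance, mirroring the pattern White, Orloci, and Lombard report.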

Many different programs have been written to perform factor analysis. A few of the existing programs are those by Cooley and Lohnes (1962), Horst (1965), Hurst (n.d., d), in the System/360 Scientific Subroutine Package (1968), and Veldman (1967).

This report contains program write-ups and listings for three computer programs, one for principal component factor analysis, one for factor analysis transformation, and another for the centroid method factor analysis.

The Principal Component Factor Analysis program will handle up to 50 variables. The Factor Analysis Transformation program will handle up to 50 variables and 15 factors. The Centroid Method Factor Analysis program will handle up to 60 variables. These programs will all run on a 65K byte IBM 360/44 with FORTRAN IV. A card reader, card punch, printer, and one disk or tape are needed.

By truncation, or censoring, the information can be obtained in a shorter period of time, since fewer items are tested. These statistical situations have frequently been encountered in what are called life testing, dosage response studies, target analysis, biological analysis, biological assays, and in other related investigations.

The methods applicable to the study of truncation may be classified roughly as follows:

1. Method of maximum likelihood estimation. This method is to be recommended when sample sizes are at least moderately large. The estimators for truncated and censored samples are consistent and asymptotically efficient. Solutions are always approximated by straightforward iterative procedures; hence, the calculations often become tedious and laborious.

2. Method of least squares or order statistics. This method should be employed when estimators must be based on samples of size 20 or less. The approach to the general case in truncation is of value not only for its numerical results but also for the drawing of inferences concerning interesting and important patterns for the coefficients, variances, and the relative efficiencies of the estimates.

3. Computer methods of maximum likelihood estimation. Recently, with the development and availability of electronic computers, the exhausting calculations involved in maximum likelihood estimation have been greatly alleviated. One program furnished by Hurst (1966) has been appended for reference.
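In the simplest censored case the maximum likelihood estimator even has a closed form, which avoids the iterative calculations mentioned above. The sketch below, a Type I censored exponential sample with invented rates and censoring time, is an illustration of the idea rather than any specific procedure from this report:

```python
import random

def censored_exp_mle(samples, c):
    """Maximum likelihood estimate of the exponential mean from a
    Type I (time-)censored sample: items still running at time c
    contribute c to the total time on test but no failure count."""
    failures = [t for t in samples if t <= c]
    if not failures:
        raise ValueError("no failures observed before censoring time")
    total_time = sum(failures) + c * (len(samples) - len(failures))
    return total_time / len(failures)

rng = random.Random(1)
true_mean = 10.0
lifetimes = [rng.expovariate(1 / true_mean) for _ in range(5000)]
est = censored_exp_mle(lifetimes, c=8.0)   # life test truncated at t = 8
print(round(est, 1))
```

The estimate should sit near the true mean of 10 even though no item is observed past time 8, which is exactly the economy censoring buys.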

The first portion of the paper discusses transformations and subgroups. However, many basic definitions and theorems will not be stated: for example, the definitions of a group, subgroup, normal subgroup, and factor group. Topics to be emphasized include the centralizer, center, and normalizer of a group, characteristic subgroups, conjugacy, and commutators.
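For small groups these notions can be computed by brute force. The sketch below, which represents S_3 as permutation tuples and is purely illustrative rather than part of the paper, finds the centralizer of an element and the center of the group:

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

S3 = list(permutations(range(3)))   # all 6 elements of S_3

def centralizer(g, group):
    """All x in the group that commute with g."""
    return [x for x in group if compose(x, g) == compose(g, x)]

# The center is the set of elements commuting with everything,
# i.e. the intersection of all the centralizers.
center = [x for x in S3 if all(compose(x, g) == compose(g, x) for g in S3)]
print(len(center))   # S_3 has trivial center: only the identity
```

A transposition's centralizer here has order 2 (itself and the identity), a small concrete instance of the conjugacy-class counting the paper develops.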

In the second part of the paper direct sums are discussed with the ultimate proof of the famous Remak-Krull-Schmidt Theorem.

An additional basic objective is to obtain explicit algebraic expressions for different types of linear transformations.

The first concepts to be covered are arbitrary linear transformations, various ways of looking at linear transformations, and the effects of a linear transformation on a vector of normally distributed random variables.

Next, orthogonal transformations to independence and then oblique transformations to independence will be developed in turn.
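The orthogonal case can be sketched numerically: rotating a normal random vector onto the eigenvectors of its covariance matrix leaves the components uncorrelated, hence independent for normal variables. The covariance matrix and sample size below are illustrative assumptions, not taken from the report:

```python
import numpy as np

rng = np.random.default_rng(0)
# A correlated bivariate normal sample (columns are observations).
Sigma = np.array([[2.0, 1.2],
                  [1.2, 1.0]])
L = np.linalg.cholesky(Sigma)
x = L @ rng.normal(size=(2, 10_000))

# Orthogonal transformation: rotate onto the eigenvectors of Sigma.
eigvals, Q = np.linalg.eigh(Sigma)
y = Q.T @ x                      # components of y are now uncorrelated

print(round(float(np.corrcoef(y)[0, 1]), 2))   # sample correlation near 0
```

Since Q is orthogonal, Q.T @ Sigma @ Q is diagonal, which is precisely the "transformation to independence" for normally distributed variables.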

It is the purpose of this study to present associated numerical methods for the digital computer which are satisfactorily accurate and which are reasonably economical in both time and machine memory capacity. To carry out this objective the following procedures were used:

1. A review of the literature on numerical approximations, both texts and articles from statistical journals and computer science publications.

2. Writing test programs in Fortran for all the associated methods which can be obtained.

3. Checking the answers obtained by numerical approximation with the known answers in the table in order to determine usefulness of the numerical method.

4. Writing Fortran subprograms to evaluate those integrals by using the most accurate methods according to the experimental results.
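As an example of step 3, a quadrature rule can be checked against a tabled value of the standard normal distribution function. The sketch below uses composite Simpson's rule; the choice of rule and step count is an illustrative assumption, not a claim about the methods actually reviewed:

```python
import math

def simpson(f, a, b, n=200):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

def phi(z):
    """Standard normal CDF: 0.5 plus the integral of the density from 0 to z."""
    density = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
    return 0.5 + simpson(density, 0.0, z)

# Tabled value for comparison: Phi(1.96) = 0.9750 to four places.
print(round(phi(1.96), 4))
```

Comparing the computed value with the tabled 0.9750 is exactly the kind of usefulness check described in step 3.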

For example, we show that the plane (E^{2}) does not contain uncountably many pairwise disjoint continua each of which contains a simple triod (Corollary 4.1). We prove that in an uncountable collection G of pairwise disjoint simple closed curves in E^{2}, "almost all" elements of G must be converged to homeomorphically "from both sides" by sequences of elements of G (see Theorem 4.3). The same technique allows us to prove the nonexistence of uncountably many pairwise disjoint wild 2-spheres in E^{3}.

Another interesting consequence of Borsuk's Theorem is Theorem 3.4, which shows that in each set G consisting of uncountably many compact subsets of a metric space, some element of G is an element of convergence. Proofs of this theorem do not often appear in the literature, and, as far as the author knows, the proof given here does not appear in the literature.

We wish to emphasize that all the proofs given in this report were constructed by the author without reference to the literature; in fact, the author was unaware of the references until after the proofs were given. We give references at the end of the paper where proofs in the literature can be compared with the proofs given here.


Consequently, methods have been developed to approximate chi-square, t, and F values when the degrees of freedom and probability are known. It is the purpose of this study to present the methods for each individual distribution and to evaluate their accuracy. Thus, the scope of this paper includes the following:

1. The definition and inverse function of each distribution.

2. The numerical approximation methods and examples.

3. A Fortran IV computer program to maximize the accuracy of calculation.

4. A comparison of the results obtained by numerical approximation with the known tabular value.

5. An evaluation of the capacity of these numerical methods.
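As one concrete example of such an approximation, the Wilson-Hilferty formula gives chi-square percentage points from normal ones; the sketch below pairs it with a standard rational approximation to the normal quantile (Abramowitz and Stegun 26.2.23). It is an illustration only, and not necessarily one of the methods evaluated in this study:

```python
import math

def z_upper(p):
    """Approximate standard normal upper percentage point for
    0 < p <= 0.5 (Abramowitz and Stegun 26.2.23); error below 4.5e-4."""
    t = math.sqrt(-2.0 * math.log(p))
    num = 2.515517 + 0.802853 * t + 0.010328 * t * t
    den = 1.0 + 1.432788 * t + 0.189269 * t * t + 0.001308 * t ** 3
    return t - num / den

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the p-quantile of chi-square
    with df degrees of freedom."""
    z = z_upper(1.0 - p) if p >= 0.5 else -z_upper(p)
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z * math.sqrt(c)) ** 3

# Tabled value for comparison: the 0.95 quantile for 10 d.f. is 18.307.
print(round(chi2_quantile(0.95, 10), 2))
```

Checking the computed value against the tabled 18.307 mirrors item 4 in the list above.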

The first part of this report will be devoted to the general development of such functions by means of definitions and theorems. The second part will consist of generalizations of a particular function, the -r-function.

Throughout this paper, lower case Greek letters will represent real numbers and lower case English letters will represent integers. Also the basic ideas of summation and product will be assumed as already familiar to the reader.

Lebesgue integration differs from Riemann integration in the way the approximations to the integral are taken. Riemann approximations use step functions, which have a constant value on any given interval of the domain corresponding to some partition. Lebesgue approximations use what are called simple functions, which, like step functions, take on only a finite number of values. However, these values are not necessarily taken on by the function on intervals of the domain, but rather on arbitrary subsets of the domain. The integration of simple functions under the most general circumstances possible necessitates a generalization of our notion of the length of a set when the set is more complicated than a simple interval. We define the Lebesgue measure "m" of a set E ∈ M, where M is some collection of sets of real numbers, to be a certain set function which assigns to E a nonnegative extended real number mE.
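The contrast between the two kinds of approximation can be sketched numerically: partitioning the domain (step functions) versus partitioning the range and measuring preimages (simple functions). The function, grid, and number of levels below are illustrative choices, not part of the report:

```python
import numpy as np

def riemann_sum(f, a, b, n=10_000):
    """Step-function (domain-partition) approximation to the integral."""
    x = np.linspace(a, b, n, endpoint=False)
    return float(np.sum(f(x)) * (b - a) / n)

def lebesgue_sum(f, a, b, n=10_000, levels=1_000):
    """Simple-function (range-partition) approximation: slice the range
    of f into levels and weight each level value by the measure of its
    preimage, estimated on a fine grid of the domain."""
    x = np.linspace(a, b, n, endpoint=False)
    vals = f(x)
    lo, hi = vals.min(), vals.max()
    cuts = np.linspace(lo, hi, levels + 1)
    dx = (b - a) / n
    total = 0.0
    for y0, y1 in zip(cuts[:-1], cuts[1:]):
        measure = np.count_nonzero((vals >= y0) & (vals < y1)) * dx
        total += y0 * measure           # simple function value on that set
    total += hi * np.count_nonzero(vals == hi) * dx
    return float(total)

f = lambda t: t ** 2                    # integral over [0, 1] is 1/3
print(round(riemann_sum(f, 0, 1), 3), round(lebesgue_sum(f, 0, 1), 3))
```

Both sums approach 1/3; the point of the Lebesgue construction is that the range-partition version still makes sense when the preimages are arbitrary measurable sets rather than intervals.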

This report consists of the solutions of exercises found in "Real Analysis" by H. L. Royden. Quotations from the book are all accompanied by the title "Definition" or "Theorem". The exercises are all entitled "Proposition", and all proofs in this report are my own. All theorems are quoted without proof. The theorems and definitions occur as they are needed throughout the paper, but some of the most basic definitions and theorems are lumped together in Section II.

It is assumed in this paper that the reader is familiar with the basic concepts of advanced calculus and set theory.

Several different types of problems are solved in this report. Among these are Bessel's classical differential equation of index n, two electrical circuit problems, a beam problem, a vibrating string problem, a heat flow problem, and a temperature gradient problem.

One of the objectives of this report is to illustrate several operation properties of the Laplace and finite Fourier sine transforms. Therefore, various methods of inverting transforms are employed to provide diversification.
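The finite Fourier sine transform method can be sketched for a heat flow problem like those mentioned: each sine coefficient of the initial profile decays as exp(-k n^2 t), and the inverse transform recovers the solution. The initial profile, conductivity, and grid below are illustrative assumptions, not taken from the report:

```python
import numpy as np

def heat_by_sine_transform(f_vals, x, k, t, n_terms=50):
    """Solve u_t = k u_xx on [0, pi] with u = 0 at both ends: take the
    finite Fourier sine transform of the initial profile, decay each
    coefficient by exp(-k n^2 t), and invert the transform."""
    dx = x[1] - x[0]
    u = np.zeros_like(x)
    for n in range(1, n_terms + 1):
        b_n = (2.0 / np.pi) * np.sum(f_vals * np.sin(n * x)) * dx
        u += b_n * np.exp(-k * n * n * t) * np.sin(n * x)
    return u

x = np.linspace(0.0, np.pi, 201)
u0 = np.sin(x)                  # initial temperature profile
u = heat_by_sine_transform(u0, x, k=1.0, t=0.5)

# For this profile the exact solution is exp(-k t) * sin(x).
err = float(np.max(np.abs(u - np.exp(-0.5) * np.sin(x))))
print(err < 1e-3)
```

Because the boundary values vanish, the sine transform diagonalizes the problem: the partial differential equation becomes one ordinary decay equation per coefficient, which is the operational property these transform methods exploit.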
