Schema: id (string, 9-16 chars), submitter (string, 3-64 chars, nullable), authors (string, 5-6.63k chars), title (string, 7-245 chars), comments (string, 1-482 chars, nullable), journal-ref (string, 4-382 chars, nullable), doi (string, 9-151 chars, nullable), report-no (string, 984 classes), categories (string, 5-108 chars), license (string, 9 classes), abstract (string, 83-3.41k chars), versions (list, 1-20 entries), update_date (timestamp[s], 2007-05-23 to 2025-04-11), authors_parsed (sequence, 1-427 entries), prompt (string, 166-3.49k chars), label (string, 2 classes), prob (float64, 0.5-0.98).

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cs/0311048 | Naren Ramakrishnan | Deept Kumar, Naren Ramakrishnan, Malcolm Potts, and Richard F. Helm | Turning CARTwheels: An Alternating Algorithm for Mining Redescriptions | null | null | null | null | cs.CE cs.AI | null | We present an unusual algorithm involving classification trees where two
trees are grown in opposite directions so that they are matched at their
leaves. This approach finds application in a new data mining task we formulate,
called "redescription mining". A redescription is a shift-of-vocabulary, or a
different way of communicating information about a given subset of data; the
goal of redescription mining is to find subsets of data that afford multiple
descriptions. We highlight the importance of this problem in domains such as
bioinformatics, which exhibit an underlying richness and diversity of data
descriptors (e.g., genes can be studied in a variety of ways). Our approach
helps integrate multiple forms of characterizing datasets, situates the
knowledge gained from one dataset in the context of others, and harnesses
high-level abstractions for uncovering cryptic and subtle features of data.
Algorithm design decisions, implementation details, and experimental results
are presented.
| [
{
"version": "v1",
"created": "Thu, 27 Nov 2003 18:13:38 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Kumar",
"Deept",
""
],
[
"Ramakrishnan",
"Naren",
""
],
[
"Potts",
"Malcolm",
""
],
[
"Helm",
"Richard F.",
""
]
] | TITLE: Turning CARTwheels: An Alternating Algorithm for Mining Redescriptions
ABSTRACT: We present an unusual algorithm involving classification trees where two
trees are grown in opposite directions so that they are matched at their
leaves. This approach finds application in a new data mining task we formulate,
called "redescription mining". A redescription is a shift-of-vocabulary, or a
different way of communicating information about a given subset of data; the
goal of redescription mining is to find subsets of data that afford multiple
descriptions. We highlight the importance of this problem in domains such as
bioinformatics, which exhibit an underlying richness and diversity of data
descriptors (e.g., genes can be studied in a variety of ways). Our approach
helps integrate multiple forms of characterizing datasets, situates the
knowledge gained from one dataset in the context of others, and harnesses
high-level abstractions for uncovering cryptic and subtle features of data.
Algorithm design decisions, implementation details, and experimental results
are presented.
| no_new_dataset | 0.952662 |
cs/0405007 | Tom Fawcett | Tom Fawcett | "In vivo" spam filtering: A challenge problem for data mining | null | KDD Explorations vol.5 no.2, Dec 2003. pp.140-148 | null | null | cs.AI cs.DB cs.IR | null | Spam, also known as Unsolicited Commercial Email (UCE), is the bane of email
communication. Many data mining researchers have addressed the problem of
detecting spam, generally by treating it as a static text classification
problem. True in vivo spam filtering has characteristics that make it a rich
and challenging domain for data mining. Indeed, real-world datasets with these
characteristics are typically difficult to acquire and to share. This paper
demonstrates some of these characteristics and argues that researchers should
pursue in vivo spam filtering as an accessible domain for investigating them.
| [
{
"version": "v1",
"created": "Tue, 4 May 2004 18:56:09 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Fawcett",
"Tom",
""
]
] | TITLE: "In vivo" spam filtering: A challenge problem for data mining
ABSTRACT: Spam, also known as Unsolicited Commercial Email (UCE), is the bane of email
communication. Many data mining researchers have addressed the problem of
detecting spam, generally by treating it as a static text classification
problem. True in vivo spam filtering has characteristics that make it a rich
and challenging domain for data mining. Indeed, real-world datasets with these
characteristics are typically difficult to acquire and to share. This paper
demonstrates some of these characteristics and argues that researchers should
pursue in vivo spam filtering as an accessible domain for investigating them.
| no_new_dataset | 0.951006 |
cs/0407035 | Shipra Agrawal | Shipra Agrawal, Jayant R. Haritsa | A Framework for High-Accuracy Privacy-Preserving Mining | null | null | null | TR-2004-02, DSL/SERC, Indian Institute of Science | cs.DB cs.IR | null | To preserve client privacy in the data mining process, a variety of
techniques based on random perturbation of data records have been proposed
recently. In this paper, we present a generalized matrix-theoretic model of
random perturbation, which facilitates a systematic approach to the design of
perturbation mechanisms for privacy-preserving mining. Specifically, we
demonstrate that (a) the prior techniques differ only in their settings for the
model parameters, and (b) through appropriate choice of parameter settings, we
can derive new perturbation techniques that provide highly accurate mining
results even under strict privacy guarantees. We also propose a novel
perturbation mechanism wherein the model parameters are themselves
characterized as random variables, and demonstrate that this feature provides
significant improvements in privacy at a very marginal cost in accuracy.
While our model is valid for random-perturbation-based privacy-preserving
mining in general, we specifically evaluate its utility here with regard to
frequent-itemset mining on a variety of real datasets. The experimental results
indicate that our mechanisms incur substantially lower identity and support
errors as compared to the prior techniques.
| [
{
"version": "v1",
"created": "Thu, 15 Jul 2004 14:30:20 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Agrawal",
"Shipra",
""
],
[
"Haritsa",
"Jayant R.",
""
]
] | TITLE: A Framework for High-Accuracy Privacy-Preserving Mining
ABSTRACT: To preserve client privacy in the data mining process, a variety of
techniques based on random perturbation of data records have been proposed
recently. In this paper, we present a generalized matrix-theoretic model of
random perturbation, which facilitates a systematic approach to the design of
perturbation mechanisms for privacy-preserving mining. Specifically, we
demonstrate that (a) the prior techniques differ only in their settings for the
model parameters, and (b) through appropriate choice of parameter settings, we
can derive new perturbation techniques that provide highly accurate mining
results even under strict privacy guarantees. We also propose a novel
perturbation mechanism wherein the model parameters are themselves
characterized as random variables, and demonstrate that this feature provides
significant improvements in privacy at a very marginal cost in accuracy.
While our model is valid for random-perturbation-based privacy-preserving
mining in general, we specifically evaluate its utility here with regard to
frequent-itemset mining on a variety of real datasets. The experimental results
indicate that our mechanisms incur substantially lower identity and support
errors as compared to the prior techniques.
| no_new_dataset | 0.951729 |
cs/0410068 | Zhuowei Li | Zhuowei Li and Amitabha Das | Analyzing and Improving Performance of a Class of Anomaly-based
Intrusion Detectors | Submit to journal for publication | null | null | cais-tr-2004-001 | cs.CR cs.AI | null | Anomaly-based intrusion detection (AID) techniques are useful for detecting
novel intrusions into computing resources. One of the most successful AID
detectors proposed to date is stide, which is based on analysis of system call
sequences. In this paper, we present a detailed formal framework to analyze,
understand and improve the performance of stide and similar AID techniques.
Several important properties of stide-like detectors are established through
formal proofs, and validated by carefully conducted experiments using test
datasets. Finally, the framework is utilized to design two applications to
improve the cost and performance of stide-like detectors which are based on
sequence analysis. The first application reduces the cost of developing AID
detectors by identifying the critical sections in the training dataset, and the
second application identifies the intrusion context in the intrusive dataset,
that helps to fine-tune the detectors. Such fine-tuning in turn helps to
improve detection rate and reduce false alarm rate, thereby increasing the
effectiveness and efficiency of the intrusion detectors.
| [
{
"version": "v1",
"created": "Tue, 26 Oct 2004 02:57:56 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Li",
"Zhuowei",
""
],
[
"Das",
"Amitabha",
""
]
] | TITLE: Analyzing and Improving Performance of a Class of Anomaly-based
Intrusion Detectors
ABSTRACT: Anomaly-based intrusion detection (AID) techniques are useful for detecting
novel intrusions into computing resources. One of the most successful AID
detectors proposed to date is stide, which is based on analysis of system call
sequences. In this paper, we present a detailed formal framework to analyze,
understand and improve the performance of stide and similar AID techniques.
Several important properties of stide-like detectors are established through
formal proofs, and validated by carefully conducted experiments using test
datasets. Finally, the framework is utilized to design two applications to
improve the cost and performance of stide-like detectors which are based on
sequence analysis. The first application reduces the cost of developing AID
detectors by identifying the critical sections in the training dataset, and the
second application identifies the intrusion context in the intrusive dataset,
that helps to fine-tune the detectors. Such fine-tuning in turn helps to
improve detection rate and reduce false alarm rate, thereby increasing the
effectiveness and efficiency of the intrusion detectors.
| no_new_dataset | 0.952794 |
cs/0411035 | Zengyou He | Zengyou He, Xiaofei Xu, Shengchun Deng | A FP-Tree Based Approach for Mining All Strongly Correlated Pairs
without Candidate Generation | null | null | null | TR-04-06 | cs.DB cs.AI | null | Given a user-specified minimum correlation threshold and a transaction
database, the problem of mining all-strong correlated pairs is to find all item
pairs with Pearson's correlation coefficients above the threshold . Despite the
use of upper bound based pruning technique in the Taper algorithm [1], when the
number of items and transactions are very large, candidate pair generation and
test is still costly. To avoid the costly test of a large number of candidate
pairs, in this paper, we propose an efficient algorithm, called Tcp, based on
the well-known FP-tree data structure, for mining the complete set of
all-strong correlated item pairs. Our experimental results on both synthetic
and real world datasets show that, Tcp's performance is significantly better
than that of the previously developed Taper algorithm over practical ranges of
correlation threshold specifications.
| [
{
"version": "v1",
"created": "Fri, 12 Nov 2004 12:02:17 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"He",
"Zengyou",
""
],
[
"Xu",
"Xiaofei",
""
],
[
"Deng",
"Shengchun",
""
]
] | TITLE: A FP-Tree Based Approach for Mining All Strongly Correlated Pairs
without Candidate Generation
ABSTRACT: Given a user-specified minimum correlation threshold and a transaction
database, the problem of mining all-strong correlated pairs is to find all item
pairs with Pearson's correlation coefficients above the threshold . Despite the
use of upper bound based pruning technique in the Taper algorithm [1], when the
number of items and transactions are very large, candidate pair generation and
test is still costly. To avoid the costly test of a large number of candidate
pairs, in this paper, we propose an efficient algorithm, called Tcp, based on
the well-known FP-tree data structure, for mining the complete set of
all-strong correlated item pairs. Our experimental results on both synthetic
and real world datasets show that, Tcp's performance is significantly better
than that of the previously developed Taper algorithm over practical ranges of
correlation threshold specifications.
| no_new_dataset | 0.950915 |
cs/0412019 | Zengyou He | Zengyou He, Xiaofei Xu, Shengchun Deng | A Link Clustering Based Approach for Clustering Categorical Data | 10 pages | A poster paper in Proc. of WAIM 2004 | null | null | cs.DL cs.AI | null | Categorical data clustering (CDC) and link clustering (LC) have been
considered as separate research and application areas. The main focus of this
paper is to investigate the commonalities between these two problems and the
uses of these commonalities for the creation of new clustering algorithms for
categorical data based on cross-fertilization between the two disjoint research
fields. More precisely, we formally transform the CDC problem into an LC
problem, and apply LC approach for clustering categorical data. Experimental
results on real datasets show that LC based clustering method is competitive
with existing CDC algorithms with respect to clustering accuracy.
| [
{
"version": "v1",
"created": "Sat, 4 Dec 2004 12:41:08 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"He",
"Zengyou",
""
],
[
"Xu",
"Xiaofei",
""
],
[
"Deng",
"Shengchun",
""
]
] | TITLE: A Link Clustering Based Approach for Clustering Categorical Data
ABSTRACT: Categorical data clustering (CDC) and link clustering (LC) have been
considered as separate research and application areas. The main focus of this
paper is to investigate the commonalities between these two problems and the
uses of these commonalities for the creation of new clustering algorithms for
categorical data based on cross-fertilization between the two disjoint research
fields. More precisely, we formally transform the CDC problem into an LC
problem, and apply LC approach for clustering categorical data. Experimental
results on real datasets show that LC based clustering method is competitive
with existing CDC algorithms with respect to clustering accuracy.
| no_new_dataset | 0.950824 |
cs/0502008 | Jim Gray | Jim Gray, David T. Liu, Maria Nieto-Santisteban, Alexander S. Szalay,
David DeWitt, Gerd Heber | Scientific Data Management in the Coming Decade | null | null | null | Microsoft Technical Report MSR-TR-2005-10 | cs.DB cs.CE | null | This is a thought piece on data-intensive science requirements for databases
and science centers. It argues that peta-scale datasets will be housed by
science centers that provide substantial storage and processing for scientists
who access the data via smart notebooks. Next-generation science instruments
and simulations will generate these peta-scale datasets. The need to publish
and share data and the need for generic analysis and visualization tools will
finally create a convergence on common metadata standards. Database systems
will be judged by their support of these metadata standards and by their
ability to manage and access peta-scale datasets. The procedural
stream-of-bytes-file-centric approach to data analysis is both too cumbersome
and too serial for such large datasets. Non-procedural query and analysis of
schematized self-describing data is both easier to use and allows much more
parallelism.
| [
{
"version": "v1",
"created": "Wed, 2 Feb 2005 03:15:42 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Gray",
"Jim",
""
],
[
"Liu",
"David T.",
""
],
[
"Nieto-Santisteban",
"Maria",
""
],
[
"Szalay",
"Alexander S.",
""
],
[
"DeWitt",
"David",
""
],
[
"Heber",
"Gerd",
""
]
] | TITLE: Scientific Data Management in the Coming Decade
ABSTRACT: This is a thought piece on data-intensive science requirements for databases
and science centers. It argues that peta-scale datasets will be housed by
science centers that provide substantial storage and processing for scientists
who access the data via smart notebooks. Next-generation science instruments
and simulations will generate these peta-scale datasets. The need to publish
and share data and the need for generic analysis and visualization tools will
finally create a convergence on common metadata standards. Database systems
will be judged by their support of these metadata standards and by their
ability to manage and access peta-scale datasets. The procedural
stream-of-bytes-file-centric approach to data analysis is both too cumbersome
and too serial for such large datasets. Non-procedural query and analysis of
schematized self-describing data is both easier to use and allows much more
parallelism.
| no_new_dataset | 0.947721 |
cs/0503081 | Zengyou He | Zengyou He, Xiaofei Xu, Shengchun Deng | An Optimization Model for Outlier Detection in Categorical Data | 12 pages | null | null | Tr-05-0329 | cs.DB cs.AI | null | The task of outlier detection is to find small groups of data objects that
are exceptional when compared with rest large amount of data. Detection of such
outliers is important for many applications such as fraud detection and
customer migration. Most existing methods are designed for numeric data. They
will encounter problems with real-life applications that contain categorical
data. In this paper, we formally define the problem of outlier detection in
categorical data as an optimization problem from a global viewpoint. Moreover,
we present a local-search heuristic based algorithm for efficiently finding
feasible solutions. Experimental results on real datasets and large synthetic
datasets demonstrate the superiority of our model and algorithm.
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2005 13:31:01 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"He",
"Zengyou",
""
],
[
"Xu",
"Xiaofei",
""
],
[
"Deng",
"Shengchun",
""
]
] | TITLE: An Optimization Model for Outlier Detection in Categorical Data
ABSTRACT: The task of outlier detection is to find small groups of data objects that
are exceptional when compared with rest large amount of data. Detection of such
outliers is important for many applications such as fraud detection and
customer migration. Most existing methods are designed for numeric data. They
will encounter problems with real-life applications that contain categorical
data. In this paper, we formally define the problem of outlier detection in
categorical data as an optimization problem from a global viewpoint. Moreover,
we present a local-search heuristic based algorithm for efficiently finding
feasible solutions. Experimental results on real datasets and large synthetic
datasets demonstrate the superiority of our model and algorithm.
| no_new_dataset | 0.949716 |
cs/0504042 | Vitaly Schetinin | V. Schetinin, J.E. Fieldsend, D. Partridge, W.J. Krzanowski, R.M.
Everson, T.C. Bailey, A. Hernandez | The Bayesian Decision Tree Technique with a Sweeping Strategy | null | null | null | null | cs.AI cs.LG | null | The uncertainty of classification outcomes is of crucial importance for many
safety critical applications including, for example, medical diagnostics. In
such applications the uncertainty of classification can be reliably estimated
within a Bayesian model averaging technique that allows the use of prior
information. Decision Tree (DT) classification models used within such a
technique gives experts additional information by making this classification
scheme observable. The use of the Markov Chain Monte Carlo (MCMC) methodology
of stochastic sampling makes the Bayesian DT technique feasible to perform.
However, in practice, the MCMC technique may become stuck in a particular DT
which is far away from a region with a maximal posterior. Sampling such DTs
causes bias in the posterior estimates, and as a result the evaluation of
classification uncertainty may be incorrect. In a particular case, the negative
effect of such sampling may be reduced by giving additional prior information
on the shape of DTs. In this paper we describe a new approach based on sweeping
the DTs without additional priors on the favorite shape of DTs. The
performances of Bayesian DT techniques with the standard and sweeping
strategies are compared on a synthetic data as well as on real datasets.
Quantitatively evaluating the uncertainty in terms of entropy of class
posterior probabilities, we found that the sweeping strategy is superior to the
standard strategy.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2005 17:45:09 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Schetinin",
"V.",
""
],
[
"Fieldsend",
"J. E.",
""
],
[
"Partridge",
"D.",
""
],
[
"Krzanowski",
"W. J.",
""
],
[
"Everson",
"R. M.",
""
],
[
"Bailey",
"T. C.",
""
],
[
"Hernandez",
"A.",
""
]
] | TITLE: The Bayesian Decision Tree Technique with a Sweeping Strategy
ABSTRACT: The uncertainty of classification outcomes is of crucial importance for many
safety critical applications including, for example, medical diagnostics. In
such applications the uncertainty of classification can be reliably estimated
within a Bayesian model averaging technique that allows the use of prior
information. Decision Tree (DT) classification models used within such a
technique gives experts additional information by making this classification
scheme observable. The use of the Markov Chain Monte Carlo (MCMC) methodology
of stochastic sampling makes the Bayesian DT technique feasible to perform.
However, in practice, the MCMC technique may become stuck in a particular DT
which is far away from a region with a maximal posterior. Sampling such DTs
causes bias in the posterior estimates, and as a result the evaluation of
classification uncertainty may be incorrect. In a particular case, the negative
effect of such sampling may be reduced by giving additional prior information
on the shape of DTs. In this paper we describe a new approach based on sweeping
the DTs without additional priors on the favorite shape of DTs. The
performances of Bayesian DT techniques with the standard and sweeping
strategies are compared on a synthetic data as well as on real datasets.
Quantitatively evaluating the uncertainty in terms of entropy of class
posterior probabilities, we found that the sweeping strategy is superior to the
standard strategy.
| no_new_dataset | 0.951323 |
cs/0504043 | Vitaly Schetinin | V. Schetinin, D. Partridge, W.J. Krzanowski, R.M. Everson, J.E.
Fieldsend, T.C. Bailey, and A. Hernandez | Experimental Comparison of Classification Uncertainty for Randomised and
Bayesian Decision Tree Ensembles | IDEAL-2004 | null | null | null | cs.AI cs.LG | null | In this paper we experimentally compare the classification uncertainty of the
randomised Decision Tree (DT) ensemble technique and the Bayesian DT technique
with a restarting strategy on a synthetic dataset as well as on some datasets
commonly used in the machine learning community. For quantitative evaluation of
classification uncertainty, we use an Uncertainty Envelope dealing with the
class posterior distribution and a given confidence probability. Counting the
classifier outcomes, this technique produces feasible evaluations of the
classification uncertainty. Using this technique in our experiments, we found
that the Bayesian DT technique is superior to the randomised DT ensemble
technique.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2005 17:53:35 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Schetinin",
"V.",
""
],
[
"Partridge",
"D.",
""
],
[
"Krzanowski",
"W. J.",
""
],
[
"Everson",
"R. M.",
""
],
[
"Fieldsend",
"J. E.",
""
],
[
"Bailey",
"T. C.",
""
],
[
"Hernandez",
"A.",
""
]
] | TITLE: Experimental Comparison of Classification Uncertainty for Randomised and
Bayesian Decision Tree Ensembles
ABSTRACT: In this paper we experimentally compare the classification uncertainty of the
randomised Decision Tree (DT) ensemble technique and the Bayesian DT technique
with a restarting strategy on a synthetic dataset as well as on some datasets
commonly used in the machine learning community. For quantitative evaluation of
classification uncertainty, we use an Uncertainty Envelope dealing with the
class posterior distribution and a given confidence probability. Counting the
classifier outcomes, this technique produces feasible evaluations of the
classification uncertainty. Using this technique in our experiments, we found
that the Bayesian DT technique is superior to the randomised DT ensemble
technique.
| no_new_dataset | 0.953966 |
cs/0504059 | Vitaly Schetinin | Vitaly Schetinin | A Neural Network Decision Tree for Learning Concepts from EEG Data | null | null | null | null | cs.NE cs.AI | null | To learn the multi-class conceptions from the electroencephalogram (EEG) data
we developed a neural network decision tree (DT), that performs the linear
tests, and a new training algorithm. We found that the known methods fail
inducting the classification models when the data are presented by the features
some of them are irrelevant, and the classes are heavily overlapped. To train
the DT, our algorithm exploits a bottom up search of the features that provide
the best classification accuracy of the linear tests. We applied the developed
algorithm to induce the DT from the large EEG dataset consisted of 65 patients
belonging to 16 age groups. In these recordings each EEG segment was
represented by 72 calculated features. The DT correctly classified 80.8% of the
training and 80.1% of the testing examples. Correspondingly it correctly
classified 89.2% and 87.7% of the EEG recordings.
| [
{
"version": "v1",
"created": "Wed, 13 Apr 2005 14:28:48 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Schetinin",
"Vitaly",
""
]
] | TITLE: A Neural Network Decision Tree for Learning Concepts from EEG Data
ABSTRACT: To learn the multi-class conceptions from the electroencephalogram (EEG) data
we developed a neural network decision tree (DT), that performs the linear
tests, and a new training algorithm. We found that the known methods fail
inducting the classification models when the data are presented by the features
some of them are irrelevant, and the classes are heavily overlapped. To train
the DT, our algorithm exploits a bottom up search of the features that provide
the best classification accuracy of the linear tests. We applied the developed
algorithm to induce the DT from the large EEG dataset consisted of 65 patients
belonging to 16 age groups. In these recordings each EEG segment was
represented by 72 calculated features. The DT correctly classified 80.8% of the
training and 80.1% of the testing examples. Correspondingly it correctly
classified 89.2% and 87.7% of the EEG recordings.
| no_new_dataset | 0.94256 |
cs/0504065 | Vitaly Schetinin | Vitaly Schetinin, Jonathan E. Fieldsend, Derek Partridge, Wojtek J.
Krzanowski, Richard M. Everson, Trevor C. Bailey and Adolfo Hernandez | Estimating Classification Uncertainty of Bayesian Decision Tree
Technique on Financial Data | null | null | null | null | cs.AI | null | Bayesian averaging over classification models allows the uncertainty of
classification outcomes to be evaluated, which is of crucial importance for
making reliable decisions in applications such as financial in which risks have
to be estimated. The uncertainty of classification is determined by a trade-off
between the amount of data available for training, the diversity of a
classifier ensemble and the required performance. The interpretability of
classification models can also give useful information for experts responsible
for making reliable classifications. For this reason Decision Trees (DTs) seem
to be attractive classification models. The required diversity of the DT
ensemble can be achieved by using the Bayesian model averaging all possible
DTs. In practice, the Bayesian approach can be implemented on the base of a
Markov Chain Monte Carlo (MCMC) technique of random sampling from the posterior
distribution. For sampling large DTs, the MCMC method is extended by Reversible
Jump technique which allows inducing DTs under given priors. For the case when
the prior information on the DT size is unavailable, the sweeping technique
defining the prior implicitly reveals a better performance. Within this Chapter
we explore the classification uncertainty of the Bayesian MCMC techniques on
some datasets from the StatLog Repository and real financial data. The
classification uncertainty is compared within an Uncertainty Envelope technique
dealing with the class posterior distribution and a given confidence
probability. This technique provides realistic estimates of the classification
uncertainty which can be easily interpreted in statistical terms with the aim
of risk evaluation.
| [
{
"version": "v1",
"created": "Thu, 14 Apr 2005 10:30:54 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Schetinin",
"Vitaly",
""
],
[
"Fieldsend",
"Jonathan E.",
""
],
[
"Partridge",
"Derek",
""
],
[
"Krzanowski",
"Wojtek J.",
""
],
[
"Everson",
"Richard M.",
""
],
[
"Bailey",
"Trevor C.",
""
],
[
"Hernandez",
"Adolfo",
""
]
] | TITLE: Estimating Classification Uncertainty of Bayesian Decision Tree
Technique on Financial Data
ABSTRACT: Bayesian averaging over classification models allows the uncertainty of
classification outcomes to be evaluated, which is of crucial importance for
making reliable decisions in applications such as financial in which risks have
to be estimated. The uncertainty of classification is determined by a trade-off
between the amount of data available for training, the diversity of a
classifier ensemble and the required performance. The interpretability of
classification models can also give useful information for experts responsible
for making reliable classifications. For this reason Decision Trees (DTs) seem
to be attractive classification models. The required diversity of the DT
ensemble can be achieved by using the Bayesian model averaging all possible
DTs. In practice, the Bayesian approach can be implemented on the base of a
Markov Chain Monte Carlo (MCMC) technique of random sampling from the posterior
distribution. For sampling large DTs, the MCMC method is extended by Reversible
Jump technique which allows inducing DTs under given priors. For the case when
the prior information on the DT size is unavailable, the sweeping technique
defining the prior implicitly reveals a better performance. Within this Chapter
we explore the classification uncertainty of the Bayesian MCMC techniques on
some datasets from the StatLog Repository and real financial data. The
classification uncertainty is compared within an Uncertainty Envelope technique
dealing with the class posterior distribution and a given confidence
probability. This technique provides realistic estimates of the classification
uncertainty which can be easily interpreted in statistical terms with the aim
of risk evaluation.
| no_new_dataset | 0.948346 |
cs/0504066 | Vitaly Schetinin | Vitaly Schetinin, Jonathan E. Fieldsend, Derek Partridge, Wojtek J.
Krzanowski, Richard M. Everson, Trevor C. Bailey, and Adolfo Hernandez | Comparison of the Bayesian and Randomised Decision Tree Ensembles within
an Uncertainty Envelope Technique | null | Journal of Mathematical Modelling and Algorithms, 2005 | null | null | cs.AI | null | Multiple Classifier Systems (MCSs) allow evaluation of the uncertainty of
classification outcomes that is of crucial importance for safety critical
applications. The uncertainty of classification is determined by a trade-off
between the amount of data available for training, the classifier diversity and
the required performance. The interpretability of MCSs can also give useful
information for experts responsible for making reliable classifications. For
this reason Decision Trees (DTs) seem to be attractive classification models
for experts. The required diversity of MCSs exploiting such classification
models can be achieved by using two techniques, the Bayesian model averaging
and the randomised DT ensemble. Both techniques have revealed promising results
when applied to real-world problems. In this paper we experimentally compare
the classification uncertainty of the Bayesian model averaging with a
restarting strategy and the randomised DT ensemble on a synthetic dataset and
some domain problems commonly used in the machine learning community. To make
the Bayesian DT averaging feasible, we use a Markov Chain Monte Carlo
technique. The classification uncertainty is evaluated within an Uncertainty
Envelope technique dealing with the class posterior distribution and a given
confidence probability. Exploring a full posterior distribution, this technique
produces realistic estimates which can be easily interpreted in statistical
terms. In our experiments we found out that the Bayesian DTs are superior to
the randomised DT ensembles within the Uncertainty Envelope technique.
| [
{
"version": "v1",
"created": "Thu, 14 Apr 2005 10:33:33 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Schetinin",
"Vitaly",
""
],
[
"Fieldsend",
"Jonathan E.",
""
],
[
"Partridge",
"Derek",
""
],
[
"Krzanowski",
"Wojtek J.",
""
],
[
"Everson",
"Richard M.",
""
],
[
"Bailey",
"Trevor C.",
""
],
[
"Hernandez",
"Adolfo",
""
]
] | TITLE: Comparison of the Bayesian and Randomised Decision Tree Ensembles within
an Uncertainty Envelope Technique
ABSTRACT: Multiple Classifier Systems (MCSs) allow evaluation of the uncertainty of
classification outcomes that is of crucial importance for safety critical
applications. The uncertainty of classification is determined by a trade-off
between the amount of data available for training, the classifier diversity and
the required performance. The interpretability of MCSs can also give useful
information for experts responsible for making reliable classifications. For
this reason Decision Trees (DTs) seem to be attractive classification models
for experts. The required diversity of MCSs exploiting such classification
models can be achieved by using two techniques, the Bayesian model averaging
and the randomised DT ensemble. Both techniques have revealed promising results
when applied to real-world problems. In this paper we experimentally compare
the classification uncertainty of the Bayesian model averaging with a
restarting strategy and the randomised DT ensemble on a synthetic dataset and
some domain problems commonly used in the machine learning community. To make
the Bayesian DT averaging feasible, we use a Markov Chain Monte Carlo
technique. The classification uncertainty is evaluated within an Uncertainty
Envelope technique dealing with the class posterior distribution and a given
confidence probability. Exploring a full posterior distribution, this technique
produces realistic estimates which can be easily interpreted in statistical
terms. In our experiments we found out that the Bayesian DTs are superior to
the randomised DT ensembles within the Uncertainty Envelope technique.
| no_new_dataset | 0.953492 |
cs/0505060 | Zengyou He | Zengyou He, Xiaofei Xu, Shengchun Deng | A Unified Subspace Outlier Ensemble Framework for Outlier Detection in
High Dimensional Spaces | 17 pages | null | null | TR-04-08 | cs.DB cs.AI | null | The task of outlier detection is to find small groups of data objects that
are exceptional when compared with rest large amount of data. Detection of such
outliers is important for many applications such as fraud detection and
customer migration. Most such applications are high dimensional domains in
which the data may contain hundreds of dimensions. However, the outlier
detection problem itself is not well defined and none of the existing
definitions are widely accepted, especially in high dimensional space. In this
paper, our first contribution is to propose a unified framework for outlier
detection in high dimensional spaces from an ensemble-learning viewpoint. In
our new framework, the outlying-ness of each data object is measured by fusing
outlier factors in different subspaces using a combination function.
Accordingly, we show that all existing researches on outlier detection can be
regarded as special cases in the unified framework with respect to the set of
subspaces considered and the type of combination function used. In addition, to
demonstrate the usefulness of the ensemble-learning based outlier detection
framework, we developed a very simple and fast algorithm, namely SOE1 (Subspace
Outlier Ensemble using 1-dimensional Subspaces) in which only subspaces with
one dimension is used for mining outliers from large categorical datasets. The
SOE1 algorithm needs only two scans over the dataset and hence is very
appealing in real data mining applications. Experimental results on real
datasets and large synthetic datasets show that: (1) SOE1 has comparable
performance with respect to those state-of-art outlier detection algorithms on
identifying true outliers and (2) SOE1 can be an order of magnitude faster than
one of the fastest outlier detection algorithms known so far.
| [
{
"version": "v1",
"created": "Tue, 24 May 2005 02:41:51 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"He",
"Zengyou",
""
],
[
"Xu",
"Xiaofei",
""
],
[
"Deng",
"Shengchun",
""
]
] | TITLE: A Unified Subspace Outlier Ensemble Framework for Outlier Detection in
High Dimensional Spaces
ABSTRACT: The task of outlier detection is to find small groups of data objects that
are exceptional when compared with rest large amount of data. Detection of such
outliers is important for many applications such as fraud detection and
customer migration. Most such applications are high dimensional domains in
which the data may contain hundreds of dimensions. However, the outlier
detection problem itself is not well defined and none of the existing
definitions are widely accepted, especially in high dimensional space. In this
paper, our first contribution is to propose a unified framework for outlier
detection in high dimensional spaces from an ensemble-learning viewpoint. In
our new framework, the outlying-ness of each data object is measured by fusing
outlier factors in different subspaces using a combination function.
Accordingly, we show that all existing researches on outlier detection can be
regarded as special cases in the unified framework with respect to the set of
subspaces considered and the type of combination function used. In addition, to
demonstrate the usefulness of the ensemble-learning based outlier detection
framework, we developed a very simple and fast algorithm, namely SOE1 (Subspace
Outlier Ensemble using 1-dimensional Subspaces) in which only subspaces with
one dimension is used for mining outliers from large categorical datasets. The
SOE1 algorithm needs only two scans over the dataset and hence is very
appealing in real data mining applications. Experimental results on real
datasets and large synthetic datasets show that: (1) SOE1 has comparable
performance with respect to those state-of-art outlier detection algorithms on
identifying true outliers and (2) SOE1 can be an order of magnitude faster than
one of the fastest outlier detection algorithms known so far.
| no_new_dataset | 0.949576 |
cs/0507065 | Zengyou He | Zengyou He, Xiaofei Xu, Shengchun Deng | A Fast Greedy Algorithm for Outlier Mining | 11 pages | null | null | Tr-05-0406 | cs.DB cs.AI | null | The task of outlier detection is to find small groups of data objects that
are exceptional when compared with rest large amount of data. In [38], the
problem of outlier detection in categorical data is defined as an optimization
problem and a local-search heuristic based algorithm (LSA) is presented.
However, as is the case with most iterative type algorithms, the LSA algorithm
is still very time-consuming on very large datasets. In this paper, we present
a very fast greedy algorithm for mining outliers under the same optimization
model. Experimental results on real datasets and large synthetic datasets show
that: (1) Our algorithm has comparable performance with respect to those
state-of-art outlier detection algorithms on identifying true outliers and (2)
Our algorithm can be an order of magnitude faster than LSA algorithm.
| [
{
"version": "v1",
"created": "Wed, 27 Jul 2005 02:14:02 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"He",
"Zengyou",
""
],
[
"Xu",
"Xiaofei",
""
],
[
"Deng",
"Shengchun",
""
]
] | TITLE: A Fast Greedy Algorithm for Outlier Mining
ABSTRACT: The task of outlier detection is to find small groups of data objects that
are exceptional when compared with rest large amount of data. In [38], the
problem of outlier detection in categorical data is defined as an optimization
problem and a local-search heuristic based algorithm (LSA) is presented.
However, as is the case with most iterative type algorithms, the LSA algorithm
is still very time-consuming on very large datasets. In this paper, we present
a very fast greedy algorithm for mining outliers under the same optimization
model. Experimental results on real datasets and large synthetic datasets show
that: (1) Our algorithm has comparable performance with respect to those
state-of-art outlier detection algorithms on identifying true outliers and (2)
Our algorithm can be an order of magnitude faster than LSA algorithm.
| no_new_dataset | 0.951774 |
cs/0508033 | Dmitri Krioukov | Priya Mahadevan, Dmitri Krioukov, Marina Fomenkov, Bradley Huffaker,
Xenofontas Dimitropoulos, kc claffy, Amin Vahdat | Lessons from Three Views of the Internet Topology | null | null | null | CAIDA-TR-2005-02 | cs.NI physics.soc-ph | null | Network topology plays a vital role in understanding the performance of
network applications and protocols. Thus, recently there has been tremendous
interest in generating realistic network topologies. Such work must begin with
an understanding of existing network topologies, which today typically consists
of a relatively small number of data sources. In this paper, we calculate an
extensive set of important characteristics of Internet AS-level topologies
extracted from the three data sources most frequently used by the research
community: traceroutes, BGP, and WHOIS. We find that traceroute and BGP
topologies are similar to one another but differ substantially from the WHOIS
topology. We discuss the interplay between the properties of the data sources
that result from specific data collection mechanisms and the resulting topology
views. We find that, among metrics widely considered, the joint degree
distribution appears to fundamentally characterize Internet AS-topologies: it
narrowly defines values for other important metrics. We also introduce an
evaluation criteria for the accuracy of topology generators and verify previous
observations that generators solely reproducing degree distributions cannot
capture the full spectrum of critical topological characteristics of any of the
three topologies. Finally, we release to the community the input topology
datasets, along with the scripts and output of our calculations. This
supplement should enable researchers to validate their models against real data
and to make more informed selection of topology data sources for their specific
needs.
| [
{
"version": "v1",
"created": "Thu, 4 Aug 2005 02:35:45 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Mahadevan",
"Priya",
""
],
[
"Krioukov",
"Dmitri",
""
],
[
"Fomenkov",
"Marina",
""
],
[
"Huffaker",
"Bradley",
""
],
[
"Dimitropoulos",
"Xenofontas",
""
],
[
"claffy",
"kc",
""
],
[
"Vahdat",
"Amin",
""
]
] | TITLE: Lessons from Three Views of the Internet Topology
ABSTRACT: Network topology plays a vital role in understanding the performance of
network applications and protocols. Thus, recently there has been tremendous
interest in generating realistic network topologies. Such work must begin with
an understanding of existing network topologies, which today typically consists
of a relatively small number of data sources. In this paper, we calculate an
extensive set of important characteristics of Internet AS-level topologies
extracted from the three data sources most frequently used by the research
community: traceroutes, BGP, and WHOIS. We find that traceroute and BGP
topologies are similar to one another but differ substantially from the WHOIS
topology. We discuss the interplay between the properties of the data sources
that result from specific data collection mechanisms and the resulting topology
views. We find that, among metrics widely considered, the joint degree
distribution appears to fundamentally characterize Internet AS-topologies: it
narrowly defines values for other important metrics. We also introduce an
evaluation criteria for the accuracy of topology generators and verify previous
observations that generators solely reproducing degree distributions cannot
capture the full spectrum of critical topological characteristics of any of the
three topologies. Finally, we release to the community the input topology
datasets, along with the scripts and output of our calculations. This
supplement should enable researchers to validate their models against real data
and to make more informed selection of topology data sources for their specific
needs.
| no_new_dataset | 0.947235 |
cs/0509011 | Zengyou He | Zengyou He, Xiaofei Xu, Shengchun Deng | Clustering Mixed Numeric and Categorical Data: A Cluster Ensemble
Approach | 14 pages | null | null | Tr-2002-10 | cs.AI | null | Clustering is a widely used technique in data mining applications for
discovering patterns in underlying data. Most traditional clustering algorithms
are limited to handling datasets that contain either numeric or categorical
attributes. However, datasets with mixed types of attributes are common in real
life data mining applications. In this paper, we propose a novel
divide-and-conquer technique to solve this problem. First, the original mixed
dataset is divided into two sub-datasets: the pure categorical dataset and the
pure numeric dataset. Next, existing well established clustering algorithms
designed for different types of datasets are employed to produce corresponding
clusters. Last, the clustering results on the categorical and numeric dataset
are combined as a categorical dataset, on which the categorical data clustering
algorithm is used to get the final clusters. Our contribution in this paper is
to provide an algorithm framework for the mixed attributes clustering problem,
in which existing clustering algorithms can be easily integrated, the
capabilities of different kinds of clustering algorithms and characteristics of
different types of datasets could be fully exploited. Comparisons with other
clustering algorithms on real life datasets illustrate the superiority of our
approach.
| [
{
"version": "v1",
"created": "Mon, 5 Sep 2005 02:47:12 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"He",
"Zengyou",
""
],
[
"Xu",
"Xiaofei",
""
],
[
"Deng",
"Shengchun",
""
]
] | TITLE: Clustering Mixed Numeric and Categorical Data: A Cluster Ensemble
Approach
ABSTRACT: Clustering is a widely used technique in data mining applications for
discovering patterns in underlying data. Most traditional clustering algorithms
are limited to handling datasets that contain either numeric or categorical
attributes. However, datasets with mixed types of attributes are common in real
life data mining applications. In this paper, we propose a novel
divide-and-conquer technique to solve this problem. First, the original mixed
dataset is divided into two sub-datasets: the pure categorical dataset and the
pure numeric dataset. Next, existing well established clustering algorithms
designed for different types of datasets are employed to produce corresponding
clusters. Last, the clustering results on the categorical and numeric dataset
are combined as a categorical dataset, on which the categorical data clustering
algorithm is used to get the final clusters. Our contribution in this paper is
to provide an algorithm framework for the mixed attributes clustering problem,
in which existing clustering algorithms can be easily integrated, the
capabilities of different kinds of clustering algorithms and characteristics of
different types of datasets could be fully exploited. Comparisons with other
clustering algorithms on real life datasets illustrate the superiority of our
approach.
| no_new_dataset | 0.951097 |
cs/0509033 | Zengyou He | Zengyou He, Xiaofei Xu, Shengchun Deng, Bin Dong | K-Histograms: An Efficient Clustering Algorithm for Categorical Dataset | 11 pages | null | null | Tr-2003-08 | cs.AI | null | Clustering categorical data is an integral part of data mining and has
attracted much attention recently. In this paper, we present k-histogram, a new
efficient algorithm for clustering categorical data. The k-histogram algorithm
extends the k-means algorithm to categorical domain by replacing the means of
clusters with histograms, and dynamically updates histograms in the clustering
process. Experimental results on real datasets show that k-histogram algorithm
can produce better clustering results than k-modes algorithm, the one related
with our work most closely.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2005 06:33:08 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"He",
"Zengyou",
""
],
[
"Xu",
"Xiaofei",
""
],
[
"Deng",
"Shengchun",
""
],
[
"Dong",
"Bin",
""
]
] | TITLE: K-Histograms: An Efficient Clustering Algorithm for Categorical Dataset
ABSTRACT: Clustering categorical data is an integral part of data mining and has
attracted much attention recently. In this paper, we present k-histogram, a new
efficient algorithm for clustering categorical data. The k-histogram algorithm
extends the k-means algorithm to categorical domain by replacing the means of
clusters with histograms, and dynamically updates histograms in the clustering
process. Experimental results on real datasets show that k-histogram algorithm
can produce better clustering results than k-modes algorithm, the one related
with our work most closely.
| no_new_dataset | 0.955152 |
cs/0509082 | Yossi Zana | Yossi Zana, Roberto M. Cesar-JR | Face Recognition Based on Polar Frequency Features | ACM Transactions on Applied Perception | null | null | null | cs.CV | null | A novel biologically motivated face recognition algorithm based on polar
frequency is presented. Polar frequency descriptors are extracted from face
images by Fourier-Bessel transform (FBT). Next, the Euclidean distance between
all images is computed and each image is now represented by its dissimilarity
to the other images. A Pseudo-Fisher Linear Discriminant was built on this
dissimilarity space. The performance of Discrete Fourier transform (DFT)
descriptors, and a combination of both feature types was also evaluated. The
algorithms were tested on a 40- and 1196-subjects face database (ORL and FERET,
respectively). With 5 images per subject in the training and test datasets,
error rate on the ORL database was 3.8, 1.25 and 0.2% for the FBT, DFT, and the
combined classifier, respectively, as compared to 2.6% achieved by the best
previous algorithm. The most informative polar frequency features were
concentrated at low-to-medium angular frequencies coupled to low radial
frequencies. On the FERET database, where an affine normalization
pre-processing was applied, the FBT algorithm outperformed only the PCA in a
rank recognition test. However, it achieved performance comparable to
state-of-the-art methods when evaluated by verification tests. These results
indicate the high informative value of the polar frequency content of face
images in relation to recognition and verification tasks, and that the
Cartesian frequency content can complement information about the subjects'
identity, but possibly only when the images are not pre-normalized. Possible
implications for human face recognition are discussed.
| [
{
"version": "v1",
"created": "Tue, 27 Sep 2005 15:50:27 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Zana",
"Yossi",
""
],
[
"Cesar-JR",
"Roberto M.",
""
]
] | TITLE: Face Recognition Based on Polar Frequency Features
ABSTRACT: A novel biologically motivated face recognition algorithm based on polar
frequency is presented. Polar frequency descriptors are extracted from face
images by Fourier-Bessel transform (FBT). Next, the Euclidean distance between
all images is computed and each image is now represented by its dissimilarity
to the other images. A Pseudo-Fisher Linear Discriminant was built on this
dissimilarity space. The performance of Discrete Fourier transform (DFT)
descriptors, and a combination of both feature types was also evaluated. The
algorithms were tested on a 40- and 1196-subjects face database (ORL and FERET,
respectively). With 5 images per subject in the training and test datasets,
error rate on the ORL database was 3.8, 1.25 and 0.2% for the FBT, DFT, and the
combined classifier, respectively, as compared to 2.6% achieved by the best
previous algorithm. The most informative polar frequency features were
concentrated at low-to-medium angular frequencies coupled to low radial
frequencies. On the FERET database, where an affine normalization
pre-processing was applied, the FBT algorithm outperformed only the PCA in a
rank recognition test. However, it achieved performance comparable to
state-of-the-art methods when evaluated by verification tests. These results
indicate the high informative value of the polar frequency content of face
images in relation to recognition and verification tasks, and that the
Cartesian frequency content can complement information about the subjects'
identity, but possibly only when the images are not pre-normalized. Possible
implications for human face recognition are discussed.
| no_new_dataset | 0.953057 |
cs/0510054 | Le Zhao Mr. | Le Zhao, Min Zhang, Shaoping Ma | The Nature of Novelty Detection | This paper pointed out the future direction for novelty detection
research. 37 pages, double spaced version | null | null | null | cs.IR cs.CL | null | Sentence level novelty detection aims at reducing redundant sentences from a
sentence list. In the task, sentences appearing later in the list with no new
meanings are eliminated. Aiming at a better accuracy for detecting redundancy,
this paper reveals the nature of the novelty detection task currently
overlooked by the Novelty community $-$ Novelty as a combination of the partial
overlap (PO, two sentences sharing common facts) and complete overlap (CO, the
first sentence covers all the facts of the second sentence) relations. By
formalizing novelty detection as a combination of the two relations between
sentences, new viewpoints toward techniques dealing with Novelty are proposed.
Among the methods discussed, the similarity, overlap, pool and language
modeling approaches are commonly used. Furthermore, a novel approach, selected
pool method is provided, which is immediate following the nature of the task.
Experimental results obtained on all the three currently available novelty
datasets showed that selected pool is significantly better or no worse than the
current methods. Knowledge about the nature of the task also affects the
evaluation methodologies. We propose new evaluation measures for Novelty
according to the nature of the task, as well as possible directions for future
study.
| [
{
"version": "v1",
"created": "Wed, 19 Oct 2005 14:56:48 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Zhao",
"Le",
""
],
[
"Zhang",
"Min",
""
],
[
"Ma",
"Shaoping",
""
]
] | TITLE: The Nature of Novelty Detection
ABSTRACT: Sentence level novelty detection aims at reducing redundant sentences from a
sentence list. In the task, sentences appearing later in the list with no new
meanings are eliminated. Aiming at a better accuracy for detecting redundancy,
this paper reveals the nature of the novelty detection task currently
overlooked by the Novelty community $-$ Novelty as a combination of the partial
overlap (PO, two sentences sharing common facts) and complete overlap (CO, the
first sentence covers all the facts of the second sentence) relations. By
formalizing novelty detection as a combination of the two relations between
sentences, new viewpoints toward techniques dealing with Novelty are proposed.
Among the methods discussed, the similarity, overlap, pool and language
modeling approaches are commonly used. Furthermore, a novel approach, selected
pool method is provided, which is immediate following the nature of the task.
Experimental results obtained on all the three currently available novelty
datasets showed that selected pool is significantly better or no worse than the
current methods. Knowledge about the nature of the task also affects the
evaluation methodologies. We propose new evaluation measures for Novelty
according to the nature of the task, as well as possible directions for future
study.
| no_new_dataset | 0.954435 |
cs/0511013 | Zengyou He | Zengyou He, Xiaofei Xu, Shengchun Deng | K-ANMI: A Mutual Information Based Clustering Algorithm for Categorical
Data | 18 pages | null | null | Tr-2004-03 | cs.AI cs.DB | null | Clustering categorical data is an integral part of data mining and has
attracted much attention recently. In this paper, we present k-ANMI, a new
efficient algorithm for clustering categorical data. The k-ANMI algorithm works
in a way that is similar to the popular k-means algorithm, and the goodness of
clustering in each step is evaluated using a mutual information based criterion
(namely, Average Normalized Mutual Information-ANMI) borrowed from cluster
ensemble. Experimental results on real datasets show that the k-ANMI algorithm
is competitive with state-of-the-art categorical data clustering algorithms with
respect to clustering accuracy.
| [
{
"version": "v1",
"created": "Thu, 3 Nov 2005 01:18:47 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"He",
"Zengyou",
""
],
[
"Xu",
"Xiaofei",
""
],
[
"Deng",
"Shengchun",
""
]
] | TITLE: K-ANMI: A Mutual Information Based Clustering Algorithm for Categorical
Data
ABSTRACT: Clustering categorical data is an integral part of data mining and has
attracted much attention recently. In this paper, we present k-ANMI, a new
efficient algorithm for clustering categorical data. The k-ANMI algorithm works
in a way that is similar to the popular k-means algorithm, and the goodness of
clustering in each step is evaluated using a mutual information based criterion
(namely, Average Normalized Mutual Information-ANMI) borrowed from cluster
ensemble. Experimental results on real datasets show that the k-ANMI algorithm
is competitive with state-of-the-art categorical data clustering algorithms with
respect to clustering accuracy.
| no_new_dataset | 0.953232 |
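The k-ANMI record above optimizes an Average Normalized Mutual Information criterion against an ensemble of partitions. The following Python sketch is my illustration only, not the paper's implementation: the function name average_nmi and the toy partitions are assumptions, and in the k-ANMI setting the ensemble would come from the categorical attributes themselves.

# Illustrative sketch only (not the authors' code): scoring an ANMI-style
# objective for a candidate labeling against an ensemble of partitions.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def average_nmi(candidate_labels, ensemble_partitions):
    """Mean normalized mutual information between one labeling and a set of
    reference partitions (each a 1-D array of cluster ids)."""
    return float(np.mean([normalized_mutual_info_score(candidate_labels, p)
                          for p in ensemble_partitions]))

ensemble = [np.array([0, 0, 1, 1, 2, 2]),   # e.g. one partition per attribute
            np.array([0, 0, 0, 1, 1, 1]),
            np.array([0, 1, 1, 1, 2, 2])]
candidate = np.array([0, 0, 1, 1, 2, 2])
print(average_nmi(candidate, ensemble))     # higher is better for the candidate labeling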
cs/0511075 | Vasant Honavar | Michael Terribilini, Jae-Hyung Lee, Changhui Yan, Robert L. Jernigan,
Susan Carpenter, Vasant Honavar, Drena Dobbs | Identifying Interaction Sites in "Recalcitrant" Proteins: Predicted
Protein and Rna Binding Sites in Rev Proteins of Hiv-1 and Eiav Agree with
Experimental Data | Pacific Symposium on Biocomputing, Hawaii, In press, Accepted, 2006 | null | null | null | cs.LG cs.AI | null | Protein-protein and protein nucleic acid interactions are vitally important
for a wide range of biological processes, including regulation of gene
expression, protein synthesis, and replication and assembly of many viruses. We
have developed machine learning approaches for predicting which amino acids of
a protein participate in its interactions with other proteins and/or nucleic
acids, using only the protein sequence as input. In this paper, we describe an
application of classifiers trained on datasets of well-characterized
protein-protein and protein-RNA complexes for which experimental structures are
available. We apply these classifiers to the problem of predicting protein and
RNA binding sites in the sequence of a clinically important protein for which
the structure is not known: the regulatory protein Rev, essential for the
replication of HIV-1 and other lentiviruses. We compare our predictions with
published biochemical, genetic and partial structural information for HIV-1 and
EIAV Rev and with our own published experimental mapping of RNA binding sites
in EIAV Rev. The predicted and experimentally determined binding sites are in
very good agreement. The ability to predict reliably the residues of a protein
that directly contribute to specific binding events - without the requirement
for structural information regarding either the protein or complexes in which
it participates - can potentially generate new disease intervention strategies.
| [
{
"version": "v1",
"created": "Mon, 21 Nov 2005 01:47:53 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Terribilini",
"Michael",
""
],
[
"Lee",
"Jae-Hyung",
""
],
[
"Yan",
"Changhui",
""
],
[
"Jernigan",
"Robert L.",
""
],
[
"Carpenter",
"Susan",
""
],
[
"Honavar",
"Vasant",
""
],
[
"Dobbs",
"Drena",
""
]
] | TITLE: Identifying Interaction Sites in "Recalcitrant" Proteins: Predicted
Protein and Rna Binding Sites in Rev Proteins of Hiv-1 and Eiav Agree with
Experimental Data
ABSTRACT: Protein-protein and protein nucleic acid interactions are vitally important
for a wide range of biological processes, including regulation of gene
expression, protein synthesis, and replication and assembly of many viruses. We
have developed machine learning approaches for predicting which amino acids of
a protein participate in its interactions with other proteins and/or nucleic
acids, using only the protein sequence as input. In this paper, we describe an
application of classifiers trained on datasets of well-characterized
protein-protein and protein-RNA complexes for which experimental structures are
available. We apply these classifiers to the problem of predicting protein and
RNA binding sites in the sequence of a clinically important protein for which
the structure is not known: the regulatory protein Rev, essential for the
replication of HIV-1 and other lentiviruses. We compare our predictions with
published biochemical, genetic and partial structural information for HIV-1 and
EIAV Rev and with our own published experimental mapping of RNA binding sites
in EIAV Rev. The predicted and experimentally determined binding sites are in
very good agreement. The ability to predict reliably the residues of a protein
that directly contribute to specific binding events - without the requirement
for structural information regarding either the protein or complexes in which
it participates - can potentially generate new disease intervention strategies.
| no_new_dataset | 0.952574 |
cs/0511106 | Sergiu Chelcea | Sergiu Theodor Chelcea (INRIA Rocquencourt / INRIA Sophia Antipolis),
Alzennyr Da Silva (INRIA Rocquencourt / INRIA Sophia Antipolis), Yves
Lechevallier (INRIA Rocquencourt / INRIA Sophia Antipolis), Doru Tanasa
(INRIA Rocquencourt / INRIA Sophia Antipolis), Brigitte Trousse (INRIA
Rocquencourt / INRIA Sophia Antipolis) | Benefits of InterSite Pre-Processing and Clustering Methods in
E-Commerce Domain | null | Dans Proceedings of the ECML/PKDD2005 Discovery Challenge, A
Collaborative Effort in Knowledge Discovery from Databases | null | null | cs.DB | null | This paper presents our preprocessing and clustering analysis on the
clickstream dataset proposed for the ECMLPKDD 2005 Discovery Challenge. The
main contributions of this article are twofold. First, after presenting the
clickstream dataset, we show how we build a rich data warehouse based on
advanced preprocessing. We take into account the intersite aspects in the given
e-commerce domain, which offers an interesting structuring of the data. A
preliminary statistical analysis based on time-period clickstreams is given,
emphasizing the importance of intersite user visits in such a context.
Secondly, we describe our crossed-clustering method, which is applied to data
generated from our data warehouse. Our preliminary results are interesting and
promising, illustrating
the benefits of our WUM methods, even if more investigations are needed on the
same dataset.
| [
{
"version": "v1",
"created": "Wed, 30 Nov 2005 16:12:38 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Chelcea",
"Sergiu Theodor",
"",
"INRIA Rocquencourt / INRIA Sophia Antipolis"
],
[
"Da Silva",
"Alzennyr",
"",
"INRIA Rocquencourt / INRIA Sophia Antipolis"
],
[
"Lechevallier",
"Yves",
"",
"INRIA Rocquencourt / INRIA Sophia Antipolis"
],
[
"Tanasa",
"Doru",
"",
"INRIA Rocquencourt / INRIA Sophia Antipolis"
],
[
"Trousse",
"Brigitte",
"",
"INRIA\n Rocquencourt / INRIA Sophia Antipolis"
]
] | TITLE: Benefits of InterSite Pre-Processing and Clustering Methods in
E-Commerce Domain
ABSTRACT: This paper presents our preprocessing and clustering analysis on the
clickstream dataset proposed for the ECMLPKDD 2005 Discovery Challenge. The
main contributions of this article are twofold. First, after presenting the
clickstream dataset, we show how we build a rich data warehouse based on
advanced preprocessing. We take into account the intersite aspects in the given
e-commerce domain, which offers an interesting structuring of the data. A
preliminary statistical analysis based on time-period clickstreams is given,
emphasizing the importance of intersite user visits in such a context.
Secondly, we describe our crossed-clustering method, which is applied to data
generated from our data warehouse. Our preliminary results are interesting and
promising, illustrating
the benefits of our WUM methods, even if more investigations are needed on the
same dataset.
| no_new_dataset | 0.938181 |
cs/0512052 | Ion Mandoiu | Ion I. Mandoiu and Claudia Prajescu | High-Throughput SNP Genotyping by SBE/SBH | 19 pages | null | null | null | cs.DS q-bio.GN | null | Despite much progress over the past decade, current Single Nucleotide
Polymorphism (SNP) genotyping technologies still offer an insufficient degree
of multiplexing when required to handle user-selected sets of SNPs. In this
paper we propose a new genotyping assay architecture combining multiplexed
solution-phase single-base extension (SBE) reactions with sequencing by
hybridization (SBH) using universal DNA arrays such as all $k$-mer arrays. In
addition to PCR amplification of genomic DNA, SNP genotyping using SBE/SBH
assays involves the following steps: (1) Synthesizing primers complementing the
genomic sequence immediately preceding SNPs of interest; (2) Hybridizing these
primers with the genomic DNA; (3) Extending each primer by a single base using
polymerase enzyme and dideoxynucleotides labeled with 4 different fluorescent
dyes; and finally (4) Hybridizing extended primers to a universal DNA array and
determining the identity of the bases that extend each primer by hybridization
pattern analysis. Our contributions include a study of multiplexing algorithms
for SBE/SBH genotyping assays and preliminary experimental results showing the
achievable tradeoffs between the number of array probes and primer length on
one hand and the number of SNPs that can be assayed simultaneously on the
other. Simulation results on datasets both randomly generated and extracted
from the NCBI dbSNP database suggest that the SBE/SBH architecture provides a
flexible and cost-effective alternative to genotyping assays currently used in
the industry, enabling genotyping of up to hundreds of thousands of
user-specified SNPs per assay.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2005 18:01:51 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Mandoiu",
"Ion I.",
""
],
[
"Prajescu",
"Claudia",
""
]
] | TITLE: High-Throughput SNP Genotyping by SBE/SBH
ABSTRACT: Despite much progress over the past decade, current Single Nucleotide
Polymorphism (SNP) genotyping technologies still offer an insufficient degree
of multiplexing when required to handle user-selected sets of SNPs. In this
paper we propose a new genotyping assay architecture combining multiplexed
solution-phase single-base extension (SBE) reactions with sequencing by
hybridization (SBH) using universal DNA arrays such as all $k$-mer arrays. In
addition to PCR amplification of genomic DNA, SNP genotyping using SBE/SBH
assays involves the following steps: (1) Synthesizing primers complementing the
genomic sequence immediately preceding SNPs of interest; (2) Hybridizing these
primers with the genomic DNA; (3) Extending each primer by a single base using
polymerase enzyme and dideoxynucleotides labeled with 4 different fluorescent
dyes; and finally (4) Hybridizing extended primers to a universal DNA array and
determining the identity of the bases that extend each primer by hybridization
pattern analysis. Our contributions include a study of multiplexing algorithms
for SBE/SBH genotyping assays and preliminary experimental results showing the
achievable tradeoffs between the number of array probes and primer length on
one hand and the number of SNPs that can be assayed simultaneously on the
other. Simulation results on datasets both randomly generated and extracted
from the NCBI dbSNP database suggest that the SBE/SBH architecture provides a
flexible and cost-effective alternative to genotyping assays currently used in
the industry, enabling genotyping of up to hundreds of thousands of
user-specified SNPs per assay.
| no_new_dataset | 0.948106 |
cs/0602031 | Wit Jakuczun | Wit Jakuczun | Classifying Signals with Local Classifiers | null | null | null | null | cs.AI | null | This paper deals with the problem of classifying signals. The new method for
building so-called local classifiers and local features is presented. The
method is a combination of the lifting scheme and the support vector machines.
Its main aim is to produce effective and yet comprehensible classifiers that
would help in understanding processes hidden behind classified signals. To
illustrate the method we present the results obtained on an artificial and a
real dataset.
| [
{
"version": "v1",
"created": "Wed, 8 Feb 2006 11:38:44 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Jakuczun",
"Wit",
""
]
] | TITLE: Classifying Signals with Local Classifiers
ABSTRACT: This paper deals with the problem of classifying signals. The new method for
building so-called local classifiers and local features is presented. The
method is a combination of the lifting scheme and the support vector machines.
Its main aim is to produce effective and yet comprehensible classifiers that
would help in understanding processes hidden behind classified signals. To
illustrate the method we present the results obtained on an artificial and a
real dataset.
| no_new_dataset | 0.894513 |
cs/0603090 | Alexander Gorban | A.N. Gorban, N.R. Sumner, A.Y. Zinovyev | Topological Grammars for Data Approximation | Corrected Journal version, Appl. Math. Lett., in press. 7 pgs., 2
figs | Applied Mathematics Letters 20 (2007) 382--386 | 10.1016/j.aml.2006.04.022 | null | cs.NE cs.LG | null | A method of {\it topological grammars} is proposed for multidimensional data
approximation. For data with complex topology we define a {\it principal cubic
complex} of low dimension and given complexity that gives the best
approximation for the dataset. This complex is a generalization of linear and
non-linear principal manifolds and includes them as particular cases. The
problem of optimal principal complex construction is transformed into a series
of minimization problems for quadratic functionals. These quadratic functionals
have a physically transparent interpretation in terms of elastic energy. For
the energy computation, the whole complex is represented as a system of nodes
and springs. Topologically, the principal complex is a product of
one-dimensional continuums (represented by graphs), and the grammars describe
how these continuums transform during the process of optimal complex
construction. This factorization of the whole process onto one-dimensional
transformations using minimization of quadratic energy functionals allow us to
construct efficient algorithms.
| [
{
"version": "v1",
"created": "Wed, 22 Mar 2006 22:52:23 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2006 13:41:39 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Gorban",
"A. N.",
""
],
[
"Sumner",
"N. R.",
""
],
[
"Zinovyev",
"A. Y.",
""
]
] | TITLE: Topological Grammars for Data Approximation
ABSTRACT: A method of {\it topological grammars} is proposed for multidimensional data
approximation. For data with complex topology we define a {\it principal cubic
complex} of low dimension and given complexity that gives the best
approximation for the dataset. This complex is a generalization of linear and
non-linear principal manifolds and includes them as particular cases. The
problem of optimal principal complex construction is transformed into a series
of minimization problems for quadratic functionals. These quadratic functionals
have a physically transparent interpretation in terms of elastic energy. For
the energy computation, the whole complex is represented as a system of nodes
and springs. Topologically, the principal complex is a product of
one-dimensional continuums (represented by graphs), and the grammars describe
how these continuums transform during the process of optimal complex
construction. This factorization of the whole process into one-dimensional
transformations using minimization of quadratic energy functionals allows us to
construct efficient algorithms.
| no_new_dataset | 0.950824 |
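The topological-grammars record above describes a quadratic "nodes and springs" elastic energy for principal complexes. The sketch below is my simplified illustration: a single spring coefficient, no bending/rib term, and made-up toy data, so it is a reading of the idea rather than the paper's formulation.

# Illustrative sketch only: a simplified quadratic elastic energy for a graph
# of nodes Y with edges E, fitted to data X assigned to their nearest node.
import numpy as np

def elastic_energy(X, Y, edges, assignment, lambda_edge=0.01):
    approx = np.sum((X - Y[assignment]) ** 2)                    # data-to-node squared distances
    springs = sum(np.sum((Y[i] - Y[j]) ** 2) for i, j in edges)  # squared edge lengths
    return approx + lambda_edge * springs

X = np.random.rand(200, 2)                           # toy data cloud
Y = np.array([[0.2, 0.2], [0.5, 0.5], [0.8, 0.8]])   # nodes of a chain graph
edges = [(0, 1), (1, 2)]
assignment = np.argmin(((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1), axis=1)
print(elastic_energy(X, Y, edges, assignment))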
cs/0604015 | Dmitri Krioukov | Xenofontas Dimitropoulos, Dmitri Krioukov, George Riley, kc claffy | Revealing the Autonomous System Taxonomy: The Machine Learning Approach | null | PAM 2006, best paper award | null | null | cs.NI cs.LG | null | Although the Internet AS-level topology has been extensively studied over the
past few years, little is known about the details of the AS taxonomy. An AS
"node" can represent a wide variety of organizations, e.g., large ISP, or small
private business, university, with vastly different network characteristics,
external connectivity patterns, network growth tendencies, and other properties
that we can hardly neglect while working on veracious Internet representations
in simulation environments. In this paper, we introduce a radically new
approach based on machine learning techniques to map all the ASes in the
Internet into a natural AS taxonomy. We successfully classify 95.3% of ASes
with expected accuracy of 78.1%. We release to the community the AS-level
topology dataset augmented with: 1) the AS taxonomy information and 2) the set
of AS attributes we used to classify ASes. We believe that this dataset will
serve as an invaluable addition to further understanding of the structure and
evolution of the Internet.
| [
{
"version": "v1",
"created": "Thu, 6 Apr 2006 00:08:24 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Dimitropoulos",
"Xenofontas",
""
],
[
"Krioukov",
"Dmitri",
""
],
[
"Riley",
"George",
""
],
[
"claffy",
"kc",
""
]
] | TITLE: Revealing the Autonomous System Taxonomy: The Machine Learning Approach
ABSTRACT: Although the Internet AS-level topology has been extensively studied over the
past few years, little is known about the details of the AS taxonomy. An AS
"node" can represent a wide variety of organizations, e.g., large ISP, or small
private business, university, with vastly different network characteristics,
external connectivity patterns, network growth tendencies, and other properties
that we can hardly neglect while working on veracious Internet representations
in simulation environments. In this paper, we introduce a radically new
approach based on machine learning techniques to map all the ASes in the
Internet into a natural AS taxonomy. We successfully classify 95.3% of ASes
with expected accuracy of 78.1%. We release to the community the AS-level
topology dataset augmented with: 1) the AS taxonomy information and 2) the set
of AS attributes we used to classify ASes. We believe that this dataset will
serve as an invaluable addition to further understanding of the structure and
evolution of the Internet.
| new_dataset | 0.957991 |
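The AS-taxonomy record above classifies ASes from attributes with machine learning. The sketch below is purely illustrative: the feature columns, the four class labels, the random data and the choice of classifier are all my placeholders, not the paper's dataset or method.

# Illustrative sketch only: supervised classification of ASes from numeric
# attributes, evaluated with cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((500, 3))             # e.g. degree, announced prefixes, customer count (toy)
y = rng.integers(0, 4, size=500)     # four hypothetical AS classes

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # accuracy is meaningless on random toy data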
cs/0606024 | Edgar Graaf de | Edgar de Graaf, Jeannette de Graaf, and Walter A. Kosters | Consecutive Support: Better Be Close! | 10 pages | null | null | null | cs.AI cs.DB | null | We propose a new measure of support (the number of occurrences of a
pattern), in which instances are more important if they occur with a certain
frequency and close after each other in the stream of transactions. We will
explain this new consecutive support and discuss how patterns can be found
faster by pruning the search space, for instance using so-called parent support
recalculation. Both consecutiveness and the notion of hypercliques are
incorporated into the Eclat algorithm. Synthetic examples show how interesting
phenomena can now be discovered in the datasets. The new measure can be
applied in many areas, ranging from bio-informatics to trade, supermarkets, and
even law enforcement. E.g., in bio-informatics it is important to find
patterns contained in many individuals, where patterns close together in one
chromosome are more significant.
| [
{
"version": "v1",
"created": "Tue, 6 Jun 2006 14:28:42 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"de Graaf",
"Edgar",
""
],
[
"de Graaf",
"Jeannette",
""
],
[
"Kosters",
"Walter A.",
""
]
] | TITLE: Consecutive Support: Better Be Close!
ABSTRACT: We propose a new measure of support (the number of occurrences of a
pattern), in which instances are more important if they occur with a certain
frequency and close after each other in the stream of transactions. We will
explain this new consecutive support and discuss how patterns can be found
faster by pruning the search space, for instance using so-called parent support
recalculation. Both consecutiveness and the notion of hypercliques are
incorporated into the Eclat algorithm. Synthetic examples show how interesting
phenomena can now be discovered in the datasets. The new measure can be
applied in many areas, ranging from bio-informatics to trade, supermarkets, and
even law enforcement. E.g., in bio-informatics it is important to find
patterns contained in many individuals, where patterns close together in one
chromosome are more significant.
| no_new_dataset | 0.952397 |
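The consecutive-support record above weights pattern occurrences more when they appear close together in the transaction stream. The sketch below is one possible reading of that idea, not the paper's definition: the exponential decay and its parameter are assumptions for illustration.

# Illustrative sketch only: a toy "consecutive" support score in which
# occurrences close together in the transaction stream count more.
import math

def consecutive_support(occurrence_positions, decay=0.5):
    score = 0.0
    for prev, cur in zip(occurrence_positions, occurrence_positions[1:]):
        gap = cur - prev                       # distance between successive hits
        score += math.exp(-decay * (gap - 1))  # adjacent hits (gap of 1) get full weight
    return score

print(consecutive_support([3, 4, 5, 40]))  # the tight run at 3-5 dominates the stray hit at 40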
cs/0701013 | Zengyou He | Zengyou He, Xiaofei Xu, Shengchun Deng | Attribute Value Weighting in K-Modes Clustering | 15 pages | null | null | Tr-06-0615 | cs.AI | null | In this paper, the traditional k-modes clustering algorithm is extended by
weighting attribute value matches in dissimilarity computation. The use of the
attribute value weighting technique makes it possible to generate clusters with
stronger intra-similarities, and therefore achieve better clustering
performance. Experimental results on real-life datasets show that these
value-weighting-based k-modes algorithms are superior to the standard k-modes
algorithm with respect to clustering accuracy.
| [
{
"version": "v1",
"created": "Wed, 3 Jan 2007 09:06:03 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"He",
"Zengyou",
""
],
[
"Xu",
"Xaiofei",
""
],
[
"Deng",
"Shengchun",
""
]
] | TITLE: Attribute Value Weighting in K-Modes Clustering
ABSTRACT: In this paper, the traditional k-modes clustering algorithm is extended by
weighting attribute value matches in dissimilarity computation. The use of the
attribute value weighting technique makes it possible to generate clusters with
stronger intra-similarities, and therefore achieve better clustering
performance. Experimental results on real-life datasets show that these
value-weighting-based k-modes algorithms are superior to the standard k-modes
algorithm with respect to clustering accuracy.
| no_new_dataset | 0.95388 |
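The weighted k-modes record above weights attribute value matches in the dissimilarity computation. The sketch below is my illustration of one such weighting (inverse value frequency); the specific weighting scheme, the helper names and the toy records are assumptions, not the paper's scheme.

# Illustrative sketch only: a k-modes style dissimilarity in which a match on a
# rare attribute value counts as a closer match than one on a very common value.
from collections import Counter

def value_frequencies(records):
    return [Counter(col) for col in zip(*records)]   # one Counter per attribute

def weighted_dissimilarity(x, mode, freqs):
    d = 0.0
    for attr, (xi, mi) in enumerate(zip(x, mode)):
        if xi != mi:
            d += 1.0                                  # mismatch: full dissimilarity
        else:
            d += 1.0 - 1.0 / max(freqs[attr][xi], 1)  # match: rarer value => smaller contribution
    return d

records = [("a", "x"), ("a", "y"), ("b", "y"), ("b", "y")]
freqs = value_frequencies(records)
print(weighted_dissimilarity(("a", "y"), ("a", "y"), freqs))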
cs/0701167 | Jim Gray | Maria A. Nieto-Santisteban, Aniruddha R. Thakar, Alexander S. Szalay,
Jim Gray | Large-Scale Query and XMatch, Entering the Parallel Zone | Astronomical Data Analysis Software and Systems XV in San Lorenzo de
El Escorial, Madrid, Spain, October 2005, to appear in the ASP Conference
Series | null | null | MSR-TR-2005- 169 | cs.DB cs.CE | null | Current and future astronomical surveys are producing catalogs with millions
and billions of objects. On-line access to such big datasets for data mining
and cross-correlation is usually as highly desired as it is unfeasible. Providing
these capabilities is becoming critical for the Virtual Observatory framework.
In this paper we present various performance tests that show how using
Relational Database Management Systems (RDBMS) and a Zoning algorithm to
partition and parallelize the computation, we can facilitate large-scale query
and cross-match.
| [
{
"version": "v1",
"created": "Fri, 26 Jan 2007 00:33:26 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Nieto-Santisteban",
"Maria A.",
""
],
[
"Thakar",
"Aniruddha R.",
""
],
[
"Szalay",
"Alexander S.",
""
],
[
"Gray",
"Jim",
""
]
] | TITLE: Large-Scale Query and XMatch, Entering the Parallel Zone
ABSTRACT: Current and future astronomical surveys are producing catalogs with millions
and billions of objects. On-line access to such big datasets for data mining
and cross-correlation is usually as highly desired as it is unfeasible. Providing
these capabilities is becoming critical for the Virtual Observatory framework.
In this paper we present various performance tests that show how using
Relational Database Management Systems (RDBMS) and a Zoning algorithm to
partition and parallelize the computation, we can facilitate large-scale query
and cross-match.
| no_new_dataset | 0.937096 |
cs/0701171 | Jim Gray | Jim Gray, Maria A. Nieto-Santisteban, Alexander S. Szalay | The Zones Algorithm for Finding Points-Near-a-Point or Cross-Matching
Spatial Datasets | null | null | null | MSR TR 2006 52 | cs.DB cs.DS | null | Zones index an N-dimensional Euclidian or metric space to efficiently support
points-near-a-point queries either within a dataset or between two datasets.
The approach uses relational algebra and the B-Tree mechanism found in almost
all relational database systems. Hence, the Zones Algorithm gives a
portable-relational implementation of points-near-point, spatial cross-match,
and self-match queries. This article corrects some mistakes in an earlier
article we wrote on the Zones Algorithm and describes some algorithmic
improvements. The Appendix includes an implementation of point-near-point,
self-match, and cross-match using the USGS city and stream gauge database.
| [
{
"version": "v1",
"created": "Fri, 26 Jan 2007 05:11:20 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Gray",
"Jim",
""
],
[
"Nieto-Santisteban",
"Maria A.",
""
],
[
"Szalay",
"Alexander S.",
""
]
] | TITLE: The Zones Algorithm for Finding Points-Near-a-Point or Cross-Matching
Spatial Datasets
ABSTRACT: Zones index an N-dimensional Euclidean or metric space to efficiently support
points-near-a-point queries either within a dataset or between two datasets.
The approach uses relational algebra and the B-Tree mechanism found in almost
all relational database systems. Hence, the Zones Algorithm gives a
portable-relational implementation of points-near-point, spatial cross-match,
and self-match queries. This article corrects some mistakes in an earlier
article we wrote on the Zones Algorithm and describes some algorithmic
improvements. The Appendix includes an implementation of point-near-point,
self-match, and cross-match using the USGS city and stream gauge database.
| no_new_dataset | 0.95275 |
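The Zones record above implements points-near-a-point queries with relational algebra and B-Trees. The sketch below restates the zoning idea in plain Python rather than SQL; the function names and parameters are illustrative, planar distance replaces spherical distance, and checking only adjacent zones is valid only when the search radius does not exceed the zone height.

# Illustrative sketch only: bucket points into horizontal zones, then answer a
# neighbourhood query by inspecting the point's own zone and the two adjacent ones.
from collections import defaultdict
import math

def build_zones(points, zone_height):
    zones = defaultdict(list)
    for ra, dec in points:
        zones[math.floor(dec / zone_height)].append((ra, dec))
    return zones

def points_near(p, zones, zone_height, radius):
    zid = math.floor(p[1] / zone_height)
    hits = []
    for z in (zid - 1, zid, zid + 1):                 # only adjacent zones need checking
        for q in zones.get(z, []):
            if q != p and (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2:
                hits.append(q)
    return hits

pts = [(10.00, 5.01), (10.02, 5.02), (200.0, -40.0)]
zones = build_zones(pts, zone_height=0.05)
print(points_near((10.00, 5.01), zones, zone_height=0.05, radius=0.05))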
cs/0701173 | Jim Gray | Vik Singh, Jim Gray, Ani Thakar, Alexander S. Szalay, Jordan Raddick,
Bill Boroski, Svetlana Lebedeva, Brian Yanny | SkyServer Traffic Report - The First Five Years | null | null | null | MSR TR-2006-190 | cs.DB cs.CE | null | The SkyServer is an Internet portal to the Sloan Digital Sky Survey Catalog
Archive Server. From 2001 to 2006, there were a million visitors in 3 million
sessions generating 170 million Web hits, 16 million ad-hoc SQL queries, and 62
million page views. The site currently averages 35 thousand visitors and 400
thousand sessions per month. The Web and SQL logs are public. We analyzed
traffic and sessions by duration, usage pattern, data product, and client type
(mortal or bot) over time. The analysis shows (1) the site's popularity, (2)
the educational website that delivered nearly fifty thousand hours of
interactive instruction, (3) the relative use of interactive, programmatic, and
batch-local access, (4) the success of offering ad-hoc SQL, personal database,
and batch job access to scientists as part of the data publication, (5) the
continuing interest in "old" datasets, (6) the usage of SQL constructs, and (7)
a novel approach of using the corpus of correct SQL queries to suggest similar
but correct statements when a user presents an incorrect SQL statement.
| [
{
"version": "v1",
"created": "Fri, 26 Jan 2007 05:22:15 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Singh",
"Vik",
""
],
[
"Gray",
"Jim",
""
],
[
"Thakar",
"Ani",
""
],
[
"Szalay",
"Alexander S.",
""
],
[
"Raddick",
"Jordan",
""
],
[
"Boroski",
"Bill",
""
],
[
"Lebedeva",
"Svetlana",
""
],
[
"Yanny",
"Brian",
""
]
] | TITLE: SkyServer Traffic Report - The First Five Years
ABSTRACT: The SkyServer is an Internet portal to the Sloan Digital Sky Survey Catalog
Archive Server. From 2001 to 2006, there were a million visitors in 3 million
sessions generating 170 million Web hits, 16 million ad-hoc SQL queries, and 62
million page views. The site currently averages 35 thousand visitors and 400
thousand sessions per month. The Web and SQL logs are public. We analyzed
traffic and sessions by duration, usage pattern, data product, and client type
(mortal or bot) over time. The analysis shows (1) the site's popularity, (2)
the educational website that delivered nearly fifty thousand hours of
interactive instruction, (3) the relative use of interactive, programmatic, and
batch-local access, (4) the success of offering ad-hoc SQL, personal database,
and batch job access to scientists as part of the data publication, (5) the
continuing interest in "old" datasets, (6) the usage of SQL constructs, and (7)
a novel approach of using the corpus of correct SQL queries to suggest similar
but correct statements when a user presents an incorrect SQL statement.
| no_new_dataset | 0.932515 |
physics/0006050 | Sven Bilke | Sven Bilke | Shuffling Yeast Gene Expression Data | 8 pages, 2 figures. Submitted to Proceedings of the National Academy
of Science USA | null | null | LU TP 00-18 | physics.bio-ph physics.data-an physics.med-ph q-bio.QM | null | A new method to sort gene expression patterns into functional groups is
presented. The method is based on a sorting algorithm using a non-local
similarity score, which takes all other patterns in the dataset into account.
The method is therefore very robust with respect to noise. Using the expression
data for yeast, we extract information about functional groups. Without prior
knowledge of parameters the cell cycle regulated genes in yeast can be
identified. Furthermore a second, independent cell clock is identified. The
capability of the algorithm to extract information about signal flow in the
regulatory network underlying the expression patterns is demonstrated.
| [
{
"version": "v1",
"created": "Tue, 20 Jun 2000 09:55:01 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Bilke",
"Sven",
""
]
] | TITLE: Shuffling Yeast Gene Expression Data
ABSTRACT: A new method to sort gene expression patterns into functional groups is
presented. The method is based on a sorting algorithm using a non-local
similarity score, which takes all other patterns in the dataset into account.
The method is therefore very robust with respect to noise. Using the expression
data for yeast, we extract information about functional groups. Without prior
knowledge of parameters the cell cycle regulated genes in yeast can be
identified. Furthermore a second, independent cell clock is identified. The
capability of the algorithm to extract information about signal flow in the
regulatory network underlying the expression patterns is demonstrated.
| no_new_dataset | 0.947235 |
physics/0202012 | Nicola Scafetta | Nicola Scafetta, Tim Imholt, Paolo Grigolini, and Jim Roberts | Temperature reconstruction analysis | 10 pages, 18 figures | null | null | null | physics.ao-ph physics.data-an | null | This paper presents a wavelet multiresolution analysis of a time series
dataset to study the correlation between the real temperature data and three
temperature model reconstructions at different scales. We show that the Mann
et.al. model reconstructs the temperature better at all temporal resolutions.
We show and discuss the wavelet multiresolution analysis of the Mann's
temperature reconstruction for the period from 1400 to 2000 A.D.E.
| [
{
"version": "v1",
"created": "Mon, 4 Feb 2002 23:25:34 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Scafetta",
"Nicola",
""
],
[
"Imholt",
"Tim",
""
],
[
"Grigolini",
"Paolo",
""
],
[
"Roberts",
"Jim",
""
]
] | TITLE: Temperature reconstruction analysis
ABSTRACT: This paper presents a wavelet multiresolution analysis of a time series
dataset to study the correlation between the real temperature data and three
temperature model reconstructions at different scales. We show that the Mann
et al. model reconstructs the temperature better at all temporal resolutions.
We show and discuss the wavelet multiresolution analysis of the Mann's
temperature reconstruction for the period from 1400 to 2000 A.D.E.
| no_new_dataset | 0.946843 |
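The temperature-reconstruction record above compares series scale-by-scale with a wavelet multiresolution analysis. The sketch below is my illustration with PyWavelets: the wavelet ('db4'), the number of levels and the synthetic series are assumptions, and the paper's exact multiresolution procedure may differ.

# Illustrative sketch only: band-by-band comparison of two series via a
# discrete wavelet decomposition.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(512)
real = np.sin(2 * np.pi * t / 64) + 0.3 * rng.standard_normal(t.size)
model = np.sin(2 * np.pi * t / 64) + 0.3 * rng.standard_normal(t.size)

coeffs_real = pywt.wavedec(real, "db4", level=4)
coeffs_model = pywt.wavedec(model, "db4", level=4)
for band, (cr, cm) in enumerate(zip(coeffs_real, coeffs_model)):
    print(f"band {band}: correlation {np.corrcoef(cr, cm)[0, 1]:+.2f}")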
physics/0210082 | Dennis J. Mikkelson | D. J. Mikkelson (1), R. L. Mikkelson (1), T. G. Worlton (2), A.
Chatterjee (2), J. P. Hammonds (2), P. F. Peterson (2), A. J. Schultz (2)
((1) University of Wisconsin-Stout,(2) Argonne National Laboratory) | Coordinated, Interactive Data Visualization for Neutron Scattering Data | Talk at NOBUGS 2002 Conference, NIST, Gaithersburg MD. NOBUGS
abstract identifier NOBUGS/034 | null | null | NOBUGS/034 | physics.data-an physics.ins-det | null | The overall design of the Integrated Spectral Analysis Workbench (ISAW),
being developed at Argonne, provides for an extensible, highly interactive,
collaborating set of viewers for neutron scattering data. Large arbitrary
collections of spectra from multiple detectors can be viewed as an image, a
scrolled list of individual graphs, or using a 3D representation of the
instrument showing the detector positions. Data from an area detector can be
displayed using a contour or intensity map as well as an interactive table.
Selected spectra can be displayed in tables or on a conventional graph. A
unique characteristic of these viewers is their interactivity and coordination.
The position "pointed at" by the user in one viewer is sent to other viewers of
the same DataSet so they can track the position and display relevant
information. Specialized viewers for single crystal neutron diffractometers are
being developed. A "proof-of-concept" viewer that directly displays the 3D
reciprocal lattice from a complete series of runs on a single crystal
diffractometer has been implemented.
| [
{
"version": "v1",
"created": "Sun, 20 Oct 2002 04:20:36 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Mikkelson",
"D. J.",
"",
"University of Wisconsin-Stout"
],
[
"Mikkelson",
"R. L.",
"",
"University of Wisconsin-Stout"
],
[
"Worlton",
"T. G.",
"",
"Argonne National Laboratory"
],
[
"Chatterjee",
"A.",
"",
"Argonne National Laboratory"
],
[
"Hammonds",
"J. P.",
"",
"Argonne National Laboratory"
],
[
"Peterson",
"P. F.",
"",
"Argonne National Laboratory"
],
[
"Schultz",
"A. J.",
"",
"Argonne National Laboratory"
]
] | TITLE: Coordinated, Interactive Data Visualization for Neutron Scattering Data
ABSTRACT: The overall design of the Integrated Spectral Analysis Workbench (ISAW),
being developed at Argonne, provides for an extensible, highly interactive,
collaborating set of viewers for neutron scattering data. Large arbitrary
collections of spectra from multiple detectors can be viewed as an image, a
scrolled list of individual graphs, or using a 3D representation of the
instrument showing the detector positions. Data from an area detector can be
displayed using a contour or intensity map as well as an interactive table.
Selected spectra can be displayed in tables or on a conventional graph. A
unique characteristic of these viewers is their interactivity and coordination.
The position "pointed at" by the user in one viewer is sent to other viewers of
the same DataSet so they can track the position and display relevant
information. Specialized viewers for single crystal neutron diffractometers are
being developed. A "proof-of-concept" viewer that directly displays the 3D
reciprocal lattice from a complete series of runs on a single crystal
diffractometer has been implemented.
| no_new_dataset | 0.934395 |
physics/0306096 | J. R. Bogart | J. Bogart | Calibration Infrastructure for the GLAST LAT | Talk from the 2003 Computing in High Energy and Nuclear Physics
(CHEP03), La Jolla, Ca, USA, March 2003, 5 pages, LaTeX, 2 eps figures. PSN
MOKT001 | null | null | SLAC-PUB-9890 | physics.ins-det | null | The GLAST LAT calibration infrastructure is designed to accommodate a wide
range of time-varying data types, including at a minimum hardware status bits,
conversion constants, and alignment for the GLAST LAT instrument and its
prototypes. The system will support persistent XML and ROOT data to begin with;
other physical formats will be added if necessary. In addition to the "bulk
data", each data set will have associated with it a row in a rdbms table
containing metadata, such as timestamps, data format, pointer to the location
of the bulk data, etc., which will be used to identify and locate the
appropriate data set for a particular application.
As GLAST uses the Gaudi framework for event processing, the Calibration
Infrastructure makes use of several Gaudi elements and concepts, such as
conversion services, converters and data objects and implements the prescribed
Gaudi interfaces (IDetDataSvc, IValidity, ..). This ensures that calibration
data will always be valid and appropriate for the event being processed. The
persistent representation of a calibration dataset as two physical pieces in
different formats complicates the conversion process somewhat: two cooperating
conversion services are involved in the conversion of any single dataset.
| [
{
"version": "v1",
"created": "Thu, 12 Jun 2003 17:02:25 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Bogart",
"J.",
""
]
] | TITLE: Calibration Infrastructure for the GLAST LAT
ABSTRACT: The GLAST LAT calibration infrastructure is designed to accommodate a wide
range of time-varying data types, including at a minimum hardware status bits,
conversion constants, and alignment for the GLAST LAT instrument and its
prototypes. The system will support persistent XML and ROOT data to begin with;
other physical formats will be added if necessary. In addition to the "bulk
data", each data set will have associated with it a row in a rdbms table
containing metadata, such as timestamps, data format, pointer to the location
of the bulk data, etc., which will be used to identify and locate the
appropriate data set for a particular application.
As GLAST uses the Gaudi framework for event processing, the Calibration
Infrastructure makes use of several Gaudi elements and concepts, such as
conversion services, converters and data objects and implements the prescribed
Gaudi interfaces (IDetDataSvc, IValidity, ..). This ensures that calibration
data will always be valid and appropriate for the event being processed. The
persistent representation of a calibration dataset as two physical pieces in
different formats complicates the conversion process somewhat: two cooperating
conversion services are involved in the conversion of any single dataset.
| no_new_dataset | 0.934694 |
physics/0311096 | Leonid Petrov | Leonid Petrov, Jean-Paul Boy | Study of the atmospheric pressure loading signal in VLBI observations | accepted by the Journal of Geophysical Research | null | 10.1029/2003JB002500 | null | physics.geo-ph | null | Redistribution of air masses due to atmospheric circulation causes loading
deformation of the Earth's crust which can be as large as 20 mm for the
vertical component and 3 mm for horizontal components. Rigorous computation of
site displacements caused by pressure loading requires knowledge of the surface
pressure field over the entire Earth surface. A procedure for computing 3-D
displacements of geodetic sites of interest using a 6-hourly pressure field
from the NCEP numerical weather models and the Ponte and Ray [2002] model of
atmospheric tides is presented. We investigated possible error sources and
found that the errors of our pressure loading time series are below the 15%
level. We validated our model by estimating the admittance factors of the
pressure loading time series using a dataset of 3.5 million VLBI observations
from 1980 to 2002. The admittance factors averaged over all sites are 0.95 +/-
0.02 for the vertical displacement and 1.00 +/- 0.07 for the horizontal
displacements. For the first time horizontal displacements caused by
atmospheric pressure loading have been detected. The closeness of these
admittance factors to unity allows us to conclude that on average our model
quantitatively agrees with the observations within the error budget of the
model. At the same time we found that the model is not accurate for several
stations which are near a coast or in mountain regions. We conclude that our
model is suitable for routine data reduction of space geodesy observations.
| [
{
"version": "v1",
"created": "Wed, 19 Nov 2003 20:41:35 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Petrov",
"Leonid",
""
],
[
"Boy",
"Jean-Paul",
""
]
] | TITLE: Study of the atmospheric pressure loading signal in VLBI observations
ABSTRACT: Redistribution of air masses due to atmospheric circulation causes loading
deformation of the Earth's crust which can be as large as 20 mm for the
vertical component and 3 mm for horizontal components. Rigorous computation of
site displacements caused by pressure loading requires knowledge of the surface
pressure field over the entire Earth surface. A procedure for computing 3-D
displacements of geodetic sites of interest using a 6-hourly pressure field
from the NCEP numerical weather models and the Ponte and Ray [2002] model of
atmospheric tides is presented. We investigated possible error sources and
found that the errors of our pressure loading time series are below the 15%
level. We validated our model by estimating the admittance factors of the
pressure loading time series using a dataset of 3.5 million VLBI observations
from 1980 to 2002. The admittance factors averaged over all sites are 0.95 +/-
0.02 for the vertical displacement and 1.00 +/- 0.07 for the horizontal
displacements. For the first time horizontal displacements caused by
atmospheric pressure loading have been detected. The closeness of these
admittance factors to unity allows us to conclude that on average our model
quantitatively agrees with the observations within the error budget of the
model. At the same time we found that the model is not accurate for several
stations which are near a coast or in mountain regions. We conclude that our
model is suitable for routine data reduction of space geodesy observations.
| no_new_dataset | 0.929055 |
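The pressure-loading record above validates the model through admittance factors estimated from VLBI observations. The sketch below shows a generic least-squares admittance, i.e. the scalar that best maps a modelled loading series onto the observed series; the synthetic arrays stand in for real VLBI residuals and loading predictions, and the actual estimation in the paper is more elaborate.

# Illustrative sketch only: least-squares admittance between an observed series
# and a modelled loading series.
import numpy as np

def admittance(observed, modelled):
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    return float(np.dot(observed, modelled) / np.dot(modelled, modelled))

rng = np.random.default_rng(1)
model = 5.0 * rng.standard_normal(1000)            # mm, predicted loading signal
obs = 0.95 * model + rng.standard_normal(1000)     # observations = 0.95 * model + noise
print(admittance(obs, model))                      # recovers a value close to 0.95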
physics/0401117 | Leonid Petrov | L. Petrov, J.-P. Boy | Atmospheric pressure loading for routine data analysis | To be published in the Proceedings of the meeting "The State of GPS
Vertical Positioning Precision: Separation of Earth Processes by Space
Geodesy" held in Luxembourg inApril 2003 | null | null | null | physics.geo-ph | null | We have computed 3-D displacements induced by atmospheric pressure loading
from the 6-hourly surface pressure field from NCEP (National Center for
Environmental Predictions) Reanalysis data for all VLBI (Very Long Baseline
Interferometry) and SLR (Satellite Laser Ranging) stations. We have
quantitatively estimated the error budget of our time series of pressure loading
and found that the errors are below 15%. We validated our loading series by
comparing them with a dataset of 3.5 million VLBI observations for the period
of 1980--2003. We have shown that the amount of power which is present in the
loading time series but is not present in the VLBI data is, on average, only
5%. We have also succeeded, for the first time, in detecting horizontal
displacements caused by atmospheric loading. The correction of atmospheric
loading in VLBI data allows a significant reduction of baseline repeatability,
except for the annual component.
| [
{
"version": "v1",
"created": "Fri, 23 Jan 2004 01:15:01 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Petrov",
"L.",
""
],
[
"Boy",
"J. -P.",
""
]
] | TITLE: Atmospheric pressure loading for routine data analysis
ABSTRACT: We have computed 3-D displacements induced by atmospheric pressure loading
from the 6-hourly surface pressure field from NCEP (National Center for
Environmental Predictions) Reanalysis data for all VLBI (Very Long Baseline
Interferometry) and SLR (Satellite Laser Ranging) stations. We have
quantitatively estimated the error budget of our time series of pressure loading
and found that the errors are below 15%. We validated our loading series by
comparing them with a dataset of 3.5 million VLBI observations for the period
of 1980--2003. We have shown that the amount of power which is present in the
loading time series but is not present in the VLBI data is, on average, only
5%. We have also succeeded, for the first time, in detecting horizontal
displacements caused by atmospheric loading. The correction of atmospheric
loading in VLBI data allows a significant reduction of baseline repeatability,
except for the annual component.
| no_new_dataset | 0.937555 |
physics/0408038 | Valerio Lucarini | Valerio Lucarini | Towards a definition of climate science | 10 pages, 2 figures | published on IJEP Vol. 18, No. 5, 413-422 (2002) | null | null | physics.ao-ph physics.data-an physics.geo-ph physics.soc-ph | null | The intrinsic difficulties in building realistic climate models and in
providing complete, reliable and meaningful observational datasets, and the
conceptual impossibility of testing theories against data imply that the usual
Galilean scientific validation criteria do not apply to climate science. The
different epistemology pertaining to climate science implies that its answers
cannot be singular and deterministic; they must be plural and stated in
probabilistic terms. Therefore, in order to extract meaningful estimates of
future climate change from a model, it is necessary to explore the model's
uncertainties. In terms of societal impacts of scientific knowledge, it is
necessary to accept that any political choice in a matter involving complex
systems is made under unavoidable conditions of uncertainty. Nevertheless,
detailed probabilistic results in science can provide a baseline for a sensible
process of decision making.
| [
{
"version": "v1",
"created": "Sun, 8 Aug 2004 15:18:03 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Lucarini",
"Valerio",
""
]
] | TITLE: Towards a definition of climate science
ABSTRACT: The intrinsic difficulties in building realistic climate models and in
providing complete, reliable and meaningful observational datasets, and the
conceptual impossibility of testing theories against data imply that the usual
Galilean scientific validation criteria do not apply to climate science. The
different epistemology pertaining to climate science implies that its answers
cannot be singular and deterministic; they must be plural and stated in
probabilistic terms. Therefore, in order to extract meaningful estimates of
future climate change from a model, it is necessary to explore the model's
uncertainties. In terms of societal impacts of scientific knowledge, it is
necessary to accept that any political choice in a matter involving complex
systems is made under unavoidable conditions of uncertainty. Nevertheless,
detailed probabilistic results in science can provide a baseline for a sensible
process of decision making.
| no_new_dataset | 0.944536 |
physics/0412150 | Valerio Lucarini | Alessandro Dell'Aquila, Valerio Lucarini, Paolo Ruti, Sandro Calmanti | Hayashi Spectra of the Northern Hemisphere Mid-latitude Atmospheric
Variability in the NCEP and ERA 40 Reanalyses | 30 pages, 6 figures, 2 tables | null | null | null | physics.ao-ph physics.flu-dyn physics.geo-ph | null | We compare 45 years of the reanalyses of NCEP-NCAR and ECMWF in terms of
their representation of the mid-latitude winter atmospheric variability for the
overlapping time frame 1957-2002. We adopt the classical approach of computing
the Hayashi spectra of the 500 hPa geopotential height fields. Discrepancies
are found especially in the first 15 years of the records in the
high-frequency-high wavenumber propagating waves and secondly on low
frequency-low wavenumber standing waves. This implies that in the first period
the two datasets have a different representation of the baroclinic available
energy conversion processes. In the period starting from 1973 a positive impact
of the aircraft data on the Euro-Atlantic synoptic waves has been highlighted.
Since in the first period the assimilated data are scarcer and of lower quality
than later on, they provide a weaker constraint to the model dynamics.
Therefore, the resulting discrepancies in the reanalysis products may be mainly
attributed to differences in the models' behavior.
| [
{
"version": "v1",
"created": "Wed, 22 Dec 2004 21:57:28 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Dell'Aquila",
"Alessandro",
""
],
[
"Lucarini",
"Valerio",
""
],
[
"Ruti",
"Paolo",
""
],
[
"Calmanti",
"Sandro",
""
]
] | TITLE: Hayashi Spectra of the Northern Hemisphere Mid-latitude Atmospheric
Variability in the NCEP and ERA 40 Reanalyses
ABSTRACT: We compare 45 years of the reanalyses of NCEP-NCAR and ECMWF in terms of
their representation of the mid-latitude winter atmospheric variability for the
overlapping time frame 1957-2002. We adopt the classical approach of computing
the Hayashi spectra of the 500 hPa geopotential height fields. Discrepancies
are found especially in the first 15 years of the records in the
high-frequency-high wavenumber propagating waves and secondly on low
frequency-low wavenumber standing waves. This implies that in the first period
the two datasets have a different representation of the baroclinic available
energy conversion processes. In the period starting from 1973 a positive impact
of the aircraft data on the Euro-Atlantic synoptic waves has been highlighted.
Since in the first period the assimilated data are scarcer and of lower quality
than later on, they provide a weaker constraint to the model dynamics.
Therefore, the resulting discrepancies in the reanalysis products may be mainly
attributed to differences in the models' behavior.
| no_new_dataset | 0.944331 |
physics/0504167 | Petter Holme | Gourab Ghoshal, Petter Holme | Attractiveness and activity in Internet communities | null | Physica A 364, 603-609 (2006) | 10.1016/j.physa.2005.04.047 | null | physics.soc-ph | null | Datasets of online communication often take the form of contact sequences --
ordered lists of contacts (where a contact is defined as a triple of a sender, a
recipient and a time). We propose measures of attractiveness and activity for
such data sets and analyze these quantities for anonymized contact sequences
from an Internet dating community. For this data set the attractiveness and
activity measures show broad power-law like distributions. Our attractiveness
and activity measures are more strongly correlated in the real-world data than
in our reference model. Effects that indirectly can make active users more
attractive are discussed.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2005 18:13:53 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Ghoshal",
"Gourab",
""
],
[
"Holme",
"Petter",
""
]
] | TITLE: Attractiveness and activity in Internet communities
ABSTRACT: Datasets of online communication often take the form of contact sequences --
ordered lists of contacts (where a contact is defined as a triple of a sender, a
recipient and a time). We propose measures of attractiveness and activity for
such data sets and analyze these quantities for anonymized contact sequences
from an Internet dating community. For this data set the attractiveness and
activity measures show broad power-law like distributions. Our attractiveness
and activity measures are more strongly correlated in the real-world data than
in our reference model. Effects that indirectly can make active users more
attractive are discussed.
| no_new_dataset | 0.935993 |
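The attractiveness/activity record above works on contact sequences of (sender, recipient, time) triples. The sketch below is a loose illustration: it reads activity as messages sent and attractiveness as messages received and correlates the two; the toy contacts and the use of Spearman rank correlation are assumptions, not the paper's exact measures.

# Illustrative sketch only: per-user sent/received counts from a contact
# sequence, plus their rank correlation.
from collections import Counter
from scipy.stats import spearmanr

contacts = [("a", "b", 1), ("a", "c", 2), ("b", "c", 3), ("d", "c", 4), ("c", "a", 5)]

activity = Counter(s for s, _, _ in contacts)         # messages sent (out-degree with multiplicity)
attractiveness = Counter(r for _, r, _ in contacts)   # messages received (in-degree with multiplicity)

users = sorted(set(activity) | set(attractiveness))
act = [activity[u] for u in users]
att = [attractiveness[u] for u in users]
print(list(zip(users, act, att)))
print(spearmanr(act, att))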
physics/0506213 | Neil F. Johnson | N. Johnson, M. Spagat, J. Restrepo, J. Bohorquez, N. Suarez, E.
Restrepo, and R. Zarama | From old wars to new wars and global terrorism | For more information, please contact [email protected] or
[email protected] | null | null | null | physics.soc-ph physics.data-an | null | Even before 9/11 there were claims that the nature of war had changed
fundamentally. The 9/11 attacks created an urgent need to understand
contemporary wars and their relationship to older conventional and terrorist
wars, both of which exhibit remarkable regularities. The frequency-intensity
distribution of fatalities in "old wars", 1816-1980, is a power-law with
exponent 1.80. Global terrorist attacks, 1968-present, also follow a power-law
with exponent 1.71 for G7 countries and 2.5 for non-G7 countries. Here we
analyze two ongoing, high-profile wars on opposite sides of the globe -
Colombia and Iraq. Our analysis uses our own unique dataset for killings and
injuries in Colombia, plus publicly available data for civilians killed in
Iraq. We show strong evidence for power-law behavior within each war. Despite
substantial differences in contexts and data coverage, the power-law
coefficients for both wars are tending toward 2.5, which is a value
characteristic of non-G7 terrorism as opposed to old wars. We propose a
plausible yet analytically-solvable model of modern insurgent warfare, which
can explain these observations.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2005 09:33:52 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Johnson",
"N.",
""
],
[
"Spagat",
"M.",
""
],
[
"Restrepo",
"J.",
""
],
[
"Bohorquez",
"J.",
""
],
[
"Suarez",
"N.",
""
],
[
"Restrepo",
"E.",
""
],
[
"Zarama",
"R.",
""
]
] | TITLE: From old wars to new wars and global terrorism
ABSTRACT: Even before 9/11 there were claims that the nature of war had changed
fundamentally. The 9/11 attacks created an urgent need to understand
contemporary wars and their relationship to older conventional and terrorist
wars, both of which exhibit remarkable regularities. The frequency-intensity
distribution of fatalities in "old wars", 1816-1980, is a power-law with
exponent 1.80. Global terrorist attacks, 1968-present, also follow a power-law
with exponent 1.71 for G7 countries and 2.5 for non-G7 countries. Here we
analyze two ongoing, high-profile wars on opposite sides of the globe -
Colombia and Iraq. Our analysis uses our own unique dataset for killings and
injuries in Colombia, plus publicly available data for civilians killed in
Iraq. We show strong evidence for power-law behavior within each war. Despite
substantial differences in contexts and data coverage, the power-law
coefficients for both wars are tending toward 2.5, which is a value
characteristic of non-G7 terrorism as opposed to old wars. We propose a
plausible yet analytically-solvable model of modern insurgent warfare, which
can explain these observations.
| new_dataset | 0.968171 |
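The old-wars record above fits power-law exponents to fatality distributions. The sketch below shows the standard continuous maximum-likelihood estimator, alpha = 1 + n / sum(ln(x_i / x_min)), checked on synthetic Pareto data; real analyses also need a principled choice of x_min and goodness-of-fit tests, which are omitted here, and this is not the authors' code.

# Illustrative sketch only: MLE power-law exponent on toy event-size data.
import numpy as np

def powerlaw_alpha(sizes, x_min):
    x = np.asarray([s for s in sizes if s >= x_min], dtype=float)
    return 1.0 + x.size / np.sum(np.log(x / x_min))

rng = np.random.default_rng(0)
samples = rng.pareto(1.5, size=5000) + 1.0   # classical Pareto: density exponent 1 + 1.5 = 2.5
print(powerlaw_alpha(samples, x_min=1.0))    # should come out near 2.5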
physics/0509022 | Paolo Gasperini | Paolo Gasperini and Barbara Lolli | Correlation between the parameters of the rate equation for simple
aftershock sequences: implications for the forecasting of rates and
probabilities | 47 pages, 10 figures, 8 tables, 1 appendix with 3 tables | null | null | null | physics.geo-ph | null | We analyzed the correlations among the parameters of the Reasenberg and Jones
(1989) formula describing the aftershock rate after a mainshock as a function
of time and magnitude, on the basis of parameter estimates made in previous
works for New Zealand, Italy and California. For all three datasets we found
that the magnitude-independent productivity a is significantly correlated with
the b-value of the Gutenberg-Richter law and, in some cases, with parameters p
and c of the modified Omori's law. We argued that the correlation between a and
b can be ascribed to an inappropriate definition of the coefficient of
mainshock magnitude as the correlation becomes insignificant if the latter is
assumed to be $\alpha\simeq$ 2/3b rather than b. This interpretation agrees
well with the results of direct estimates of a we made, by an epidemic-type
model (ETAS), from the data of some large Italian sequences. We also verified
that assuming $\alpha$ to be about 2/3 of the average b value estimated from
Italian sequences that occurred in the time interval 1981-1996 improves the
ability to predict the behavior of the most recent sequences (from 1997 to
2003). Our results
indicate a partial inadequacy of the original Reasenberg and Jones (1989)
formulation when used to forecast the productivity of future sequences. In
particular, the aftershock rates and probabilities tend to be overestimated for
stronger mainshocks and conversely underestimated for weaker ones.
| [
{
"version": "v1",
"created": "Fri, 2 Sep 2005 14:55:54 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Gasperini",
"Paolo",
""
],
[
"Lolli",
"Barbara",
""
]
] | TITLE: Correlation between the parameters of the rate equation for simple
aftershock sequences: implications for the forecasting of rates and
probabilities
ABSTRACT: We analyzed the correlations among the parameters of the Reasenberg and Jones
(1989) formula describing the aftershock rate after a mainshock as a function
of time and magnitude, on the basis of parameter estimates made in previous
works for New Zealand, Italy and California. For all three datasets we found
that the magnitude-independent productivity a is significantly correlated with
the b-value of the Gutenberg-Richter law and, in some cases, with parameters p
and c of the modified Omori's law. We argued that the correlation between a and
b can be ascribed to an inappropriate definition of the coefficient of
mainshock magnitude as the correlation becomes insignificant if the latter is
assumed to be $\alpha\simeq$ 2/3b rather than b. This interpretation agrees well
with the results of direct a estimates we made, by an epidemic-type
model (ETAS), from the data of some large Italian sequences. We also verified
that assuming $\alpha$ to be about 2/3 of the average b value estimated from Italian
sequences that occurred in the time interval 1981-1996 improves the ability to
predict the behavior of the most recent sequences (from 1997 to 2003). Our results
indicate a partial inadequacy of the original Reasenberg and Jones (1989)
formulation when used to forecast the productivity of future sequences. In
particular, the aftershock rates and probabilities tend to be overestimated for
stronger mainshocks and conversely underestimated for weaker ones.
| no_new_dataset | 0.944842 |
physics/0509132 | M\'ario Lino da Silva | M. Lino da Silva | Guidelines for the Calculation of Bound Molecular Spectra | 17 pages, 7 figures | null | null | null | physics.optics | null | Line-by-line calculations are becoming the standard procedure for carrying
spectral simulations. However, it is important to ensure the accuracy of such
spectral simulations through the choice of adapted models for the simulation of
key parameters such as line position, intensity, and shape. Moreover, it is
necessary to rely on accurate spectral data to guarantee the accuracy of the
simulated spectra. A discussion on the most accurate models available for such
calculations is presented for diatomic and linear polyatomic discrete
radiation, and possible reductions on the number of calculated lines are
discussed in order to reduce memory and computational overheads. Examples of
different approaches for the simulation of experimentally determined
low-pressure molecular spectra are presented. The accuracy of different
simulation approaches is discussed and it is verified that a careful choice of
applied computational models and spectroscopic datasets yields precise
approximations of the measured spectra.
| [
{
"version": "v1",
"created": "Thu, 15 Sep 2005 11:41:54 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"da Silva",
"M. Lino",
""
]
] | TITLE: Guidelines for the Calculation of Bound Molecular Spectra
ABSTRACT: Line-by-line calculations are becoming the standard procedure for carrying
spectral simulations. However, it is important to ensure the accuracy of such
spectral simulations through the choice of adapted models for the simulation of
key parameters such as line position, intensity, and shape. Moreover, it is
necessary to rely on accurate spectral data to guarantee the accuracy of the
simulated spectra. A discussion on the most accurate models available for such
calculations is presented for diatomic and linear polyatomic discrete
radiation, and possible reductions on the number of calculated lines are
discussed in order to reduce memory and computational overheads. Examples of
different approaches for the simulation of experimentally determined
low-pressure molecular spectra are presented. The accuracy of different
simulation approaches is discussed and it is verified that a careful choice of
applied computational models and spectroscopic datasets yields precise
approximations of the measured spectra.
| no_new_dataset | 0.951323 |
physics/0511186 | Alexei Vazquez | A.-L. Barabasi, K.-I. Goh, and A. Vazquez | Reply to Comment on "The origin of bursts and heavy tails in human
dynamics" | Reply to physics/0510216 | null | null | null | physics.data-an physics.soc-ph | null | Understanding human dynamics is of major scientific and practical importance
and can be increasingly addressed in a quantitative fashion thanks to
electronic records capturing various human activity patterns. The authors of
Ref. [1] revisit the datasets studied in Ref. [2], making four technical
observations. Some of the observations of Ref. [1] are based on the authors'
unfamiliarity with the details of the data collection process and have little
relevance to the findings of Ref. [2], and others are resolved in quantitative
fashion by other authors [3].
| [
{
"version": "v1",
"created": "Tue, 22 Nov 2005 00:09:07 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Barabasi",
"A. -L.",
""
],
[
"Goh",
"K. -I.",
""
],
[
"Vazquez",
"A.",
""
]
] | TITLE: Reply to Comment on "The origin of bursts and heavy tails in human
dynamics"
ABSTRACT: Understanding human dynamics is of major scientific and practical importance
and can be increasingly addressed in a quantitative fashion thanks to
electronic records capturing various human activity patterns. The authors of
Ref. [1] revisit the datasets studied in Ref. [2], making four technical
observations. Some of the observations of Ref. [1] are based on the authors'
unfamiliarity with the details of the data collection process and have little
relevance to the findings of Ref. [2], and others are resolved in quantitative
fashion by other authors [3].
| no_new_dataset | 0.9462 |
physics/0611073 | Jan Bergman | Jan E.S. Bergman and Tobia D. Carozzi | Systematic Characterization of Low Frequency Electric and Magnetic Field
Data Applicable to Solar Orbiter | null | null | null | null | physics.space-ph | null | We present a systematic and physically motivated characterization of
incoherent or coherent electric and magnetic fields, as measured for instance
by the low frequency receiver on-board the Solar Orbiter spacecraft. The
characterization utilizes the 36 auto/cross correlations of the 3+3 complex
Cartesian components of the electric and magnetic fields; hence, they are
second order in the field strengths and so have physical dimension energy
density. Although such 6x6 correlation matrices have been successfully employed
on previous space missions, they are not physical quantities; because they are
not manifestly space-time tensors. In this paper we propose a systematic
representation of the 36 degrees-of-freedom of partially coherent
electromagnetic fields as a set of manifestly covariant space-time tensors,
which we call the Canonical Electromagnetic Observables (CEO). As an example,
we apply this formalism to analyze real data from a chorus emission in the
mid-latitude magnetosphere, as registered by the STAFF-SA instrument on board
the Cluster-II spacecraft. We find that the CEO analysis increases the amount
of information that can be extracted from the STAFF-SA dataset; for instance,
the reactive energy flux density, which is one of the CEO parameters,
identifies the source region of electromagnetic emissions more directly than
the active energy (Poynting) flux density alone.
| [
{
"version": "v1",
"created": "Tue, 7 Nov 2006 20:04:01 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Bergman",
"Jan E. S.",
""
],
[
"Carozzi",
"Tobia D.",
""
]
] | TITLE: Systematic Characterization of Low Frequency Electric and Magnetic Field
Data Applicable to Solar Orbiter
ABSTRACT: We present a systematic and physically motivated characterization of
incoherent or coherent electric and magnetic fields, as measured for instance
by the low frequency receiver on-board the Solar Orbiter spacecraft. The
characterization utilizes the 36 auto/cross correlations of the 3+3 complex
Cartesian components of the electric and magnetic fields; hence, they are
second order in the field strengths and so have the physical dimension of energy
density. Although such 6x6 correlation matrices have been successfully employed
on previous space missions, they are not physical quantities, because they are
not manifestly space-time tensors. In this paper we propose a systematic
representation of the 36 degrees-of-freedom of partially coherent
electromagnetic fields as a set of manifestly covariant space-time tensors,
which we call the Canonical Electromagnetic Observables (CEO). As an example,
we apply this formalism to analyze real data from a chorus emission in the
mid-latitude magnetosphere, as registered by the STAFF-SA instrument on board
the Cluster-II spacecraft. We find that the CEO analysis increases the amount
of information that can be extracted from the STAFF-SA dataset; for instance,
the reactive energy flux density, which is one of the CEO parameters,
identifies the source region of electromagnetic emissions more directly than
the active energy (Poynting) flux density alone.
| no_new_dataset | 0.947962 |
physics/0701046 | Alessandra Retico | A. Retico, P. Delogu, M.E. Fantacci, A. Preite Martinez, A. Stefanini,
A. Tata | A scalable Computer-Aided Detection system for microcalcification
cluster identification in a pan-European distributed database of mammograms | 6 pages, 5 figures; Proceedings of the ITBS 2005, 3rd International
Conference on Imaging Technologies in Biomedical Sciences, 25-28 September
2005, Milos Island, Greece | Nuclear Instruments and Methods in Physics Research A 569 (2006)
601-605 | 10.1016/j.nima.2006.08.094 | null | physics.med-ph | null | A computer-aided detection (CADe) system for microcalcification cluster
identification in mammograms has been developed in the framework of the
EU-funded MammoGrid project. The CADe software is mainly based on wavelet
transforms and artificial neural networks. It is able to identify
microcalcifications in different kinds of mammograms (i.e. acquired with
different machines and settings, digitized with different pitch and bit depth
or direct digital ones). The CADe can be remotely run from GRID-connected
acquisition and annotation stations, supporting clinicians from geographically
distant locations in the interpretation of mammographic data. We report the
FROC analyses of the CADe system performance on three different datasets of
mammograms, i.e. images of the INFN-funded CALMA database collected in the
Italian National screening program, the MIAS database and the so-far collected
MammoGrid images. The sensitivity values of 88% at a rate of 2.15 false
positive findings per image (FP/im), 88% with 2.18 FP/im and 87% with 5.7 FP/im
have been obtained on the CALMA, MIAS and MammoGrid databases, respectively.
| [
{
"version": "v1",
"created": "Thu, 4 Jan 2007 14:38:01 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Retico",
"A.",
""
],
[
"Delogu",
"P.",
""
],
[
"Fantacci",
"M. E.",
""
],
[
"Martinez",
"A. Preite",
""
],
[
"Stefanini",
"A.",
""
],
[
"Tata",
"A.",
""
]
] | TITLE: A scalable Computer-Aided Detection system for microcalcification
cluster identification in a pan-European distributed database of mammograms
ABSTRACT: A computer-aided detection (CADe) system for microcalcification cluster
identification in mammograms has been developed in the framework of the
EU-funded MammoGrid project. The CADe software is mainly based on wavelet
transforms and artificial neural networks. It is able to identify
microcalcifications in different kinds of mammograms (i.e. acquired with
different machines and settings, digitized with different pitch and bit depth
or direct digital ones). The CADe can be remotely run from GRID-connected
acquisition and annotation stations, supporting clinicians from geographically
distant locations in the interpretation of mammographic data. We report the
FROC analyses of the CADe system performance on three different datasets of
mammograms, i.e. images of the INFN-funded CALMA database collected in the
Italian National screening program, the MIAS database and the so-far collected
MammoGrid images. The sensitivity values of 88% at a rate of 2.15 false
positive findings per image (FP/im), 88% with 2.18 FP/im and 87% with 5.7 FP/im
have been obtained on the CALMA, MIAS and MammoGrid databases, respectively.
| no_new_dataset | 0.946794 |
physics/0701053 | Alessandra Retico | A. Retico, P. Delogu, M.E. Fantacci, P. Kasae | An Automatic System to Discriminate Malignant from Benign Massive
Lesions on Mammograms | 6 pages, 3 figures; Proceedings of the ITBS 2005, 3rd International
Conference on Imaging Technologies in Biomedical Sciences, 25-28 September
2005, Milos Island, Greece | Nuclear Instruments and Methods in Physics Research A 569 (2006)
596-600 | 10.1016/j.nima.2006.08.093 | null | physics.med-ph | null | Mammography is widely recognized as the most reliable technique for early
detection of breast cancers. Automated or semi-automated computerized
classification schemes can be very useful in assisting radiologists with a
second opinion about the visual diagnosis of breast lesions, thus leading to a
reduction in the number of unnecessary biopsies. We present a computer-aided
diagnosis (CADi) system for the characterization of massive lesions in
mammograms, whose aim is to distinguish malignant from benign masses. The CADi
system we realized is based on a three-stage algorithm: a) a segmentation
technique extracts the contours of the massive lesion from the image; b)
sixteen features based on size and shape of the lesion are computed; c) a
neural classifier merges the features into an estimated likelihood of
malignancy. A dataset of 226 massive lesions (109 malignant and 117 benign) has
been used in this study. The system performance has been evaluated in terms of
the receiver-operating characteristic (ROC) analysis, obtaining A_z =
0.80+-0.04 as the estimated area under the ROC curve.
| [
{
"version": "v1",
"created": "Thu, 4 Jan 2007 14:59:11 GMT"
}
] | 2007-05-23T00:00:00 | [
[
"Retico",
"A.",
""
],
[
"Delogu",
"P.",
""
],
[
"Fantacci",
"M. E.",
""
],
[
"Kasae",
"P.",
""
]
] | TITLE: An Automatic System to Discriminate Malignant from Benign Massive
Lesions on Mammograms
ABSTRACT: Mammography is widely recognized as the most reliable technique for early
detection of breast cancers. Automated or semi-automated computerized
classification schemes can be very useful in assisting radiologists with a
second opinion about the visual diagnosis of breast lesions, thus leading to a
reduction in the number of unnecessary biopsies. We present a computer-aided
diagnosis (CADi) system for the characterization of massive lesions in
mammograms, whose aim is to distinguish malignant from benign masses. The CADi
system we realized is based on a three-stage algorithm: a) a segmentation
technique extracts the contours of the massive lesion from the image; b)
sixteen features based on size and shape of the lesion are computed; c) a
neural classifier merges the features into an estimated likelihood of
malignancy. A dataset of 226 massive lesions (109 malignant and 117 benign) has
been used in this study. The system performance has been evaluated in terms of
the receiver-operating characteristic (ROC) analysis, obtaining A_z =
0.80+-0.04 as the estimated area under the ROC curve.
| new_dataset | 0.964187 |