

Varatharajah Yogatheesan, Berry Brent, Cimbalnik Jan, Kremen Vaclav, Van Gompel Jamie, Stead Matt, Brinkmann Benjamin, Iyer Ravishankar, Worrell Gregory

15 Dec 2018
q-bio.NC cs.AI q-bio.QM
arxiv.org/abs/1812.06234

An ability to map seizure-generating brain tissue, i.e., the seizure onset zone (SOZ), without recording actual seizures could reduce the duration of invasive EEG monitoring for patients with drug-resistant epilepsy. A widely-adopted practice in the literature is to compare the incidence (events/time) of putative pathological electrophysiological biomarkers associated with epileptic brain tissue with the SOZ determined from spontaneous seizures recorded with intracranial EEG, primarily using a single biomarker. Clinical translation of the previous efforts suffers from their inability to generalize across multiple patients because of (a) the inter-patient variability and (b) the temporal variability in the epileptogenic activity. Here, we report an artificial intelligence-based approach for combining multiple interictal electrophysiological biomarkers and their temporal characteristics as a way of accounting for the above barriers and show that it can reliably identify seizure onset zones in a study cohort of 82 patients who underwent evaluation for drug-resistant epilepsy. Our investigation provides evidence that utilizing the complementary information provided by multiple electrophysiological biomarkers and their temporal characteristics can significantly improve the localization potential compared to previously published single-biomarker incidence-based approaches, resulting in an average area under ROC curve (AUC) value of 0.73 in a cohort of 82 patients. Our results also suggest that recording durations between ninety minutes and two hours are sufficient to localize SOZs with accuracies that may prove clinically relevant. The successful validation of our approach on a large cohort of 82 patients warrants future investigation on the feasibility of utilizing intra-operative EEG monitoring and artificial intelligence to localize epileptogenic brain tissue.
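The biomarker-combination idea above can be illustrated with a minimal sketch: score each EEG channel by a weighted combination of several interictal biomarker incidence features, then evaluate SOZ localization with ROC AUC computed via the rank-based (Mann-Whitney) formulation. This is not the paper's actual pipeline; the feature names, weights, and toy data are invented for illustration.

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic (ties get half credit)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def combined_score(features, weights):
    """Weighted sum of per-biomarker incidence features for one channel."""
    return sum(weights[name] * features[name] for name in weights)

# Hypothetical per-channel incidence features (events/min): HFOs, spikes,
# and a temporal-variability summary. Label 1 = channel inside the SOZ.
channels = [
    ({"hfo": 4.0, "spike": 9.0, "var": 0.8}, 1),
    ({"hfo": 3.5, "spike": 7.0, "var": 0.7}, 1),
    ({"hfo": 0.5, "spike": 2.0, "var": 0.2}, 0),
    ({"hfo": 1.0, "spike": 3.0, "var": 0.3}, 0),
]
weights = {"hfo": 0.5, "spike": 0.3, "var": 0.2}  # illustrative, not learned

scores = [combined_score(f, weights) for f, _ in channels]
labels = [y for _, y in channels]
print(round(roc_auc(scores, labels), 2))  # toy data is perfectly separated -> 1.0
```

The point of the sketch is only the evaluation structure: any classifier that ranks channels can be scored the same way, which is how a single-number AUC such as the paper's 0.73 arises.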

01 Sep 2018
cs.AI stat.AP
arxiv.org/abs/1809.00258

Clinical trials involving multiple treatments utilize randomization of the treatment assignments to enable the evaluation of treatment efficacies in an unbiased manner. Such evaluation is performed in post hoc studies that usually use supervised-learning methods that rely on large amounts of data collected in a randomized fashion. That approach often proves to be suboptimal in that some participants may suffer and even die as a result of having not received the most appropriate treatments during the trial. Reinforcement-learning methods improve the situation by making it possible to learn the treatment efficacies dynamically during the course of the trial, and to adapt treatment assignments accordingly. Recent efforts using multi-arm bandits, a type of reinforcement-learning method, have focused on maximizing clinical outcomes for a population that was assumed to be homogeneous. However, those approaches have failed to account for the variability among participants that is becoming increasingly evident as a result of recent clinical-trial-based studies. We present a contextual-bandit-based online treatment optimization algorithm that, in choosing treatments for new participants in the study, takes into account not only the maximization of the clinical outcomes but also the patient characteristics. We evaluated our algorithm using a real clinical trial dataset from the International Stroke Trial. The results of our retrospective analysis indicate that the proposed approach performs significantly better than either a random assignment of treatments (the current gold standard) or a multi-arm-bandit-based approach, providing substantial gains in the percentage of participants who are assigned the most suitable treatments. The contextual-bandit and multi-arm-bandit approaches provide 72.63% and 64.34% gains, respectively, compared to a random assignment.
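To make the contextual-bandit idea concrete, here is a hedged sketch of an epsilon-greedy assigner with one linear reward model per treatment arm; the paper's exact algorithm and the International Stroke Trial features are not reproduced, and the patient-response model below is invented.

```python
import random

class ContextualBandit:
    """Epsilon-greedy contextual bandit with a linear reward model per arm."""

    def __init__(self, n_arms, n_features, epsilon=0.1, lr=0.05, seed=0):
        self.eps = epsilon
        self.lr = lr
        self.rng = random.Random(seed)
        self.w = [[0.0] * n_features for _ in range(n_arms)]  # one model per arm

    def predict(self, arm, x):
        return sum(wi * xi for wi, xi in zip(self.w[arm], x))

    def choose(self, x):
        """Explore with probability epsilon, else pick the best predicted arm."""
        if self.rng.random() < self.eps:
            return self.rng.randrange(len(self.w))
        return max(range(len(self.w)), key=lambda a: self.predict(a, x))

    def update(self, arm, x, reward):
        """One SGD step on squared error for the chosen arm only."""
        err = reward - self.predict(arm, x)
        self.w[arm] = [wi + self.lr * err * xi for wi, xi in zip(self.w[arm], x)]

# Toy simulation: treatment 0 suits patients with feature +1, treatment 1
# suits feature -1 (a hypothetical response model, not trial data).
bandit = ContextualBandit(n_arms=2, n_features=2)

def true_reward(arm, x):
    return 1.0 if (arm == 0) == (x[1] > 0) else 0.0

correct = 0
for t in range(2000):
    x = [1.0, bandit.rng.choice([-1.0, 1.0])]  # bias term + patient feature
    arm = bandit.choose(x)
    bandit.update(arm, x, true_reward(arm, x))
    if t >= 1000:
        correct += true_reward(arm, x)
print(correct / 1000 > 0.8)  # learns to match treatment to context
```

A plain multi-arm bandit would drop the context `x` and converge to one treatment for everyone, which is exactly the limitation the abstract describes.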

06 Apr 2007
cs.PF
arxiv.org/abs/0704.0879

We present a hierarchical simulation approach for the dependability analysis and evaluation of a highly available commercial cache-based RAID storage system. The architecture is complex and includes several layers of overlapping error detection and recovery mechanisms. Three abstraction levels have been developed to model the cache architecture, cache operations, and error detection and recovery mechanisms. The impact of faults and errors occurring in the cache and in the disks is analyzed at each level of the hierarchy. A simulation submodel is associated with each abstraction level. The models have been developed using DEPEND, a simulation-based environment for system-level dependability analysis, which provides facilities to inject faults into a functional behavior model, to simulate error detection and recovery mechanisms, and to evaluate quantitative measures. Several fault models are defined for each submodel to simulate cache component failures, disk failures, transmission errors, and data errors in the cache memory and in the disks. Some of the parameters characterizing fault injection in a given submodel correspond to probabilities evaluated from the simulation of the lower-level submodel. Based on the proposed methodology, we evaluate and analyze 1) the system behavior under a real workload and high error rate (focusing on error bursts), 2) the coverage of the error detection mechanisms implemented in the system and the error latency distributions, and 3) the accumulation of errors in the cache and in the disks.
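The key structural idea, a lower-level submodel's simulation output becoming a probability parameter of the next level up, can be shown with a toy two-level Monte Carlo. DEPEND itself is a full simulation environment and is not reproduced here; all rates and mechanism names below are invented.

```python
import random

rng = random.Random(42)

def low_level_coverage(trials=10_000, p_parity_catch=0.95, p_scrub_catch=0.6):
    """Low-level submodel: inject single-bit cache errors and report the
    fraction caught by either overlapping detection mechanism (parity
    checking, then background scrubbing). Rates are illustrative."""
    caught = sum(
        rng.random() < p_parity_catch or rng.random() < p_scrub_catch
        for _ in range(trials)
    )
    return caught / trials

def high_level_data_loss(coverage, error_rate=1e-3, hours=10_000):
    """Higher-level submodel: expected number of undetected errors over the
    mission time, parameterized by the coverage estimated one level down."""
    return error_rate * hours * (1 - coverage)

cov = low_level_coverage()        # estimated detection coverage, ~0.98 here
loss = high_level_data_loss(cov)  # expected undetected errors at system level
print(cov > 0.95, loss < 0.5)
```

The hierarchy keeps each submodel small: the high-level model never simulates individual bit flips, it only consumes the coverage probability the lower level produced.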

07 Oct 1994
cond-mat
arxiv.org/abs/cond-mat/9410018

We study Chern-Simons (CS) superconductivity in the presence of a uniform external magnetic field of {\it arbitrary strength} for a system of fermions in two spatial dimensions, which are minimally coupled both to the CS and Maxwell gauge fields. We have carried out the computation within the mean field ansatz. Analysing only the mean field (i.e., ignoring the fluctuations of the gauge fields), we find that the chemical potential, susceptibility, and magnetization show discontinuities at integer numbers of filled Landau levels. Taking into account the fluctuations of the gauge fields, we find that the masses of the excitations increase with the magnetic field, and that the nonlinear magnetic susceptibilities show the absence of any critical or pseudo-critical magnetic field. Finally, an interesting result is that, unlike ordinary superconductors, the system is magnetically asymmetric.

12 Oct 1994
cond-mat
arxiv.org/abs/cond-mat/9410038

We study the finite temperature (FT) effects on integer quantum Hall effect (IQHE) and fractional quantum Hall effect (FQHE) as predicted by the composite fermion model. We find that at $T\neq 0$, universality is lost, as is quantization because of a new scale $T_0=\pi\rho /m^\ast p$. We find that this loss is not inconsistent with the experimentally observed accuracies. While the model seems to work very well for IQHE, it agrees with the bulk results of FQHE but is shown to require refinement in its account of microscopic properties such as the effective mass. Our analysis also gives a qualitative account of the threshold temperatures at which the FQHE states are seen experimentally. Finally, we extract model independent features of quantum Hall effect at FT, common to all Chern-Simons theories that employ mean field ansatz.

23 Jan 1995
gr-qc
arxiv.org/abs/gr-qc/9501027

The rate of gravitational-wave energy loss from inspiralling binary systems of compact objects of arbitrary mass is derived through second post-Newtonian (2PN) order $O[(Gm/rc^2)^2]$ beyond the quadrupole approximation. The result has been derived by two independent calculations of the (source) multipole moments. The 2PN terms, and in particular the finite mass contribution therein (which cannot be obtained in perturbation calculations of black hole spacetimes), are shown to make a significant contribution to the accumulated phase of theoretical templates to be used in matched filtering of the data from future gravitational-wave detectors.

24 Jan 1995
gr-qc
arxiv.org/abs/gr-qc/9501029

Gravitational waves generated by inspiralling compact binaries are investigated to the second post-Newtonian (2PN) approximation of general relativity. Using a recently developed 2PN-accurate wave generation formalism, we compute the gravitational waveform and associated energy loss rate from a binary system of point-masses moving on a quasi-circular orbit. The crucial new input is our computation of the 2PN-accurate "source" quadrupole moment of the binary. Tails in both the waveform and energy loss rate at infinity are explicitly computed. Gravitational radiation reaction effects on the orbital frequency and phase of the binary are deduced from the energy loss. In the limiting case of a very small mass ratio between the two bodies we recover the results obtained by black hole perturbation methods. We find that finite mass ratio effects are very significant as they increase the 2PN contribution to the phase by up to 52%. The results of this paper should be of use when deciphering the signals observed by the future LIGO/VIRGO network of gravitational-wave detectors.

08 Feb 1995
gr-qc
arxiv.org/abs/gr-qc/9502020

We reformulate the standard local equations of general relativity for asymptotically flat spacetimes in terms of two non-local quantities, the holonomy $H$ around certain closed null loops on characteristic surfaces and the light cone cut function $Z$, which describes the intersection of the future null cones from arbitrary spacetime points, with future null infinity. We obtain a set of differential equations for $H$ and $Z$ equivalent to the vacuum Einstein equations. By finding an algebraic relation between $H$ and $Z$ this set of equations is reduced to just two coupled equations: an integro-differential equation for $Z$ which yields the conformal structure of the underlying spacetime and a linear differential equation for the ``vacuum'' conformal factor. These equations, which apply to all vacuum asymptotically flat spacetimes, are however lengthy and complicated and we do not yet know of any solution generating technique. They nevertheless are amenable to an attractive perturbative scheme which has Minkowski space as a zeroth order solution.

27 Mar 1995
gr-qc hep-th
arxiv.org/abs/gr-qc/9503052

The entropy of stationary black holes has recently been calculated by a
number of different approaches. Here we compare the Noether charge approach
(defined for any diffeomorphism invariant Lagrangian theory) with various
Euclidean methods, specifically, (i) the microcanonical ensemble approach of
Brown and York, (ii) the closely related approach of Ba\~nados, Teitelboim, and
Zanelli which ultimately expresses black hole entropy in terms of the Hilbert
action surface term, (iii) another formula of Ba\~nados, Teitelboim and Zanelli
(also used by Susskind and Uglum) which views black hole entropy as conjugate
to a conical deficit angle, and (iv) the pair creation approach of Garfinkle,
Giddings, and Strominger. All of these approaches have a more restrictive
domain of applicability than the Noether charge approach. Specifically,
approaches (i) and (ii) appear to be restricted to a class of theories
satisfying certain properties listed in section 2; approach (iii) appears to
require the Lagrangian density to be linear in the curvature; and approach (iv)
requires the existence of suitable instanton solutions. However, we show that
within their domains of applicability, all of these approaches yield results in
agreement with the Noether charge approach. In the course of our analysis, we
generalize the definition of Brown and York's quasilocal energy to a much more
general class of diffeomorphism invariant, Lagrangian theories of gravity. In
an appendix, we show that in an arbitrary diffeomorphism invariant theory of
gravity, the "volume term" in the "off-shell" Hamiltonian associated with a
time evolution vector field $t^a$ always can be expressed as the spatial
integral of $t^a {\cal C}_a$, where ${\cal C}_a = 0$ are the constraints
associated with the diffeomorphism invariance.

24 Apr 1995
chem-ph physics.chem-ph
arxiv.org/abs/chem-ph/9504003

In the distributed nucleus approximation we represent the singular nucleus as smeared over a small portion of a Cartesian grid. Delocalizing the nucleus allows us to solve the Poisson equation for the overall electrostatic potential using a linear scaling multigrid algorithm. This work is done in the context of minimizing the Kohn-Sham energy functional directly in real space with a multiscale approach. The efficacy of the approximation is illustrated by locating the ground state density of simple one electron atoms and molecules and more complicated multiorbital systems.