Search results for: weighted interval
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1399

1039 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality

Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan

Abstract:

Currently, the content entertainment industry is dominated by mobile devices. As trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimization. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload work from mobile devices to dedicated rendering servers that are far more powerful, but this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. 1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round-trip time is directly proportional to the amount of data transmitted, so it can be reduced by compressing the frames before sending. Standard compression algorithms like JPEG yield only minor size reductions. Since the images to be compressed are consecutive camera frames, there will not be many changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but the WebGL implementation limits the precision of floating-point numbers to 16 bits on most devices. This can introduce noise to the image due to rounding errors, which accumulate over time. This can be solved using an improved inter-frame compression algorithm. The algorithm detects changes between frames and reuses unchanged pixels from the previous frame, which eliminates the need for floating-point subtraction and thereby cuts down on noise. Change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference. The kernel weights for this comparison can be fine-tuned to match the type of image to be compressed. 2) Dynamic load distribution: Conventional cloud computing architectures work by offloading as much work as possible to the servers, but this approach can strain bandwidth and increase server costs. The most optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between the server and the client by doing a fraction of the computing on the device, depending on the power of the device and on network conditions. The protocol is responsible for dynamically partitioning the tasks. Special flags are used to communicate the workload fraction between the client and the server and are updated at a constant interval of time (or frames). The whole protocol is designed to be client agnostic. Flags are available to the client for resetting the frame, indicating latency, switching mode, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to spread the load effectively and thereby scale horizontally. This is achieved by isolating client connections into different processes.
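A minimal sketch of the weighted change-detection and pixel-reuse step described above, written in Python rather than WebGL; the 3x3 kernel, the threshold, and the synthetic frames are illustrative assumptions, not the protocol's actual parameters:

```python
import numpy as np
from scipy.signal import convolve2d

def weighted_change_mask(prev, curr, threshold=12.0):
    """Flag changed pixels using a weighted average difference over a small
    neighbourhood instead of the raw per-pixel absolute difference, which
    suppresses isolated rounding/sensor noise."""
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=float)   # illustrative 3x3 weights
    kernel /= kernel.sum()
    diff = np.abs(curr.astype(float) - prev.astype(float))
    return convolve2d(diff, kernel, mode="same", boundary="symm") > threshold

def compress_frame(prev, curr):
    """Send only the mask and the pixels flagged as changed; unchanged
    pixels are reused from the previous frame on the receiving side."""
    mask = weighted_change_mask(prev, curr)
    return mask, curr[mask]

def reconstruct_frame(prev, mask, changed_values):
    out = prev.copy()
    out[mask] = changed_values
    return out

# Synthetic consecutive 8-bit frames: only a flat patch changes between them.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(120, 160)).astype(np.uint8)
prev[40:60, 50:90] = 10
curr = prev.copy()
curr[40:60, 50:90] = 200                     # the only real change

mask, values = compress_frame(prev, curr)
assert np.array_equal(reconstruct_frame(prev, mask, values), curr)
print(f"pixels transmitted: {mask.sum()} of {mask.size}")
```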

Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application

Procedia PDF Downloads 72
1038 Probabilistic Modeling Laser Transmitter

Authors: H. S. Kang

Abstract:

A coupled electrical and optical model for the conversion of electrical energy into coherent optical energy for a transmitter-receiver link by a solid state device is presented. Probability distributions for the travelling laser beam switching time intervals and for the number of switchings in a time interval are obtained. Selector function mapping is employed to regulate optical data transmission speed. It is established that regulated laser transmission from a PhotoActive Laser transmitter follows the principle of invariance. This considerably simplifies the design of PhotoActive Laser Transmission networks.

Keywords: computational mathematics, finite difference Markov chain methods, sequence spaces, singularly perturbed differential equations

Procedia PDF Downloads 431
1037 Pragmatic Development of Chinese Sentence Final Particles via Computer-Mediated Communication

Authors: Qiong Li

Abstract:

This study investigated under which conditions computer-mediated communication (CMC) can promote pragmatic development. The focal features were four Chinese sentence final particles (SFPs): a, ya, ba, and ne. They occur frequently in Chinese and function as mitigators to soften the tone of speech. However, L2 acquisition of SFPs is difficult, suggesting the necessity of additional exposure to, or explicit instruction on, Chinese SFPs. This study follows this line and aims to explore two research questions: (1) Is CMC combined with data-driven instruction more effective than CMC alone in promoting L2 Chinese learners’ SFP use? (2) How does L2 Chinese learners’ SFP use change over time, as compared to the production of native Chinese speakers? The study involved 19 intermediate-level learners of Chinese enrolled at a private American university. They were randomly assigned to two groups: (1) the control group (N = 10), which was exposed to SFPs through CMC alone, and (2) the treatment group (N = 9), which was exposed to SFPs via CMC and data-driven instruction. Learners interacted with native speakers on given topics through text-based CMC over Skype. Both groups went through six 30-minute CMC sessions on a weekly basis, with a one-week interval after the first two CMC sessions and a two-week interval after the second two CMC sessions (nine weeks in total). The treatment group additionally received data-driven instruction after the first two sessions. Data analysis focused on three indices: token frequency, type frequency, and acceptability of SFP use. Token frequency was operationalized as the raw occurrence of SFPs per clause. Type frequency was the range of SFPs. Acceptability was rated by two native speakers using a rating rubric. The results showed that the treatment group made noticeable progress over time on all three indices, and its production of SFPs approximated the native-like level. In contrast, the control group only slightly improved on token frequency, and only certain SFPs (a and ya) reached native-like use. Potential explanations for the group differences are discussed in two respects: the properties of Chinese SFPs and the roles of CMC and data-driven instruction. Though CMC provided the learners with opportunities to notice and observe SFP use, SFPs, as features with low saliency, were not easily noticed in the input. The data-driven instruction in the treatment group directed the learners’ attention to these particles, which facilitated their development.

Keywords: computer-mediated communication, data-driven instruction, pragmatic development, second language Chinese, sentence final particles

Procedia PDF Downloads 418
1036 The Impact of Trait and Mathematical Anxiety on Oscillatory Brain Activity during Lexical and Numerical Error-Recognition Tasks

Authors: Alexander N. Savostyanov, Tatyana A. Dolgorukova, Elena A. Esipenko, Mikhail S. Zaleshin, Margherita Malanchini, Anna V. Budakova, Alexander E. Saprygin, Yulia V. Kovas

Abstract:

The present study compared spectral-power indexes and the cortical topography of brain activity in a sample characterized by different levels of trait and mathematical anxiety. 52 healthy Russian speakers (age 17-32; 30 males) participated in the study. Participants solved an error-recognition task under 3 conditions: a lexical condition (simple sentences in Russian) and two numerical conditions (simple arithmetic and complicated algebraic problems). Trait and mathematical anxiety were measured using self-report questionnaires. EEG activity was recorded simultaneously during task execution. Event-related spectral perturbations (ERSP) were used to analyze spectral-power changes in brain activity. Additionally, sLORETA was applied in order to localize the sources of brain activity. When exploring EEG activity recorded after task onset during the lexical condition, sLORETA revealed increased activation in frontal and left temporal cortical areas, mainly in the alpha/beta frequency ranges. When examining the EEG activity recorded after task onset during the arithmetic and algebraic conditions, additional activation in the delta/theta band in the right parietal cortex was observed. The ERSP plots revealed alpha/beta desynchronization within a 500-3000 ms interval after task onset and slow-wave synchronization within an interval of 150-350 ms. Amplitudes in these intervals reflected the accuracy of error recognition and were differently associated with the three (lexical, arithmetic and algebraic) conditions. The level of trait anxiety was positively correlated with the amplitude of alpha/beta desynchronization. The level of mathematical anxiety was negatively correlated with the amplitude of theta synchronization and of alpha/beta desynchronization. Overall, trait anxiety was related to an increase in brain activation during task execution, whereas mathematical anxiety was associated with increased inhibitory-related activity. We gratefully acknowledge the support from the №11.G34.31.0043 grant from the Government of the Russian Federation.

Keywords: anxiety, EEG, lexical and numerical error-recognition tasks, alpha/beta desynchronization

Procedia PDF Downloads 525
1035 Finding Viable Pollution Routes in an Urban Network under a Predefined Cost

Authors: Dimitra Alexiou, Stefanos Katsavounis, Ria Kalfakakou

Abstract:

In an urban area, transportation routes should be planned so as to minimize the pollution they provoke, while taking into account the cost of such routes. In what follows, these routes are referred to as pollution routes. The transportation network is expressed by a weighted graph G = (V, E, D, P), where every vertex represents a location to be served and E contains unordered pairs (edges) of elements in V that indicate a simple road. A distance/cost and a weight depicting the air pollution provoked by a vehicle transition are assigned to every road; these are the items of the sets D and P, respectively. Furthermore, the investigated pollution routes must not exceed predefined values of route cost and route pollution level during the vehicle transition. In this paper, we present an algorithm that generates such routes so that the decision maker can select the most appropriate one.
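A small sketch of one way such routes can be generated: a label-setting enumeration over the weighted graph G = (V, E, D, P) that discards dominated partial routes and prunes any route exceeding the predefined cost and pollution bounds. The toy network and bounds are invented for illustration, and this is not necessarily the paper's algorithm:

```python
import heapq

def viable_pollution_routes(graph, source, target, max_cost, max_pollution):
    """Enumerate Pareto-optimal source-target routes whose total cost and
    total pollution both stay within predefined bounds.

    `graph` maps each vertex to a list of (neighbour, cost, pollution)
    triples, i.e. the sets D and P attached to the edges of G.
    """
    labels = {v: [] for v in graph}        # non-dominated (cost, pollution) labels
    heap = [(0.0, 0.0, [source])]
    results = []
    while heap:
        cost, poll, path = heapq.heappop(heap)
        v = path[-1]
        if any(c <= cost and p <= poll for c, p in labels[v]):
            continue                       # dominated partial route: discard
        labels[v].append((cost, poll))
        if v == target:
            results.append((cost, poll, path))
            continue
        for w, c_e, p_e in graph[v]:
            if w in path:
                continue                   # keep routes simple (no repeated vertices)
            nc, np_ = cost + c_e, poll + p_e
            if nc <= max_cost and np_ <= max_pollution:
                heapq.heappush(heap, (nc, np_, path + [w]))
    return results

# Toy network: vertex -> [(neighbour, cost, pollution), ...]
G = {
    "A": [("B", 2, 5), ("C", 4, 1)],
    "B": [("D", 2, 5)],
    "C": [("D", 3, 1)],
    "D": [],
}
for cost, poll, path in viable_pollution_routes(G, "A", "D", max_cost=8, max_pollution=7):
    print(path, "cost:", cost, "pollution:", poll)
```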

Keywords: bi-criteria, pollution, shortest paths, computation

Procedia PDF Downloads 374
1034 Currency Exchange Rate Forecasts Using Quantile Regression

Authors: Yuzhi Cai

Abstract:

In this paper, we discuss a Bayesian approach to quantile autoregressive (QAR) time series model estimation and forecasting. Together with a forecast-combining technique, we then predict USD/GBP currency exchange rates. Combined forecasts contain all the information captured by the fitted QAR models at different quantile levels and are therefore better than those obtained from individual models. Our results show that an unequally weighted combining method performs better than other forecasting methodologies. We found that a median AR model can perform well in point forecasting when the predictive density functions are symmetric. However, in practice, using the median AR model alone may involve the loss of information about the data captured by other QAR models. We recommend that combined forecasts be used whenever possible.
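A hedged sketch of the forecasting idea using classical (non-Bayesian) quantile regression from statsmodels: QAR(1) models are fitted at several quantile levels and their one-step forecasts are combined with unequal weights. The inverse check-loss weighting and the simulated series are illustrative assumptions, not the paper's combining scheme:

```python
import numpy as np
import statsmodels.api as sm

# Simulated exchange-rate-like series standing in for USD/GBP rates.
rng = np.random.default_rng(1)
y = 1.3 + np.cumsum(rng.normal(0.0, 0.005, size=400))

# QAR(1): regress y_t on a constant and y_{t-1} at several quantile levels.
Y, X = y[1:], sm.add_constant(y[:-1])
quantiles = [0.1, 0.25, 0.5, 0.75, 0.9]
fits = {q: sm.QuantReg(Y, X).fit(q=q) for q in quantiles}

# One-step-ahead forecast from each fitted quantile model.
x_next = np.array([1.0, y[-1]])
forecasts = np.array([fits[q].params @ x_next for q in quantiles])

# Unequally weighted combination; inverse in-sample check loss is used here
# as an illustrative weighting scheme.
def check_loss(q):
    e = Y - fits[q].predict(X)
    return np.mean(np.where(e >= 0, q * e, (q - 1) * e))

w = np.array([1.0 / check_loss(q) for q in quantiles])
w /= w.sum()
print("quantile forecasts:", np.round(forecasts, 4))
print("combined forecast :", round(float(w @ forecasts), 4))
```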

Keywords: combining forecasts, MCMC, predictive density functions, quantile forecasting, quantile modelling

Procedia PDF Downloads 256
1033 Minimizing Unscheduled Maintenance from an Aircraft and Rolling Stock Maintenance Perspective: Preventive Maintenance Model

Authors: Adel A. Ghobbar, Varun Raman

Abstract:

Corrective maintenance of components and systems is a problem plaguing almost every industry in the world today. Train operators and the maintenance, repair and overhaul subsidiary of the Dutch railway company are also facing this problem. A considerable portion of the maintenance activities carried out by the company is unscheduled. This, in turn, severely stresses and stretches the workforce and resources available. One possible solution is to have a robust preventive maintenance plan. The other possible solution is to plan maintenance based on real-time data obtained from sensor-based ‘Health and Usage Monitoring Systems.’ The former has been investigated in this paper. The preventive maintenance model developed for the train operator will subsequently be extended to tackle the unscheduled maintenance problem also affecting the aerospace industry. The extension of the model to the aerospace sector will be dealt with in the second part of the research and would, in turn, validate the soundness of the model developed. Thus, there are distinct areas addressed in this paper, including the mathematical modelling of preventive maintenance and optimization based on cost and system availability. The results of this research will help an organization choose the right maintenance strategy, allowing it to save considerable sums of money as opposed to overspending under the guise of maintaining high asset availability. The concept of delay time modelling was used to address the practical problem of unscheduled maintenance in this paper. Delay time modelling can be used to help with support planning for a given asset. The model was run using MATLAB, and the results show that the ideal inspection interval computed from a minimal-cost perspective was 29 days, and from a minimum-downtime perspective was 14 days. A risk matrix was constructed to represent the risk in terms of the probability of a fault leading to breakdown maintenance and its consequences in terms of maintenance cost. The choice of an optimal inspection interval of 29 days resulted in a cost of approximately 50 Euros, and the corresponding value of b(T) was 0.011. These values ensure that the risk associated with component X being maintained at an inspection interval of 29 days is more than acceptable. Thus, a switch in maintenance frequency from 90 days to 29 days would be optimal from the point of view of cost, downtime and risk.
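A hedged sketch of a textbook (Christer-type) delay time model, in the spirit of the analysis above: defects arrive as a Poisson process, become breakdowns after a random delay time, and the expected cost per day is minimized over the inspection interval T. All costs and rates below are invented; they are not the values that produced the 29-day and 14-day results:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

lam = 0.05            # defect arrival rate (defects per day) -- assumption
mean_delay = 20.0     # mean delay time H in days, H ~ Exponential -- assumption
c_inspect = 15.0      # cost of one inspection (EUR) -- assumption
c_prevent = 40.0      # cost of fixing a defect found at inspection -- assumption
c_failure = 400.0     # cost of a breakdown repair -- assumption

F = lambda h: 1.0 - np.exp(-h / mean_delay)   # cdf of the delay time

def b(T):
    """Probability that a defect arising uniformly in (0, T) becomes a
    breakdown before the next inspection at T."""
    return quad(lambda u: F(T - u), 0.0, T)[0] / T

def cost_rate(T):
    """Expected cost per day over one inspection cycle of length T."""
    n_defects = lam * T
    n_failures = n_defects * b(T)
    n_prevented = n_defects - n_failures
    return (c_inspect + c_failure * n_failures + c_prevent * n_prevented) / T

res = minimize_scalar(cost_rate, bounds=(1.0, 120.0), method="bounded")
T_opt = res.x
print(f"optimal inspection interval: {T_opt:.1f} days, "
      f"cost rate: {cost_rate(T_opt):.2f} EUR/day, b(T) = {b(T_opt):.3f}")
```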

Keywords: delay time modelling, unscheduled maintenance, reliability, maintainability, availability

Procedia PDF Downloads 132
1032 Robust Noisy Speech Identification Using Frame Classifier Derived Features

Authors: Punnoose A. K.

Abstract:

This paper presents an approach for identifying noisy speech recordings using a multi-layer perceptron (MLP) trained to predict phonemes from acoustic features. Characteristics of the MLP posteriors are explored for clean speech and noisy speech at the frame level. Appropriate density functions are used to fit the softmax probability of the clean and noisy speech. A function that takes into account the ratio of the softmax probability density of noisy speech to that of clean speech is formulated. This phoneme-independent score is then weighted using phoneme-specific weights to make the scoring more robust. Simple thresholding is used to separate the noisy speech recordings from the clean speech recordings. The approach is benchmarked on standard databases, with a focus on precision.

Keywords: noisy speech identification, speech pre-processing, noise robustness, feature engineering

Procedia PDF Downloads 127
1031 Turkish Validation of the Nursing Outcomes for Urinary Incontinence and Their Sensitivities on Nursing Interventions

Authors: Dercan Gencbas, Hatice Bebis, Sue Moorhead

Abstract:

In the nursing process, many nursing classification systems were created for international use. Among these are NANDA-I, the Nursing Outcomes Classification (NOC) and the Nursing Interventions Classification (NIC). In this direction, the main objective of this study is to establish a model for caregivers in hospitals and communities in Turkey and to ensure that nursing outcomes are assessed by NOC-based measures. Although there are many scales to measure Urinary Incontinence (UI), which is very common in children, in old age and after vaginal birth, NOC scales are ideal for use in the nursing process for a comprehensive and holistic assessment. For this reason, the purpose of this study is to evaluate the validity of the NOC outcomes and indicators used for the UI NANDA-I diagnoses. This research is a methodological study. In addition to the validity of the scale indicators, experts assessed how much the indicators would contribute to recovery after the nursing interventions. Content validity was applied and calculated according to Fehring's (1987) model. Accordingly, expert inclusion criteria and scores were determined: experts with at least four years of clinical experience scored 4 points, at least one year of experience with the nursing classification system scored 1 point, a publication about nursing classification scored 1 point, a doctoral degree in nursing scored 2 points, and a master's degree scored 1 point. According to this expert scoring, the total of 55 experts reached Fehring's 'senior degree' level with a score of 90. The experts were asked to what extent these indicators would contribute to recovery after the nursing interventions to be applied. For content validity tailored to Fehring's model, specialists were asked to score each NOC outcome and indicator between 1 and 5, from 1 = not important to 5 = very important. After the expert opinions, the weighted scores obtained for each NOC outcome and indicator were classified as critical (≥ 0.8), supplemental (between 0.5 and 0.8) or excluded (< 0.5). In the NANDA-I/NOC/NIC system (guideline), 5 NOC outcomes are proposed for the nursing diagnoses of UI. These outcomes are Urinary Continence, Urinary Elimination, Tissue Integrity, Self-Care: Toileting, and Medication Response. After the scales were translated into Turkish, the weighted averages of the scores obtained from the specialists for the content coverage of all 5 NOC outcomes and for the contribution of the nursing interventions exceeded 0.8. After the expert opinions, 79 of the 82 indicators were classified as critical and 3 as supplemental; since no score below 0.5 was obtained, no indicator was removed. All NOC outcomes were identified as valid and usable scales in Turkey. In this study, five NOC outcomes were verified for evaluating the outcomes of individuals who receive nursing care for UI and its variant types. Nurses in Turkey can benefit from the NOC outcome scales to deliver incontinence care, including for the elderly.
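A small sketch of the Fehring-style weighted scoring described above, under the common assumption that ratings 1-5 map to weights 0, 0.25, 0.5, 0.75 and 1; the indicator labels and ratings are invented for illustration:

```python
import numpy as np

# Ratings of 1-5 are mapped to weights (a common reading of Fehring's
# weighting) and averaged per indicator; indicators scoring >= 0.8 are
# critical, 0.5-0.8 supplemental, < 0.5 excluded, as in the abstract.
RATING_WEIGHT = {1: 0.0, 2: 0.25, 3: 0.5, 4: 0.75, 5: 1.0}

def indicator_score(ratings):
    return float(np.mean([RATING_WEIGHT[r] for r in ratings]))

def classify(score):
    if score >= 0.8:
        return "critical"
    if score > 0.5:
        return "supplemental"
    return "excluded"

expert_ratings = {                       # hypothetical indicators and ratings
    "urine leakage between voidings": [5, 5, 4, 5, 4],
    "empties bladder completely":     [4, 3, 4, 4, 3],
}
for indicator, ratings in expert_ratings.items():
    s = indicator_score(ratings)
    print(f"{indicator}: {s:.2f} -> {classify(s)}")
```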

Keywords: nursing outcomes, content validity, nursing diagnosis, urinary incontinence

Procedia PDF Downloads 125
1030 The Analysis of Different Classes of Weighted Fuzzy Petri Nets and Their Features

Authors: Yurii Bloshko, Oksana Olar

Abstract:

This paper presents an analysis of 6 different classes of Petri nets: fuzzy Petri nets (FPN), generalized fuzzy Petri nets (GFPN), parameterized fuzzy Petri nets (PFPN), T2GFPN, flexible generalized fuzzy Petri nets (FGFPN), and binary Petri nets (BPN). These classes were simulated in the special software PNeS® to analyze their pros and cons on the example of models dedicated to the decision-making process of passenger transport logistics. The paper includes the analysis of two approaches: one in which the input values are filled with the experts’ knowledge, and one in which fuzzy expectations, represented by output values, are added as well. These approaches exploit the possibilities of triples of functions that are realized with different combinations of t-/s-norms.
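A minimal sketch of a fuzzy Petri net firing step in which the aggregation functions are swapped between different t-/s-norm combinations, illustrating the kind of comparison described above; the rule, certainty factor and markings are invented and the sketch does not reproduce the PNeS models:

```python
t_norms = {"min": min, "product": lambda a, b: a * b}
s_norms = {"max": max, "prob_sum": lambda a, b: a + b - a * b}

def fire(transition, marking, t_norm, s_norm):
    """Fire one transition: aggregate the input-place truth degrees with a
    t-norm, attenuate by the rule's certainty factor, and accumulate the
    result into the output place with an s-norm."""
    inputs, output, cf = transition
    agg = 1.0
    for p in inputs:
        agg = t_norm(agg, marking[p])
    new_value = t_norm(agg, cf)
    marking[output] = s_norm(marking.get(output, 0.0), new_value)
    return marking

# Places hold fuzzy truth degrees (e.g. expert-assessed input values).
marking = {"p1": 0.8, "p2": 0.6, "p3": 0.0}
rule = (["p1", "p2"], "p3", 0.9)   # IF p1 AND p2 THEN p3, certainty 0.9

for tn, sn in [("min", "max"), ("product", "prob_sum")]:
    m = fire(rule, dict(marking), t_norms[tn], s_norms[sn])
    print(f"t-norm={tn}, s-norm={sn}: p3 = {m['p3']:.3f}")
```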

Keywords: fuzzy petri net, intelligent computational techniques, knowledge representation, triangular norms

Procedia PDF Downloads 141
1029 Association Between Swallowing Disorders and Cognitive Disorders in Adults: Systematic Review and Meta-Analysis

Authors: Shiva Ebrahimian Dehaghani, Afsaneh Doosti, Morteza Zare

Abstract:

Background: There is no consensus regarding the association between dysphagia and cognition. Purpose: The aim of this study was to quantitatively and qualitatively analyze the available evidence on the direction and strength of the association between dysphagia and cognition. Methodology: PubMed, Scopus, Embase and Web of Science were searched for studies on the association between dysphagia and cognition. A random-effects model was used to determine weighted odds ratios (OR) and 95% confidence intervals (CI). Sensitivity analysis was performed to determine the impact of each individual study on the pooled results. Results: Pooled data from a total of 1427 participants showed that some cognitive disorders were significantly associated with dysphagia (OR = 3.23; 95% CI, 2.33–4.48). Conclusion: The association between cognition and swallowing disorders suggests that multiple neuroanatomical systems are involved in these two functions.
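A brief sketch of how a random-effects (DerSimonian-Laird) pooled odds ratio of this kind is typically computed from study-level ORs and 95% CIs; the study values below are invented and do not reproduce the review's data:

```python
import numpy as np

or_ci = [  # (OR, lower, upper) per study -- illustrative numbers only
    (2.5, 1.4, 4.5),
    (3.8, 2.0, 7.2),
    (2.9, 1.3, 6.5),
    (4.1, 2.2, 7.7),
]
log_or = np.log([x[0] for x in or_ci])
# Standard errors recovered from the 95% CIs on the log scale.
se = (np.log([x[2] for x in or_ci]) - np.log([x[1] for x in or_ci])) / (2 * 1.96)

# Fixed-effect weights, heterogeneity Q, and the DerSimonian-Laird tau^2.
w_fixed = 1.0 / se**2
mu_fixed = np.sum(w_fixed * log_or) / np.sum(w_fixed)
Q = np.sum(w_fixed * (log_or - mu_fixed) ** 2)
k = len(log_or)
tau2 = max(0.0, (Q - (k - 1)) /
           (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))

# Random-effects weights and pooled estimate.
w_re = 1.0 / (se**2 + tau2)
mu_re = np.sum(w_re * log_or) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = np.exp([mu_re - 1.96 * se_re, mu_re + 1.96 * se_re])
print(f"pooled OR = {np.exp(mu_re):.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```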

Keywords: adult, association, cognitive impairment, dysphagia, systematic review

Procedia PDF Downloads 161
1028 Short-Long Term between Gross Domestic Product and Consumption in Indonesia

Authors: Teguh Sugiarto, Ahmad Subagyo, Ludiro Madu, Amir Mohammadian Amiri

Abstract:

Recently, the significant fluctuations associated with the Indonesian economy justify the need for paying more attention to this issue. In this regard, the main objective of this study is to investigate the relationship between two macroeconomic variables for Indonesia, consumption and GDP, during the period from 1967 to 2014. The research examines short-term and long-term relationships using Granger causality and subsequently models them with a cointegration approach. The Granger analysis combined with the Johansen test shows that there is not only a long-term but also a short-term relationship between GDP and consumption, using a lag interval of 5.
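A hedged sketch of the two tests named above using statsmodels, run on simulated series standing in for Indonesian consumption and GDP over 1967-2014; the data and lag choice are illustrative only:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Simulated annual series (48 observations, as in 1967-2014).
rng = np.random.default_rng(3)
n = 48
gdp = 1.0 + np.cumsum(rng.normal(0.04, 0.02, n))
consumption = 0.6 * gdp + np.cumsum(rng.normal(0.0, 0.01, n))
df = pd.DataFrame({"consumption": consumption, "gdp": gdp})

# Granger causality: does lagged GDP help predict consumption (lags 1..5)?
# (grangercausalitytests prints a summary for each lag.)
granger = grangercausalitytests(df[["consumption", "gdp"]], maxlag=5)
p_lag5 = granger[5][0]["ssr_ftest"][1]
print(f"Granger test (GDP -> consumption, lag 5): p = {p_lag5:.4f}")

# Johansen cointegration test for a long-run relationship, lag interval 5.
joh = coint_johansen(df, det_order=0, k_ar_diff=5)
print("trace statistics     :", np.round(joh.lr1, 2))
print("95% critical values  :", joh.cvt[:, 1])
```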

Keywords: cointegration, Granger causality, GDP, consumption

Procedia PDF Downloads 357
1027 Adaptive Nonparametric Approach for Guaranteed Real-Time Detection of Targeted Signals in Multichannel Monitoring Systems

Authors: Andrey V. Timofeev

Abstract:

An adaptive nonparametric method is proposed for stable real-time detection of seismoacoustic sources in multichannel C-OTDR systems with a significant number of channels. This method guarantees given upper bounds on the probabilities of Type I and Type II errors. The properties of the proposed method are rigorously proved. The results of practical applications of the proposed method in a real C-OTDR system are presented.

Keywords: guaranteed detection, multichannel monitoring systems, change point, interval estimation, adaptive detection

Procedia PDF Downloads 447
1026 A Trapezoidal-Like Integrator for the Numerical Solution of One-Dimensional Time Dependent Schrödinger Equation

Authors: Johnson Oladele Fatokun, I. P. Akpan

Abstract:

In this paper, the one-dimensional time-dependent Schrödinger equation is discretized by the method of lines, using a second-order finite difference approximation to replace the second-order spatial derivative. The evolving system of stiff ordinary differential equations (ODEs) in time is solved numerically by an L-stable trapezoidal-like integrator. Results show an accuracy with a relative maximum error of order 10⁻⁴ over the interval of consideration. The performance of the method compares favorably with an existing scheme.
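A minimal method-of-lines sketch for i u_t = -u_xx + V(x) u: central differences in space and a plain trapezoidal (Crank-Nicolson) step in time, shown in place of the paper's L-stable trapezoidal-like integrator; the harmonic potential and grid sizes are assumptions:

```python
import numpy as np

# Grid and potential (illustrative choices).
L, N = 20.0, 400
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
dt = 0.002
V = 0.5 * x**2                                   # assumed harmonic potential

# Method of lines: u_t = i (u_xx - V u) = A u, Dirichlet boundaries,
# with u_xx replaced by a second-order central difference.
D2 = (np.diag(np.full(N - 1, 1.0), -1) - 2.0 * np.eye(N)
      + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
A = 1j * (D2 - np.diag(V))

# Trapezoidal step: (I - dt/2 A) u_{n+1} = (I + dt/2 A) u_n.
Id = np.eye(N)
propagator = np.linalg.solve(Id - 0.5 * dt * A, Id + 0.5 * dt * A)

# Normalized Gaussian wave packet as the initial condition.
u = np.exp(-x**2) * np.exp(2j * x)
u /= np.sqrt(np.sum(np.abs(u) ** 2) * dx)

for _ in range(500):
    u = propagator @ u

print("norm after 500 steps:", np.sum(np.abs(u) ** 2) * dx)   # stays ~1
```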

Keywords: Schrodinger’s equation, partial differential equations, method of lines (MOL), stiff ODE, trapezoidal-like integrator

Procedia PDF Downloads 418
1025 Beyond Geometry: The Importance of Surface Properties in Space Syntax Research

Authors: Christoph Opperer

Abstract:

Space syntax is a theory and method for analyzing the spatial layout of buildings and urban environments to understand how they can influence patterns of human movement, social interaction, and behavior. While direct visibility is a key factor in space syntax research, important visual information such as light, color, and texture is typically not considered, even though psychological studies have shown a strong correlation with the human perceptual experience within physical space – with light and color, for example, playing a crucial role in shaping the perception of spaciousness. Furthermore, these surface properties are often the visual features that are most salient and responsible for drawing attention to certain elements within the environment. This paper explores the potential of integrating these factors into general space syntax methods and visibility-based analysis of space, particularly for architectural spatial layouts. To this end, we use a combination of geometric (isovist) and topological (visibility graph) approaches together with image-based methods, allowing a comprehensive exploration of the relationship between spatial geometry, visual aesthetics, and human experience. Custom-coded ray-tracing techniques are employed to generate spherical panorama images, encoding three-dimensional spatial data in the form of two-dimensional images. These images are then processed through computer vision algorithms to generate saliency maps, which serve as a visual representation of the areas most likely to attract human attention based on their visual properties. The maps are subsequently used to weight the vertices of isovists and the visibility graph, placing greater emphasis on areas with high saliency. Compared to traditional methods, our weighted visibility analysis introduces an additional layer of information density by assigning different weights or importance levels to various aspects within the field of view. This extends general space syntax measures to provide a more nuanced understanding of visibility patterns that better reflect the dynamics of human attention and perception. Furthermore, by drawing parallels to traditional isovist and VGA analysis, our weighted approach emphasizes a crucial distinction, which has been pointed out by Ervin and Steinitz: the difference between what is possible to see and what is likely to be seen. Therefore, this paper emphasizes the importance of including surface properties in visibility-based analysis to gain deeper insights into how people interact with their surroundings and to establish a stronger connection with human attention and perception.

Keywords: space syntax, visibility analysis, isovist, visibility graph, visual features, human perception, saliency detection, raytracing, spherical images

Procedia PDF Downloads 74
1024 Biologically Inspired Small Infrared Target Detection Using Local Contrast Mechanisms

Authors: Tian Xia, Yuan Yan Tang

Abstract:

In order to obtain higher small target detection accuracy, this paper presents an effective algorithm inspired by the local contrast mechanism. The proposed method can enhance the target signal and suppress background clutter simultaneously. In the first stage, an enhanced image is obtained using the proposed Weighted Laplacian of Gaussian. In the second stage, an adaptive threshold is adopted to segment the target. Experimental results on two challenging image sequences show that the proposed method can detect bright and dark targets simultaneously and is not sensitive to the sea-sky line of the infrared image, so it is well suited for small infrared target detection.
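A hedged sketch of the two-stage idea: a Laplacian-of-Gaussian response (taken in absolute value so that bright and dark targets are both enhanced) followed by an adaptive mean-plus-k-sigma threshold. The filter weighting, the factor k, and the synthetic image are illustrative choices, not the paper's exact Weighted Laplacian of Gaussian:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic infrared-like scene: low-contrast clutter plus two point targets.
rng = np.random.default_rng(5)
img = rng.normal(0.2, 0.05, size=(128, 128))
img[40, 60] += 1.0                                     # a bright point target
img[90, 30] -= 1.0                                     # a dark point target

# Stage 1: enhanced image from a (sign-agnostic) Laplacian of Gaussian.
response = np.abs(gaussian_laplace(img, sigma=1.5))

# Stage 2: adaptive threshold from the response statistics.
k = 8.0
threshold = response.mean() + k * response.std()
targets = np.argwhere(response > threshold)
print(f"{len(targets)} pixels above threshold, e.g.:", targets[:4].tolist())
```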

Keywords: small target detection, local contrast, human vision system, Laplacian of Gaussian

Procedia PDF Downloads 469
1023 Confidence Intervals for Quantiles in the Two-Parameter Exponential Distributions with Type II Censored Data

Authors: Ayman Baklizi

Abstract:

Based on type II censored data, we consider interval estimation of the quantiles of the two-parameter exponential distribution and the difference between the quantiles of two independent two-parameter exponential distributions. We derive asymptotic intervals, Bayesian, as well as intervals based on the generalized pivot variable. We also include some bootstrap intervals in our comparisons. The performance of these intervals is investigated in terms of their coverage probabilities and expected lengths.
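A parametric-bootstrap sketch for one of the settings above: a quantile of a two-parameter exponential distribution estimated from type II censored data (only the r smallest of n observations are seen). The bootstrap interval here merely complements the asymptotic, Bayesian and generalized-pivot intervals studied in the paper; sample sizes and parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

def mles_type2(x_sorted_r, n):
    """MLEs of location mu and scale theta from the r smallest order statistics."""
    r = len(x_sorted_r)
    mu_hat = x_sorted_r[0]
    theta_hat = (np.sum(x_sorted_r) + (n - r) * x_sorted_r[-1] - n * mu_hat) / r
    return mu_hat, theta_hat

def quantile(mu, theta, p):
    return mu + theta * (-np.log(1.0 - p))

# Simulated type II censored sample: n = 30, observe the r = 20 smallest.
n, r, p = 30, 20, 0.9
data = 2.0 + rng.exponential(scale=1.5, size=n)     # true mu = 2, theta = 1.5
obs = np.sort(data)[:r]
mu_hat, theta_hat = mles_type2(obs, n)

# Parametric bootstrap: resample from the fitted model, re-censor, re-estimate.
boot = []
for _ in range(2000):
    sample = mu_hat + rng.exponential(scale=theta_hat, size=n)
    boot.append(quantile(*mles_type2(np.sort(sample)[:r], n), p))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"point estimate of the {p:.0%} quantile: {quantile(mu_hat, theta_hat, p):.3f}")
print(f"95% bootstrap CI: ({lo:.3f}, {hi:.3f})")
```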

Keywords: asymptotic intervals, Bayes intervals, bootstrap, generalized pivot variables, two-parameter exponential distribution, quantiles

Procedia PDF Downloads 414
1022 Investigating Suicide Cases in Attica, Greece: Insight from an Autopsy-Based Study

Authors: Ioannis N. Sergentanis, Stavroula Papadodima, Maria Tsellou, Dimitrios Vlachodimitropoulos, Sotirios Athanaselis, Chara Spiliopoulou

Abstract:

Introduction: The aim of this study is the investigation of the characteristics of suicide, as documented in autopsies during a five-year interval in the greater area of Attica, including the city of Athens. This could reveal possible protective or aggravating factors for suicide risk during a period strongly associated with the Greek debt crisis. Materials and Methods: Data were obtained following registration of suicide cases among autopsies performed in the Forensic Medicine and Toxicology Department, School of Medicine, National and Kapodistrian University of Athens, Greece, during the time interval from January 2011 to December 2015. Anonymity and medical confidentiality were respected. A series of demographic and social factors, in addition to special characteristics of suicide, were entered into a specially established pre-coded database. These factors include social data as well as psychiatric background and certain autopsy characteristics. Data analysis was performed using descriptive statistics and Fisher’s exact test. The software used was STATA/SE 13 (Stata Corp., College Station, TX, USA). Results: A total of 162 cases were studied, 128 men and 34 women. Age ranged from 14 to 97 years old, with an average of 53 years, presenting two peaks around 40 and 60 years. 56% of cases were single/divorced/widowed. 25% of cases occurred during the weekend, and 66% of cases occurred in the house. A predominance of hanging as the leading method of suicide (41.4%), followed by jumping from a height (22.8%) and firearms (19.1%), was noted. Statistical analysis showed an association between suicide method and gender (P < 0.001, Fisher’s exact test); specifically, no woman used a firearm, while only one man used medication overdose (against four women). Discussion: Greece has historically been one of the countries with the lowest suicide rates in Europe. Given a possible change in suicide trends during the financial crisis, further research seems necessary in order to establish risk factors. According to our study, suicide is more frequent in men who are not married, inside their house. Gender seems to be a factor affecting the method of suicide. These results are in accordance with the international literature. The stronger than expected predominance of male suicide can be associated with failure to live up to social and family expectations for financial reasons.

Keywords: autopsy, Greece, risk factors, suicide

Procedia PDF Downloads 220
1021 Wrist Pain, Technological Device Used, and Perceived Academic Performance Among the College of Computer Studies Students

Authors: Maquiling Jhuvie Jane R., Ojastro Regine B., Peroja Loreille Marie B., Pinili Joy Angela., Salve Genial Gail M., Villavicencio Marielle Irene B., Yap Alther Francis Garth B.

Abstract:

Introduction: This study investigated the impact of prolonged device usage on wrist pain and perceived academic performance among college students in Computer Studies. The research aims to explore the correlation between the frequency of technological device use and the incidence of wrist pain, as well as how this pain affects students' academic performance. The study seeks to provide insights that could inform interventions to promote better musculoskeletal health among students engaged in intensive technology use and thereby improve their academic performance. Method: The study utilized a descriptive-correlational and comparative design, focusing on bona fide students from Silliman University’s College of Computer Studies during the second semester of 2023-2024. Participants were recruited through a survey sent via school email, with responses collected until March 30, 2024. Data were gathered using a password-protected device and Google Forms, ensuring restricted access to raw data. The demographic profile was summarized, and the prevalence of wrist pain and device usage were analyzed using percentages and weighted means. Statistical analyses included Spearman’s rank correlation coefficient to assess the relationship between wrist pain and device usage and an independent t-test to evaluate differences in academic performance based on the presence of wrist pain. Alpha was set at 0.05. Results: The study revealed that 40% of College of Computer Studies students experience wrist pain, with 2 out of every 5 students affected. Laptops and desktops were the most frequently used devices for academic work, achieving a weighted mean of 4.511, while mobile phones and tablets received lower means of 4.183 and 1.911, respectively. The average academic performance score among students was 29.7, classified as ‘Good Performance.’ Notably, there was no significant relationship between the frequency of device usage and wrist pain, as indicated by p-values exceeding 0.05. However, a significant difference in perceived academic performance was observed, with students without wrist pain scoring an average of 30.39 compared to 28.72 for those with wrist pain, and a p-value of 0.0134 confirming this distinction. Conclusion: The study revealed that about 40% of students in the College of Computer Studies experience wrist pain, but there is no significant link between device usage and pain occurrence. However, students without wrist pain demonstrated better academic performance than those with pain, suggesting that wrist health may impact academic success. These findings imply that physical therapy practices in the Philippines should focus on preventive strategies and ergonomic education to improve student health and performance.

Keywords: wrist pain, frequency of use of technological devices, perceived academic performance, physical therapy

Procedia PDF Downloads 14
1020 Densities and Viscosities of Binary Mixture Containing Diethylamine and 2-Alkanol

Authors: Elham jassemi Zargani, Mohammad almasi

Abstract:

Densities and viscosities for binary mixtures of diethylamine + 2-alkanol (2-propanol up to 2-pentanol) were measured over the entire composition range and the temperature interval of 293.15 to 323.15 K. Excess molar volumes V_m^E and viscosity deviations Δη were calculated and correlated by the Redlich−Kister type function to derive the coefficients and estimate the standard error. For mixtures of diethylamine with the studied 2-alkanols, V_m^E and Δη are negative over the entire range of mole fraction. The observed variations of these parameters with alkanol chain length and temperature are discussed in terms of the intermolecular interactions between the unlike molecules of the binary mixtures.
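A short sketch of fitting the Redlich-Kister expansion V_E = x1 x2 Σ A_k (x1 - x2)^k by least squares and reporting the standard deviation of the fit; the data below are synthetic stand-ins for the measured excess molar volumes:

```python
import numpy as np

# Synthetic "measurements" of the excess molar volume across composition.
x1 = np.linspace(0.05, 0.95, 19)                  # mole fraction of diethylamine
x2 = 1.0 - x1
true_A = np.array([-2.1, 0.4, -0.3])              # cm^3/mol, invented
VE = x1 * x2 * sum(a * (x1 - x2) ** k for k, a in enumerate(true_A))
VE += np.random.default_rng(7).normal(0, 0.005, VE.size)   # measurement noise

# Least-squares estimation of the Redlich-Kister coefficients A_k.
degree = 3                                        # number of coefficients
X = np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(degree)])
A, res, *_ = np.linalg.lstsq(X, VE, rcond=None)
sigma = np.sqrt(res[0] / (len(VE) - degree))      # standard deviation of the fit

print("Redlich-Kister coefficients A_k:", np.round(A, 3))
print("standard deviation of the fit  :", round(float(sigma), 4))
```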

Keywords: densities, viscosities, diethylamine, 2-alkanol, Redlich-Kister

Procedia PDF Downloads 388
1019 Utilization of Long Acting Reversible Contraceptive Methods, and Associated Factors among Female College Students in Gondar Town, Northwest Ethiopia, 2018

Authors: Woledegebrieal Aregay

Abstract:

Introduction: Family planning is defined as the ability of individuals and couples to anticipate and attain their desired number of children and the spacing and timing of their births. It is part of a strategy to reduce poverty and maternal, infant and child mortality, and it empowers women by lightening the burden of excessive childbearing. Family planning is achieved through the use of different contraceptive methods, among which the most effective are modern methods such as Long-Acting Reversible Contraceptives (LARCs), namely the IUCD and the implant; these methods have multiple advantages over other reversible methods. Most importantly, once in place, they do not require maintenance and their duration of action is long, ranging from 3 to 10 years. Methods: An institution-based cross-sectional study was conducted in Gondar town among female college students from April to May. A simple random sampling technique was employed to recruit a total of 1166 study subjects. Descriptive statistics were computed for all predictor and dependent variables. The presence of an association between covariates and LARC use was examined using cross-tabulations and the chi-square test. Bivariate logistic regression was conducted to identify all possible factors affecting LARC utilization, and crude Odds Ratios, 95% Confidence Intervals (CI) and P-values were obtained. A multivariable logistic regression model was developed to control for possible confounding variables. Adjusted Odds Ratios (AOR) with 95% Confidence Intervals (CI) and P-values were computed to identify factors significantly associated (P < 0.05) with LARC utilization. Result: Utilization of LARCs was 20.4%; the most common method was the implant, 86 (96.5%), followed by the Intra-Uterine Contraceptive Device (IUCD), 3 (3.5%). The multivariate analysis revealed significant associations of LARC utilization with the marital status of the respondent [AOR 3.965 (2.051-7.665)], discussion of LARC utilization with the husband/boyfriend [AOR 2.198 (1.191-4.058)], and the respondent's attitude towards the implant [AOR 0.365 (0.143-0.933)]. Conclusion: The level of knowledge and attitude in this study was not satisfactory; the utilization of long-acting reversible contraceptives among college students was relatively satisfactory, but improving the participants' knowledge and attitude would increase the prevalence of LARC use.

Keywords: utilization, long-acting reversible contraceptive, Ethiopia, Gondar

Procedia PDF Downloads 224
1018 Analyzing Large Scale Recurrent Event Data with a Divide-And-Conquer Approach

Authors: Jerry Q. Cheng

Abstract:

Currently, in analyzing large-scale recurrent event data, there are many challenges such as memory limitations, unscalable computing time, etc. In this research, a divide-and-conquer method is proposed using parametric frailty models. Specifically, the data is randomly divided into many subsets, and the maximum likelihood estimator from each individual data set is obtained. Then a weighted method is proposed to combine these individual estimators as the final estimator. It is shown that this divide-and-conquer estimator is asymptotically equivalent to the estimator based on the full data. Simulation studies are conducted to demonstrate the performance of this proposed method. This approach is applied to a large real dataset of repeated heart failure hospitalizations.
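A toy sketch of the divide-and-conquer idea with inverse-variance weighted recombination, using a simple exponential-rate MLE as a stand-in for the parametric frailty model fits:

```python
import numpy as np

rng = np.random.default_rng(8)
full_data = rng.exponential(scale=2.0, size=1_000_000)   # "large" dataset

# Randomly split the data into K subsets and estimate on each subset.
K = 20
subsets = np.array_split(rng.permutation(full_data), K)

estimates, variances = [], []
for s in subsets:
    rate_hat = 1.0 / s.mean()                  # MLE of the exponential rate
    estimates.append(rate_hat)
    variances.append(rate_hat**2 / len(s))     # asymptotic variance of the MLE
estimates, variances = np.array(estimates), np.array(variances)

# Weighted combination of the subset estimators as the final estimator.
w = 1.0 / variances
combined = np.sum(w * estimates) / np.sum(w)
print("combined estimate :", round(combined, 5))
print("full-data estimate:", round(1.0 / full_data.mean(), 5))
```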

Keywords: big data analytics, divide-and-conquer, recurrent event data, statistical computing

Procedia PDF Downloads 165
1017 Design and Production of Thin-Walled UHPFRC Footbridge

Authors: P. Tej, P. Kněž, M. Blank

Abstract:

The paper presents the design and production of a thin-walled U-profile footbridge made of UHPFRC. The main structure of the bridge is one prefabricated shell structure made of UHPFRC with dispersed steel fibers, without any conventional reinforcement. The span of the bridge structure is 10 m and the clear width is 1.5 m. The thickness of the UHPFRC shell structure varies in the interval of 30-45 mm. Several calculations were made during the bridge design and compared with the experiments. For the purpose of verifying the calculations, a 1.5 m segment was produced first, followed by the whole footbridge for testing. After the load tests were done, the design was optimized to cast the final footbridge.

Keywords: footbridge, non-linear analysis, shell structure, UHPFRC, Ultra-High Performance Fibre Reinforced Concrete

Procedia PDF Downloads 232
1016 Dimension Free Rigid Point Set Registration in Linear Time

Authors: Jianqin Qu

Abstract:

This paper proposes a rigid point set matching algorithm in arbitrary dimensions based on the idea of symmetric covariant functions. A group of functions of the points in the set is formulated using rigid invariants. Each of these functions computes a pair of correspondences from the given point set. The computed correspondences are then used to recover the unknown rigid transform parameters. Each computed point can be geometrically interpreted as a weighted mean center of the point set. The algorithm is compact, fast, and dimension free, without any optimization process. It either computes the desired transform for noiseless data in linear time, or fails quickly in exceptional cases. Experimental results for synthetic data and 2D/3D real data are provided, which demonstrate potential applications of the algorithm to a wide range of problems.
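For reference, a closed-form, dimension-free rigid fit from correspondences (weighted Kabsch/Procrustes), which makes the "weighted mean center" interpretation concrete; this is a standard construction, not the paper's symmetric covariant functions:

```python
import numpy as np

def rigid_fit(P, Q, w=None):
    """Find rotation R and translation t with Q ~ P @ R.T + t, in any dimension."""
    n, d = P.shape
    w = np.ones(n) if w is None else np.asarray(w, float)
    w = w / w.sum()
    cp = w @ P                                   # weighted mean centers
    cq = w @ Q
    H = (P - cp).T @ np.diag(w) @ (Q - cq)       # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.eye(d)
    S[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
    R = Vt.T @ S @ U.T
    t = cq - R @ cp
    return R, t

# Works unchanged in any dimension; here d = 3 with an exact rigid motion.
rng = np.random.default_rng(9)
P = rng.normal(size=(50, 3))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_fit(P, Q)
print("rotation recovered:", np.allclose(R, R_true), "translation:", np.round(t, 3))
```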

Keywords: covariant point, point matching, dimension free, rigid registration

Procedia PDF Downloads 168
1015 Uniform and Controlled Cooling of a Steel Block by Multiple Jet Impingement and Airflow

Authors: E. K. K. Agyeman, P. Mousseau, A. Sarda, D. Edelin

Abstract:

During the cooling of hot metals by the circulation of water in canals formed by boring holes in the metal, the rapid phase change of the water, due to the high initial temperature of the metal, leads to a non-homogeneous distribution of the phases within the canals. The liquid phase dominates towards the entrance of the canal while the gaseous phase dominates towards the exit. As a result of the different thermal properties of the two phases, the metal is not uniformly cooled. This poses a problem during the cooling of moulds, where a uniform temperature distribution is needed in order to ensure the integrity of the part being formed. In this study, the simultaneous use of multiple water jets and an airflow for the uniform and controlled cooling of a steel block is investigated. A circular hole is bored at the centre of the steel block along its length, and a perforated steel pipe is inserted along the central axis of the hole. Water jets that impact the internal surface of the steel block are generated from the perforations in the steel pipe when the water within it is put under pressure. These jets are oriented in the direction opposite to that of gravity. An intermittent airflow is imposed in the annular space between the steel pipe and the surface of the hole bored in the steel block. The evolution with respect to time of the temperature of the external surface of the block is measured with the help of thermocouples and an infrared camera. Due to the high initial temperature of the steel block (350 °C), the water changes phase when it impacts the internal surface of the block. This leads to high heat fluxes. The strategy used to control the cooling speed of the block is the intermittent impingement of its internal surface by the jets. The intervals of impingement and of non-impingement are varied in order to achieve the desired result. An airflow is used during the non-impingement periods as an additional regulator of the cooling speed and to improve the temperature homogeneity of the impinged surface. After testing different jet positions, jet speeds and impingement intervals, it is observed that the external surface of the steel block has a uniform temperature distribution along its length. However, the temperature distribution along its width is not uniform, with the maximum temperature difference being between the centre of the block and its edge. Changing the positions of the jets has no significant effect on the temperature distribution on the external surface of the steel block. It is also observed that reducing the jet impingement interval and increasing the non-impingement interval slows down the cooling of the block and improves the temperature homogeneity of its external surface, while increasing the duration of jet impingement speeds up the cooling process.

Keywords: cooling speed, homogenous cooling, jet impingement, phase change

Procedia PDF Downloads 125
1014 A Spectral Decomposition Method for Ordinary Differential Equation Systems with Constant or Linear Right Hand Sides

Authors: R. B. Ogunrinde, C. C. Jibunoh

Abstract:

In this paper, a spectral decomposition method is developed for the direct integration of stiff and nonstiff homogeneous linear ordinary differential equation (ODE) systems with linear, constant, or zero right hand sides (RHSs). The method does not require iteration but obtains solutions at any random points of t, by direct evaluation, in the interval of integration. All the numerical solutions obtained for this class of systems coincide with the exact theoretical solutions. In particular, solutions of homogeneous linear systems, i.e. those with zero RHS, conform to the exact analytical solutions of the systems in terms of t.
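A minimal sketch of the spectral (eigen) decomposition evaluation for x' = Jx + b with a constant right hand side and diagonalizable, nonsingular J: the solution is evaluated directly at arbitrary t without stepping; the stiff example system is invented:

```python
import numpy as np

def spectral_solution(J, b, x0, t):
    """Evaluate x(t) for x' = J x + b directly from the eigendecomposition
    of J (assumes J is diagonalizable and nonsingular)."""
    lam, V = np.linalg.eig(J)
    x_p = -np.linalg.solve(J, b)               # particular (equilibrium) solution
    c = np.linalg.solve(V, x0 - x_p)           # coordinates of the homogeneous part
    return (V @ (np.exp(lam * t) * c)).real + x_p

# Stiff example: widely separated eigenvalues (-1000 and -0.5).
J = np.array([[-1000.0, 1.0],
              [0.0, -0.5]])
b = np.array([1.0, 2.0])
x0 = np.array([1.0, 1.0])

# Solutions at arbitrary, non-consecutive points of t, without iteration.
for t in [0.01, 0.1, 1.0, 10.0]:
    print(f"t = {t:5.2f}  x(t) = {spectral_solution(J, b, x0, t)}")
```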

Keywords: spectral decomposition, linear RHS, homogeneous linear systems, eigenvalues of the Jacobian

Procedia PDF Downloads 330
1013 Analysis Rescuers' Viewpoint about Victims Tracking in Earthquake by Using Radio Frequency Identification (RFID)

Authors: Sima Ajami, Batool Akbari

Abstract:

Background: Radio frequency identification (RFID) systems have been successfully applied in manufacturing, supply chains, agriculture, transportation, healthcare, and services. RFID is already used to track and trace victims in disaster situations. With RFID, data can be collected in real time and be immediately available to emergency personnel, which saves time. Objectives: The aim of this study was, first, to identify stakeholders and customers for rescuing earthquake victims; second, to list the key internal and external factors in using RFID to track earthquake victims; and finally, to assess the SWOT from the rescuers' viewpoint. Materials and Methods: This was an applied and analytical study. The study population included scholars, experts, planners, policy makers and rescuers in the "red crescent society of Isfahan province", "disaster management Isfahan province", "maintenance and operation department of Isfahan", "fire and safety services organization of Isfahan municipality", and "medical emergencies and disaster management center of Isfahan". The researchers then held a workshop to teach participants about RFID and its uses in tracking earthquake victims. During the workshop, participants identified, listed, and weighted the key internal factors (strengths and weaknesses; SW) and external factors (opportunities and threats; OT) of using RFID in tracking earthquake victims. The participants thus weighted the strengths, weaknesses, opportunities, and threats (SWOT), and the weighted scores were calculated. Then, participants' opinions about this issue were assessed. Finally, according to the SWOT matrix, strategies to address the weaknesses, problems, challenges, and threats through opportunities and strengths were proposed by the participants. Results: The SWOT analysis showed that the total weighted scores for internal and external factors were 3.91 (Internal Factor Evaluation) and 3.31 (External Factor Evaluation), respectively. The organization therefore fell into the SO-strategies cell of the SWOT analysis matrix, and aggressive strategies resulted. Organizations, scholars, experts, planners, policy makers and rescue workers should plan to use RFID technology in order to save more victims and manage their lives. Conclusions: The researchers propose applying SO strategies, using the organization's internal strengths to take advantage of external opportunities. It is suggested that policy makers plan to use the most developed technologies to save earthquake victims and deliver the easiest service to them. To do this, educating, informing, and encouraging rescuers to use these technologies is essential. Originality/Value: This study showed how RFID can be useful for tracking victims in an earthquake.

Keywords: frequency identification system, strength, weakness, earthquake, victim

Procedia PDF Downloads 322
1012 A Hazard Rate Function for the Time of Ruin

Authors: Sule Sahin, Basak Bulut Karageyik

Abstract:

This paper introduces a hazard rate function for the time of ruin to calculate the conditional probability of ruin for very small intervals. We call this function the force of ruin (FoR). We obtain the expected time of ruin and conditional expected time of ruin from the exact finite time ruin probability with exponential claim amounts. Then we introduce the FoR which gives the conditional probability of ruin and the condition is that ruin has not occurred at time t. We analyse the behavior of the FoR function for different initial surpluses over a specific time interval. We also obtain FoR under the excess of loss reinsurance arrangement and examine the effect of reinsurance on the FoR.
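A numerical sketch of the force of ruin as a hazard rate, FoR(t) ≈ [ψ(t + Δ) − ψ(t)] / (Δ (1 − ψ(t))), with the finite-time ruin probability ψ estimated by simulating a classical risk process with exponential claims; the parameters are illustrative and the paper's exact-formula approach is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(10)
u0, c, lam, mu = 5.0, 1.2, 1.0, 1.0   # initial surplus, premium rate, claim rate, mean claim
T, dt, n_paths = 20.0, 0.5, 20_000
grid = np.arange(0.0, T + dt, dt)

# Simulate the surplus process U(t) = u0 + c t - S(t) and record ruin times.
ruin_time = np.full(n_paths, np.inf)
for i in range(n_paths):
    t, claims = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)        # next claim arrival
        if t >= T:
            break
        claims += rng.exponential(mu)          # claim amount
        if u0 + c * t - claims < 0.0:          # surplus falls below zero
            ruin_time[i] = t
            break

# Finite-time ruin probability on the grid and the force of ruin.
psi = np.array([(ruin_time <= t).mean() for t in grid])
force_of_ruin = np.diff(psi) / (dt * (1.0 - psi[:-1]))
for k in [2, 10, 20, 39]:
    print(f"t = {grid[k]:4.1f}  psi = {psi[k]:.3f}  FoR = {force_of_ruin[k]:.4f}")
```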

Keywords: conditional time of ruin, finite time ruin probability, force of ruin, reinsurance

Procedia PDF Downloads 405
1011 The Administration of Infectious Diseases During the COVID-19 Pandemic and the Role of Differential Diagnosis with the Biomarker VB10

Authors: Sofia Papadimitriou

Abstract:

INTRODUCTION: The differential diagnosis between acute viral and bacterial infections is an important cost-effectiveness parameter at the treatment stage: it allows the maximum benefit of the therapeutic intervention to be achieved at minimum cost by ensuring the proper use of antibiotics. The discovery of sensitive and robust molecular diagnostic tests based on the host response to infection has enhanced the accurate diagnosis and differentiation of infections. METHOD: The study used six independent blood sample sets (total = 756), in which human proteins, at the transcription stage, express a different host-network response to viral and bacterial infections. The individual blood samples were subjected to a sequence of computational filters that identify a gene panel corresponding to an autonomous diagnostic score. The data set and the corresponding gene panel underlie a new diagnostic approach, Bangalore-Viral Bacterial (BL-VB). FINDING: We use a blood-based biomarker of 10 genes (Panel-VB) that has important prognostic value for distinguishing viral from bacterial infections, with a weighted average AUROC of 0.97 (95% CI: 0.96-0.99) in eleven independent sample sets (n = 898). We derived a patient-level score (VB10) based on the panel, which has significant diagnostic value with a weighted average AUROC of 0.94 (95% CI: 0.91-0.98) in 2996 patient samples from 56 public data sets from 19 different countries. We also studied VB10 in a new South Indian cohort (BL-VB, n = 56) and found 97% accuracy in confirmed cases of viral and bacterial infections. We found that VB10 (a) accurately identifies the type of infection even in culture-negative, unspecified cases, (b) reflects the patient's clinical recovery, and (c) applies to all age groups, covering a wide range of acute bacterial and viral infections, including those caused by non-specific pathogens. We applied our VB10 score to publicly available COVID-19 data and found that it diagnosed viral infection in patient samples. RESULTS: The results of the study showed the diagnostic power of the VB10 biomarker for the accurate diagnosis of acute infections and the monitoring of recovery. We anticipate that it will help clinicians make decisions about prescribing antibiotics and that it will be integrated into antibiotic stewardship policies. CONCLUSIONS: Overall, we have developed a new RNA-based biomarker and a new blood test to differentiate between viral and bacterial infections, to assist physicians in designing the optimal treatment regimen, to contribute to the proper use of antibiotics, and to reduce the burden of antimicrobial resistance (AMR).

Keywords: acute infections, antimicrobial resistance, biomarker, blood transcriptome, systems biology, classifier diagnostic score

Procedia PDF Downloads 155
1010 Measuring Housing Quality Using Geographic Information System (GIS)

Authors: Silvija ŠIljeg, Ante ŠIljeg, Ivan Marić

Abstract:

Measuring housing quality can be done at an objective and a subjective level using different indicators. In this research, 5 urban and housing indicators, formed from 58 variables from different housing domains, were used. The aims of the research were to measure housing quality based on a GIS approach and to detect critical points of housing, using the example of the Croatian coastal town of Zadar. The purposes of GIS in the research are to generate models of housing quality indexes by standardisation and aggregation of variables and to examine the accuracy of the housing quality index model. The accuracy analysis was done using the example of the variable referring to the availability of educational facilities. By defining weighting coefficients and using different GIS methods, high, middle and low housing quality zones were determined. The obtained results can be of use to town planners, spatial planners and town authorities in the process of making decisions, guidelines, and spatial interventions.
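A minimal sketch of the weighted-index construction described above: variables are standardised, aggregated with weighting coefficients, and cut into low/middle/high quality zones; the variables, weights and data are invented placeholders for the Zadar indicators:

```python
import numpy as np

rng = np.random.default_rng(11)
n_cells = 1000                                    # raster cells / spatial units
raw = {                                           # raw variables per cell (invented)
    "distance_to_school_m": rng.uniform(100, 3000, n_cells),
    "green_area_share":     rng.uniform(0.0, 0.6, n_cells),
    "noise_level_db":       rng.uniform(40, 80, n_cells),
}
weights = {"distance_to_school_m": 0.4, "green_area_share": 0.35, "noise_level_db": 0.25}
benefit = {"distance_to_school_m": False, "green_area_share": True, "noise_level_db": False}

def standardise(x, is_benefit):
    """Rescale to [0, 1] so that higher always means better housing quality."""
    s = (x - x.min()) / (x.max() - x.min())
    return s if is_benefit else 1.0 - s

# Weighted aggregation into a housing quality index, then zoning by terciles.
index = sum(weights[k] * standardise(v, benefit[k]) for k, v in raw.items())
zones = np.digitize(index, np.quantile(index, [1 / 3, 2 / 3]))   # 0=low, 1=middle, 2=high
print("cells per zone (low, middle, high):", np.bincount(zones))
```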

Keywords: housing quality, GIS, housing quality index, indicators, models of housing quality

Procedia PDF Downloads 298