Search results for: data interpolating empirical orthogonal function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29461

26101 Unveiling the Impact of Ultra High Vacuum Annealing Levels on Physico-Chemical Properties of Bulk ZnSe Semiconductor

Authors: Kheira Hamaida, Mohamed Salah Halati

Abstract:

In this paper, our aim is to link, as far as possible, the simulation results with the available experimental ones, focusing on the electronic and optical properties of ZnSe. The total and partial densities of states were predicted using the Full-Potential Linearized Augmented Plane Wave (FP-LAPW) method with the recent Tran-Blaha modified Becke-Johnson (TB-mBJ) exchange-correlation potential (EXC). The upper valence energy (UVE) levels are dominated by the relative contributions of Se-(4p and 3d) states, with a considerable contribution from the electrons of the Zn-2s orbital. Both parts of the dielectric function of w-ZnSe exhibit a noticeable anisotropy. The microscopic origins of the electronic states responsible for the observed peaks in the spectrum are determined by decomposing the spectrum into the individual contributions of the electronic transitions between pairs of bands (Vi, Ci), where Vi is an occupied state in the valence band and Ci is an unoccupied state in the conduction band. X-ray photoelectron spectroscopy (XPS) is an important technique used to probe the homogeneity, stoichiometry, and purity of the title compound. To check the electron transitions derived from the simulations, Reflected Electron Energy Loss Spectroscopy (REELS), a technique of great sensitivity, was used to determine the interband electronic transitions. Within the optical window (Eg), the electron energy states were also determined through Gaussian deconvolution of the photoluminescence spectrum (PLS) recorded at room temperature (RT).
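As an illustration of the Gaussian deconvolution step applied to the room-temperature photoluminescence spectrum, the following sketch fits a synthetic two-peak spectrum; the peak energies, widths, amplitudes and the two-peak count are hypothetical stand-ins, not values measured for ZnSe:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(e, amp, mu, sigma):
    return amp * np.exp(-((e - mu) ** 2) / (2 * sigma ** 2))

def two_gaussians(e, a1, m1, s1, a2, m2, s2):
    return gaussian(e, a1, m1, s1) + gaussian(e, a2, m2, s2)

# synthetic PL spectrum: two overlapping emission bands plus small noise
energy = np.linspace(2.0, 3.2, 400)          # photon energy grid (eV)
rng = np.random.default_rng(0)
spectrum = (gaussian(energy, 1.0, 2.70, 0.05)
            + gaussian(energy, 0.4, 2.85, 0.08)
            + rng.normal(0.0, 0.005, energy.size))

# nonlinear least-squares fit of the two-Gaussian model
p0 = [1.0, 2.65, 0.05, 0.5, 2.9, 0.08]       # rough initial guesses
popt, _ = curve_fit(two_gaussians, energy, spectrum, p0=p0)
centers = sorted([popt[1], popt[4]])          # recovered peak positions (eV)
```

The recovered `centers` identify the individual electron energy states hidden in the broad band, which is the point of the deconvolution.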

Keywords: spectroscopy, WIEN2K, IIB-VIA semiconductors, dielectric function

Procedia PDF Downloads 55
26100 Improving Our Understanding of the in vivo Modelling of Psychotic Disorders

Authors: Zsanett Bahor, Cristina Nunes-Fonseca, Gillian L. Currie, Emily S. Sena, Lindsay D.G. Thomson, Malcolm R. Macleod

Abstract:

Psychosis is ranked as the third most disabling medical condition in the world by the World Health Organization. Despite a substantial amount of research in recent years, available treatments are not universally effective and have a wide range of adverse side effects. Since many clinical drug candidates are identified through in vivo modelling, a deeper understanding of these models, and of their strengths and limitations, might help us understand the reasons for difficulties in psychosis drug development. To provide an unbiased summary of the preclinical psychosis literature, we performed a systematic electronic search of PubMed for publications modelling a psychotic disorder in vivo, identifying 14,721 relevant studies. Double screening of 11,000 publications from this dataset has so far established 2,403 animal studies of psychosis, with schizophrenia being the most commonly modelled disorder (95%). 61% of these models are induced using pharmacological agents. Across all models, only 56% of publications test a therapeutic treatment. We propose a systematic review of these studies to assess the prevalence of reporting of measures to reduce the risk of bias, and a meta-analysis to assess the internal and external validity of these animal models. Our findings are likely to be relevant to future preclinical studies of psychosis, as this generation of strong empirical evidence has the potential to identify weaknesses and areas for improvement, and to inform the refinement of experimental design. Such a detailed understanding of the data which inform what we think we know will help improve the current attrition rate between bench and bedside in psychosis research.

Keywords: animal models, psychosis, systematic review, schizophrenia

Procedia PDF Downloads 276
26099 Fatigue Crack Initiation of Al-Alloys: Effect of Heat Treatment Condition

Authors: M. Benachour, N. Benachour, M. Benguediab

Abstract:

In this investigation, an empirical study was made of fatigue crack initiation in 7075 T6 and 7075 T71 aluminium alloys under constant amplitude loading. At the initiation stage, a local strain approach at the notch was applied. Single Edge Notch Tensile (SENT) specimens with a semi-circular notch were used. Based on the experimental results, the effect of mean stress on fatigue initiation life is highlighted. Results show that fatigue initiation life is affected by both notch geometry and mean stress.

Keywords: fatigue crack initiation, al-alloy, mean stress, heat treatment state

Procedia PDF Downloads 220
26098 DNpro: A Deep Learning Network Approach to Predicting Protein Stability Changes Induced by Single-Site Mutations

Authors: Xiao Zhou, Jianlin Cheng

Abstract:

A single amino acid mutation can have a significant impact on the stability of protein structure. Thus, the prediction of the protein stability change induced by single-site mutations is critical and useful for studying protein function and structure. Here, we present a deep learning network with the dropout technique for predicting protein stability changes upon single amino acid substitution. While using only protein sequence as input, the overall prediction accuracy of the method on a standard benchmark is >85%, which is higher than existing sequence-based methods and is comparable to methods that use not only protein sequence but also tertiary structure, pH value and temperature. The results demonstrate that deep learning is a promising technique for protein stability prediction. The good performance of this sequence-based method makes it a valuable tool for predicting the impact of mutations on the majority of proteins whose experimental structures are not available. Both the downloadable software package and the user-friendly web server (DNpro) that implement the method for predicting protein stability changes induced by amino acid mutations are freely available for the community to use.
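The dropout technique mentioned in the abstract can be sketched in a few lines; this is a generic inverted-dropout forward pass on random placeholder weights, not the DNpro network itself:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(x, p, training=True):
    """Inverted dropout: zero units with probability p during training and
    rescale the survivors by 1/(1-p), so inference needs no correction."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

# one hidden-layer forward pass on a toy encoded-sequence feature vector
x = rng.normal(size=(1, 20))                 # hypothetical sequence features
W1 = rng.normal(size=(20, 8)) * 0.1
h = np.tanh(x @ W1)
h_train = dropout(h, p=0.5, training=True)   # stochastic during training
h_eval = dropout(h, p=0.5, training=False)   # identity at inference time
```

During training roughly half the activations are zeroed and the rest doubled; at evaluation the layer passes activations through unchanged.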

Keywords: bioinformatics, deep learning, protein stability prediction, biological data mining

Procedia PDF Downloads 447
26097 Detection of Efficient Enterprises via Data Envelopment Analysis

Authors: S. Turkan

Abstract:

In this paper, Turkey’s Top 500 Industrial Enterprises data for 2014 were analyzed by data envelopment analysis (DEA). DEA is used to detect efficient decision-making units, such as universities, hospitals and schools, on the basis of their inputs and outputs. The decision-making units in this study are enterprises. To detect efficient enterprises, some financial ratios are selected as inputs and outputs; for this reason, financial indicators related to the productivity of enterprises are considered. The efficient enterprises with weighted foreign-owned capital are detected via the super-efficiency model. According to the results, Mercedes-Benz is the most efficient enterprise with weighted foreign-owned capital in Turkey.
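To illustrate the idea behind DEA efficiency and the super-efficiency model, here is a deliberately simplified single-input, single-output ratio version (full DEA solves a linear program per decision-making unit); the enterprise figures are invented for the example:

```python
def efficiency_scores(inputs, outputs):
    """CCR-style ratio efficiency for the single-input, single-output case:
    each DMU's output/input ratio relative to the best ratio overall."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

def super_efficiency(inputs, outputs, k):
    """Super-efficiency: evaluate DMU k against the best of the *other* DMUs,
    so efficient units can score above 1 and be ranked among themselves."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best_others = max(r for j, r in enumerate(ratios) if j != k)
    return ratios[k] / best_others

# hypothetical enterprises: (capital employed, profit) -- illustrative only
inputs  = [100.0, 250.0, 400.0]
outputs = [ 90.0, 150.0, 220.0]
scores = efficiency_scores(inputs, outputs)
```

The first (efficient) unit scores exactly 1.0 in the plain model but above 1.0 under super-efficiency, which is what allows a single "most efficient" enterprise to be singled out.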

Keywords: data envelopment analysis, super efficiency, logistic regression, financial ratios

Procedia PDF Downloads 315
26096 Simplified Empirical Method for Predicting Liquefaction Potential and Its Application to Kaohsiung Areas in Taiwan

Authors: Darn H. Hsiao, Zhu-Yun Zheng

Abstract:

Taiwan is located between the Eurasian and Philippine Sea plates, so earthquakes occur frequently. The coastal plains in western Taiwan are alluvial plains, and the soils of the alluvium come mostly from the Lao-Shan belt in the central mountainous area of southern Taiwan, derived largely from sandstone/shale and slate. A previous investigation found that the soils in the Kaohsiung area of southern Taiwan are mainly composed of slate, shale, quartz, low-plasticity clay, silt, silty sand and so on. Past earthquakes also show that the soil in Kaohsiung is highly susceptible to subsidence due to liquefaction, and the resulting loss of bearing capacity damages buildings. In this study, borehole data from nine districts in the Love River Basin in the city center were used, and factors affecting liquefaction, including the fines content (FC), standard penetration test N value (SPT N), the thickness of the clay layer near the ground surface, the thickness of potentially liquefiable soil and the groundwater level, were examined for their influence on liquefaction potential. The results show that the liquefaction potential is higher in the areas near the riverside, in the backfill area, and in the western part of the study area. This paper also uses old paleo-geological maps and soil particle-size distribution curves for comparison with the LPI map calculated from the analysis results. After all the parameters were studied for five sub-zones of the Love River Basin by the maximum-minimum method, it was found that the standard penetration test N value and the thickness of the clay layer are the most influential factors.
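A minimal sketch of how a liquefaction potential index (LPI) map value could be computed from a factor-of-safety profile, following the classic Iwasaki formulation; the paper does not give its exact LPI recipe, so treat this as an assumption:

```python
def liquefaction_potential_index(layers):
    """Iwasaki LPI: integrate F(z) * w(z) over the top 20 m of soil, where
    F = 1 - FS when the factor of safety FS < 1 (else 0) and w(z) = 10 - 0.5 z.
    `layers` is a list of (top_depth_m, bottom_depth_m, FS) tuples."""
    lpi = 0.0
    for top, bottom, fs in layers:
        bottom = min(bottom, 20.0)           # LPI only counts the top 20 m
        if bottom <= top:
            continue
        severity = max(0.0, 1.0 - fs)        # F(z): zero for safe layers
        z_mid = 0.5 * (top + bottom)         # midpoint rule is exact for linear w
        lpi += severity * (10.0 - 0.5 * z_mid) * (bottom - top)
    return lpi
```

A fully liquefiable profile (FS = 0 from 0 to 20 m) yields the theoretical maximum LPI of 100, while a profile with FS ≥ 1 everywhere scores 0, which is how the "high liquefaction potential areas" in the map are graded.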

Keywords: liquefaction, western Taiwan, liquefaction potential map, high liquefaction potential areas

Procedia PDF Downloads 111
26095 The Sociocultural and Critical Theories under the Empiricism of a Study Abroad Program

Authors: Magda Silva

Abstract:

This paper presents the sociocultural and critical theories used in the creation of a study abroad program in Brazil, as well as the successful results obtained over the fourteen years of experience provided by the program in distinct regions of Brazil. The program maximizes students’ acquisition of the Portuguese language and affords them in-depth intercultural and intracultural competence through on-site studies in cosmopolitan Rio de Janeiro, Afro-heritage Salvador da Bahia, and Amazonian Belém do Pará. The program provides the means to acknowledge the presence, influence, similarities, and differences of Portuguese-speaking Brazil in Latin America.

Keywords: study abroad, critical thinking, sociocultural theory, foreign language, empirical, theoretical

Procedia PDF Downloads 409
26094 Intelligent Process Data Mining for Monitoring for Fault-Free Operation of Industrial Processes

Authors: Hyun-Woo Cho

Abstract:

Real-time fault monitoring and diagnosis of large-scale production processes is helpful and necessary for operating industrial processes safely and efficiently while producing good final product quality. Unusual and abnormal events may have a serious impact on the process, such as malfunctions or breakdowns. This work seeks to utilize process measurement data obtained on-line for the safe and fault-free operation of industrial processes. To this end, the proposed intelligent process data monitoring framework was evaluated on a simulated process. The monitoring scheme extracts the fault pattern in a reduced space for reliable data representation. Moreover, this work compares linear and nonlinear techniques for the monitoring purpose. The nonlinear technique produced more reliable monitoring results and outperformed the linear methods. The adoption of the qualitative monitoring model helps to reduce the sensitivity of the fault pattern to noise.
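A linear variant of such a monitoring scheme (PCA in a reduced space with a Hotelling's T² limit) can be sketched as follows on simulated data; the simulated process and the fault magnitude are illustrative assumptions, not the paper's benchmark:

```python
import numpy as np

rng = np.random.default_rng(7)

# "normal operation" training data: 2 latent factors drive 5 measured variables
latent = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 5))
X = latent @ loadings + 0.1 * rng.normal(size=(200, 5))

# standardize, then PCA via SVD (the linear reduced-space model)
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 2
P = Vt[:k].T                               # retained loadings
comp_var = S[:k] ** 2 / (len(Xs) - 1)      # variance captured per component

def hotelling_t2(xs):
    """Hotelling's T^2 statistic of a standardized sample in the reduced space."""
    t = xs @ P
    return float(np.sum(t ** 2 / comp_var))

limit = np.percentile([hotelling_t2(row) for row in Xs], 99)  # empirical 99% limit
fault = Xs[0] + 20 * P[:, 0]               # simulated fault along a principal direction
```

A sample shifted far along a principal direction exceeds the control limit, which is the basic alarm logic; a nonlinear (e.g. kernel) variant replaces the projection step.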

Keywords: process data, data mining, process operation, real-time monitoring

Procedia PDF Downloads 623
26093 Modeling Spatio-Temporal Variation in Rainfall Using a Hierarchical Bayesian Regression Model

Authors: Sabyasachi Mukhopadhyay, Joseph Ogutu, Gundula Bartzke, Hans-Peter Piepho

Abstract:

Rainfall is a critical component of climate governing vegetation growth and production, forage availability and quality for herbivores. However, reliable rainfall measurements are not always available, making it necessary to predict rainfall values for particular locations through time. Predicting rainfall in space and time can be a complex and challenging task, especially where the rain gauge network is sparse and measurements are not recorded consistently for all rain gauges, leading to many missing values. Here, we develop a flexible Bayesian model for predicting rainfall in space and time and apply it to Narok County, situated in southwestern Kenya, using data collected at 23 rain gauges from 1965 to 2015. Narok County encompasses the Maasai Mara ecosystem, the northern-most section of the Mara-Serengeti ecosystem, famous for its diverse and abundant large mammal populations and spectacular migration of enormous herds of wildebeest, zebra and Thomson's gazelle. The model incorporates geographical and meteorological predictor variables, including elevation, distance to Lake Victoria and minimum temperature. We assess the efficiency of the model by comparing it empirically with the established Gaussian process, Kriging, simple linear and Bayesian linear models. We use the model to predict total monthly rainfall and its standard error for all 5 * 5 km grid cells in Narok County. Using the Monte Carlo integration method, we estimate seasonal and annual rainfall and their standard errors for 29 sub-regions in Narok. Finally, we use the predicted rainfall to predict large herbivore biomass in the Maasai Mara ecosystem on a 5 * 5 km grid for both the wet and dry seasons. We show that herbivore biomass increases with rainfall in both seasons. The model can handle data from a sparse network of observations with many missing values and performs at least as well as or better than four established and widely used models, on the Narok data set. 
The model produces rainfall predictions consistent with expectation and in good agreement with the blended station and satellite rainfall values. The predictions are precise enough for most practical purposes. The model is very general and applicable to other variables besides rainfall.
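The Monte Carlo integration step used to obtain seasonal totals and their standard errors from monthly predictions can be sketched as follows; the monthly means and standard errors are invented, not Narok values:

```python
import numpy as np

rng = np.random.default_rng(2015)

# hypothetical monthly rainfall predictions for a wet season (mm) with standard errors
means = np.array([80.0, 120.0, 150.0, 110.0])
ses = np.array([15.0, 20.0, 25.0, 18.0])

# Monte Carlo integration: sample each month, sum the draws to seasonal totals
draws = rng.normal(means, ses, size=(100_000, means.size))
totals = draws.sum(axis=1)
seasonal_mean = totals.mean()   # estimate of the seasonal total
seasonal_se = totals.std()      # its standard error
```

For independent months the simulated mean converges to the sum of the monthly means and the standard error to the root of the summed variances; the sampling approach generalizes to the correlated posterior draws a hierarchical Bayesian model actually produces.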

Keywords: non-stationary covariance function, gaussian process, ungulate biomass, MCMC, maasai mara ecosystem

Procedia PDF Downloads 280
26092 Pay Per Click Attribution: Effects on Direct Search Traffic and Purchases

Authors: Toni Raurich-Marcet, Joan Llonch-Andreu

Abstract:

This research focuses on the relationship between Search Engine Marketing (SEM) and traditional advertising. The dominant assumption is that SEM does not build brand awareness and acts only within the session, as if it were simply a cost of manufacturing the product being sold. The study is methodologically developed using an experiment designed to analyze the billboard effect. The research allowed the cross-linking of theoretical and empirical knowledge on digital marketing. By measuring brand awareness and its improvement, this paper validates that SEM generates brand retention just as traditional advertising would. This changes the way performance and brand campaigns are split within marketing departments, effectively rebalancing budgets moving forward.

Keywords: attribution, performance marketing, SEM, marketplaces

Procedia PDF Downloads 119
26091 Share Pledging and Financial Constraints in China

Authors: Zijian Cheng, Frank Liu, Yupu Sun

Abstract:

The relationship between the intensity of share pledging activities and the level of financial constraint in publicly listed firms in China is examined in this paper. Empirical results show that the high financial constraint level may motivate insiders to use share pledging as an alternative funding source and an expropriation mechanism. Share collateralization can cause a subsequently more constrained financing condition. Evidence is found that share pledging made by the controlling shareholder is likely to mitigate financial constraints in the following year. Research findings are robust to alternative measures and an instrumental variable for dealing with endogeneity problems.

Keywords: share pledge, financial constraint, controlling shareholder, dividend policy

Procedia PDF Downloads 150
26090 Quantifying Meaning in Biological Systems

Authors: Richard L. Summers

Abstract:

The advanced computational analysis of biological systems is becoming increasingly dependent upon an understanding of the information-theoretic structure of the materials, energy and interactive processes that comprise those systems. The stability and survival of these living systems are fundamentally contingent upon their ability to acquire and process the meaning of information concerning the physical state of their biological continuum (biocontinuum). The drive for adaptive system reconciliation of a divergence from steady state within this biocontinuum can be described by an information metric-based formulation of the process for actionable knowledge acquisition, one that incorporates the axiomatic inference of Kullback-Leibler information minimization driven by survival replicator dynamics. If the mathematical expression of this process is the Lagrangian integrand for any change within the biocontinuum, then it can also be considered an action functional for the living system. In Lyapunov's direct method, such a summarizing mathematical formulation of global system behavior, based on the driving forces of energy currents and constraints within the system, can serve as a platform for the analysis of stability. As the system evolves in time in response to biocontinuum perturbations, the summarizing function conveys information about its overall stability. This stability information portends survival and therefore has absolute existential meaning for the living system. The first derivative of the Lyapunov energy information function will have a negative trajectory toward the system's steady state if the driving force is dissipating; by contrast, system instability leading to dissolution will have a positive trajectory. The direction and magnitude of the trajectory vector then serve as a quantifiable signature of the meaning associated with the living system's stability information, homeostasis and survival potential.

Keywords: meaning, information, Lyapunov, living systems

Procedia PDF Downloads 120
26089 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model

Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You

Abstract:

The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications and influences in teleconferencing, hearing aids, machine speech recognition and so on. The sounds received are usually noisy. Identifying the sounds of interest and obtaining clear sounds in such an environment is a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the under-determined blind source separation problem. The method is mainly divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model is proposed. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper develops an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms, but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. Simulation results show that the approach in this paper not only improves the accuracy of estimation, but also applies to any mixing matrix.
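The clustering stage of mixing matrix estimation can be illustrated for the paper's four-sources/two-channels setting; this toy uses idealized one-source-at-a-time sparsity and a plain k-means on observation angles rather than the proposed potential-function method:

```python
import numpy as np

rng = np.random.default_rng(1)

# ground-truth 2x4 mixing matrix with unit-norm, well-separated columns
col_angles = np.array([0.3, 0.9, 1.5, 2.1])            # radians in [0, pi)
A = np.vstack([np.cos(col_angles), np.sin(col_angles)])

# idealized sparsity: at each instant exactly one of the 4 sources is active
n = 2000
active = rng.integers(0, 4, n)
s = np.zeros((4, n))
s[active, np.arange(n)] = rng.normal(0.0, 1.0, n)
x = A @ s + 0.01 * rng.normal(size=(2, n))             # two observed mixtures

# each high-energy observation lies along one column of A;
# fold its direction angle into [0, pi) and cluster the angles
energy = np.linalg.norm(x, axis=0)
theta = (np.arctan2(x[1], x[0]) % np.pi)[energy > 0.2]

centers = np.quantile(theta, [0.125, 0.375, 0.625, 0.875])  # spread initial centers
for _ in range(30):                                    # tiny 1-D k-means (Lloyd)
    labels = np.argmin(np.abs(theta[:, None] - centers[None, :]), axis=1)
    centers = np.array([theta[labels == j].mean() for j in range(4)])

A_est = np.vstack([np.cos(np.sort(centers)), np.sin(np.sort(centers))])
```

With real speech the sparsity is only approximate and low-SNR samples smear the clusters, which is exactly where a potential-function weighting is claimed to help over plain clustering.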

Keywords: DBSCAN, potential function, speech signal, the UBSS model

Procedia PDF Downloads 125
26088 Stochastic Nuisance Flood Risk for Coastal Areas

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

The U.S. Federal Emergency Management Agency (FEMA) developed flood maps based on experts’ experience and estimates of the probability of flooding. Current flood-risk models evaluate flood risk with regional and subjective measures, without accounting for the impact of torrential rain and nuisance flooding at the neighborhood level. Nuisance flooding occurs in small areas of the community, where a few streets or blocks are routinely impacted. This type of flooding event occurs when a torrential rainstorm combined with high tide and sea level rise temporarily exceeds a given threshold. In South Florida, this threshold is 1.7 ft above Mean Higher High Water (MHHW). The National Weather Service defines torrential rain as rainfall at a rate greater than 0.3 inches per hour or three inches in a single day. Data from the Florida Climate Center for 1970 to 2020 show 371 events with more than 3 inches of rain in a day across 612 months. The purpose of this research is to develop a data-driven method to determine comprehensive analytical damage-avoidance criteria that account for nuisance flood events at the single-family home level. The method developed uses the Failure Mode and Effects Analysis (FMEA) method from the American Society for Quality (ASQ) to estimate the Damage Avoidance (DA) preparation for a 1-day 100-year storm. The Consequence of Nuisance Flooding (CoNF) is estimated from community mitigation efforts to prevent nuisance flooding damage. The Probability of Nuisance Flooding (PoNF) is derived from the frequency and duration of torrential rainfall causing delays and community disruptions to daily transportation, human illnesses, and property damage. Urbanization and population changes are related to the U.S. Census Bureau's annual population estimates.
Data collected by the United States Department of Agriculture (USDA) Natural Resources Conservation Service’s National Resources Inventory (NRI) and locally by the South Florida Water Management District (SFWMD) track the development and land use/land cover changes with time. The intent is to include temporal trends in population density growth and the impact on land development. Results from this investigation provide the risk of nuisance flooding as a function of CoNF and PoNF for coastal areas of South Florida. The data-based criterion provides awareness to local municipalities on their flood-risk assessment and gives insight into flood management actions and watershed development.
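The torrential-rain counts quoted above translate directly into a simple empirical rate; the Poisson step below is an added assumption for illustration, not part of the paper's PoNF derivation:

```python
import math

# Florida Climate Center counts cited in the abstract: 371 days with more than
# 3 inches of rain across 612 months of record (1970-2020)
torrential_events = 371
months_of_record = 612

# empirical rate: expected torrential-rain days per month
monthly_rate = torrential_events / months_of_record

# probability that a given month sees at least one such event, assuming the
# daily counts follow a Poisson process at this rate (an illustrative assumption)
p_month_with_event = 1 - math.exp(-monthly_rate)
```

This gives roughly 0.61 events per month, i.e. nearly a one-in-two chance that any given month contains a torrential-rain day, which motivates treating nuisance flooding as a routine rather than an extreme hazard.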

Keywords: flood risk, nuisance flooding, urban flooding, FMEA

Procedia PDF Downloads 78
26087 Identifying a Drug Addict Person Using Artificial Neural Networks

Authors: Mustafa Al Sukar, Azzam Sleit, Abdullatif Abu-Dalhoum, Bassam Al-Kasasbeh

Abstract:

Use and abuse of drugs by teens is very common and can have dangerous consequences. Drugs contribute to physical and sexual aggression such as assault or rape. Some teenagers regularly use drugs to compensate for depression, anxiety or a lack of positive social skills. Teenage smoking should not be minimized, because tobacco can act as a 'gateway drug' to others (marijuana, cocaine, hallucinogens, inhalants, and heroin). The combination of teenagers' curiosity, risk-taking behavior, and social pressure makes it very difficult to say no. This leads most teenagers to the question: 'Will it hurt to try once?' Nowadays, technological advances are changing our lives very rapidly and adding many technologies that can help us track the risk of drug abuse, such as smartphones, Wireless Sensor Networks (WSNs), the Internet of Things (IoT), etc. Such techniques may help us discover drug abuse early, in order to prevent an aggravation of the drugs' influence on the abuser. In this paper, we have developed a Decision Support System (DSS) for detecting drug abuse using an Artificial Neural Network (ANN); we used a Multilayer Perceptron (MLP) feed-forward neural network in developing the system. The input layer includes 50 variables, while the output layer contains one neuron which indicates whether the person is a drug addict. An iterative process is used to determine the number of hidden layers and the number of neurons in each one. We used multiple experimental models completed with the log-sigmoid transfer function. In particular, 10-fold cross-validation schemes are used to assess the generalization of the proposed system. The experiments achieved 98.42% classification accuracy for correct diagnosis. The data were taken from 184 cases in Jordan according to a set of questions compiled by specialists, and were obtained through the families of drug abusers.
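The forward pass of the described MLP (50 inputs, log-sigmoid transfer, one output neuron) can be sketched with untrained placeholder weights; the hidden-layer size here is arbitrary, since the paper determines it iteratively:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_sigmoid(z):
    """The log-sigmoid (logistic) transfer function named in the abstract."""
    return 1.0 / (1.0 + np.exp(-z))

# architecture from the abstract: 50 inputs, one output neuron;
# the weights below are random placeholders, not trained values
n_in, n_hidden = 50, 10
W1, b1 = rng.normal(0.0, 0.3, (n_in, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(0.0, 0.3, (n_hidden, 1)), np.zeros(1)

def predict(x):
    """Feed-forward pass: probability of addiction and the thresholded label."""
    h = log_sigmoid(x @ W1 + b1)
    y = log_sigmoid(h @ W2 + b2)
    return float(y[0]), bool(y[0] >= 0.5)

score, label = predict(rng.normal(size=n_in))
```

Training (e.g. backpropagation) and the 10-fold cross-validation loop would wrap around this forward pass; the single sigmoid output is what makes the addict/non-addict decision a simple 0.5 threshold.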

Keywords: drug addiction, artificial neural networks, multilayer perceptron (MLP), decision support system

Procedia PDF Downloads 284
26086 Syntax and Words as Evolutionary Characters in Comparative Linguistics

Authors: Nancy Retzlaff, Sarah J. Berkemer, Trudie Strauss

Abstract:

In the last couple of decades, the advent of digitalization of all kinds of data was probably one of the major advances in all fields of study. This paves the way for analysing data even when they come from disciplines with no initial computational tradition. Linguistics, especially, has a rather manual tradition; still, studies that involve the history of language families bear striking similarities to bioinformatics (phylogenetic) approaches. Alignments of words are a fairly well-studied example of applying bioinformatics methods to historical linguistics. In this paper we consider not only alignments of strings, i.e., words, but also alignments of syntax trees of selected Indo-European languages. Based on initial, crude alignments, a sophisticated scoring model is trained on both letters and syntactic features. The aim is to gain a better understanding of which features in two languages are related, i.e., most likely to share the same root. Initially, all words in two languages are pre-aligned with a basic scoring model that primarily aligns the consonants and adjusts them before fitting in the vowels. Mixture models are subsequently used to filter 'good' alignments depending on the alignment length and the number of inserted gaps. Using these selected word alignments, it is possible to align the given syntax trees and consequently find sentences that correspond well to each other across languages. The syntax alignments are then filtered for meaningful scores: 'good' scores contain evolutionary information and are therefore used to train the sophisticated scoring model. Further alignment and training iterations are performed until the scoring model saturates, i.e., barely changes anymore.
An evaluation of the trained scoring model and of how well it captures evolutionarily meaningful information will be given, together with an assessment of sentence alignment compared to possible phrase structure. The method described here may have flaws owing to limited prior information. It may, however, offer a good starting point for studying languages where only little prior knowledge is available and a detailed, unbiased study is needed.
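The letter-level pre-alignment step can be illustrated with a standard Needleman-Wunsch global alignment; the flat match/mismatch/gap scores below are placeholders for the trained scoring model:

```python
def align(a, b, match=1, mismatch=-1, gap=-1):
    """Global (Needleman-Wunsch) alignment score between two words, the kind
    of basic letter-level alignment the iterative pipeline starts from."""
    m, n = len(a), len(b)
    # F[i][j] = best score aligning a[:i] with b[:j]
    F = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        F[i][0] = i * gap
    for j in range(1, n + 1):
        F[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = F[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            F[i][j] = max(sub, F[i - 1][j] + gap, F[i][j - 1] + gap)
    return F[m][n]
```

For the German/English cognate pair "wasser"/"water" the optimal alignment keeps w, a, e, r, pays one mismatch (s/t) and one gap, scoring 2; iterating the training would replace the flat costs with letter-pair scores learned from such alignments.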

Keywords: alignments, bioinformatics, comparative linguistics, historical linguistics, statistical methods

Procedia PDF Downloads 139
26085 Influencing Factors to Mandatory versus Non-Mandatory E-Government Services Adoption in India: An Empirical Study

Authors: Rajiv Kumar, Amit Sachan, Arindam Mukherjee

Abstract:

Government agencies around the world, including in India, are incorporating digital technologies and processes into their day-to-day operations to become more efficient. Despite low internet penetration in India (around 34.8% of the total population), the Government of India has made some public services, such as passport applications and tax filing, accessible online only. This compels citizens to access these mandatory public services online. However, due to the digital divide, not all citizens have equal access to the internet. In light of this, it is interesting to explore how citizens are able to access mandatory online public services. It is important to understand how citizens adopt these mandatory e-government services, and how their adoption behavior differs from, or resembles, that of non-mandatory e-government services. The purpose of this research is to investigate the factors that influence the adoption of mandatory and non-mandatory e-government services in India. A quantitative technique is employed in this study. A conceptual model has been proposed by integrating the influencing factors for adopting e-government services from previous studies. The proposed conceptual model highlights a comprehensive set of potential factors influencing the adoption of e-government services and has been validated with the local context of Indian society in view. An online and paper-based survey was administered; the collected data were analyzed and the results are discussed. A total of 463 valid responses were received and analyzed. The research reveals that the influencing factors are not the same for mandatory and non-mandatory e-government services: some factors influence the adoption of both, while others are relevant to only one of the two.
The research findings may help government or concerned agencies in successfully implementing e-government services.

Keywords: adoption, e-government, India, mandatory, non-mandatory

Procedia PDF Downloads 302
26084 A Machine Learning Approach for the Leakage Classification in the Hydraulic Final Test

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

The widespread use of machine learning applications in production is significantly accelerated by improved computing power and increasing data availability. Predictive quality enables the assurance of product quality by using machine learning models as a basis for decisions on test results. The use of real Bosch production data based on geometric gauge blocks from machining, mating data from assembly and hydraulic measurement data from final testing of directional valves is a promising approach to classifying the quality characteristics of workpieces.

Keywords: machine learning, classification, predictive quality, hydraulics, supervised learning

Procedia PDF Downloads 196
26083 Analysis of Cyber Activities of Potential Business Customers Using Neo4j Graph Databases

Authors: Suglo Tohari Luri

Abstract:

Data analysis is an important aspect of business performance. With the application of artificial intelligence within databases, selecting a suitable database engine for an application design is also very crucial for business data analysis. The application of business intelligence (BI) software to graph databases such as Neo4j has proved highly effective for customer data analysis. Yet a remaining concern is the fact that not all business organizations have Neo4j business intelligence software applications to implement for customer data analysis. Further, those with the BI software often lack personnel with the requisite expertise to use it effectively with the Neo4j database. The purpose of this research is to demonstrate how Neo4j program code alone can be applied for the analysis of e-commerce website customer visits. As the Neo4j database engine is optimized for handling and managing data relationships, with the capability of building high-performance and scalable systems to handle connected data nodes, it ensures that business owners who advertise their products on websites backed by Neo4j are able to determine the number of visitors, and thus know which products are visited at routine intervals, for the necessary decision making. It also helps in identifying the best customer segments for specific goods, so that more emphasis can be placed on advertising those goods on the said websites.

Keywords: data, engine, intelligence, customer, neo4j, database

Procedia PDF Downloads 184
26082 Decision Making System for Clinical Datasets

Authors: P. Bharathiraja

Abstract:

Computer-aided decision-making systems are used to enhance the diagnosis and prognosis of diseases and to assist clinicians and junior doctors in clinical decision making. Medical data used for decision making should be definite and consistent. Data mining and soft computing techniques are used for cleaning the data and for incorporating human reasoning into decision-making systems. Fuzzy rule-based inference can be used for classification in order to incorporate human reasoning into the decision-making process. In this work, missing values are imputed using the mean or mode of the attribute. The data are normalized using min-max normalization to improve the design and efficiency of the fuzzy inference system. The fuzzy inference system is used to handle the uncertainties that exist in the medical data. Equal-width partitioning is used to partition the attribute values into appropriate fuzzy intervals. Fuzzy rules are generated using a class-based associative rule mining algorithm. The system is trained and tested using the heart disease data set from the University of California at Irvine (UCI) Machine Learning Repository. The data were split into training and testing sets using a hold-out approach. From the experimental results, it can be inferred that classification using the fuzzy inference system performs better than trivial IF-THEN rule-based classification approaches. Furthermore, it is observed that the use of fuzzy logic and the fuzzy inference mechanism handles uncertainty and resembles human decision making. The system can be used in the absence of a clinical expert to assist junior doctors and clinicians in clinical decision making.
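Two of the preprocessing steps named above, min-max normalization and equal-width partitioning, can be sketched directly. The attribute values below are invented, and the partitioning here produces crisp interval labels only (the full system would attach fuzzy membership functions to each interval):

```python
# Sketch of min-max normalization followed by equal-width partitioning.
# The cholesterol values are hypothetical examples, not UCI data.

def min_max_normalize(values):
    """Rescale values linearly into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def equal_width_bin(value, n_bins=3):
    """Map a value in [0, 1] to one of n_bins equal-width intervals."""
    return min(int(value * n_bins), n_bins - 1)  # clamp 1.0 into the last bin

cholesterol = [180.0, 210.0, 240.0, 300.0]
normalized = min_max_normalize(cholesterol)
labels = [equal_width_bin(v) for v in normalized]
print(normalized, labels)
```

Each resulting interval index would then correspond to a linguistic term (e.g. low / medium / high) used in the antecedents of the mined fuzzy rules.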

Keywords: decision making, data mining, normalization, fuzzy rule, classification

Procedia PDF Downloads 503
26081 Estimating Bridge Deterioration for Small Data Sets Using Regression and Markov Models

Authors: Yina F. Muñoz, Alexander Paz, Hanns De La Fuente-Mella, Joaquin V. Fariña, Guilherme M. Sales

Abstract:

The primary approaches for estimating bridge deterioration are Markov-chain models and regression analysis. Traditional Markov models have problems estimating the required transition probabilities when a small sample size is used. Often, reliable bridge data have not been collected over long periods, so large data sets may not be available. This study presents an important change to the traditional approach by using the Small Data Method to estimate transition probabilities. The results illustrate that the Small Data Method and the traditional approach both provide similar estimates; however, the former provides results that are more conservative. That is, the Small Data Method provided slightly lower than expected bridge condition ratings compared with the traditional approach. Considering that bridges are critical infrastructure, the Small Data Method, which uses more information and provides more conservative estimates, may be more appropriate when the available sample size is small. In addition, regression analysis was used to calculate bridge deterioration. Condition ratings were determined for bridge groups, and the best regression model was selected for each group. The results obtained were very similar to those obtained using Markov chains; however, more data are desirable for better results.
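The traditional Markov approach criticized above can be sketched with hypothetical counts: transition probabilities are estimated by normalizing observed year-to-year condition-rating transitions, and the state distribution is then projected forward. With only a handful of observations per row, these estimates become unreliable, which is the gap the Small Data Method addresses:

```python
# Sketch with invented counts: estimate transition probabilities from observed
# condition-rating transitions, then project the state distribution forward.

def transition_row(counts):
    """Normalize observed transition counts into probabilities."""
    total = sum(counts)
    return [c / total for c in counts]

# e.g. of 20 bridges observed in "good", 16 stayed and 4 dropped to "fair"
P = [
    transition_row([16, 4, 0]),   # good -> (good, fair, poor)
    transition_row([0, 9, 3]),    # fair -> (good, fair, poor)
    transition_row([0, 0, 1]),    # poor is treated as absorbing
]

def step(dist, P):
    """One inspection cycle: multiply the distribution by the matrix."""
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

dist = [1.0, 0.0, 0.0]            # all bridges start in "good"
for _ in range(2):
    dist = step(dist, P)
print(dist)                        # state distribution after two cycles
```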

Keywords: concrete bridges, deterioration, Markov chains, probability matrix

Procedia PDF Downloads 328
26080 Validation of Visibility Data from Road Weather Information Systems by Comparing Three Data Resources: Case Study in Ohio

Authors: Fan Ye

Abstract:

Adverse weather conditions, particularly those with low visibility, are critical to driving tasks. However, the direct relationship between visibility distance and traffic flow/roadway safety is uncertain due to the limited availability of visibility data. The recent growth in the deployment of Road Weather Information Systems (RWIS) makes segment-specific visibility information available, which can be integrated with other Intelligent Transportation Systems, such as automated warning systems and variable speed limits, to improve mobility and safety. Before applying RWIS visibility measurements in traffic studies and operations, it is critical to validate the data. Therefore, an attempt was made in this paper to examine the validity and viability of RWIS visibility data by comparing visibility measurements among RWIS, airport weather stations, and weather information recorded by police in crash reports, based on Ohio data. The results indicated that RWIS visibility measurements were significantly different from airport visibility data in Ohio, but no conclusion regarding the reliability of RWIS visibility could be drawn given the absence of verified ground truth in the comparisons. It is suggested that more objective methods are needed to validate RWIS visibility measurements, such as continuous in-field measurements of various weather events using calibrated visibility sensors.
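One standard way to test whether two such paired data sources differ significantly, as in the RWIS-versus-airport comparison, is a paired t statistic on the differences. The numbers below are invented for illustration and do not come from the Ohio data:

```python
# Sketch with hypothetical paired visibility readings (km) from two sources.
import math

rwis    = [2.0, 4.5, 1.2, 8.0, 3.3]
airport = [2.6, 5.0, 1.9, 8.4, 3.9]

diffs = [a - b for a, b in zip(rwis, airport)]
n = len(diffs)
mean = sum(diffs) / n
sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
t = mean / (sd / math.sqrt(n))     # compare |t| with the critical value, df = n-1
print(round(t, 2))
```

A significant t only shows the sources disagree; as the abstract notes, without verified ground truth it cannot say which source is wrong.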

Keywords: RWIS, visibility distance, low visibility, adverse weather

Procedia PDF Downloads 237
26079 Design and Simulation of All Optical Fiber to the Home Network

Authors: Rahul Malhotra

Abstract:

Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the new technologies that has emerged in recent years is the Passive Optical Network (PON). This paper demonstrates the simultaneous delivery of triple-play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate.
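The BER criterion behind that trade-off is usually expressed through the standard Q-factor relation, BER ≈ 0.5 · erfc(Q/√2); as the per-user data rate rises, the received Q falls and the BER climbs past the acceptable threshold. A minimal sketch (the Q values are illustrative, not simulation outputs from the paper):

```python
# Standard Q-factor to BER conversion used in optical link analysis.
import math

def ber_from_q(q):
    """BER for a given Q-factor: 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

for q in (7, 6, 5):          # falling Q as the per-user data rate climbs
    print(q, ber_from_q(q))  # Q = 6 corresponds to a BER near 1e-9
```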

Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT

Procedia PDF Downloads 539
26078 Chronic Cognitive Impacts of Mild Traumatic Brain Injury during Aging

Authors: Camille Charlebois-Plante, Marie-Ève Bourassa, Gaelle Dumel, Meriem Sabir, Louis De Beaumont

Abstract:

To the best of our knowledge, there has been little interest in the chronic effects of mild traumatic brain injury (mTBI) on cognition during normal aging. This is rather surprising considering its impact on daily and social functioning. In addition, sustaining an mTBI during late adulthood may amplify the effects of normal biological aging in individuals who consider themselves normal and healthy. The objective of this study was to characterize the persistent neuropsychological repercussions of mTBI sustained during late adulthood, on average 12 months prior to testing. To this end, 35 mTBI patients and 42 controls between the ages of 50 and 69 completed an exhaustive neuropsychological assessment lasting three hours. All mTBI patients were asymptomatic, and all participants had a score ≥ 27 on the MoCA. The evaluation consisted of 20 standardized neuropsychological tests measuring memory, attention, executive and language functions, as well as information processing speed. Performance on tests of visual memory (Brief Visuospatial Memory Test Revised), verbal memory (Rey Auditory Verbal Learning Test and WMS-IV Logical Memory subtest), lexical access (Boston Naming Test), and response inhibition (Stroop) was significantly lower in the mTBI group. These findings suggest that an mTBI sustained during late adulthood induces lasting effects on cognitive function. Episodic memory and executive functions seem to be particularly vulnerable to enduring mTBI effects.

Keywords: cognitive function, late adulthood, mild traumatic brain injury, neuropsychology

Procedia PDF Downloads 161
26077 Troubleshooting Petroleum Equipment Based on Wireless Sensors Based on Bayesian Algorithm

Authors: Vahid Bayrami Rad

Abstract:

In this research, common methods and techniques have been investigated with a focus on intelligent fault-finding and monitoring systems in the oil industry. Remote and intelligent control methods are considered a necessity for implementing various operations in the oil industry, and benefiting from the knowledge that data mining algorithms extract from the vast amounts of data generated is a way to speed up monitoring and troubleshooting operations in today's big oil companies. Therefore, by comparing data mining algorithms and examining their efficiency, structure, and behaviour under different conditions, the proposed Bayesian algorithm, using data clustering together with data analysis and evaluation via a colored Petri net, has provided an applicable and dynamic model from the point of view of reliability and response time. By using this method, it is possible to achieve a dynamic and consistent model of the remote control system, prevent the occurrence of leakage in oil pipelines and refineries, and reduce costs as well as human and financial errors. The statistical data obtained from the evaluation process show increased reliability, availability, and speed in the proposed method compared with previous methods.
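The abstract does not specify the Bayesian model in detail, so the following is only a generic sketch of Bayesian fault classification from sensor readings: a Gaussian naive Bayes rule over hypothetical pressure and flow features, with invented class parameters:

```python
# Illustrative only: the paper's exact Bayesian algorithm is not specified.
# Minimal Gaussian naive Bayes over hypothetical (pressure, flow) readings.
import math

def gaussian(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# per-class (mean, variance) per feature, as if estimated from sensor history
params = {
    "normal": [(50.0, 4.0), (10.0, 1.0)],
    "leak":   [(35.0, 9.0), (14.0, 2.0)],
}
prior = {"normal": 0.9, "leak": 0.1}

def classify(reading):
    """Pick the class maximizing prior times feature likelihoods."""
    score = lambda c: prior[c] * math.prod(
        gaussian(x, m, v) for x, (m, v) in zip(reading, params[c]))
    return max(params, key=score)

print(classify([36.0, 13.5]))  # a pressure drop with raised flow
```

In a deployed system, the classified fault state would then drive the colored Petri net model of the monitoring workflow.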

Keywords: wireless sensors, petroleum equipment troubleshooting, Bayesian algorithm, colored Petri net, RapidMiner, data mining, reliability

Procedia PDF Downloads 52
26076 Applying the Quad Model to Estimate the Implicit Self-Esteem of Patients with Depressive Disorders: Comparing the Psychometric Properties with the Implicit Association Test Effect

Authors: Yi-Tung Lin

Abstract:

Researchers commonly assess implicit self-esteem with the Implicit Association Test (IAT). The IAT's measure, often referred to as the IAT effect, indicates the strength of the automatic preference for the self relative to others, which is often considered an index of implicit self-esteem. However, based on dual-process theory, the IAT does not rely entirely on automatic processes; it is also influenced by controlled processes. The present study, therefore, analyzed the IAT data with the Quad model, separating four processes underlying IAT performance: the likelihood that an automatic association is activated by the stimulus in the trial (AC); that a correct response is discriminated in the trial (D); that the automatic bias is overcome in favor of a deliberate response (OB); and that, when the association is not activated and the individual fails to discriminate a correct answer, a guessing or response bias drives the response (G). The AC and G processes are automatic, while the D and OB processes are controlled. The AC parameter is considered the strength of the association activated by the stimulus, which reflects what implicit measures of social cognition aim to assess. The stronger the automatic association between self and positive valence, the more likely it is to be activated by a relevant stimulus. Therefore, the AC parameter was used as the index of implicit self-esteem in the present study. Meanwhile, the relationship between implicit self-esteem and depression has not been fully investigated. The cognitive theory of depression assumes that a negative self-schema is crucial in depression. From this point of view, implicit self-esteem should be negatively associated with depression. However, the results among empirical studies are inconsistent.
The aims of the present study were to examine the psychometric properties of the AC parameter (i.e., test-retest reliability and its correlations with explicit self-esteem and depression) and to compare them with those of the IAT effect. The present study recruited 105 patients with depressive disorders, who completed the Rosenberg Self-Esteem Scale, the Beck Depression Inventory-II, and the IAT at pretest. After at least 3 weeks, the participants completed the second IAT. The data were analyzed with the latent-trait multinomial processing tree model (latent-trait MPT) using the TreeBUGS package in R. The results showed that the latent-trait MPT had a satisfactory model fit. The effect sizes of the test-retest reliability of the AC parameter and the IAT effect were medium (r = .43, p < .0001) and small (r = .29, p < .01), respectively. Only the AC parameter showed a significant correlation with explicit self-esteem (r = .19, p < .05). Neither of the two indexes was correlated with depression. Collectively, the AC parameter was a more satisfactory index of implicit self-esteem than the IAT effect. The present study also supports the finding that implicit self-esteem is not correlated with depression.
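The four parameters combine as branch probabilities of a processing tree. The sketch below follows the commonly described Quad-model tree structure (an assumption here, since the abstract does not restate the equations), with entirely hypothetical parameter values:

```python
# Sketch of Quad-model branch probabilities (tree structure as commonly
# described for the Quad model; parameter values are hypothetical).

def quad_correct(ac, d, ob, g, compatible):
    """P(correct) on a trial where the activated association either agrees
    with (compatible) or opposes (incompatible) the correct response."""
    if compatible:
        # association wins or detection succeeds or a lucky guess
        return ac + (1 - ac) * d + (1 - ac) * (1 - d) * g
    # association must be overcome (OB) when it conflicts with detection
    return ac * d * ob + (1 - ac) * d + (1 - ac) * (1 - d) * (1 - g)

ac, d, ob, g = 0.3, 0.8, 0.6, 0.5   # hypothetical estimates
p_comp = quad_correct(ac, d, ob, g, True)
p_incomp = quad_correct(ac, d, ob, g, False)
print(p_comp, p_incomp)  # compatible trials should yield more correct responses
```

Fitting the latent-trait MPT amounts to estimating AC, D, OB, and G per participant from the observed correct/error frequencies on each trial type.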

Keywords: cognitive modeling, implicit association test, implicit self-esteem, quad model

Procedia PDF Downloads 112
26075 Endoscopic Pituitary Surgery: Learning Curve and Nasal Quality of Life

Authors: Martin Dupuy, Solange Grunenwald, Pierre-Louis Colombo, Laurence Mahieu, Pomone Richard, Philippe Bartoli

Abstract:

Endonasal endoscopic trans-sphenoidal surgery for pituitary tumours has become a mainstay of treatment over the last two decades. Although it is generally accepted that there is no significant difference between the endoscopic and microscopic approaches in surgical outcomes (endocrine and ophthalmologic status), nasal morbidity seems to favour endoscopic procedures. Minimally invasive endoscopic surgery requires an operative learning curve before a surgeon achieves efficiency. This learning curve is now well documented for surgical outcomes and complication rates; however, few data are available on nasal morbidity. The aim of our series is to document operative experience and nasal quality of life (NQOL) after endoscopic trans-sphenoidal surgery. The prospective pituitary surgical cohort consisted of 525 consecutive patients referred to our Skull Base Diseases Department. Endoscopic procedures were performed by a single neurosurgeon using a uninostril approach. NQOL was evaluated using the Sino-Nasal Outcome Test (SNOT-22), the Anterior Skull Base Nasal Inventory (ASBNI), and the Skull Base Inventory Score (SBIS). Data were collected before surgery, during the hospital stay, and 3 months after surgery. The first 70 patients were compared with the last 70 patients. There was no significant difference in before-versus-after-surgery scores for SNOT-22, ASBNI, and SBIS across the single surgeon's learning curve. Our series demonstrates that, in our institution, there is no statistically significant learning curve for NQOL after uninostril endoscopic pituitary surgery. Careful progression through the sinonasal structures with very limited mucosal incision is associated with minimal morbidity and preserves nasal function. A conservative and minimally invasive approach can be achieved early in the learning curve.

Keywords: pituitary surgery, quality of life, minimal invasive surgery, learning curve, pituitary tumours, skull base surgery, endoscopic surgery

Procedia PDF Downloads 107
26074 Loss of the Skin Barrier after Dermal Application of the Low Molecular Methyl Siloxanes: Volatile Methyl Siloxanes, VMS Silicones

Authors: D. Glamowska, K. Szymkowska, K. Mojsiewicz-Pieńkowska, K. Cal, Z. Jankowski

Abstract:

Introduction: The integrity of the outermost layer of the skin (stratum corneum) is vital in controlling the penetration of various compounds, including toxic substances. The barrier function of the skin depends on its structure: the barrier function of the stratum corneum is provided by patterned lipid lamellae (bilayers). However, many substances, including the low molecular weight methyl siloxanes (volatile methyl siloxanes, VMS), can compromise the skin barrier by damaging the stratum corneum structure. VMS belong to the silicones, which are widely used in the pharmaceutical as well as the cosmetic industry. Silicones serve as ingredients or excipients in medicinal products and as excipients in personal care products. Given the significant human exposure to this group of compounds, their toxicology and the safety assessment of products containing them are important. Silicones are generally considered non-toxic substances, but there are some data on their negative effects on living organisms after inhaled or oral exposure. The transdermal route, however, has not been described in the literature as a possible alternative route of penetration. The aim of the study was to verify whether VMS can penetrate the stratum corneum, permeate into the deeper layers of the skin (epidermis and dermis), and pass into the acceptor fluid. Methods: The research methodology was developed based on OECD and WHO guidelines. In the ex-vivo study, fluorescence microscopy and ATR FT-IR spectroscopy were used. Franz-type diffusion cells were used to apply the VMS to samples of human skin (A = 0.65 cm²) for 24 h. The stratum corneum at the application site was tape-stripped. After separation of the epidermis, the relevant dyes (fluorescein, sulforhodamine B, rhodamine B hexyl ester) were applied, and observations were carried out under the microscope.
To confirm the penetration and permeation of the cyclic and linear VMS, and thus the presence of silicone in the individual layers of the skin, ATR FT-IR spectra of samples after application of silicone and of H2O (control sample) were recorded. The research included a comparison of the intensity of bands at positions characteristic of silicones (1263 cm-1, 1052 cm-1 and 800 cm-1). Results and Conclusions: The results show that cyclic and linear VMS are able to overcome the barrier of the skin. Damage to the corneocytes of the stratum corneum was observed, attributable to distinct disturbances in the lipid structure of the stratum corneum. The presence of cyclic and linear VMS was identified in the stratum corneum, epidermis, and dermis by both fluorescence microscopy and ATR FT-IR spectroscopy. This confirms that cyclic and linear VMS can penetrate the stratum corneum and permeate through the layers of human skin. They also cause changes in the structure of the skin. The results point to possible absorption of VMS with linear and cyclic structures into the blood and lymphatic vessels.

Keywords: low molecular methyl siloxanes, volatile methyl siloxanes, linear and cyclic siloxanes, skin penetration, skin permeation

Procedia PDF Downloads 331
26073 Exploring the Motivations That Drive Paper Use in Clinical Practice Post-Electronic Health Record Adoption: A Nursing Perspective

Authors: Sinead Impey, Gaye Stephens, Lucy Hederman, Declan O'Sullivan

Abstract:

Continued paper use in the clinical area post-Electronic Health Record (EHR) adoption is regularly linked to hardware and software usability challenges. Although paper is used as a workaround to circumvent challenges, including limited availability of a computer, this perspective does not consider the important role that paper, such as the nurses' handover sheet, plays in practice. The purpose of this study is to test the hypothesis that paper use continues post-EHR adoption because paper serves both as a cognitive tool (assisting with workflow) and as a compensation tool (circumventing usability challenges). Distinguishing these different motivations for continued paper use could assist future evaluations of electronic record systems. Methods: Qualitative data were collected from three clinical care environments (ICU, general ward, and specialist day-care) that had used an electronic record for at least 12 months. Data were collected through semi-structured interviews with 22 nurses. Interviews were transcribed, themes were extracted using an inductive bottom-up coding approach, and a thematic index was constructed. Findings: All nurses interviewed continued to use paper post-EHR adoption. While the two distinct motivations for paper use post-EHR adoption, paper as a cognitive tool and paper as a compensation tool, were confirmed by the data, a further finding was that the two uses overlap. That is, paper used as a compensation tool could also be adapted to function as a cognitive aid because of its nature (easy to access and annotate), and vice versa. Rather than presenting paper persistence as having two distinct motivations, it is more useful to describe it as lying on a continuum, with the compensation tool and the cognitive tool at either pole. Paper as a cognitive tool refers to pages such as the nurses' handover sheet. These did not form part of the patient's record, although information could be transcribed from one to the other.
Findings suggest that although the patient record was digitised, handover sheets did not fall within this remit. These personal pages continued to be useful post-EHR adoption for capturing personal notes or patient information and so continued to be incorporated into the nurses' work. By comparison, paper used as a compensation tool, such as pre-printed care plans stored in the patient's record, appears to have arisen in reaction to usability challenges. In these instances, it is expected that paper use could reduce or cease once the underlying problem is addressed. There is a danger that, because paper affords nurses a temporary information platform that is mobile and easy to access and annotate, its use could become embedded in clinical practice. Conclusion: Paper presents a utility to nursing, as a cognitive tool, a compensation tool, or a combination of both. By fully understanding this utility and its nuances, organisations can avoid evaluating all incidences of paper use (post-EHR adoption) as arising from usability challenges. Instead, suitable remedies for paper persistence can be targeted at the root cause.

Keywords: cognitive tool, compensation tool, electronic record, handover sheet, nurse, paper persistence

Procedia PDF Downloads 420
26072 Diffusive Transport of VOCs Through Composite Liners

Authors: Christina Jery, R. K. Anjana, D. N. Arnepalli, R. Sobha

Abstract:

Modern landfills employ a composite liner consisting of a geomembrane overlying a compacted clay liner (CCL) or a geosynthetic clay liner (GCL) as a barrier system. The primary function of a barrier system is to control contaminant transport from the leachate (dissolved phase) and landfill gas (vapour phase) out of the landfill, thereby minimizing the environmental impact. This study investigates the diffusive migration of VOCs through composite liners. VOCs, known hazardous air pollutants, often exist in both the vapour phase and the dissolved phase, and are known to diffuse readily through polymeric geomembranes. The objective of the research is to develop a comprehensive data set of the parameters involved in the diffusion of VOCs through the composite liner (1.5 mm HDPE geomembrane overlying a 30 mm compacted clay layer). For this purpose, the study aims to develop a new experimental setup for determining the diffusion characteristics. The key parameters of diffusion (partitioning, diffusion, and permeation coefficients) are examined. The diffusion tests are carried out in both the aqueous and vapour phases. Finally, an attempt is also made to study the effect of low temperature on the diffusion characteristics.
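The three key parameters relate through Fick's first law: at steady state, the mass flux through the geomembrane is J = Sgf · Dg · Δc / L, where the permeation coefficient Pg = Sgf · Dg is the product of the partitioning and diffusion coefficients. A minimal sketch with hypothetical values (only the 1.5 mm thickness comes from the abstract):

```python
# Sketch of steady-state diffusive flux through a geomembrane via Fick's
# first law. All parameter values except the thickness are hypothetical.

S_gf = 40.0        # partitioning coefficient, geomembrane/fluid (dimensionless)
D_g  = 2.0e-13     # diffusion coefficient in the geomembrane (m^2/s)
L    = 1.5e-3      # geomembrane thickness (m), per the 1.5 mm HDPE liner
dc   = 5.0e-3      # concentration difference across the liner (kg/m^3)

P_g = S_gf * D_g   # permeation coefficient (m^2/s)
J = P_g * dc / L   # steady-state mass flux (kg per m^2 per s)
print(P_g, J)
```

Measuring any two of the three coefficients in the new experimental setup therefore fixes the third, which is one reason the full parameter set is worth tabulating.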

Keywords: diffusion, sorption, organic compounds, composite liners, geomembrane

Procedia PDF Downloads 350