Search results for: continuous traumatic event
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3684

3204 Integral Abutment Bridge: A Study on Types, Importance, Limitations and Design Guidelines

Authors: Babitha Elizabeth Philip

Abstract:

This paper presents a general study of bridges without expansion joints. Integral Abutment Bridges (IABs) fall into this category: they have a continuous deck, and the girders are integrated into the abutments. They are among the most cost-effective systems in terms of construction, maintenance, and longevity. The main advantage of an IAB is its corrosion resistance, since water is not allowed to pass through the structure. Other attractions of integral bridges are their simple and rapid construction and a smooth, uninterrupted deck that provides a safe ride. Damage to the abutments can also be avoided to a great extent thanks to better load distribution at the bridge ends. Damage due to improper drainage is not seen in IABs because their approach slabs are properly drained, which eliminates the possibility of erosion of the abutment backfill and of freeze-thaw damage resulting from saturated backfill.

Keywords: continuous bridge, integral abutment bridge, joint bridge, life cycle cost, soil interaction

Procedia PDF Downloads 440
3203 A Simulation of Patient Queuing System on Radiology Department at Tertiary Specialized Referral Hospital in Indonesia

Authors: Yonathan Audhitya Suthihono, Ratih Dyah Kusumastuti

Abstract:

The radiology department of a tertiary referral hospital faces service operation challenges such as large and highly variable patient arrivals, which increase the probability of patient queuing. During the COVID-19 pandemic, it is mandatory to apply social distancing protocols in the radiology department, so a strategy to prevent the accumulation of patients at one spot is required. The aim of this study is to identify an alternative solution that can reduce patients' waiting time in the radiology department. Discrete event simulation (DES) is used for this study by constructing several improvement scenarios with Arena simulation software. Statistical analysis is used to test the validity of the base-case scenario model and to investigate the performance of the improvement scenarios. The results show that the selected scenario reduces patient waiting time significantly, which leads to more efficient service in the radiology department, allows patients to be served more effectively, and thus increases patient satisfaction. The results of the simulation can be used by hospital management to improve the operational performance of the radiology department.
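
The mechanics of such a simulation can be illustrated outside Arena. Below is a minimal single-server discrete-event queue in Python; this is a sketch of the technique only, not the study's model, and the arrival and service parameters are invented for illustration:

```python
import random

def simulate_queue(n_patients, mean_interarrival, mean_service, seed=0):
    """Minimal single-server discrete-event queue: returns mean waiting time."""
    rng = random.Random(seed)
    arrival = 0.0          # clock time of the current patient's arrival
    server_free_at = 0.0   # clock time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_patients):
        arrival += rng.expovariate(1.0 / mean_interarrival)
        start = max(arrival, server_free_at)   # wait if the server is busy
        total_wait += start - arrival
        server_free_at = start + rng.expovariate(1.0 / mean_service)
    return total_wait / n_patients

# An improvement scenario (faster effective service) should cut waiting time.
base = simulate_queue(1000, mean_interarrival=5.0, mean_service=4.0)
improved = simulate_queue(1000, mean_interarrival=5.0, mean_service=2.0)
```

Comparing `base` and `improved` under a common random seed mirrors how improvement scenarios are evaluated against a base case.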

Keywords: discrete event simulation, hospital management, patient queuing model, radiology department services

Procedia PDF Downloads 106
3202 A General Framework for Measuring the Internal Fraud Risk of an Enterprise Resource Planning System

Authors: Imran Dayan, Ashiqul Khan

Abstract:

Internal corporate fraud, i.e., fraud carried out by internal stakeholders of a company, affects the well-being of the organisation just like its external counterpart. Even if such an act is carried out for the short-term benefit of a corporation, it is ultimately harmful to the entity in the long run. Internal fraud often relies upon aberrations from usual business processes. Business processes are the lifeblood of a company in the modern managerial context; they are developed and fine-tuned over time as a corporation grows through its life stages. Modern corporations have embraced technological innovations in their business processes, with Enterprise Resource Planning (ERP) systems at the heart of such processes. Since ERP systems record a huge amount of data in their event logs, these logs are a treasure trove for anyone trying to detect fraudulent activities hidden within day-to-day business operations and processes. This research utilises the ERP systems in place within corporations to assess the likelihood of prospective internal fraud by developing a framework that measures fraud risk through process mining techniques, thereby identifying risky designs and loose ends within business processes. The framework helps not only in identifying existing cases of fraud in the event log, but also in signalling the overall riskiness of certain business processes, drawing attention to a redesign of such processes to reduce the chance of future internal fraud while improving internal control within the organisation.
The research adds value by applying the concepts of process mining to modern records of business processes, namely ERP event logs, and develops a framework that should be useful to internal stakeholders for strengthening internal control, as well as providing external auditors with a tool to use in cases of suspicion. The research demonstrates its usefulness through case studies conducted on large corporations with complex business processes and an ERP in place.
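
As a toy illustration of the underlying idea (not the paper's framework), a direct-follows conformance check over an ERP-style event log can flag traces that deviate from an allowed process model; the process steps and case data below are hypothetical:

```python
# Allowed direct-follows transitions of a simplified purchase-to-pay process.
ALLOWED = {
    ("create_po", "approve_po"),
    ("approve_po", "receive_goods"),
    ("receive_goods", "receive_invoice"),
    ("receive_invoice", "pay_invoice"),
}

def deviating_traces(event_log):
    """Return case ids whose trace contains a transition outside the model."""
    flagged = []
    for case_id, trace in event_log.items():
        for a, b in zip(trace, trace[1:]):
            if (a, b) not in ALLOWED:
                flagged.append(case_id)
                break
    return flagged

log = {
    "case1": ["create_po", "approve_po", "receive_goods",
              "receive_invoice", "pay_invoice"],
    # Payment without goods receipt: a classic internal-fraud red flag.
    "case2": ["create_po", "approve_po", "receive_invoice", "pay_invoice"],
}
```

Real process-mining toolchains discover the model from the log itself and score deviations statistically rather than with a hand-written whitelist.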

Keywords: enterprise resource planning, fraud risk framework, internal corporate fraud, process mining

Procedia PDF Downloads 322
3201 Applied Complement of Probability and Information Entropy for Prediction in Student Learning

Authors: Kennedy Efosa Ehimwenma, Sujatha Krishnamoorthy, Safiya Al-Sharji

Abstract:

The probability of an event lies in the interval [0, 1], with values determined by the number of outcomes of events in a sample space S. The probability Pr(A) of an event A that will never occur is 0, and the probability Pr(B) of an event B that will certainly occur is 1; these represent impossibility and certainty, respectively. Furthermore, the sum of probabilities Pr(E₁) + Pr(E₂) + … + Pr(Eₙ) of a finite set of mutually exclusive, exhaustive events in a given sample space S equals 1. Conversely, the difference between two probabilities of events that will each certainly occur is 0. This paper first discusses Bayes' rule, the complement of probability, and the difference of probabilities for occurrences of learning events before applying them to the prediction of learning objects in student learning. Given that probabilities sum to 1, to make a recommendation for student learning, this paper proposes that the difference between argMaxPr(S) and the probability of student performance quantifies the weight of learning objects for students. Using a skill-set dataset, the computational procedure demonstrates: i) the probability of skill-set events that have occurred and would lead to higher-level learning; ii) the probability of events that have not occurred and require subject-matter relearning; iii) the accuracy of a decision tree in the prediction of student performance into class labels; and iv) the information entropy of the skill-set data and its implications for student cognitive performance and recommendation of learning.
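
A minimal sketch of the complement and entropy computations described above; the pass/fail proportions are hypothetical placeholders, not values from the paper's skill-set dataset:

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)) over nonzero probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical skill-set outcome distribution: proportion of passed pre-assessments.
pr_pass = 0.75
pr_fail = 1.0 - pr_pass        # complement of probability
h = entropy([pr_pass, pr_fail])

# Weight of a learning object as the gap between the maximum attainable
# probability (1.0, the certain event) and the student-performance probability.
weight = 1.0 - pr_pass
```

A larger `weight` (equivalently, a higher `pr_fail`) would point toward recommending relearning of that learning object.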

Keywords: complement of probability, Bayes’ rule, prediction, pre-assessments, computational education, information theory

Procedia PDF Downloads 145
3200 Elucidation of the Photoreactivity of 2-Hydroxychalcones and the Effect of Continuous Photoflow Method on the Photoreactivity

Authors: Sobiya George, Anna Dora Gudmundsdottir

Abstract:

The 2-hydroxychalcones form an important group of organic compounds, not only because of their pharmacological properties but also because they are intermediates in the biosynthesis of flavanones. We studied the photoreactivity of 2-hydroxychalcone derivatives in the aprotic solvent acetonitrile and found that their photochemistry is concentration-dependent. Irradiation of 2-hydroxychalcone derivatives with a 365 nm light-emitting diode (LED) at dilute concentrations selectively forms flavanones, whereas at higher concentrations an additional photoproduct is observed. However, application of the continuous photo-flow method resulted in the selective formation of flavanones even at higher concentrations. To understand the reaction mechanism and explain the concentration-dependent photoreactivity of 2-hydroxychalcones, we performed trapping studies with tris(trimethylsilyl)silane, nanosecond laser flash photolysis, and time-dependent density functional theory (TD-DFT) calculations.

Keywords: flavanones, hydroxychalcones, laser flash photolysis, TD-DFT calculations

Procedia PDF Downloads 137
3199 Limited Ventilation Efficacy of Prehospital I-Gel Insertion in Out-of-Hospital Cardiac Arrest Patients

Authors: Eunhye Cho, Hyuk-Hoon Kim, Sieun Lee, Minjung Kathy Chae

Abstract:

Introduction: The i-gel is a commonly used supraglottic advanced airway device in prehospital out-of-hospital cardiac arrest (OHCA), allowing minimal interruption of continuous chest compression. However, previous studies have shown that a prehospital supraglottic airway had inferior neurologic outcomes and survival compared to no advanced prehospital airway with conventional bag-valve-mask (BVM) ventilation. We hypothesized that continuous compression with an i-gel as an advanced airway may cause insufficient ventilation compared to 30:2 chest compression with a conventional BVM. Therefore, we investigated the ventilation efficacy of the i-gel using the initial arterial blood gas analysis of OHCA patients visiting our emergency room (ER). Materials and Methods: Demographics, arrest parameters including i-gel insertion, and initial arterial blood gas analyses were retrospectively reviewed for 119 transported OHCA patients who visited our ER. Linear regression was performed to investigate the association between i-gel insertion and initial pCO2 as a surrogate of prehospital ventilation. Results: A total of 52 patients were included in the analysis. Of the patients who visited the ER during OHCA, 24 had i-gel insertion and 28 had BVM airway management in the prehospital phase. Prehospital i-gel insertion was associated with the initial pCO2 level (B coefficient 29.9, SE 10.1, p < 0.01) after adjusting for bystander CPR, cardiogenic cause of arrest, and EMS call-to-arrival time. Conclusion: Despite many limitations of the study, prehospital insertion of an i-gel was associated with higher initial pCO2 values in OHCA patients visiting our ER, possibly indicating insufficient ventilation with a prehospital i-gel as an advanced airway under continuous chest compressions.

Keywords: arrest, I-gel, prehospital, ventilation

Procedia PDF Downloads 328
3198 Comparison of the Effects of Continuous Flow Microwave Pre-Treatment with Different Intensities on the Anaerobic Digestion of Sewage Sludge for Sustainable Energy Recovery from Sewage Treatment Plant

Authors: D. Hephzibah, P. Kumaran, N. M. Saifuddin

Abstract:

Anaerobic digestion is a well-known technique for sustainable energy recovery from sewage sludge. However, sewage sludge digestion is restricted by certain factors. Pre-treatment methods have been established in various publications as a promising technique to improve the digestibility of sewage sludge and to enhance the biogas generated, which can be used for energy recovery. In this study, continuous-flow microwave (MW) pre-treatments of different intensities were compared using 5 L semi-continuous digesters at a hydraulic retention time of 27 days. We focused on the effects of MW intensity on the sludge solubilization, sludge digestibility, and biogas production of the untreated and MW pre-treated sludge. The MW pre-treatment increased the ratio of soluble to total chemical oxygen demand (sCOD/tCOD) and the volatile fatty acid (VFA) concentration. In addition, the total volatile solid (TVS) and tCOD removal efficiencies increased during digestion of the MW pre-treated sewage sludge compared to the untreated sludge, and the biogas yield consequently increased as well. A higher MW power level and longer irradiation time generally enhanced biogas generation, which has potential for sustainable energy recovery at the sewage treatment plant. However, the net energy balance shows that the MW pre-treatment leads to negative net energy production.

Keywords: anaerobic digestion, biogas, microwave pre-treatment, sewage sludge

Procedia PDF Downloads 307
3197 Modeling Intelligent Threats: Case of Continuous Attacks on a Specific Target

Authors: Asma Ben Yaghlane, Mohamed Naceur Azaiez

Abstract:

In this paper, we treat a model in the area of protecting targeted systems from intelligent threats, including terrorism. We introduce the concept of system survivability, in the context of continuous attacks, as the probability that a system under attack will continue operation up to some fixed time t. We define a constant attack rate (CAR) process as an attack process on a targeted system whose inter-attack times follow an exponential distribution, and we consider the superposition of several CAR processes. From the attacker's side, we determine the optimal attack strategy that minimizes system survivability. We also determine the optimal strengthening strategy that maximizes system survivability under limited defensive resources. We use operations research techniques to identify the optimal strategies of each antagonist. Our results may serve as interesting starting points for developing realistic protection strategies against intentional attacks.
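
Under the strong simplifying assumption that every attack is immediately fatal to the system, the superposition of independent CAR (Poisson) attack processes yields a closed-form survivability, sketched below; the paper's model, which includes defensive strengthening, is richer than this:

```python
import math

def survivability(t, attack_rates):
    """P(system survives to time t) under n independent CAR processes,
    assuming each arriving attack destroys the system. The superposition
    of Poisson processes is Poisson with rate sum(lambda_i), so the time
    to the first attack is exponential and S(t) = exp(-t * sum(lambda_i))."""
    return math.exp(-t * sum(attack_rates))

# Two attackers with rates 0.1 and 0.05 attacks per unit time (illustrative).
s = survivability(2.0, [0.1, 0.05])
```

Strengthening the system would, in this toy picture, amount to reducing the effective rates `lambda_i`.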

Keywords: CAR processes, defense/attack strategies, exponential failure, survivability

Procedia PDF Downloads 380
3196 Constant Order Predictor Corrector Method for the Solution of Modeled Problems of First Order IVPs of ODEs

Authors: A. A. James, A. O. Adesanya, M. R. Odekunle, D. G. Yakubu

Abstract:

This paper examines the development of a one-step, five-hybrid-point method for the solution of first-order initial value problems. We adopted the method of collocation and interpolation of a power series approximate solution to generate a continuous linear multistep method, which was then evaluated at selected grid points to give the discrete linear multistep method. The method was implemented using a constant-order predictor of order seven over an overlapping interval. The basic properties of the derived corrector were investigated, and it was found to be zero-stable, consistent, and convergent. The region of absolute stability was also investigated. The method was tested on some numerical experiments and found to compete favorably with existing methods.
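
The predictor-corrector idea can be sketched with the simplest such pair, an Euler predictor followed by a trapezoidal corrector (Heun's method); this illustrates the general scheme only, not the authors' order-seven hybrid method:

```python
def heun(f, t0, y0, h, n):
    """Predictor-corrector integration of y' = f(t, y) over n steps of size h."""
    t, y = t0, y0
    for _ in range(n):
        y_pred = y + h * f(t, y)                        # predictor (Euler)
        y = y + 0.5 * h * (f(t, y) + f(t + h, y_pred))  # corrector (trapezoid)
        t += h
    return y

# Test problem y' = -y, y(0) = 1; exact solution at t = 1 is exp(-1).
approx = heun(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
```

Higher-order methods like the one in the paper follow the same predict-then-correct structure but use more evaluation points per step.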

Keywords: interpolation, approximate solution, collocation, differential system, half step, convergence, block method, efficiency

Procedia PDF Downloads 322
3195 Effects of a Cooler on the Sampling Process in a Continuous Emission Monitoring System

Authors: J. W. Ahn, I. Y. Choi, T. V. Dinh, J. C. Kim

Abstract:

A cooler is widely employed in the extractive system of a continuous emission monitoring system (CEMS) to remove water vapor from the gas stream. The effect of the cooler on the analytical target gases was investigated in this research. A commercial CEMS cooler operated at 4 °C was used. Several gases emitted from a coal power plant (i.e., CO2, SO2, NO, NO2, and CO) were mixed with humid air and then introduced into the cooler to observe its effect. The concentrations of SO2, NO, NO2, and CO were set to 200 ppm, and the CO2 concentration was 8%. The inlet absolute humidity was produced as 12.5% at 100 °C using a bubbling method. It was found that the reduction rate of SO2 was the highest (~21%), followed by NO2 (~17%), CO2 (~11%), and CO (~10%). In contrast, NO was not affected by the cooler. The results indicate that the cooler has a significant effect on the water-soluble gases because of the condensate water in the cooler. To overcome this problem, a correction factor may be applied. However, the water vapor content varies, and emissions of the target gases also differ; therefore, a correction factor alone is not a sufficient solution, and a better sampling method should be employed.

Keywords: cooler, CEMS, monitoring, reproductive, sampling

Procedia PDF Downloads 349
3194 Connectomic Correlates of Cerebral Microhemorrhages in Mild Traumatic Brain Injury Victims with Neural and Cognitive Deficits

Authors: Kenneth A. Rostowsky, Alexander S. Maher, Nahian F. Chowdhury, Andrei Irimia

Abstract:

The clinical significance of cerebral microbleeds (CMBs) due to mild traumatic brain injury (mTBI) remains unclear. Here we use magnetic resonance imaging (MRI), diffusion tensor imaging (DTI) and connectomic analysis to investigate the statistical association between mTBI-related CMBs, post-TBI changes to the human connectome and neurological/cognitive deficits. This study was undertaken in agreement with US federal law (45 CFR 46) and was approved by the Institutional Review Board (IRB) of the University of Southern California (USC). Two groups, one consisting of 26 (13 females) mTBI victims and another comprising 26 (13 females) healthy control (HC) volunteers were recruited through IRB-approved procedures. The acute Glasgow Coma Scale (GCS) score was available for each mTBI victim (mean µ = 13.2; standard deviation σ = 0.4). Each HC volunteer was assigned a GCS of 15 to indicate the absence of head trauma at the time of enrollment in our study. Volunteers in the HC and mTBI groups were matched according to their sex and age (HC: µ = 67.2 years, σ = 5.62 years; mTBI: µ = 66.8 years, σ = 5.93 years). MRI [including T1- and T2-weighted volumes, gradient recalled echo (GRE)/susceptibility weighted imaging (SWI)] and gradient echo (GE) DWI volumes were acquired using the same MRI scanner type (Trio TIM, Siemens Corp.). Skull-stripping and eddy current correction were implemented. DWI volumes were processed in TrackVis (http://trackvis.org) and 3D Slicer (http://www.slicer.org). Tensors were fit to DWI data to perform DTI, and tractography streamlines were then reconstructed using deterministic tractography. A voxel classifier was used to identify image features as CMB candidates using Microbleed Anatomic Rating Scale (MARS) guidelines. 
For each peri-lesional DTI streamline bundle, the null hypothesis was formulated as the statement that there was no neurological or cognitive deficit associated with between-scan differences in the mean FA of DTI streamlines within each bundle. The statistical significance of each hypothesis test was calculated at the α = 0.05 level, subject to the family-wise error rate (FWER) correction for multiple comparisons. Results: In HC volunteers, the along-track analysis failed to identify statistically significant differences in the mean FA of DTI streamline bundles. In the mTBI group, significant differences in the mean FA of peri-lesional streamline bundles were found in 21 out of 26 volunteers. In those volunteers where significant differences had been found, these differences were associated with an average of ~47% of all identified CMBs (σ = 21%). In 12 out of the 21 volunteers exhibiting significant FA changes, cognitive functions (memory acquisition and retrieval, top-down control of attention, planning, judgment, cognitive aspects of decision-making) were found to have deteriorated over the six months following injury (r = -0.32, p < 0.001). Our preliminary results suggest that acute post-TBI CMBs may be associated with cognitive decline in some mTBI patients. Future research should attempt to identify mTBI patients at high risk for cognitive sequelae.

Keywords: traumatic brain injury, magnetic resonance imaging, diffusion tensor imaging, connectomics

Procedia PDF Downloads 162
3193 A Fuzzy TOPSIS Based Model for Safety Risk Assessment of Operational Flight Data

Authors: N. Borjalilu, P. Rabiei, A. Enjoo

Abstract:

A Flight Data Monitoring (FDM) program assists an operator in the aviation industry to identify, quantify, assess, and address operational safety risks in order to improve the safety of flight operations. FDM is a powerful tool for an aircraft operator when integrated into the operator's Safety Management System (SMS), allowing the operator to detect, confirm, and assess safety issues and to check the effectiveness of corrective actions associated with human errors. This article proposes a model for assessing the safety risk level of flight data across different aspects of event focus based on fuzzy set values. It permits evaluating the operational safety level from the point of view of flight activities. The main advantage of this method is that it supports qualitative safety analysis of flight data. This research draws upon the opinions of aviation experts, collected through questionnaires related to flight data, in four categories of occurrence that can take place during an accident or incident: Runway Excursion (RE), Controlled Flight Into Terrain (CFIT), Mid-Air Collision (MAC), and Loss of Control In-flight (LOC-I). By weighting each category (by F-TOPSIS) and applying the weights to the number of risks of the event, the safety risk of each related event can be obtained.
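
For orientation, the crisp (non-fuzzy) TOPSIS ranking at the core of F-TOPSIS can be sketched as follows; the criteria, weights, and per-event scores are hypothetical and do not come from the study's questionnaires:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) by closeness to the ideal solution.
    benefit[j] is True for benefit-type criteria, False for cost-type."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to ideal solution
        d_neg = math.dist(row, anti)    # distance to anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical severity/frequency scores for the RE, CFIT, MAC, LOC-I categories.
scores = topsis([[7, 9], [9, 3], [6, 2], [10, 5]],
                weights=[0.6, 0.4], benefit=[True, True])
```

The fuzzy variant replaces the crisp matrix entries with fuzzy numbers derived from expert opinion before the same ideal/anti-ideal comparison.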

Keywords: F-TOPSIS, fuzzy set, flight data monitoring (FDM), flight safety

Procedia PDF Downloads 152
3192 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes: it identifies individual appliances based on analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps, using unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes its state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of (1/60) Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect and facilitates the extraction of specific features used for general appliance modeling.
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the operation of the selected appliance falls, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical, and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real Reference Energy Disaggregation Dataset (REDD). We compute performance metrics based on the confusion matrix, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as detection techniques based on statistical variations and abrupt changes (variance sliding window and cumulative sum).
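
Since DTW is central to the identification step, a bare-bones version of the distance is sketched below; the appliance power signatures are invented for illustration and are not from LPG or REDD:

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences,
    computed with the standard O(n*m) dynamic-programming recursion."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# A fridge-like signature matches a time-shifted copy of itself far more
# closely than a kettle-like high-power spike (values in watts).
fridge = [0, 80, 80, 80, 0]
shifted_fridge = [0, 0, 80, 80, 80]
kettle = [0, 2000, 2000, 0, 0]
```

DTW's tolerance to time shifts is what makes it attractive for matching appliance signatures whose state transitions are not aligned across observations.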

Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 64
3191 Analysis of Ionospheric Variations over Japan during 23rd Solar Cycle Using Wavelet Techniques

Authors: C. S. Seema, P. R. Prince

Abstract:

The characterization of spatio-temporal inhomogeneities occurring in the ionospheric F₂ layer is important, since these variations are direct consequences of the electrodynamical coupling between the magnetosphere and solar events. The temporal and spatial variations of the F₂ layer, which occur with periods of several days or even years, are mainly due to geomagnetic and meteorological activity. The hourly F₂-layer critical frequency (foF2) over the 23rd solar cycle (1996-2008) at three Northern Hemisphere ionosonde stations (Wakkanai, Kokubunji, and Okinawa), which fall within the same longitudinal span, is analyzed using continuous wavelet techniques. The Morlet wavelet is used to transform the continuous foF2 time series into a two-dimensional time-frequency space, quantifying the time evolution of the oscillatory modes. The presence of significant periodicities and the time location of each periodicity are detected from the two-dimensional representation of the wavelet power in the plane of scale and period of the time series. The mean strength of each periodicity over the entire period of analysis is studied using the global wavelet spectrum. The quasi-biennial, annual, semiannual, 27-day, diurnal, and 12-hour variations of foF2 are clearly evident in the wavelet power spectra at all three stations. Critical frequency oscillations with multi-day periods (2-3 days and 9 days at the low-latitude station, 6-7 days at all stations, and 15 days at the mid/high-latitude station) are also superimposed on larger time-scale variations.
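
A minimal Morlet-based continuous wavelet transform of an hourly series can be sketched as below; this is a generic illustration, not the study's analysis pipeline, and the synthetic signal contains only a diurnal oscillation:

```python
import numpy as np

def morlet_cwt(signal, scales, dt=1.0, w0=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet.
    Returns complex coefficients of shape (len(scales), len(signal))."""
    out = np.empty((len(scales), len(signal)), dtype=complex)
    for k, s in enumerate(scales):
        t = np.arange(-4.0 * s, 4.0 * s + dt, dt)   # wavelet support ~ 8 scales
        # L2-normalized Morlet wavelet at scale s
        psi = (np.pi ** -0.25) / np.sqrt(s) \
              * np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        out[k] = np.convolve(signal, psi, mode="same")
    return out

hours = np.arange(24 * 30)                  # 30 days of hourly samples
diurnal = np.sin(2 * np.pi * hours / 24)    # 24-hour (diurnal) oscillation
power = np.abs(morlet_cwt(diurnal, scales=[6, 12, 24, 48])) ** 2
```

With w0 = 6 the Morlet wavelet at scale s responds most strongly near period ≈ s samples, so the diurnal line shows up as a power ridge at the scale closest to 24.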

Keywords: continuous wavelet analysis, critical frequency, ionosphere, solar cycle

Procedia PDF Downloads 202
3190 Modelling for Temperature Non-Isothermal Continuous Stirred Tank Reactor Using Fuzzy Logic

Authors: Nasser Mohamed Ramli, Mohamad Syafiq Mohamad

Abstract:

Many types of controllers have been applied to the continuous stirred tank reactor (CSTR) unit to control temperature. In this research paper, a Proportional-Integral-Derivative (PID) controller is compared with a fuzzy logic controller for temperature control of a CSTR. The control system for the non-isothermal temperature of a CSTR should produce a stable response to its temperature set point. A mathematical model of a CSTR under the most general operating conditions was developed as a set of differential equations and implemented as an S-function in MATLAB; the reactor model and S-function were developed in an m-file. After developing the S-function of the CSTR model, user-defined functions were used to link it to a SIMULINK file. The simulation results show that temperature control was better with the fuzzy logic controller than with the PID controller.
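
As a language-neutral illustration of the PID side of the comparison, here is a discrete PID loop driving a toy first-order plant in Python; this is not the paper's MATLAB CSTR model, and all gains and constants are invented:

```python
def simulate_pid(kp, ki, kd, setpoint, n_steps, dt=1.0):
    """Discrete PID controller on a first-order plant dT/dt = (u - T) / tau,
    a crude stand-in for a reactor temperature loop."""
    tau = 20.0                       # plant time constant (invented)
    temp, integral, prev_err = 25.0, 0.0, 0.0
    history = []
    for _ in range(n_steps):
        err = setpoint - temp
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        prev_err = err
        temp += dt * (u - temp) / tau               # plant update (Euler)
        history.append(temp)
    return history

trace = simulate_pid(kp=2.0, ki=0.1, kd=0.5, setpoint=80.0, n_steps=500)
```

The integral term drives the steady-state error to zero; a fuzzy controller replaces the fixed linear control law with rule-based mappings from error and error rate to the control action.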

Keywords: CSTR, temperature, PID, fuzzy logic

Procedia PDF Downloads 442
3189 Development of a Plug-In Hybrid Powertrain System with Double Continuously Variable Transmissions

Authors: Cheng-Chi Yu, Chi-Shiun Chiou

Abstract:

This study developed a plug-in hybrid powertrain system consisting of two continuously variable transmissions. By matching the engine, motor, generator, and dual continuously variable transmissions, this integrated power system can take advantage of the strengths of each component. The hybrid vehicle can be driven by the internal combustion engine alone, by the electric motor alone, or by both power sources together when the vehicle is driven under hard acceleration or high load. The energy management of this integrated hybrid system controls the power sources based on a rule-based control strategy to achieve better fuel economy. When the vehicle's driving power demand is low, the internal combustion engine would operate in its low-efficiency region, so the engine is shut down and the vehicle is driven by the motor only. When the driving power demand is high, the internal combustion engine operates in its high-efficiency region, and the vehicle is driven by the engine. This strategy operates the internal combustion engine only in its optimal efficiency region to improve fuel economy. In this research, the vehicle simulation model was built in the MATLAB/Simulink environment. The analysis results showed that the power-coupling efficiency of the hybrid powertrain system with dual continuously variable transmissions was better than that of the Honda hybrid system on the market.
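
The rule-based strategy described above can be sketched as a simple supervisory function; the power thresholds and state-of-charge limit below are invented placeholders, not the study's calibrated values:

```python
def select_mode(power_demand_kw, soc):
    """Pick the drive mode from power demand and battery state of charge.
    Thresholds are illustrative only."""
    if soc > 0.2 and power_demand_kw < 15.0:
        return "ev_only"      # engine would sit in its low-efficiency region
    if power_demand_kw > 60.0:
        return "hybrid"       # engine + motor for hard acceleration / high load
    return "engine_only"      # engine operates in its high-efficiency region
```

A production controller would add hysteresis and charge-sustaining logic so the mode does not chatter around the thresholds.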

Keywords: plug-in hybrid power system, fuel economy, performance, continuously variable transmission

Procedia PDF Downloads 277
3188 A Combinatorial Representation for the Invariant Measure of Diffusion Processes on Metric Graphs

Authors: Michele Aleandri, Matteo Colangeli, Davide Gabrielli

Abstract:

We study a generalization to a continuous setting of the classical Markov chain tree theorem. In particular, we consider an irreducible diffusion process on a metric graph. The unique invariant measure has an atomic component on the vertices and an absolutely continuous part on the edges. We show that the corresponding density at x can be represented as a normalized superposition of the weights associated with metric arborescences oriented toward the point x, where a metric arborescence is a metric tree oriented towards its root. The weight of each oriented metric arborescence is obtained as the product of the exponentials of integrals of the form ∫ 2b/σ², where b is the drift and σ² is the diffusion coefficient, along the oriented edges, times a weight for each node determined by the local orientation of the arborescence around the node, times the inverse of the diffusion coefficient at x. The metric arborescences are obtained by cutting the original metric graph along some edges.
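
On a single edge, away from the vertices, the absolutely continuous part of the invariant density reduces to the classical one-dimensional expression, which may help fix the notation (a standard fact stated for orientation, not a result specific to the paper):

```latex
\rho(x) \;\propto\; \frac{1}{\sigma^{2}(x)}\,
\exp\!\left(\int^{x} \frac{2\,b(y)}{\sigma^{2}(y)}\,\mathrm{d}y\right),
```

with b the drift and σ² the diffusion coefficient, matching the edge integrals and the 1/σ²(x) factor in the arborescence weights.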

Keywords: diffusion processes, metric graphs, invariant measure, reversibility

Procedia PDF Downloads 157
3187 The Structural Pattern: An Event-Related Potential Study on Tang Poetry

Authors: ShuHui Yang, ChingChing Lu

Abstract:

Measuring event-related potentials (ERPs) has been fundamental to our understanding of how people process language. One specific ERP component, the P600, has been hypothesized to be associated with syntactic reanalysis processes. We, however, propose that the P600 is not restricted to reanalysis processes but is an index of structural pattern processing. To investigate structural pattern processing, we utilized the effects of stimulus degradation in structural priming: that is, there should be no P600 effect if the structure of the prime is the same as the structure of the target, and a P600 effect if the structures of the prime and the target differ. In the experiment, twenty-two participants were presented with four sentences of Tang poetry. The first two sentences, serving as primes, were all constructed as SVO+VP. The last two sentences, the targets, were divided into three types: type one was SVO+VP, type two was SVO+VPVP, and type three was VP+VP. The results showed that both the SVO+VPVP and VP+VP targets elicited a positive-going brainwave, a P600 effect, in the 600-900 ms time window. Furthermore, the P600 component was larger for the 'VP+VP' target than for the 'SVO+VPVP' target; that is, the more dissimilar the structure, the larger the P600 effect. These results indicate that the P600 indexes structural processing and that its magnitude increases with the degree of structural heterogeneity.

Keywords: ERPs, P600, structural pattern, structural priming, Tang poetry

Procedia PDF Downloads 125
3186 Levels of Students’ Understandings of Electric Field Due to a Continuous Charged Distribution: A Case Study of a Uniformly Charged Insulating Rod

Authors: Thanida Sujarittham, Narumon Emarat, Jintawat Tanamatayarat, Kwan Arayathanitkul, Suchai Nopparatjamjomras

Abstract:

Electric field is an important fundamental concept in electrostatics. In high school, Thai students generally have already learned the definition of the electric field, the electric field due to a point charge, and the superposition of electric fields due to multiple point charges. This is the prerequisite basic knowledge students hold before entering university. In the first year of university, students quickly revise this basic knowledge and are then introduced to a more complicated topic: the electric field due to continuous charge distributions. We initially found that our freshman students, who were from the Faculty of Science and enrolled in the introductory physics course (SCPY 158), often struggled seriously with the basic physics concepts relevant to this topic, namely the superposition of electric fields and the inverse square law, as well as the associated mathematics. This in turn affected their understanding of more advanced topics within the course, such as Gauss's law, electric potential difference, and capacitance. Therefore, it is very important to determine students' understanding of the electric field due to continuous charge distributions. An open-ended question asking students to sketch the net electric field vectors produced by a uniformly charged insulating rod was administered to 260 freshman science students as pre- and post-tests. All of their responses were analyzed and classified into five levels of understanding. To gain a deeper understanding of each level, 30 students were interviewed about their individual responses. The pre-test found that about 90% of students held an incorrect understanding. Even after completing the lectures, only 26.5% of them could provide correct responses, while up to 50% held confusions and irrelevant ideas. The result implies that teaching methods in Thai high schools may be problematic. In addition, the students' alternative conceptions identified here could be used as a guideline for developing the instructional method currently used in the course, especially for teaching electrostatics.
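The two concepts the question probes, superposition and the inverse square law, can be illustrated numerically: the field of a uniformly charged rod is the vector sum of the fields of many small point charges. A short sketch (rod length, charge, and field point are illustrative values, not from the study), checked against the closed-form result on the perpendicular bisector, E = kQ / (y·√(y² + (L/2)²)):

```python
import numpy as np

K = 8.9875517923e9  # Coulomb constant, N m^2 / C^2

def rod_field_at(y, Q=1e-9, L=1.0, n=20001):
    """E field a distance y above the midpoint of a uniformly charged rod
    lying on the x-axis from -L/2 to L/2, by superposition of point charges."""
    xs = np.linspace(-L / 2, L / 2, n)
    dq = Q / n                                # charge of each small element
    rx, ry = 0.0 - xs, y                      # vector from each element to the field point
    r3 = (rx ** 2 + ry ** 2) ** 1.5
    Ex = K * np.sum(dq * rx / r3)             # inverse square law, summed (superposition)
    Ey = K * np.sum(dq * ry / r3)
    return Ex, Ey

y, Q, L = 0.5, 1e-9, 1.0
Ex, Ey = rod_field_at(y, Q, L)
# closed-form field on the perpendicular bisector, for comparison
E_exact = K * Q / (y * np.sqrt(y ** 2 + (L / 2) ** 2))
```

By symmetry the x-component cancels, and the numerical y-component converges to the closed-form value as n grows.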

Keywords: alternative conceptions, electric field of continuous charged distributions, inverse square law, levels of student understandings, superposition principle

Procedia PDF Downloads 283
3185 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks

Authors: Van Trieu, Shouhuai Xu, Yusheng Feng

Abstract:

Tracking attack trajectories can be difficult when there is limited information about the nature of the attack, and even more so when attack information is collected by Intrusion Detection Systems (IDSs), since current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events; they do not show how the events relate to each other or which event possibly causes another event to happen. It is therefore important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the most probable attack events that can cause another event to occur in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools such as IDSs, and extracting them would cost expert human analysts significant time, if it were possible at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends. In fact, for more than 85% of causal pairs, the average time difference between the causal and effect events in both computed and observed data is within 5 minutes. This result can be used as a preventive measure against future attacks: although the forecast horizon may be short, from 0.24 seconds to 5 minutes, it is long enough to design a prevention protocol to block those attacks.
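The paper's conditional independence tests are beyond a short sketch, but the basic ingredient, pairing a candidate cause event with the effect events that follow it within a short window, can be illustrated on synthetic IDS-style timestamps. Everything below is illustrative: the event types, rates, and lags are invented, and this simple time-precedence pairing is a stand-in for the framework's statistical tests.

```python
import numpy as np

rng = np.random.default_rng(0)

def candidate_causal_pairs(times_a, times_b, window=300.0):
    """Pair each type-A event with the first type-B event that follows it
    within `window` seconds; return the matched lags (in seconds)."""
    lags = []
    times_a, times_b = np.sort(times_a), np.sort(times_b)
    j = 0
    for t in times_a:
        while j < len(times_b) and times_b[j] <= t:
            j += 1                        # skip B events that precede this A event
        if j < len(times_b) and times_b[j] - t <= window:
            lags.append(times_b[j] - t)
    return np.array(lags)

# synthetic log: scan events (A) tend to be followed by exploit events (B)
a = np.cumsum(rng.exponential(600.0, size=200))       # an A event every ~10 min
b = a + rng.uniform(10.0, 240.0, size=a.size)         # B follows A by 10 s to 4 min
lags = candidate_causal_pairs(a, b, window=300.0)
frac_within_5min = np.mean(lags <= 300.0)
```

In this synthetic log every A event is matched, and all matched lags fall within the 5-minute window, mirroring the kind of short cause-to-effect gap the abstract reports for real data.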

Keywords: causality, multilevel graph, cyber-attacks, prediction

Procedia PDF Downloads 148
3184 Assessing Effects of an Intervention on Bottle-Weaning and Reducing Daily Milk Intake from Bottles in Toddlers Using Two-Part Random Effects Models

Authors: Yungtai Lo

Abstract:

Two-part random effects models have been used to fit semi-continuous longitudinal data in which the response variable has a point mass at 0 and a continuous right-skewed distribution for positive values. We review methods proposed in the literature for analyzing data with excess zeros. A two-part logit-log-normal random effects model, a two-part logit-truncated-normal random effects model, a two-part logit-gamma random effects model, and a two-part logit-skew-normal random effects model were used to examine the effects of a bottle-weaning intervention on reducing bottle use and daily milk intake from bottles in toddlers aged 11 to 13 months in a randomized controlled trial. We show, under all four two-part models, that the intervention promoted bottle-weaning and reduced daily milk intake from bottles in toddlers still drinking from a bottle. We also show that there is no difference in model fit between the logit link function and the probit link function for modeling the probability of bottle-weaning in any of the four models. Furthermore, the prediction accuracy of the logit or probit link function is not sensitive to the distributional assumption on daily milk intake from bottles in toddlers not yet off bottles.
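The two-part idea separates "is the response zero?" from "how large is it when positive?". A heavily simplified sketch on simulated data follows: the group proportions serve as a saturated stand-in for the logistic part, and the mean of the log-positive values as the log-normal part. The paper's models additionally include random effects and the link-function comparisons; all numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# simulate daily milk intake from bottles: the intervention raises the chance
# of bottle-weaning (intake == 0) and lowers intake among toddlers still on bottles
n = 2000
group = rng.integers(0, 2, n)                      # 0 = control, 1 = intervention
p_zero = np.where(group == 1, 0.55, 0.30)          # P(weaned), illustrative values
weaned = rng.random(n) < p_zero
mu = np.where(group == 1, np.log(150.0), np.log(250.0))  # log-scale means, mL/day
intake = np.where(weaned, 0.0, rng.lognormal(mu, 0.4))

def two_part_fit(y, g):
    """Part 1: P(y > 0) per group (saturated stand-in for the logit part).
    Part 2: mean of log(y) over the positive values per group (log-normal part)."""
    out = {}
    for k in (0, 1):
        yk = y[g == k]
        pos = yk[yk > 0]
        out[k] = {"p_pos": pos.size / yk.size,
                  "mean_log_pos": float(np.mean(np.log(pos)))}
    return out

fit = two_part_fit(intake, group)
```

Both parts recover the simulated intervention effect: a lower probability of any bottle use and a lower intake among those still on bottles.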

Keywords: two-part model, semi-continuous variable, truncated normal, gamma regression, skew normal, Pearson residual, receiver operating characteristic curve

Procedia PDF Downloads 336
3183 E-Learning in Life-Long Learning: Best Practices from the University of the Aegean

Authors: Chryssi Vitsilaki, Apostolos Kostas, Ilias Efthymiou

Abstract:

This paper presents selected best practices in online learning and teaching derived from a novel and innovative lifelong learning program through e-Learning, which has been set up at the University of the Aegean in Greece during the last five years. The university, capitalizing on an award-winning, decade-long experience in e-learning and blended learning in undergraduate and postgraduate studies, recently expanded into continuous education and vocational training programs in various cutting-edge fields. In this article we present: (a) the academic structure/infrastructure which has been developed for the administrative, organizational, and educational support of the e-Learning process, including training the trainers, (b) the mode of design and implementation, based on a sound pedagogical framework of open and distance education, and (c) the key results of the assessment of the e-learning process by the participants, as these are used as feedback for continuous organizational and teaching improvement and quality control.

Keywords: distance education, e-learning, life-long programs, synchronous/asynchronous learning

Procedia PDF Downloads 319
3182 Performance Evaluation of Production Schedules Based on Process Mining

Authors: Kwan Hee Han

Abstract:

The external environment of the enterprise is changing rapidly, driven mainly by global competition, cost reduction pressures, and new technology. In this situation, the production scheduling function plays a critical role in meeting customer requirements and attaining the goal of operational efficiency. It deals with short-term decision making in the production process of the whole supply chain. The major task of production scheduling is to seek a balance between customer orders and limited resources. In manufacturing companies, this task is difficult because it must efficiently utilize resource capacity under careful consideration of many interacting constraints. At present, many computerized software solutions are used in enterprises to generate realistic production schedules and to overcome the complexity of schedule generation. However, most production scheduling systems do not provide sufficient information about the validity of the generated schedule beyond limited statistics. Process mining has only recently emerged as a sub-discipline of both data mining and business process management. Process mining techniques enable the useful analysis of a wide variety of processes, supporting tasks such as process discovery, conformance checking, and bottleneck analysis. In this study, the performance of a generated production schedule is evaluated by mining the event log data of a production scheduling software system, since every software system generates event logs for further use such as security investigation, auditing, and debugging. A process mining approach is proposed for validating the quality of production schedules generated by scheduling software systems. Using process mining techniques, major evaluation criteria such as workstation utilization, the existence of bottleneck workstations, critical process route patterns, and the workload balance of each machine over time are measured, and finally, the quality of the production schedule is evaluated. By applying the proposed process mining approach to evaluate the performance of generated production schedules, the schedule quality of manufacturing enterprises can be improved.
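One of the evaluation criteria named above, workstation utilization, reduces to summing busy intervals from start/complete events in the log. A toy sketch on an invented event log (case IDs, workstation names, and timestamps are illustrative, not from the study):

```python
from collections import defaultdict

# a toy event log: (case_id, workstation, event, timestamp in minutes)
event_log = [
    ("J1", "WS1", "start", 0),  ("J1", "WS1", "complete", 30),
    ("J2", "WS1", "start", 30), ("J2", "WS1", "complete", 50),
    ("J1", "WS2", "start", 35), ("J1", "WS2", "complete", 95),
    ("J2", "WS2", "start", 95), ("J2", "WS2", "complete", 100),
]

def utilization(log, horizon):
    """Busy time per workstation divided by the scheduling horizon."""
    busy = defaultdict(float)
    started = {}
    for case, ws, ev, t in log:
        if ev == "start":
            started[(case, ws)] = t
        else:  # "complete": accumulate the busy interval for this case
            busy[ws] += t - started.pop((case, ws))
    return {ws: b / horizon for ws, b in busy.items()}

util = utilization(event_log, horizon=100)
```

Here WS1 is busy for 50 of 100 minutes (utilization 0.50) and WS2 for 65 (0.65); comparing such figures across workstations is how bottlenecks and load imbalance would surface.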

Keywords: data mining, event log, process mining, production scheduling

Procedia PDF Downloads 268
3181 A Semiparametric Approach to Estimate the Mode of Continuous Multivariate Data

Authors: Tiee-Jian Wu, Chih-Yuan Hsu

Abstract:

Mode estimation is an important task because it has applications to data from a wide variety of sources. We propose a semi-parametric approach to estimate the mode of an unknown continuous multivariate density function. Our approach is based on a weighted average of a parametric density estimate using the Box-Cox transform and a non-parametric kernel density estimate. Our semi-parametric mode estimate improves on both the parametric and non-parametric mode estimates. Specifically, it solves the inconsistency problem of parametric mode estimates (at large sample sizes) and reduces the variability of non-parametric mode estimates (at small sample sizes). The performance of our method at practical sample sizes is demonstrated by simulation examples and two real examples from the fields of climatology and image recognition.
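A heavily simplified one-dimensional sketch of the weighted-average idea follows. For brevity it substitutes a plain normal fit (mode = sample mean) for the paper's Box-Cox parametric step, uses a fixed 50/50 weight rather than a data-driven one, and estimates the kernel mode by a grid argmax; all of these are simplifying assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(5.0, 1.0, size=400)   # sample with true mode at 5

def kde_mode(x, n_grid=1000):
    """Non-parametric mode: argmax of a Gaussian kernel density estimate
    with Silverman's rule-of-thumb bandwidth, evaluated on a grid."""
    h = 1.06 * x.std() * x.size ** (-1 / 5)
    grid = np.linspace(x.min(), x.max(), n_grid)
    dens = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2).sum(axis=1)
    return grid[np.argmax(dens)]

def parametric_mode(x):
    """Parametric mode under a fitted normal model: the sample mean."""
    return x.mean()

def semiparametric_mode(x, w=0.5):
    """Weighted average of the parametric and non-parametric estimates
    (the paper chooses the weight and the parametric family adaptively)."""
    return w * parametric_mode(x) + (1 - w) * kde_mode(x)

mode_hat = semiparametric_mode(x)
```

On symmetric data both components target the same point, so the combined estimate lands near the true mode; the paper's interest is in how the combination trades off bias and variance when the parametric family is misspecified.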

Keywords: Box-Cox transform, density estimation, mode seeking, semiparametric method

Procedia PDF Downloads 271
3180 Socioeconomic Burden of Life Long Disease: A Case of Diabetes Care in Bangladesh

Authors: Samira Humaira Habib

Abstract:

Diabetes has profound effects on individuals and their families. If diabetes is not well monitored and managed, it leads to long-term complications and a large and growing cost to the health care system. The prevalence and socioeconomic burden of diabetes, and the relative return on investment for eliminating or reducing that burden, are therefore important to quantify. Various studies of the socioeconomic cost burden of diabetes exist for developed countries but are almost absent for developing countries like Bangladesh. The main objective of this study is to estimate the total socioeconomic burden of diabetes. It is a prospective, longitudinal, analytical follow-up study. Primary and secondary data were collected from patients undergoing treatment for diabetes at the out-patient department of the Bangladesh Institute of Research & Rehabilitation in Diabetes, Endocrine & Metabolic Disorders (BIRDEM). Of the 2115 diabetic subjects, females constitute around 50.35% of the study population and males the remaining 49.65%. Among the subjects, 1323 have controlled and 792 have uncontrolled diabetes. Cost analysis of the 2115 diabetic patients shows that the total cost of diabetes management and treatment is US$ 903,018, with an average of US$ 426.95 per patient. Within the direct costs, hospital-based medical treatment and investigations constitute most of the cost of diabetes. The average hospital cost is US$ 311.79 per patient, an alarming burden for diabetic patients. Among the indirect cost items, the cost of productivity loss (US$ 51,110.1) is the highest; the total indirect cost is US$ 69,215.7. The incremental cost of intensive management of uncontrolled diabetes is US$ 101.54 per patient, the event-free time gained in this group is 0.55 years, and the life-years gained are 1.19 years. The incremental cost per event-free year gained is US$ 198.12. The incremental cost of intensive management in the controlled group is US$ 89.54 per patient, the event-free time gained is 0.68 years, and the life-years gained are 1.12 years. The incremental cost per event-free year gained is US$ 223.34. The EuroQoL difference between the groups is found to be 64.04. The cost-effectiveness ratio is US$ 1.64 per unit of effect for controlled diabetes and US$ 1.69 per unit of effect for uncontrolled diabetes, so management of diabetes is highly cost-effective. Costs were concentrated among young type 1 diabetic patients of the upper socioeconomic class, and costs increased with the duration of diabetes. The dietary pattern showed that macronutrient intake and cost are significantly higher in the uncontrolled group than in their counterparts. Proper management and control of diabetes can decrease the cost of care in the long term.
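The quantity reported repeatedly above, incremental cost per event-free year gained, is an incremental cost-effectiveness ratio (ICER): the extra cost of the more intensive strategy divided by the extra health effect it buys. A minimal sketch with hypothetical values (not the study's figures):

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per unit of extra
    health effect (here, per event-free year gained)."""
    return delta_cost / delta_effect

# hypothetical illustration values, not taken from the study
extra_cost_usd = 120.0     # incremental cost of intensive management, per patient
event_free_years = 0.6     # event-free time gained, per patient
ratio = icer(extra_cost_usd, event_free_years)   # US$ per event-free year gained
```

A lower ratio means an event-free year is bought more cheaply, which is how the controlled and uncontrolled groups are compared in the abstract.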

Keywords: cost, cost-effective, chronic diseases, diabetes care, burden, Bangladesh

Procedia PDF Downloads 137
3179 Causes and Impacts of Marine Heatwaves in the Bay of Bengal Region in the Recent Period

Authors: Sudhanshu Kumar, Raghvendra Chandrakar, Arun Chakraborty

Abstract:

Temperature extremes in the ocean have the potential to devastate marine habitats and ecosystems, with ensuing socioeconomic consequences. In recent years, these extreme events have become more frequent and intense globally, and this increasing trend is expected to continue in the upcoming decades. The topic has recently attracted the interest of both the public and scientific researchers, which motivated us to analyze recent marine heatwave (MHW) events in the Bay of Bengal region. We have isolated 107 MHW events (above the 90th percentile threshold) in this region of the Indian Ocean and investigated the variation in duration, intensity, and frequency of MHW events during our study period (1982-2021). Our study reveals an average of three MHW events per year in the study region, with an increasing linear trend of 1.11 MHW events per decade. In the analysis, we found that the longest MHW event lasted about 99 days, far longer than the average MHW event duration. The maximum intensity was 5.29°C (above the climatological mean), while the mean intensity was 2.03°C. In addition, we observed that net heat flux accompanied by anticyclonic eddies was the primary cause of these events. Moreover, we conclude that these events affect sea surface height and oceanic productivity, highlighting the adverse impact of MHWs on marine ecosystems.
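The 90th-percentile criterion above can be sketched in a few lines. The standard MHW definition uses a day-of-year climatological percentile and a minimum duration of five days; the toy below simplifies this to a single fixed percentile of the whole series and uses synthetic SST with an implanted 10-day warm spell, so the threshold, data, and event are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
sst = 28.0 + 0.3 * rng.standard_normal(365)   # synthetic daily SST, deg C
sst[100:110] += 2.5                            # implant a 10-day warm spell

def detect_mhw(sst, min_days=5):
    """Flag runs of at least `min_days` consecutive days above the 90th
    percentile of the series (a simplified, fixed-threshold version of the
    climatology-based MHW definition). Returns the threshold and a list of
    (start_day, end_day, peak_intensity_above_threshold) tuples."""
    thresh = np.percentile(sst, 90)
    above = sst > thresh
    events = []
    i = 0
    while i < above.size:
        if above[i]:
            j = i
            while j < above.size and above[j]:
                j += 1                         # extend the run of hot days
            if j - i >= min_days:
                events.append((i, j - 1, float(sst[i:j].max() - thresh)))
            i = j
        else:
            i += 1
    return thresh, events

thresh, events = detect_mhw(sst)
```

The implanted warm spell is recovered as an event spanning days 100-109, with its duration and peak intensity, the same per-event metrics the study aggregates over 1982-2021.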

Keywords: marine heatwaves, global warming, climate change, sea surface temperature, marine ecosystem

Procedia PDF Downloads 113
3178 Towards the Rapid Synthesis of High-Quality Monolayer Continuous Film of Graphene on High Surface Free Energy Existing Plasma Modified Cu Foil

Authors: Maddumage Don Sandeepa Lakshad Wimalananda, Jae-Kwan Kim, Ji-Myon Lee

Abstract:

Graphene is an extraordinary 2D material that shows superior electrical, optical, and mechanical properties for applications such as transparent contacts. Furthermore, the chemical vapor deposition (CVD) technique facilitates the synthesis of large-area, transferable graphene. This abstract describes the use of Cu foil with high surface free energy (SFE) and a high density of nano-scale surface kinks (roughness) for CVD graphene growth. This is the opposite of the modern approach of using smooth catalytic surfaces for high-quality graphene growth, but the controllable rough morphology opens a new era of fast synthesis (less than 50 s, with a short annealing process) of graphene as a continuous film, compared with the conventional, longer process (30 min growth). The experiments showed that the high-SFE condition and the surface kinks on the Cu(100) crystal plane of the catalytic Cu surface facilitated the synthesis of graphene with a highly monolayer and continuous nature, because they promote the adsorption of C species at high concentration, which in turn enables faster nucleation and growth of graphene. The fast nucleation and growth reduce the diffusion of C atoms to the Cu-graphene interface, resulting in no, or negligible, formation of bilayer patches. High-energy (500 W) Ar plasma treatment (inductively coupled plasma) was used to form rough, high-SFE (54.92 mJm-2) Cu foil. This surface was used to grow graphene by the CVD technique at 1000 °C for 50 s. The kink-like high-SFE sites introduced on the Cu(100) crystal plane facilitated faster nucleation of graphene with a high monolayer ratio (I2D/IG of 2.42) compared to other, smoother, low-SFE Cu surfaces, such as a smoother surface prepared by the redeposition of evaporated Cu atoms during annealing (RRMS of 13.3 nm). Although the high-SFE condition was favorable for synthesizing graphene with a monolayer and continuous nature, it failed to maintain a clean surface (the surface contains amorphous C clusters) and a defect-free condition (ID/IG of 0.46) because of the high SFE of the Cu foil at the graphene growth stage. A post-annealing process was used to heal the graphene and overcome the aforementioned problems. Different CVD atmospheres, such as CH4 and H2, were used; a negligible change in graphene nature (number of layers and continuity) was observed, but there was a significant difference in graphene quality, as the ID/IG ratio of the graphene was reduced to 0.21 after post-annealing with H2 gas. In addition to the reduced defectiveness, FE-SEM images showed a reduction in C cluster contamination on the surface. In summary, high-SFE conditions are favorable for forming graphene as a monolayer, continuous film, but they do not by themselves provide defect-free graphene. A plasma-modified, high-SFE surface can be used to synthesize graphene within 50 s, and a post-annealing process can be used to reduce its defectiveness.

Keywords: chemical vapor deposition, graphene, morphology, plasma, surface free energy

Procedia PDF Downloads 234
3177 Development of Trigger Tool to Identify Adverse Drug Events From Warfarin Administered to Patient Admitted in Medical Wards of Chumphae Hospital

Authors: Puntarikorn Rungrattanakasin

Abstract:

Objectives: To develop a trigger tool to warn of the risk of bleeding as an adverse event from warfarin use during admission to the medical wards of Chumphae Hospital. Methods: A retrospective study was performed by reviewing the medical records of patients admitted between June 1st, 2020 and May 31st, 2021. ADEs were evaluated by Naranjo's algorithm. The international normalized ratio (INR) and bleeding events during admissions were collected. Statistical analyses, including the Chi-square test and a Receiver Operating Characteristic (ROC) curve for the optimal INR threshold, were used in the study. Results: Among the 139 admissions, the INR was found to vary between 0.86 and 14.91, and there was a total of 15 bleeding events, of which 9 were mild and 6 were severe. Bleeding occurred whenever the INR was greater than 2.5, a statistically significant association (p < 0.05) that was in concordance with the ROC curve, which yielded 100% sensitivity and 60% specificity for the detection of a bleeding event. In this regard, an INR greater than 2.5 was considered an optimal threshold for a prompt alert of bleeding tendency. Conclusions: An INR value greater than 2.5 would be an appropriate trigger tool to warn of the risk of bleeding for patients taking warfarin in Chumphae Hospital.
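The sensitivity and specificity reported for the INR > 2.5 trigger come from the usual confusion-matrix counts at a candidate threshold, the calculation that underlies each point on the ROC curve. A sketch on hypothetical (INR, bleeding) pairs, invented to mimic the study design rather than reproduce its data:

```python
import numpy as np

# hypothetical (INR, bleeding) pairs mimicking the study design, not its data
inr  = np.array([1.2, 1.8, 2.1, 2.4, 2.6, 3.0, 3.5, 4.2, 5.0, 2.2, 2.8, 1.5])
bled = np.array([0,   0,   0,   0,   1,   0,   1,   1,   1,   0,   1,   0  ])

def sens_spec(inr, bled, threshold):
    """Sensitivity and specificity of the rule 'flag if INR > threshold'."""
    flagged = inr > threshold
    tp = np.sum(flagged & (bled == 1))     # bleeders correctly flagged
    fn = np.sum(~flagged & (bled == 1))    # bleeders missed
    tn = np.sum(~flagged & (bled == 0))    # non-bleeders correctly passed
    fp = np.sum(flagged & (bled == 0))     # non-bleeders falsely flagged
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec(inr, bled, 2.5)
```

Sweeping the threshold over all observed INR values and plotting sensitivity against (1 - specificity) yields the ROC curve from which the optimal cut-off is read.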

Keywords: trigger tool, warfarin, risk of bleeding, medical wards

Procedia PDF Downloads 136
3176 Robust Model Predictive Controller for Uncertain Nonlinear Wheeled Inverted Pendulum Systems: A Tube-Based Approach

Authors: Tran Gia Khanh, Dao Phuong Nam, Do Trong Tan, Nguyen Van Huong, Mai Xuan Sinh

Abstract:

This work addresses the design of a tube-based robust model predictive controller for a class of continuous-time systems in the presence of input disturbances. The main objective is to ensure that the state trajectory of the closed-loop system is maintained inside a sequence of tubes. An estimate of the region of attraction of the closed-loop system is obtained based on input-to-state stability (ISS) theory and a linearized model in each time interval. The theoretical analysis and simulation results demonstrate the performance of the proposed algorithm on a wheeled inverted pendulum system.
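The tube idea can be illustrated on a scalar toy system: a nominal (disturbance-free) trajectory is steered by a nominal control law, while an ancillary feedback on the deviation keeps the true, disturbed state inside a bounded tube around it. In this sketch the online MPC optimization is replaced by a fixed proportional nominal law for brevity, and the system, gains, and disturbance bound are all illustrative, not the paper's wheeled inverted pendulum model.

```python
import numpy as np

rng = np.random.default_rng(3)

# scalar system x+ = a*x + u + w with bounded disturbance |w| <= w_max
a, w_max, K = 0.9, 0.1, -0.5      # K makes the error dynamics (a + K) = 0.4 contractive
tube = w_max / (1 - (a + K))      # robust invariant bound on the error e = x - z

x = z = 5.0                        # true state x and nominal state z
for _ in range(60):
    v = -0.5 * z                   # nominal control (stand-in for the MPC solution)
    u = v + K * (x - z)            # ancillary feedback: keep x in the tube around z
    w = rng.uniform(-w_max, w_max)
    x = a * x + u + w              # disturbed true dynamics
    z = a * z + v                  # disturbance-free nominal dynamics
err = abs(x - z)
```

Because e+ = (a + K)e + w with |a + K| < 1, the deviation never exceeds w_max / (1 - |a + K|); constraint tightening by this tube radius is what lets the nominal MPC guarantee constraints for the true system.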

Keywords: input-to-state stability (ISS), tube-based robust MPC, continuous-time nonlinear systems, wheeled inverted pendulum

Procedia PDF Downloads 208
3175 Initial Palaeotsunami and Historical Tsunami in the Makran Subduction Zone of the Northwest Indian Ocean

Authors: Mohammad Mokhtari, Mehdi Masoodi, Parvaneh Faridi

Abstract:

The history of tsunami-generating earthquakes along the Makran Subduction Zone (MSZ) provides evidence of the potential tsunami hazard for the whole coastal area. In comparison with other subduction zones in the world, the Makran region of southern Pakistan and southeastern Iran shows low seismicity, and it remains one of the least studied areas of the northwest Indian Ocean with regard to tsunami research. We present a review of studies dealing with historical tsunamis and ongoing palaeotsunami work, supported by the IGCP of UNESCO, in the Makran Subduction Zone. The historical record presented here includes about nine tsunamis in the Makran Subduction Zone, of which more than seven occurred in the eastern Makran. Tsunamis are not as common in the western Makran as in the eastern Makran, for which a database of historical events exists. The best documented historical event is the 1945 earthquake, with a moment magnitude of 8.1, and its tsunami in the western and eastern Makran. There are no details as to whether a tsunami was generated by a seismic event before 1945 off the western Makran, but several potentially large tsunamigenic events occurred in the MSZ before 1945, in 325 B.C., 1008, 1483, 1524, 1765, 1851, 1864, and 1897. Here we present new findings from a historical point of view; we would like to emphasize at the outset that the area needs greater research investigation. A palaeotsunami study (of geological evidence) is now being planned, and here we present the results of its first phase. From a risk point of view, the study shows, as a preliminary result, that within 20 minutes the waves reach the Iranian, Pakistani, and Omani coastal zones as highly destructive tsunami waves capable of widespread inundation. It is important to note that the coastal areas of all states surrounding the MSZ are being developed very rapidly, so any event would have a devastating effect on the region. Although several papers on modelling, seismology, and tsunami deposits have been published in recent decades, Makran remains a forgotten subduction zone, and more data, such as the main crustal structure, fault locations, and their related parameters, are required.

Keywords: historical tsunami, Indian ocean, makran subduction zone, palaeotsunami

Procedia PDF Downloads 120