Search results for: arrival time prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19837

18517 Improving Part-Time Instructors’ Academic Outcomes with Gamification

Authors: Jared R. Chapman

Abstract:

This study introduces a type of motivational information system called an educational engagement information system (EEIS). An EEIS draws on principles of behavioral economics, motivation theory, and learning cognition theory to design information systems that help students want to improve their performance. This study compares academic outcomes for course sections taught by part- and full-time instructors both with and without an EEIS. Without an EEIS, students in the part-time instructor's course sections demonstrated significantly higher failure rates (a 143.8% increase) and dropout rates (a 110.4% increase), with significantly fewer students scoring a B- or higher (a 39.8% decrease), when compared to students in the course sections taught by a full-time instructor. It is concerning that students in the part-time instructor's course without an EEIS had significantly lower academic outcomes, suggesting less understanding of the course content. This could impact retention and continuation in a major. With an EEIS, there was no significant difference between part- and full-time instructors in failure and dropout rates or in the number of students scoring a B- or higher in the course. In fact, with an EEIS, the failure and dropout rates were statistically identical for part- and full-time instructors' courses. When using an EEIS (compared with not using an EEIS), the part-time instructor showed a 62.1% decrease in failures, a 61.4% decrease in dropouts, and a 41.7% increase in the number of students scoring a B- or higher in the course. We are unaware of other interventions that yield such large improvements in academic performance. This suggests that using an EEIS such as Delphinium may compensate for part-time instructors' limitations of expertise, time, or rewards, which can have a negative impact on students' academic outcomes. The EEIS had only a minimal impact on failure rates (a 7.7% decrease) and dropout rates (an 18.8% decrease) for the full-time instructor. This suggests there is a ceiling effect on the improvements that an EEIS can make in student performance. This may be because experienced instructors are already doing the kinds of things that an EEIS does, such as motivating students, tracking grades, and providing feedback about progress. Additionally, full-time instructors have more time to dedicate to students outside of class than part-time instructors, and more rewards for doing so. Using adjunct and other types of part-time instructors will likely remain a prevalent practice in higher education management courses. Given that using part-time instructors can have a negative impact on student graduation and persistence in a field of study, it is important to identify ways to augment part-time instructors' performance. We demonstrated that when part-time instructors use an EEIS, it can significantly lower their students' failure and dropout rates, increase the rate of students earning a B- or above, and bring their students' performance to parity with that of students taught by a full-time instructor.

Keywords: gamification, engagement, motivation, academic outcomes

Procedia PDF Downloads 69
18516 Performance Comparison of Space-Time Block and Trellis Codes under Rayleigh Channels

Authors: Jing Qingfeng, Wu Jiajia

Abstract:

Due to crowded orbits and the shortage of frequency resources, utilizing MIMO technology to improve spectrum efficiency and increase capacity has become a necessary trend in broadband satellite communication. We analyze the main influencing factors and compare the BER performance of the space-time block code (STBC) scheme and the space-time trellis code (STTC) scheme. This paper focuses on the bit error rate (BER) performance of STTC and STBC under the Rayleigh channel. The main emphasis is placed on the effects of factors such as terminal environment and elevation angle on the BER performance of the STBC and STTC schemes. Simulation results indicate that the performance of STTC under the Rayleigh channel improves markedly as the numbers of transmitting and receiving antennas increase, but the encoder state has little impact on performance. Under the Rayleigh channel, the performance of the Alamouti code is better than that of STTC.

Keywords: MIMO, space time block code (STBC), space time trellis code (STTC), Rayleigh channel

Procedia PDF Downloads 348
18515 A Discourse Analysis of Syrian Refugee Representations in Canadian News Media

Authors: Pamela Aimee Rigor

Abstract:

This study aims to examine the representation of Syrian refugees resettled in Vancouver and the Lower Mainland in local community and major newspapers. While there is strong support for immigration in Canada, public opinion towards refugees and asylum seekers is a bit more varied. Concerns about the legitimacy of refugee claims are among the common concerns of Canadians, and hateful or negative narratives are still present in Canadian media discourse, which affects how people view refugees. To counter these narratives, Syrian refugees must publicly declare how grateful they are to have been resettled in Canada. The dominant media discourse is that these refugees should be grateful as they have been graciously accepted by Canada and Canadians, once again upholding the image of Canada as a generous and humanitarian nation. The study examined the representation of Syrian refugees and Syrian refugee resettlement in Canadian newspapers from September 2015 to October 2017 – around the time Prime Minister Trudeau came into power up until the present. Using a combination of content and discourse analysis, it aimed to uncover how local community and major newspapers in Vancouver covered the Syrian refugee ‘crisis’ – more particularly, the arrival and resettlement of the refugees in the country. Using the qualitative data analysis software NVivo 12, the newspapers were analyzed and sorted into themes. Based on the initial findings, the discourse of Canada being a humanitarian country and Canadians being generous, as well as the idea of Syrian refugees having to publicly announce how grateful they are, is still present in the local community newspapers. This seems to be done to counter the hateful narratives of citizens who might view them as people who are abusing help provided by the community or the services provided by the government. However, compared to the major and national newspapers in Canada, many of these local community newspapers are very inclusive of Syrian refugee voices. Most of the news and community articles interview Syrian refugees and ask them their personal stories of plight, survival, resettlement and starting a ‘new life’ in Canada. They are not seen as potential threats, nor are they dismissed – the refugees were named and were allowed to share their personal experiences in these news articles. These community newspapers, even though their representations are far from perfect, actually address some aspects of the refugee resettlement issue and respond to their community’s needs. There are quite a number of news articles that announce community meetings and orientations about the Syrian refugee crisis, ways to help in the resettlement process, as well as community fundraising activities to help sponsor refugees or resettle newly arrived refugees. This study aims to promote awareness of how these individuals are socially constructed so we can, in turn, be aware of the biases and stereotypes present and their implications for refugee laws and public response to the issue.

Keywords: forced migration and conflict, media representations, race and multiculturalism, refugee studies

Procedia PDF Downloads 250
18514 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, there is a tremendous amount of real-time spatial data generated every day. The growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period, regardless of the load on the system. But with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data: they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. First, we find the optimal attribute sequence through the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the amount of data in each partition balanced and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
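
A minimal sketch of the vertical-partitioning idea described above, assuming a toy attribute-usage matrix and a greedy Hamming-distance ordering; the paper's actual Matching algorithm and cost model are not reproduced here.

```python
# Illustrative sketch (not the authors' VPA-RTSBD code): ordering attributes
# for vertical partitioning by Hamming distance between query-usage vectors.
# The attribute names and the greedy ordering heuristic are assumptions.

# usage[attr] = binary vector: 1 if frequent query q reads this attribute
usage = {
    "obj_id":    [1, 1, 1, 1],
    "x_coord":   [1, 1, 0, 0],
    "y_coord":   [1, 1, 0, 0],
    "timestamp": [1, 0, 1, 0],
    "payload":   [0, 0, 1, 1],
}

def hamming(u, v):
    """Number of queries on which two attributes' usage differs."""
    return sum(a != b for a, b in zip(u, v))

# Greedy chain: repeatedly append the closest unused attribute,
# approximating an optimal attribute sequence.
attrs = list(usage)
order = [attrs.pop(0)]
while attrs:
    nxt = min(attrs, key=lambda a: hamming(usage[order[-1]], usage[a]))
    attrs.remove(nxt)
    order.append(nxt)

# Cut the sequence into two balanced vertical partitions.
mid = len(order) // 2
print("partition 1:", order[:mid])
print("partition 2:", order[mid:])
```

Attributes that tend to be read by the same queries end up adjacent, so each partition serves its frequent queries without touching the other fragment.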

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 157
18513 Field Prognostic Factors on Discharge Prediction of Traumatic Brain Injuries

Authors: Mohammad Javad Behzadnia, Amir Bahador Boroumand

Abstract:

Introduction: Situations with limited facilities require allocating the available resources to the greatest number of casualties. Accordingly, traumatic brain injury (TBI) is one condition that may require transporting the patient as soon as possible. In a mass casualty event, such decisions are hard to make when facilities are restricted. The Extended Glasgow Outcome Score (GOSE) has been introduced to assess the global outcome after brain injuries. Therefore, we aimed to evaluate the prognostic factors associated with GOSE. Materials and Methods: A multicenter cross-sectional study was conducted on 144 patients with TBI admitted to trauma emergency centers. All patients with isolated TBI who were mentally and physically healthy before the trauma entered the study. The patients’ information was evaluated, including demographic characteristics, duration of hospital stay, mechanical ventilation, on-admission laboratory measurements, and on-admission vital signs. We recorded the patients’ TBI-related symptoms and brain computed tomography (CT) scan findings. Results: GOSE assessments showed an increasing trend across the on-discharge (7.47 ± 1.30), within-a-month (7.51 ± 1.30), and within-three-months (7.58 ± 1.21) evaluations (P < 0.001). On discharge, GOSE was positively correlated with the Glasgow Coma Scale (GCS) (r = 0.729, P < 0.001) and motor GCS (r = 0.812, P < 0.001), and inversely with age (r = −0.261, P = 0.002), hospitalization period (r = −0.678, P < 0.001), pulse rate (r = −0.256, P = 0.002) and white blood cell (WBC) count. Among the imaging signs and trauma-related symptoms entered from the univariate analysis, intracranial hemorrhage (ICH), intraventricular hemorrhage (IVH) (P = 0.006), subarachnoid hemorrhage (SAH) (P = 0.06; marginally at P < 0.1), subdural hemorrhage (SDH) (P = 0.032), and epidural hemorrhage (EDH) (P = 0.037) were significantly associated with GOSE at discharge in the multivariable analysis. Conclusion: Our study identified some predictive factors that could help decide which casualty should be transported earlier to a trauma center. According to the current study findings, GCS, pulse rate, WBC, and, among imaging signs and trauma-related symptoms, ICH, IVH, SAH, SDH, and EDH are significant independent predictors of GOSE at discharge in TBI patients.

Keywords: field, Glasgow outcome score, prediction, traumatic brain injury

Procedia PDF Downloads 75
18512 Applying a Noise Reduction Method to Reveal Chaos in the River Flow Time Series

Authors: Mohammad H. Fattahi

Abstract:

Chaotic analysis has been performed on river flow time series before and after applying wavelet-based de-noising techniques in order to investigate the effects of noise content on the chaotic nature of the flow series. In this study, 38 years of monthly runoff data from three gauging stations were used. The gauging stations were located in the Ghar-e-Aghaj river basin, Fars province, Iran. The noise level of the time series was estimated with the aid of the Gaussian kernel algorithm. This step was found to be crucial in preventing the removal, along with the noise, of vital information such as memory, correlation and trend from the time series during the de-noising process.
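
A minimal wavelet de-noising sketch on a synthetic monthly-runoff-like series, assuming PyWavelets, a db4 wavelet and the standard universal threshold; the Gaussian kernel noise estimation used in the paper is not shown.

```python
# Hedged sketch of wavelet-based de-noising; wavelet, level and threshold
# rule are assumptions, and the input series is synthetic.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 38, 456)                  # 38 years of monthly values
flow = 50 + 20 * np.sin(2 * np.pi * t) + rng.normal(0, 5, t.size)

coeffs = pywt.wavedec(flow, "db4", level=4)  # multilevel decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale estimate
thr = sigma * np.sqrt(2 * np.log(flow.size))     # universal threshold
# Soft-threshold detail coefficients only, preserving the approximation
# (the trend/memory the abstract warns against removing).
denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                          for c in coeffs[1:]]
clean = pywt.waverec(denoised, "db4")
print(clean.shape)
```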

Keywords: chaotic behavior, wavelet, noise reduction, river flow

Procedia PDF Downloads 468
18511 An Approach to Secure Mobile Agent Communication in Multi-Agent Systems

Authors: Olumide Simeon Ogunnusi, Shukor Abd Razak, Michael Kolade Adu

Abstract:

Inter-agent communication managers facilitate communication among mobile agents via a message passing mechanism. Until now, all Foundation for Intelligent Physical Agents (FIPA) compliant agent systems have been capable of exchanging messages following the standard format of sending and receiving messages. Previous works tend to secure messages to be exchanged among a community of collaborative agents commissioned to perform specific tasks using cryptosystems. However, that approach is characterized by computational complexity due to the encryption and decryption processes required at the two ends. The proposed approach to secure agent communication allows only agents that are created by the host agent server to communicate via the agent communication channel provided by the host agent platform. These agents are assumed to be harmless. Therefore, to secure communication of legitimate agents from intrusion by external agents, a two-phase policy enforcement system was developed. The first phase constrains the external agent to run only on the network server, while the second phase confines the activities of the external agent to its execution environment. To implement the proposed policy, a controller agent was charged with the task of screening any external agent entering the local area network and preventing it from migrating to the agent execution host where the legitimate agents are running. On arrival of the external agent at the host network server, an introspector agent was charged to monitor and restrain its activities. This approach secures legitimate agent communication from Man-in-the-Middle and replay attacks.

Keywords: agent communication, introspective agent, isolation of agent, policy enforcement system

Procedia PDF Downloads 297
18510 Performance Evaluation and Comparison between the Empirical Mode Decomposition, Wavelet Analysis, and Singular Spectrum Analysis Applied to the Time Series Analysis in Atmospheric Science

Authors: Olivier Delage, Hassan Bencherif, Alain Bourdier

Abstract:

Signal decomposition approaches represent an important step in time series analysis, providing useful knowledge and insight into the data and the underlying dynamics while also facilitating tasks such as noise removal and feature extraction. As most observational time series are nonlinear and nonstationary, resulting from the interaction of several physical processes at different time scales, experimental time series exhibit fluctuations at all time scales and require the development of specific signal decomposition techniques. The most commonly used techniques are data driven, enabling well-behaved signal components to be obtained without making any prior assumptions on the input data. Among the most popular time series decomposition techniques, most cited in the literature, are the empirical mode decomposition and its variants, the empirical wavelet transform, and singular spectrum analysis. With the increasing popularity and utility of these methods in wide-ranging applications, it is imperative to gain a good understanding of and insight into the operation of these algorithms. In this work, we describe all of the techniques mentioned above as well as their ability to denoise signals, to capture trends, to identify components corresponding to the physical processes involved in the evolution of the observed system, and to deduce the dimensionality of the underlying dynamics. Results obtained with all of these methods on experimental total ozone column and rainfall time series will be discussed and compared.
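
As an illustration of one of the techniques compared above, here is a compact singular spectrum analysis (SSA) sketch in plain NumPy; the window length, component count and synthetic input are arbitrary assumptions.

```python
# Minimal SSA: embed the series into a trajectory matrix, decompose with
# SVD, and diagonal-average leading components back into series.
import numpy as np

def ssa(x, window, n_components):
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: lagged copies of the series as columns.
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(n_components):
        Xi = s[i] * np.outer(U[:, i], Vt[i])
        # Hankelization: average anti-diagonals back into a 1-D series.
        comp = np.array([np.mean(Xi[::-1].diagonal(d - window + 1))
                         for d in range(n)])
        comps.append(comp)
    return np.array(comps)

t = np.linspace(0, 10, 500)
series = np.sin(2 * np.pi * t) + 0.3 * np.random.randn(500)
components = ssa(series, window=60, n_components=3)
print(components.shape)   # (3, 500): leading trend/oscillation components
```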

Keywords: denoising, empirical mode decomposition, singular spectrum analysis, time series, underlying dynamics, wavelet analysis

Procedia PDF Downloads 116
18509 Optimization of Machining Parameters of Wire Electric Discharge Machining (WEDM) of Inconel 625 Super Alloy

Authors: Amitesh Goswami, Vishal Gulati, Annu Yadav

Abstract:

In this paper, WEDM has been used to investigate the machining characteristics of Inconel-625 alloy. The machining characteristics, namely material removal rate (MRR) and surface roughness (SR), have been investigated along with surface microstructure analysis using SEM and EDS of the machined surface. Taguchi’s L27 orthogonal array design has been used with six varying input parameters, viz. pulse-on time (Ton), pulse-off time (Toff), spark gap set voltage (SV), peak current (IP), wire feed (WF) and wire tension (WT), for the responses of interest. It has been found that pulse-on time (Ton) and spark gap set voltage (SV) are the most significant parameters affecting material removal rate (MRR) and surface roughness (SR). Microstructure analysis of the workpiece was also done using a Scanning Electron Microscope (SEM). It was observed that variations in pulse-on time and pulse-off time cause varying discharge energy, as a result of which deep craters/micro-cracks and varying amounts of debris were formed. These results were helpful in studying the effects of pulse-on time and pulse-off time on MRR and SR. Energy Dispersive Spectrometry (EDS) was also performed for compositional analysis of the material, and it was observed that copper and zinc, which were initially not present in the Inconel 625, migrated onto the material surface from the brass wire electrode during machining.
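
A hedged illustration of the Taguchi analysis step implied above: signal-to-noise ratios computed with the standard larger-the-better (for MRR) and smaller-the-better (for SR) formulas, on toy replicate data rather than the paper's measurements.

```python
# S/N ratios for one hypothetical L27 trial; replicate values are made up.
import numpy as np

mrr_replicates = np.array([4.1, 4.3, 3.9])   # material removal rate readings
sr_replicates = np.array([2.2, 2.4, 2.1])    # surface roughness readings

def sn_larger_is_better(y):
    # Used when a response such as MRR should be maximized.
    return -10 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_is_better(y):
    # Used when a response such as SR should be minimized.
    return -10 * np.log10(np.mean(y**2))

print("S/N (MRR):", round(sn_larger_is_better(mrr_replicates), 2))
print("S/N (SR): ", round(sn_smaller_is_better(sr_replicates), 2))
```

Averaging these S/N values per factor level is what identifies Ton and SV as the dominant parameters in such designs.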

Keywords: MRR, SEM, SR, Taguchi, wire electric discharge machining

Procedia PDF Downloads 353
18508 Investigation of the Fading Time Effects on Microstructure and Mechanical Properties in Vermicular Cast Iron

Authors: Mehmet Ekici

Abstract:

In this study, the effect of fading time on the mechanical properties and microstructure of vermicular cast iron was studied. Pig iron and steel scrap weighing about 12 kg were charged into the high-frequency induction furnace crucible and completely melted for production of vermicular cast iron. The slag was skimmed using a common flux. Fading times were set at 1, 3 and 5 minutes. In this way, three vermicular cast irons were produced with the same composition but different phase structures. The microstructure of the specimens was investigated, a uni-axial tensile test and the Charpy impact test were performed, and micro-hardness measurements were taken in order to characterize the mechanical behaviour of the vermicular cast irons.

Keywords: vermicular cast iron, fading time, hardness, tensile test and impact test

Procedia PDF Downloads 348
18507 Case-Based Reasoning for Build Order in Real-Time Strategy Games

Authors: Ben G. Weber, Michael Mateas

Abstract:

We present a case-based reasoning technique for selecting build orders in a real-time strategy game. The case retrieval process generalizes features of the game state and selects cases using domain-specific recall methods, which perform exact matching on a subset of the case features. We demonstrate the performance of the technique by implementing it as a component of the integrated agent framework of McCoy and Mateas. Our results demonstrate that the technique outperforms nearest-neighbor retrieval when imperfect information is enforced in a real-time strategy game.
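
A minimal sketch of the retrieval idea: generalize the game state into discrete features and recall cases by exact matching on a feature subset. The feature names and case library below are hypothetical, not the authors' dataset.

```python
# Toy case library: each case maps a generalized game state to a build order.
cases = [
    {"enemy_race": "protoss", "own_tier": 1, "scouted_units": "zealots",
     "build_order": "marine-rush"},
    {"enemy_race": "zerg", "own_tier": 1, "scouted_units": "zerglings",
     "build_order": "bunker-defense"},
    {"enemy_race": "protoss", "own_tier": 2, "scouted_units": "dragoons",
     "build_order": "tank-push"},
]

def retrieve(state, recall_features):
    """Domain-specific recall: exact matching on a subset of case features."""
    return [c for c in cases
            if all(c[f] == state[f] for f in recall_features)]

state = {"enemy_race": "protoss", "own_tier": 1, "scouted_units": "zealots"}
matches = retrieve(state, recall_features=("enemy_race", "own_tier"))
print([c["build_order"] for c in matches])   # candidate build orders
```

Restricting the match to a recall subset is what lets retrieval succeed under imperfect information, where distance-based nearest-neighbor retrieval over all features degrades.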

Keywords: case based reasoning, real time strategy systems, requirements elicitation, requirement analyst, artificial intelligence

Procedia PDF Downloads 441
18506 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history seismic analysis is considered the most accurate method to predict the seismic demand of structures. On the other hand, its main deficiency is the computational time required to achieve a result. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the required computational time of seismic analysis makes the optimization algorithms more practical. The approximate methods inevitably produce some amount of error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to the estimation of the seismic demand of a main structure. The seismic demand of the sampled structure is estimated from the modal displacements of a basic structure for which the modal displacements have already been calculated. Shear steel structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by application of three types of earthquakes (distinguished by the time of peak ground acceleration).
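
For context, the SRSS modal-combination step mentioned above reduces to a one-line operation on peak modal responses; the numbers below are toy values, not the paper's case-study results.

```python
# SRSS (square root of the sum of squares) combination of peak modal
# displacements; illustrative values only.
import numpy as np

peak_modal_displacements = np.array([0.042, 0.017, 0.006])  # m, modes 1-3
srss_estimate = np.sqrt(np.sum(peak_modal_displacements**2))
print(f"SRSS demand estimate: {srss_estimate:.4f} m")
```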

Keywords: time history dynamic analysis, basic modal displacement, earthquake-induced demands, shear steel structures

Procedia PDF Downloads 355
18505 Solution Approaches for Some Scheduling Problems with Learning Effect and Job Dependent Delivery Times

Authors: M. Duran Toksari, Berrin Ucarkus

Abstract:

In this paper, we propose two algorithms to optimally solve makespan and total completion time scheduling problems with learning effect and job-dependent delivery times in a single machine environment. The delivery time is the extra time required between the completion of main processing and delivery to the customer. We introduce job-dependent delivery times for single machine scheduling problems with a position-dependent learning effect, for the makespan and total completion time objectives. The results of the two algorithms proposed for solving each problem are compared with LINGO solutions for 50-job, 100-job and 150-job problems. The proposed algorithms find the same results in shorter time.
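
A small sketch of the objective such algorithms optimize, assuming a standard position-based learning effect p_j * r^a and job-dependent delivery times; the paper's actual polynomial-time algorithms are not reproduced, brute force stands in for them on a toy instance.

```python
# Makespan with learning effect and delivery times:
# C_max = max_j (completion_j + q_j); data and exponent are toy values.
from itertools import permutations

def makespan(sequence, p, q, a=-0.3):   # a < 0: later positions run faster
    t, cmax = 0.0, 0.0
    for r, j in enumerate(sequence, start=1):
        t += p[j] * r**a              # actual processing time at position r
        cmax = max(cmax, t + q[j])    # completion plus delivery time
    return cmax

p = {0: 5.0, 1: 3.0, 2: 8.0}          # base processing times
q = {0: 2.0, 1: 6.0, 2: 1.0}          # job-dependent delivery times

best = min(permutations(p), key=lambda s: makespan(s, p, q))
print(best, round(makespan(best, p, q), 2))
```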

Keywords: delivery times, learning effect, makespan, scheduling, total completion time

Procedia PDF Downloads 469
18504 IT-Aided Business Process Enabling Real-Time Analysis of Candidates for Clinical Trials

Authors: Matthieu-P. Schapranow

Abstract:

Recruitment of participants for clinical trials requires the screening of a large number of potential candidates, i.e. testing them against trial-specific inclusion and exclusion criteria, which is a time-consuming and complex task. Today, a significant amount of time is spent on the identification of adequate trial participants, as their selection may affect the overall study results. We introduce a unique patient eligibility metric, which allows systematic ranking and classification of candidates based on trial-specific filter criteria. Our web application enables real-time analysis of patient data and assessment of candidates using freely definable inclusion and exclusion criteria. As a result, the overall time required for identifying eligible candidates is tremendously reduced, whilst additional degrees of freedom for evaluating the relevance of individual candidates are introduced by our contribution.
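
A hedged sketch of what such an eligibility metric could look like: inclusion criteria contribute to a ranking score while exclusion criteria act as hard knock-outs. All criteria and patient fields below are hypothetical, not the paper's metric.

```python
# Score each candidate by the fraction of inclusion criteria met,
# with any exclusion criterion forcing a score of zero.
def eligibility(patient, inclusion, exclusion):
    if any(rule(patient) for rule in exclusion):
        return 0.0                       # excluded outright
    met = sum(rule(patient) for rule in inclusion)
    return met / len(inclusion)          # 0..1 ranking score

inclusion = [
    lambda p: 18 <= p["age"] <= 75,
    lambda p: p["hba1c"] >= 6.5,
]
exclusion = [
    lambda p: p["pregnant"],
]

patients = [
    {"id": "A", "age": 54, "hba1c": 7.1, "pregnant": False},
    {"id": "B", "age": 80, "hba1c": 6.9, "pregnant": False},
    {"id": "C", "age": 33, "hba1c": 7.8, "pregnant": True},
]
ranked = sorted(patients, key=lambda p: -eligibility(p, inclusion, exclusion))
print([(p["id"], eligibility(p, inclusion, exclusion)) for p in ranked])
```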

Keywords: in-memory technology, clinical trials, screening, eligibility metric, data analysis, clustering

Procedia PDF Downloads 493
18503 In vivo Antiplatelet Activity Test of Wet Extract of Mimusops elengi L.'s Leaves on DDY Strain Mice as an Effort to Treat Atherosclerosis

Authors: Dewi Tristantini, Jason Jonathan

Abstract:

Coronary Artery Disease (CAD) is one of the deadliest diseases and is caused by atherosclerosis, a disease in which plaque builds up inside the arteries. Plaque is made up of fat, cholesterol, calcium, platelets, and other substances found in blood. The current treatment of atherosclerosis is antiplatelet therapy, but such treatments often cause gastrointestinal irritation, muscle pain and hormonal imbalance. Mimusops elengi L.’s leaves can be utilized as a natural and cheap antiplatelet source because they contain flavonoids such as quercetin. The antiplatelet aggregation effect of the wet extract of Mimusops elengi L.’s leaves was measured by bleeding time in DDY strain mice, with the test substances given orally over a period of 8 days. The bleeding time was measured on the first and ninth days. Empirically, the dose used for humans is 8.5 g of leaves in 600 ml of water. This dose is equivalent to 2.1 g of leaves in 350 ml of water for mice. The extract was divided into 3 doses for mice: 0.05 ml/day, 0.1 ml/day and 0.2 ml/day. After obtaining the percentage increase in bleeding time, data were analyzed by the analysis of variance test (ANOVA), followed by individual comparison within the groups by the LSD test. The test substances above increased bleeding time by 21%, 62%, and 128%, respectively. In conclusion, the 0.2 ml/day dose of Mimusops elengi L.’s leaves’ wet extract increased bleeding time more than clopidogrel as the positive control, which gave a 110% increase in bleeding time.
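
A brief sketch of the stated ANOVA step using SciPy, on illustrative percentage increases in bleeding time rather than the study's raw data.

```python
# One-way ANOVA across the three dose groups; values are made up.
from scipy import stats

low_dose = [18, 24, 21, 20, 22]        # % increase, 0.05 ml/day group
mid_dose = [58, 65, 60, 63, 64]        # 0.1 ml/day group
high_dose = [125, 130, 127, 131, 126]  # 0.2 ml/day group

f_stat, p_value = stats.f_oneway(low_dose, mid_dose, high_dose)
print(f"F = {f_stat:.1f}, p = {p_value:.4g}")  # then pairwise LSD tests
```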

Keywords: antiplatelets, atherosclerosis, bleeding time, Mimusops elengi

Procedia PDF Downloads 264
18502 Investigating the Factors Affecting on One Time Passwords Technology Acceptance: A Case Study in Banking Environment

Authors: Sajad Shokohuyar, Mahsa Zomorrodi Anbaji, Saghar Pouyan Shad

Abstract:

With fast technology growth, modern banking tries to reduce customers’ visits to branch offices and increase customer satisfaction. One of the problems banks face is securing customers’ passwords; the banks’ solution is a one-time password creation system. In this research, adapting the technology acceptance model, we assess the factors that affect banking in Iran, especially the use of one-time password devices, for the customers of one of Iran’s private banks. The statistical population is all of this bank’s customers who use electronic banking services and one-time password technology, and the questionnaires were distributed among members of the statistical population in 5 selected groups in the north, south, center, east and west of Tehran. Findings show that confidentiality preservation, education, ease of use, and advertising and informing have positive relations, while distinct hardware and age have negative relations.

Keywords: security, electronic banking, one time password, information technology

Procedia PDF Downloads 453
18501 Asymmetric of the Segregation-Enhanced Brazil Nut Effect

Authors: Panupat Chaiworn, Soraya Lama

Abstract:

We study the motion of particles in cylinders subjected to a sinusoidal vertical vibration. We measure the rising time of a large intruder from the bottom of the container to the free surface of the bed particles and find that the rising time as a function of intruder density increases to a maximum and then decreases monotonically. The result is qualitatively in accord with previous experimental findings that used the relative humidity of the bed particles, in which the convection of the bed particles in the container was found to move slowly and a minimal instead of a maximal rising time of the intruder was found in the small-density region. Our experimental results suggest that the topology of the container plays an important role in the Brazil nut effect.

Keywords: granular particles, Brazil nut effect, cylinder container, vertical vibration, convection

Procedia PDF Downloads 528
18500 Feature Based Unsupervised Intrusion Detection

Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein

Abstract:

The goal of a network-based intrusion detection system is to classify activities of network traffic into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps in improving the efficiency, performance and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we have used the new NSL-KDD dataset, which is a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification has been implemented (normal, attack). The Weka framework, a Java-based open-source software consisting of a collection of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time in comparison with using the full features of the dataset with the same algorithm.
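
A condensed sketch of the described pipeline on synthetic data, using scikit-learn's mutual information as the information-gain proxy; NSL-KDD loading, preprocessing and the Weka workflow are omitted.

```python
# Rank features by information gain, keep the top ones, then cluster with
# k-means (k=2: normal vs. attack). Feature counts mimic KDD's 41 features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=1000, n_features=41, n_informative=8,
                           random_state=42)
ig = mutual_info_classif(X, y, random_state=42)  # information gain proxy
top = np.argsort(ig)[::-1][:10]                  # keep 10 best features

labels = KMeans(n_clusters=2, n_init=10,
                random_state=42).fit_predict(X[:, top])
# Map each cluster to its majority class before scoring agreement.
acc = max(np.mean(labels == y), np.mean(labels != y))
print(f"cluster/label agreement: {acc:.3f}")
```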

Keywords: information gain (IG), intrusion detection system (IDS), k-means clustering, Weka

Procedia PDF Downloads 296
18499 Agegraphic Dark Energy with GUP

Authors: H. R. Fazlollahi

Abstract:

The origin of dark energy is unknown, and describing this mysterious component in large scale structure requires manipulating our theories of general relativity. Although in most models dark energy arises from extra terms introduced by modifying the Einstein-Hilbert action, its origin may trace back to fundamental aspects of the ground energy of space-time given in quantum mechanics. Hence, diluting space-time in general relativity with quantum mechanical properties leads to the Karolyhazy relation and the corresponding energy density of quantum fluctuations of space-time. Through the generalized uncertainty principle, and with an eye to the Karolyhazy approach, in this study we extend the energy density of quantum fluctuations of space-time. The application of this idea is also considered in late time evolution, and we show how the extra term in the generalized uncertainty principle plays the role of a plausible interaction term in the suggested model.
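
For reference, the standard literature forms this abstract builds on (the paper's extended, GUP-corrected density is not reproduced here): the Karolyhazy relation, the induced energy density of space-time quantum fluctuations as used in agegraphic dark energy, and a common one-parameter GUP.

```latex
% Karolyhazy relation and quantum-fluctuation energy density:
\delta t = \beta\, t_p^{2/3}\, t^{1/3}, \qquad
\rho_q \sim \frac{1}{t_p^{2}\, t^{2}} \sim \frac{3 n^{2} m_p^{2}}{t^{2}}
% One-parameter generalized uncertainty principle:
\Delta x\, \Delta p \ \ge\ \frac{\hbar}{2}\left(1 + \beta_{\mathrm{GUP}}\,(\Delta p)^{2}\right)
```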

Keywords: generalized uncertainty principle, Karolyhazy approach, agegraphic dark energy, cosmology

Procedia PDF Downloads 73
18498 LiDAR Based Real Time Multiple Vehicle Detection and Tracking

Authors: Zhongzhen Luo, Saeid Habibi, Martin v. Mohrenschildt

Abstract:

Self-driving vehicles require a high level of situational awareness in order to maneuver safely when driving in real world conditions. This paper presents a LiDAR based real time perception system that is able to process raw sensor data for multiple target detection and tracking in a dynamic environment. The proposed algorithm is nonparametric and deterministic; that is, no assumptions or a priori knowledge are needed about the input data, and no initialization is required. Additionally, the proposed method works directly on the three-dimensional data generated by the LiDAR, without sacrificing the rich information contained in the 3D domain. Moreover, a fast and efficient real time clustering algorithm is applied, based on a radially bounded nearest neighbor (RBNN) search. The Hungarian algorithm and adaptive Kalman filtering are used for data association and tracking. The proposed algorithm is able to run in real time with an average run time of 70 ms per frame.
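
A minimal radially bounded nearest neighbor (RBNN) clustering sketch using a k-d tree; the radius and the toy point cloud are assumptions, and the ground removal, data association and Kalman tracking stages are omitted.

```python
# Flood-fill clustering: two points belong to the same cluster if they are
# connected through a chain of neighbors within a fixed radius.
import numpy as np
from scipy.spatial import cKDTree

def rbnn(points, radius):
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    cluster = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = cluster
        while stack:
            j = stack.pop()
            for k in tree.query_ball_point(points[j], radius):
                if labels[k] == -1:
                    labels[k] = cluster
                    stack.append(k)
        cluster += 1
    return labels

pts = np.vstack([np.random.randn(50, 3) * 0.2,          # one object
                 np.random.randn(60, 3) * 0.2 + 5.0])    # another object
print(np.bincount(rbnn(pts, radius=0.8)))                # points per cluster
```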

Keywords: lidar, segmentation, clustering, tracking

Procedia PDF Downloads 423
18497 Effect of Common Yoga Protocol on Reaction Time of Football Players

Authors: Vikram Singh

Abstract:

The objective of the study was to examine the effectiveness of the common yoga protocol on the reaction time (simple visual reaction time, SVRT, measured in milliseconds/seconds) of male football players in the age group of 15 to 21 years. The 40 boys were randomly assigned to two groups, i.e. control and experimental. SVRT for both groups was measured on day 1, and the post-intervention measurement (the intervention here being the common yoga protocol) was taken after 45 days of training given to the experimental group only. One-way ANOVA (univariate analysis) and an independent t-test using the SPSS 23 statistical package were applied to obtain and analyze the results. There was a significant difference after 45 days of the yoga protocol in the simple visual reaction time of the experimental group (p = .032), t(33.05) = 3.881, p = .000 (two-tailed). The null hypothesis (that there would be no post-measurement differences in the reaction times of the control and experimental groups) was rejected at p < .05; therefore, the alternate hypothesis was accepted.

Keywords: footballers, t-test, yoga protocol, reaction time

Procedia PDF Downloads 253
18496 Bidirectional Dynamic Time Warping Algorithm for the Recognition of Isolated Words Impacted by Transient Noise Pulses

Authors: G. Tamulevičius, A. Serackis, T. Sledevič, D. Navakauskas

Abstract:

We consider one of the biggest challenges in speech recognition – noise reduction. Traditionally, detected transient noise pulses are removed together with the corrupted speech using pulse models. In this paper, we propose to cope with the problem directly in the Dynamic Time Warping domain. A Bidirectional Dynamic Time Warping algorithm for the recognition of isolated words impacted by transient noise pulses is proposed. It uses a simple transient noise pulse detector, employs bidirectional computation of dynamic time warping, and directly manipulates the warping results. Experimental investigation with several alternative solutions confirms the effectiveness of the proposed algorithm in reducing the impact of noise on the recognition process – a 3.9% increase in noisy speech recognition accuracy is achieved.
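
For readers unfamiliar with the domain the paper manipulates, here is the plain dynamic time warping distance in NumPy; the bidirectional variant and the noise pulse detector themselves are not shown.

```python
# Classic DTW distance between two 1-D sequences via dynamic programming.
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Recursion over insertion, deletion, and match moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

ref = np.sin(np.linspace(0, 3, 80))          # reference word template
test = np.sin(np.linspace(0, 3, 95)) + 0.05  # time-stretched noisy utterance
print(round(dtw(ref, test), 3))
```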

Keywords: transient noise pulses, noise reduction, dynamic time warping, speech recognition

Procedia PDF Downloads 559
18495 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples

Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges

Abstract:

Soils are at the crossing of many issues such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, when existing, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but that they depend on the main soil-forming factors, which are climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary co-variates” that come from other available spatial products. Then the model is generalized on grids where soil parameters are unknown in order to predict them, and the prediction performances are validated using various methods. With the growing demand for soil information at a national and global scale and the increase of available spatial co-variates, national and continental DSM initiatives are continuously increasing. This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and the databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products that were delivered during the last ten years. The scientific production on this topic is continuously increasing and new models and approaches are developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and education, especially on the use of uncertainty. Overall, the progress is very important and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues still remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and promises to provide tools to improve and monitor soil quality at the national, EU and global levels.
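
A hedged sketch of the generic DSM workflow the review describes — calibrate a model on point soil observations plus spatial covariates, predict on a grid, validate — with synthetic data standing in for real covariate grids and lab measurements.

```python
# Generic DSM loop on synthetic data; covariate names are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 500
covariates = rng.normal(size=(n, 4))   # e.g. elevation, slope, NDVI, rainfall
soc = (2.0 + 0.8 * covariates[:, 0] - 0.5 * covariates[:, 2]
       + rng.normal(0, 0.3, n))        # soil organic carbon at sample points

X_tr, X_te, y_tr, y_te = train_test_split(covariates, soc, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("validation R^2:", round(r2_score(y_te, model.predict(X_te)), 2))

grid = rng.normal(size=(10_000, 4))    # covariate stack for unsampled cells
prediction_map = model.predict(grid)   # wall-to-wall property grid
```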

Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review

Procedia PDF Downloads 184
18494 Measuring Enterprise Growth: Pitfalls and Implications

Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić

Abstract:

Enterprise growth is generally considered a key driver of competitiveness, employment, economic development and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship for scholars and decision makers. The huge academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal and related to the variety of factors which reflect the individual, firm, organizational, industry or environmental determinants of growth. However, factors that affect growth are not easily captured, instruments to measure those factors are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, there is a vast number of measurement constructs assessing growth which are used interchangeably. Differences among various growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the main purpose of this paper is twofold. Firstly, to compare the structure and performance of three growth prediction models based on the main growth measures: revenue, employment and assets growth. Secondly, to explore the prospects of financial indicators, set as exact, visible, standardized and accessible variables, to serve as determinants of enterprise growth. Finally, to contribute to the understanding of the implications on research results and recommendations for growth caused by different growth measures. The models include a range of financial indicators as lag determinants of the enterprises’ performances during 2008-2013, extracted from the national register of the financial statements of SMEs in Croatia. The design and testing stage of the modeling used logistic regression procedures. Findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between particular predictors and growth measures is inconsistent; namely, the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power for the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants, but are, unlike them, accessible, available, exact and free of perceptual nuances in building up the model. Selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and the implications of the study are relevant for advancing academic debates on growth-related methodology, and can contribute to evidence-based decisions of policy makers.
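
An illustrative sketch of the paper's design on synthetic data: one logistic regression per growth measure over the same financial indicators, showing how predictor sets and even coefficient signs can differ across measures. All data and indicator names are made up.

```python
# One logistic model per growth measure, fit on identical indicators.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000
indicators = rng.normal(size=(n, 6))   # e.g. liquidity, leverage, margins...

# Binary growth outcomes; deliberately driven by different indicators so
# the predictor sets (and signs) differ across growth measures.
revenue_growth = (indicators[:, 0] - 0.5 * indicators[:, 1]
                  + rng.normal(0, 1, n)) > 0
employment_growth = (indicators[:, 2] + 0.5 * indicators[:, 1]
                     + rng.normal(0, 1, n)) > 0
assets_growth = (indicators[:, 3] - indicators[:, 0]
                 + rng.normal(0, 1, n)) > 0

for name, y in [("revenue", revenue_growth),
                ("employment", employment_growth),
                ("assets", assets_growth)]:
    coefs = LogisticRegression().fit(indicators, y).coef_[0]
    print(name, np.round(coefs, 2))    # note sign flips across measures
```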

Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises

Procedia PDF Downloads 252
18493 An Improved Approach to Solve Two-Level Hierarchical Time Minimization Transportation Problem

Authors: Kalpana Dahiya

Abstract:

This paper discusses a two-level hierarchical time minimization transportation problem, which is an important class of transportation problems arising in industries. This problem has been studied by various researchers, and a number of polynomial time iterative algorithms are available to find its solution. All the existing algorithms, though efficient, have some shortcomings. The current study proposes an alternate solution algorithm for the problem that is more efficient in terms of computational time than the existing algorithms. The results justifying the underlying theory of the proposed algorithm are given. Further, a detailed comparison of the computational behaviour of all the algorithms for randomly generated instances of this problem of different sizes validates the efficiency of the proposed algorithm.

Keywords: global optimization, hierarchical optimization, transportation problem, concave minimization

Procedia PDF Downloads 162
18492 Design of Multi-Loop Controller for Minimization of Energy Consumption in the Distillation Column

Authors: Vinayambika S. Bhat, S. Shanmuga Priya, I. Thirunavukkarasu, Shreeranga Bhat

Abstract:

An attempt has been made to design a decoupling controller for systems with multiple inputs and multiple outputs with dead time. The decoupler is designed for a 3×3 chemical process industry plant transfer function with dead time. A Quantitative Feedback Theory (QFT) based controller has also been designed here for the 2×2 distillation column transfer function. The developed control techniques were simulated using MATLAB/Simulink. Also, the stability of the process was analyzed, together with the presence of various perturbations in it. Time domain specifications such as settling time, along with overshoot and oscillations, were analyzed to prove the efficiency of the decoupler method. Load disturbance rejection was tested along with its performance. The QFT control technique was synthesized based on the stability and performance specifications in the presence of uncertainty in the time constant of the plant transfer function, through a sequential loop shaping technique. Further, the energy efficiency of the distillation column was improved by proper tuning of the controller. Distillation accounts for 3% of the total energy consumption of the world, so a suitable control technique is very important from an economic point of view. The real time implementation of the process is in progress in our laboratory.

Keywords: distillation, energy, MIMO process, time delay, robust stability

Procedia PDF Downloads 414
18491 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model

Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini

Abstract:

The importance of the correct allocation of improvement programs has attracted growing interest in recent years. Due to their limited resources, companies must ensure that their financial resources are directed to the correct workstations in order to be most effective and survive the strong competition. However, to the best of our knowledge, the literature about the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources. This is a research gap which is deeply studied in this work. The purpose of this work is to identify the best strategy to allocate improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units on average per month. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair. Lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) time to repair improvement directed towards the two capacity constrained resources, (x) time between failures improvement directed towards the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed and hybrid. Several comparisons of the effects of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results. When it is not possible to make a large investment in the capacity constrained resources, companies should use hybrid approaches. An important contribution to academia is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strongly capacity constrained resources (more than 95% utilization) is an important contribution to the literature. Another important contribution is the allocation problem with two CCRs and the possibility of having floating capacity constrained resources. The results provided the best improvement strategies considering the different strategies for allocating improvement programs and different positions of the capacity constrained resources. Finally, it is possible to state that both the hybrid time to repair improvement and hybrid time between failures improvement strategies delivered better results than the respective distributed strategies. The main limitations of this study concern the flow shop analyzed. Future work can further investigate different flow shop configurations, such as a varying number of workstations, a different number of products, or different positions of the two capacity constrained resources.
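
A hedged back-of-envelope sketch (not the authors' System Dynamics-Factory Physics model) of why MTBF and MTTR improvements shorten lead time at a single station, using the standard Factory Physics approximations for availability, effective process time and the Kingman/VUT queue-time formula; all numbers are illustrative.

```python
# Station lead time under failures, following Factory Physics-style
# approximations (assumed here, not taken from the paper).
def station_lead_time(t0, c0sq, mf, mr, crsq, rate):
    A = mf / (mf + mr)                       # availability
    te = t0 / A                              # effective process time
    cesq = c0sq + (1 + crsq) * A * (1 - A) * mr / t0   # effective SCV
    u = rate * te                            # utilization (must be < 1)
    casq = 1.0                               # Poisson-like arrivals assumed
    wq = ((casq + cesq) / 2) * (u / (1 - u)) * te      # Kingman/VUT
    return wq + te                           # queue time + process time

base = station_lead_time(t0=1.0, c0sq=0.5, mf=200.0, mr=12.0,
                         crsq=1.0, rate=0.9)
halve_mttr = station_lead_time(t0=1.0, c0sq=0.5, mf=200.0, mr=6.0,
                               crsq=1.0, rate=0.9)
double_mtbf = station_lead_time(t0=1.0, c0sq=0.5, mf=400.0, mr=12.0,
                                crsq=1.0, rate=0.9)
print(f"base {base:.1f}  halve MTTR {halve_mttr:.1f}  "
      f"double MTBF {double_mtbf:.1f}")
```

Halving MTTR and doubling MTBF yield the same availability, but the shorter repairs also cut the variability term, which is why where the improvement lands matters as much as how large it is.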

Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures

Procedia PDF Downloads 124
18490 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction

Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan

Abstract:

Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing continually. Medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk, in order to reduce the effects of the disease, is very helpful. The present study aimed to collect data related to risk factors of heart infarction from patients’ medical records and to develop prediction models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. Data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011. Data were collected using a four-sectioned data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, as well as positive and negative predictive values were determined, and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential in facilitating the management of a patient with a specific disease. Therefore, health interventions or changes in life style can be planned based on these models to improve the health conditions of the individuals at risk.
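
A small sketch of the evaluation step named above — accuracy, sensitivity, specificity and predictive values derived from a confusion matrix — on toy predictions, not the study's data.

```python
# Standard binary classification metrics from a 2x2 confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1, 1, 0])   # some model's output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:   ", (tp + tn) / (tp + tn + fp + fn))
print("sensitivity:", tp / (tp + fn))    # true positive rate
print("specificity:", tn / (tn + fp))    # true negative rate
print("PPV:        ", tp / (tp + fp))    # positive predictive value
print("NPV:        ", tn / (tn + fn))    # negative predictive value
```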

Keywords: decision trees, neural network, myocardial infarction, data mining

Procedia PDF Downloads 429
18489 Rescaled Range Analysis of Seismic Time-Series: Example of the Recent Seismic Crisis of Alhoceima

Authors: Marina Benito-Parejo, Raul Perez-Lopez, Miguel Herraiz, Carolina Guardiola-Albert, Cesar Martinez

Abstract:

Persistency, long-term memory and randomness are intrinsic properties of time series of earthquakes. The Rescaled Range Analysis (RS-Analysis) was introduced by Hurst in 1956 and modified by Mandelbrot and Wallis in 1964. This method represents a simple and elegant analysis which determines the range of variation of one natural property (the seismic energy released, in this case) in a time interval. Despite its simplicity, there is complexity inherent in the property measured. The cumulative curve of the energy released in time is the well-known fractal geometry of a devil’s staircase. This geometry is used for determining the maximum and minimum values of the range, which is normalized by the standard deviation. The rescaled range obtained obeys a power law in time, and the exponent is the Hurst value. Depending on this value, time series can be classified as having long-term or short-term memory. Hence, an algorithm has been developed for compiling the RS-Analysis for time series of earthquakes by days. Completeness of the time distribution and local stationarity of the time series are required. The interest of this analysis lies in its application to a complex seismic crisis, where different earthquakes take place in clusters in a short period. Therefore, the Hurst exponent has been obtained for the seismic crisis of Alhoceima (Mediterranean Sea) of January-March 2016, during which at least five medium-sized earthquakes were triggered. According to the values obtained from the Hurst exponent for each cluster, a different mechanical origin can be detected, corroborated by the focal mechanisms calculated by the official institutions. Therefore, this type of analysis not only allows an approach to a greater understanding of a seismic series but also makes it possible to discern different types of seismic origins.
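
A minimal rescaled range (R/S) implementation sketch; the window sizes and the synthetic white-noise input are illustrative, not the Alhoceima catalogue.

```python
# R/S analysis: for each window size, compute the range of the cumulative
# deviations divided by the standard deviation, then fit the power law.
import numpy as np

def hurst_rs(x, windows=(8, 16, 32, 64, 128)):
    rs_vals = []
    for w in windows:
        rs_w = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())   # devil's-staircase-like curve
            r = dev.max() - dev.min()           # range of cumulative deviation
            s = seg.std()
            if s > 0:
                rs_w.append(r / s)              # rescaled range
        rs_vals.append(np.mean(rs_w))
    # Slope of log R/S versus log window length is the Hurst exponent.
    h, _ = np.polyfit(np.log(windows), np.log(rs_vals), 1)
    return h

series = np.random.randn(4096)       # white noise: expect H close to 0.5
print(round(hurst_rs(series), 2))    # H > 0.5 would indicate persistence
```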

Keywords: Alhoceima crisis, earthquake time series, Hurst exponent, rescaled range analysis

Procedia PDF Downloads 321
18488 Neural Networks Underlying the Generation of Neural Sequences in the HVC

Authors: Zeina Bou Diab, Arij Daou

Abstract:

The neural mechanisms of sequential behaviors are intensively studied, with songbirds a focus for learned vocal production. We are studying the premotor nucleus HVC, at a nexus of multiple pathways contributing to song learning and production. The HVC consists of multiple classes of neuronal populations, each with its own cellular, electrophysiological and functional properties. During singing, a large subset of motor-cortex-analog-projecting HVCRA neurons emit a single 6-10 ms burst of spikes at the same time during each rendition of song; a large subset of basal-ganglia-projecting HVCX neurons fire 1 to 4 bursts that are similarly time-locked to vocalizations; while HVCINT neurons fire tonically at high average frequency throughout song, with prominent modulations whose timing in relation to song remains unresolved. This opens the opportunity to define models relating explicit HVC circuitry to how these neurons work cooperatively to control learning and singing. We developed conductance-based Hodgkin-Huxley models for the three classes of HVC neurons (based on the ion channels previously identified from in vitro recordings) and connected them in several physiologically realistic networks (based on the known synaptic connectivity and the specific glutamatergic and GABAergic pharmacology) via different architecture patterning scenarios, with the aim of replicating the in vivo firing behaviors. Through these networks, we are able to reproduce the in vivo behavior of each class of HVC neurons, as shown by the experimental recordings. The different network architectures developed highlight different mechanisms that might contribute to the propagation of sequential neural activity (continuous or punctate) in the HVC and to the distinctive firing patterns that each class exhibits during singing. Examples of such possible mechanisms include: 1) post-inhibitory rebound in HVCX and their population patterns during singing, 2) different subclasses of HVCINT interacting via inhibitory-inhibitory loops, 3) monosynaptic HVCX to HVCRA excitatory connectivity, and 4) structured many-to-one inhibitory synapses from interneurons to projection neurons, among others. Replication is only a preliminary step that must be followed by model prediction and testing.
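
A minimal conductance-based (Hodgkin-Huxley-type) neuron integrated with forward Euler, as a sketch of the basic modeling ingredient described above; the actual HVC models add cell-class-specific currents and synaptic connections not shown here, and the parameters below are the classic squid-axon values, not HVC-fitted ones.

```python
# Single-compartment HH neuron driven by a step current; spike counting
# via upward zero crossings of the membrane potential.
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4                # mV

def rates(V):  # standard HH gating rate functions
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = 4.0 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = 0.125 * np.exp(-(V + 65) / 80)
    return am, bm, ah, bh, an, bn

dt, T = 0.01, 50.0                              # ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32             # resting state
spikes = 0
for step in range(int(T / dt)):
    I_ext = 10.0 if step * dt > 5.0 else 0.0    # step current injection
    am, bm, ah, bh, an, bn = rates(V)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    I_ion = (gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK)
             + gL * (V - EL))
    V_new = V + dt * (I_ext - I_ion) / C
    if V < 0.0 <= V_new:                        # upward zero crossing
        spikes += 1
    V = V_new
print("spikes in 50 ms:", spikes)
```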

Keywords: computational modeling, neural networks, temporal neural sequences, ionic currents, songbird

Procedia PDF Downloads 70