Search results for: consensus algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2476

1306 A Quantitative Analysis of Rural to Urban Migration in Morocco

Authors: Donald Wright

Abstract:

The ultimate goal of this study is to reinvigorate the philosophical underpinnings of the study of urbanization with scientific data, with the goal of circumventing what seems an inevitable future clash between rural and urban populations. To that end, urban infrastructure must be sustainable economically, politically, and ecologically over the course of several generations as cities continue to grow with the incorporation of climate refugees. Our research will provide data concerning the projected increase in population over the coming two decades in Morocco and how the population will shift from rural areas to urban centers during that period. As a result, urban infrastructure will need to be adapted, developed, or built to fit the demand of future internal migrations from rural to urban centers in Morocco. This paper will also examine how past experiences of internally displaced people give insight into the challenges faced by future migrants and, beyond the gathering of data, how people react to internal migration. This study employs four different sets of research tools. First, a large part of this study is archival, which involves compiling the relevant literature on the topic and its complex history. This step also includes gathering data about migrations in Morocco from public data sources. Once the datasets are collected, the next part of the project involves populating the attribute fields and preprocessing the data to make it understandable and usable by machine learning algorithms. In tandem with the mathematical interpretation of data and projected migrations, this study benefits from a theoretical understanding of the critical apparatus existing around urban development of the 20th and 21st centuries, which gives us insight into past infrastructure development and the rationale behind it. Once the data is ready to be analyzed, different machine learning algorithms (k-clustering, support vector regression, random forest analysis) will be tested and their results compared and visualized. The final computational part of this study involves analyzing the data and determining what we can learn from it. This paper helps us to understand future trends of population movements within and between regions of North Africa, which will have an impact on various sectors such as urban development, food distribution, and water purification, not to mention the creation of public policy in the countries of this region. One of the strengths of this project is its multi-pronged, cross-disciplinary approach to the research question, which enables an interchange of knowledge and experiences to facilitate innovative solutions to this complex problem. Multiple and diverse intersecting viewpoints allow an exchange of methodological models that provide fresh and informed interpretations of otherwise objective data.
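
As a hedged illustration of the model-comparison step described above, the sketch below cross-validates two of the named regressors on synthetic stand-in data (the study's actual Moroccan migration datasets and feature names are not reproduced; everything here is illustrative):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # stand-in features, e.g. year, region, rainfall, income
y = X @ np.array([0.5, -0.2, 0.8, 0.1]) + rng.normal(scale=0.1, size=200)  # urban share

for name, model in [("SVR", SVR()),
                    ("Random forest", RandomForestRegressor(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")

# the abstract also mentions clustering; k-means can group similar regions first
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)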

Keywords: climate change, machine learning, migration, Morocco, urban development

Procedia PDF Downloads 150
1305 Simulation Approach for a Comparison of Linked Cluster Algorithm and Clusterhead Size Algorithm in Ad Hoc Networks

Authors: Ameen Jameel Alawneh

Abstract:

A mobile ad-hoc network (MANET) is a collection of wireless mobile hosts that dynamically form a temporary network without the aid of a system administrator. It has neither fixed infrastructure nor pre-established wireless sessions. It inherently reaches several nodes with a single transmission, and each node functions as both a host and a router. The network may be represented as a set of clusters, each managed by a clusterhead. The cluster size is not fixed; it depends on the movement of nodes. We propose a clusterhead size algorithm (CHSize). This clustering algorithm can be used by several routing algorithms for ad hoc networks. An elected clusterhead is assigned for communication with all other clusters. Analysis and simulation of the algorithm, implemented using the GloMoSim network simulator, MATLAB, and MAPL11, show that the proposed algorithm achieves its goals.
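
The abstract does not spell out CHSize's election rule, so the sketch below uses a generic highest-connectivity clusterhead election as a stand-in for the general idea of partitioning a MANET into clusterhead-managed clusters; nodes and links are illustrative:

from collections import defaultdict

links = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6), (4, 6)]
neighbors = defaultdict(set)
for a, b in links:
    neighbors[a].add(b)
    neighbors[b].add(a)

unassigned = set(neighbors)
clusters = {}
while unassigned:
    # elect the unassigned node with the most unassigned neighbours as clusterhead
    head = max(unassigned, key=lambda n: len(neighbors[n] & unassigned))
    members = (neighbors[head] & unassigned) | {head}
    clusters[head] = members
    unassigned -= members

print(clusters)  # clusterhead -> cluster members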

Keywords: simulation, MANET, Ad-hoc, cluster head size, linked cluster algorithm, loss and dropped packets

Procedia PDF Downloads 390
1304 Cryptography and Cryptosystem a Panacea to Security Risk in Wireless Networking

Authors: Modesta E. Ezema, Chikwendu V. Alabekee, Victoria N. Ishiwu, Ifeyinwa NwosuArize, Chinedu I. Nwoye

Abstract:

The advent of wireless networking in computing technology cannot be overemphasized: it opened up easy access to information resources, made networking easier, and brought internet accessibility to our doorsteps. But despite all this, it also brought problems that are causing mayhem in today's overall information security. Cyber criminals will always compromise the integrity of a message that is not encrypted or that is encrypted with a weak algorithm. In order to correct this, the study focuses on cryptosystems and cryptography, which ensure end-to-end encrypted messaging. The study of various cryptographic algorithms, as well as the techniques and applications of cryptography for efficiency, were all considered in this work. Past, present, and future applications of cryptography were dealt with, and quantum cryptography was presented as the current and future direction in the development of cryptography. An empirical study was conducted to collect data from network users.

Keywords: algorithm, cryptography, cryptosystem, network

Procedia PDF Downloads 347
1303 Malware Detection in Mobile Devices by Analyzing Sequences of System Calls

Authors: Jorge Maestre Vidal, Ana Lucila Sandoval Orozco, Luis Javier García Villalba

Abstract:

With the increase in popularity of mobile devices, new and varied forms of malware have emerged. Consequently, cyberdefense organizations have echoed the need to deploy more effective defensive schemes adapted to the challenges posed by these recent monitoring environments. In order to contribute to their development, this paper presents a malware detection strategy for mobile devices based on sequence alignment algorithms. Unlike previous proposals, only the system calls performed during the startup of applications are studied. In this way, it is possible to efficiently study, in depth, the sequences of system calls executed by applications just downloaded from app stores and initialized in a secure and isolated environment. As demonstrated in the experiments performed, most of the analyzed malicious activities were successfully identified in their boot processes.
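
As a hedged sketch of the core idea, the snippet below scores the similarity of two system-call sequences with a simple Needleman-Wunsch global alignment (a classic sequence alignment algorithm; the paper's exact scoring scheme is not given, and the traces below are invented):

def align_score(a, b, match=2, mismatch=-1, gap=-1):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap
    for j in range(1, n + 1):
        dp[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # substitution/match
                           dp[i - 1][j] + gap,       # gap in b
                           dp[i][j - 1] + gap)       # gap in a
    return dp[m][n]

boot_trace = ["open", "read", "mmap", "socket", "connect", "write"]
known_malware = ["open", "mmap", "socket", "connect", "write", "unlink"]
print(align_score(boot_trace, known_malware))  # higher score = more similar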

Keywords: android, information security, intrusion detection systems, malware, mobile devices

Procedia PDF Downloads 298
1302 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and is projected to cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
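
A minimal sketch of the ensembling step described above: average the predicted class probabilities of the three named model families. Synthetic data stands in for the weather and pollutant features; none of this reproduces the study's actual datasets or tuning:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)  # 3 stand-in AQI bands
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("logreg", LogisticRegression(max_iter=1000)),
                ("forest", RandomForestClassifier(random_state=0)),
                ("mlp", MLPClassifier(max_iter=1000, random_state=0))],
    voting="soft",  # average predicted probabilities across the three models
)
ensemble.fit(X_tr, y_tr)
print("combined accuracy:", ensemble.score(X_te, y_te))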

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 124
1301 The Boundary Element Method in Excel for Teaching Vector Calculus and Simulation

Authors: Stephen Kirkup

Abstract:

This paper discusses the implementation of the boundary element method (BEM) on an Excel spreadsheet and how it can be used in teaching vector calculus and simulation. There are two separate spreadsheets, within which the Laplace equation is solved by the BEM in two dimensions (LIBEM2) and in axisymmetric three dimensions (LBEMA). The main algorithms are implemented in the programming language associated with Excel, Visual Basic for Applications (VBA). The BEM only requires a boundary mesh and hence is a relatively accessible method. The BEM in the open spreadsheet environment is demonstrated as being useful as an aid to teaching and learning. The application of the BEM implemented on a spreadsheet for educational purposes in introductory vector calculus and simulation is explored. The development of assignment work is discussed, and sample results from student work are given. The spreadsheets were found to be useful tools in developing the students' understanding of vector calculus and in simulating heat conduction.

Keywords: boundary element method, Laplace’s equation, vector calculus, simulation, education

Procedia PDF Downloads 160
1300 Empirical Evaluation of Gradient-Based Training Algorithms for Ordinary Differential Equation Networks

Authors: Martin K. Steiger, Lukas Heisler, Hans-Georg Brachtendorf

Abstract:

Deep neural networks and their variants form the backbone of many AI applications. Based on the so-called residual networks, a continuous formulation of such models as ordinary differential equations (ODEs) has proven advantageous, since different techniques may be applied that significantly increase the learning speed and, at the same time, enable controlled trade-offs with the resulting error. For the evaluation of such models, high-performance numerical differential equation solvers are used, which also provide the gradients required for training. However, whether classical gradient-based methods are even applicable, or which one yields the best results, has not yet been discussed. This paper aims to remedy this situation by providing empirical results for different applications.
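
A hedged sketch of the kind of comparison the paper performs: two gradient-based optimizers trained on a tiny residual model, y ← y + f(y)·dt, i.e. a forward-Euler discretization of an ODE network. The model, data, and hyperparameters are illustrative, not the paper's benchmarks:

import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-2, 2, 200).unsqueeze(1)
target = torch.sin(3 * x)  # toy regression target

def make_model():
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

for opt_name in ["SGD", "Adam"]:
    f = make_model()
    opt = (torch.optim.SGD(f.parameters(), lr=1e-2) if opt_name == "SGD"
           else torch.optim.Adam(f.parameters(), lr=1e-2))
    for step in range(500):
        y = x
        for _ in range(4):       # four Euler steps of dy/dt = f(y)
            y = y + 0.25 * f(y)
        loss = nn.functional.mse_loss(y, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(opt_name, "final loss:", loss.item())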

Keywords: deep neural networks, gradient-based learning, image processing, ordinary differential equation networks

Procedia PDF Downloads 167
1299 An Enhanced Particle Swarm Optimization Algorithm for Multiobjective Problems

Authors: Houda Abadlia, Nadia Smairi, Khaled Ghedira

Abstract:

Multiobjective Particle Swarm Optimization (MOPSO) has shown effective performance in solving test functions and real-world optimization problems. However, this method has a premature convergence problem, which may lead to a lack of diversity. In order to improve its performance, this paper presents a hybrid approach which embeds MOPSO in an island model and integrates a local search technique, Variable Neighborhood Search, to enhance diversity in the swarm. Experiments on two series of test functions have shown the effectiveness of the proposed approach. A comparison with other evolutionary algorithms shows that the proposed approach presents good performance in solving multiobjective optimization problems.
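
A hedged sketch of the island-model idea: several swarms evolve independently and periodically migrate their best particles. For brevity, a single-objective PSO on the sphere function stands in for the full MOPSO with VNS; all parameters are illustrative:

import random

def sphere(x):
    return sum(v * v for v in x)

DIM, N, ISLANDS = 5, 10, 3
random.seed(0)

def new_particle():
    pos = [random.uniform(-5, 5) for _ in range(DIM)]
    return {"x": pos[:], "v": [0.0] * DIM, "best": pos[:]}

islands = [[new_particle() for _ in range(N)] for _ in range(ISLANDS)]
for it in range(200):
    for swarm in islands:
        gbest = min((p["best"] for p in swarm), key=sphere)
        for p in swarm:
            for d in range(DIM):  # standard inertia + cognitive + social update
                p["v"][d] = (0.7 * p["v"][d]
                             + 1.5 * random.random() * (p["best"][d] - p["x"][d])
                             + 1.5 * random.random() * (gbest[d] - p["x"][d]))
                p["x"][d] += p["v"][d]
            if sphere(p["x"]) < sphere(p["best"]):
                p["best"] = p["x"][:]
    if it % 50 == 49:  # migration: each island sends its best to the next island
        bests = [min((p["best"] for p in s), key=sphere) for s in islands]
        for i, s in enumerate(islands):
            worst = max(s, key=lambda p: sphere(p["best"]))
            worst["best"] = bests[i - 1][:]
            worst["x"] = bests[i - 1][:]

print(min(sphere(p["best"]) for s in islands for p in s))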

Keywords: particle swarm optimization, migration, variable neighborhood search, multiobjective optimization

Procedia PDF Downloads 165
1298 Parallelization by Domain Decomposition for 1-D Sugarcane Equation with Message Passing Interface

Authors: Ewedafe Simon Uzezi

Abstract:

In this paper we present a method based on Domain Decomposition (DD) for the parallelization of the 1-D sugarcane equation on a parallel platform, using master-slave parallel paradigms with the Message Passing Interface (MPI). The 1-D sugarcane equation was discretized using an explicit discretization method, requiring evaluation of the temporal and spatial distribution of temperature. This platform gives better predictions of the effects of temperature distribution in the sugarcane problem. This work examines parallel overheads, overlapping of communication, and communication across parallel computers, with numerical results for different block sizes and an assessment of scalability. Performance-improvement strategies from DD on various mesh sizes were compared experimentally, and the parallel results show speedup and efficiency for the parallel algorithm design.
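
A hedged sketch of the DD parallelization with mpi4py: each rank owns a block of the 1-D grid, exchanges one-cell halos with its neighbours, and advances an explicit finite-difference step. A generic 1-D heat equation stands in for the sugarcane temperature model, and all constants are assumptions:

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.rank, comm.size
NLOC, ALPHA, STEPS = 50, 0.25, 100  # local cells, diffusion number, steps (assumed)

u = np.zeros(NLOC + 2)  # local block plus two ghost cells
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(STEPS):
    if rank == 0:
        u[1] = 100.0  # fixed boundary temperature at the left end
    # halo exchange with neighbouring subdomains
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[NLOC + 1:], source=right)
    comm.Sendrecv(u[NLOC:NLOC + 1], dest=right, recvbuf=u[0:1], source=left)
    # explicit update on interior cells
    u[1:-1] += ALPHA * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(rank, float(u[1:-1].max()))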

Keywords: sugarcane, parallelization, explicit method, domain decomposition, MPI

Procedia PDF Downloads 20
1297 Load Balancing Algorithms for SIP Server Clusters in Cloud Computing

Authors: Tanmay Raj, Vedika Gupta

Abstract:

For its groundbreaking and substantial power, cloud computing is today's most popular breakthrough. It is a sort of Internet-based computing that allows users to request and receive numerous services in a cost-effective manner. Virtualization, grid computing, and utility computing are the most widely employed emerging technologies in cloud computing, making it the most powerful. However, cloud computing still faces a number of key challenges, such as security, load balancing, and adaptation to non-critical failures, to name a few. The massive growth of cloud computing will put an undue strain on servers, and as a result, network performance will deteriorate. A good load balancing adjustment can make cloud computing more productive and increase client satisfaction. Load balancing is an important part of cloud computing because it prevents certain nodes from being overwhelmed while others are idle or have little work to perform. Response time, cost, throughput, performance, and resource usage are all parameters that may be improved using load balancing.
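
As a hedged illustration of one classic policy in this space (the abstract does not commit to a specific one), the sketch below implements a least-connections dispatcher that assigns each incoming SIP call to the server currently handling the fewest active sessions; server names are illustrative:

class LeastConnectionsBalancer:
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}  # active session count per server

    def assign(self, call_id):
        server = min(self.active, key=self.active.get)  # least-loaded server
        self.active[server] += 1
        return server

    def finish(self, server):
        self.active[server] -= 1

lb = LeastConnectionsBalancer(["sip-1", "sip-2", "sip-3"])
for call in range(7):
    print(call, "->", lb.assign(call))
lb.finish("sip-1")  # a call on sip-1 ends; sip-1 becomes preferred again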

Keywords: cloud computing, load balancing, computing, SIP server clusters

Procedia PDF Downloads 121
1296 Development and Application of the Proctoring System with Face Recognition for User Registration on the Educational Information Portal

Authors: Meruyert Serik, Nassipzhan Duisegaliyeva, Danara Tleumagambetova, Madina Ermaganbetova

Abstract:

This research paper explores the process of creating a proctoring system by evaluating the implementation of practical face recognition algorithms. The research was reviewed with students of the educational programs "6B01511-Computer Science," "7M01511-Computer Science," "7M01525-STEM Education," and "8D01511-Computer Science" at L.N. Gumilyov Eurasian National University. As an outcome, a proctoring system is created, enabling tests to be conducted and academic integrity to be checked within the system. Once the system operates correctly, test work is carried out. The resulting proctoring system will form the basis for automating the educational information portal through machine learning.
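
A hedged sketch of the face-verification step in such a proctoring system, using the open-source face_recognition library (the abstract does not name its library; file paths and the tolerance are illustrative):

import face_recognition

registered = face_recognition.load_image_file("student_id_photo.jpg")
webcam = face_recognition.load_image_file("webcam_frame.jpg")

reg_encodings = face_recognition.face_encodings(registered)
cam_encodings = face_recognition.face_encodings(webcam)

if not reg_encodings or not cam_encodings:
    print("No face detected; flag the session for manual review.")
else:
    match = face_recognition.compare_faces(
        [reg_encodings[0]], cam_encodings[0], tolerance=0.6)[0]
    print("Identity verified" if match else "Mismatch: possible impersonation")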

Keywords: artificial intelligence, education portal, face recognition, machine learning, proctoring

Procedia PDF Downloads 123
1295 Principles of Risk Management in Surgery Department

Authors: Mohammad H. Yarmohammadian, Masoud Ferdosi, Abbas Haghshenas, Fatemeh Rezaei

Abstract:

Surgical procedures aim at preserving human life and improving its quality. However, there are many potential risk sources that can cause serious harm to patients. For centuries, managers believed that the technical competence of a surgeon was the only key to a successful surgery, but over the past decade, risks have come to be considered in terms of process-based safety procedures, teamwork, and inter-departmental communication. Aims: This study aims to determine how process-based surgical risk management should be done using a project management tool called the Activity Breakdown Structure (ABS). Settings and Design: This study was conducted in two stages. First, a literature review and meetings with professors were conducted to determine the principles and framework of surgical risk management. Next, the teams responsible for the surgical patient journey were involved in subsequent meetings to develop the process-based surgical risk management. Methods and Material: This study is qualitative research in which focus groups with an inductive approach are used. Sampling was performed to achieve representativeness through intensity sampling based on experience and seniority. Analysis method used: content analysis of interviews and consensus themes extracted from focus group discussions were the analysis tools. Results: We developed the patient journey process in 5 main phases, 24 activities, and 108 tasks. Then, the responsible teams, transitions, and the places allocated for performing each task were determined. Some activity and task themes, such as patient identification and records review, were repeated in each phase because of their importance. Conclusions: Risk management of surgical departments is significant, as this facility is the hospital's largest cost and revenue center. Good communication between the surgical team and other clinical teams outside the surgery department, through a process-based perspective, could improve the safety of patients undergoing these procedures.

Keywords: risk management, activity breakdown structure (ABS), surgical department, medical sciences

Procedia PDF Downloads 301
1294 The Characteristics of Withhold Resuscitation in Out-Of-Hospital Cardiac Arrest

Authors: An-Yi Wang, Wei-Fong Kao, Shin-Han Tsai

Abstract:

Introduction: Information such as patient characteristics, the resuscitation scene, resuscitation provider perspectives, and family wishes affects resuscitation decision-making for out-of-hospital cardiac arrest (OHCA). There is no consistent consensus on how families and emergency physicians approach this decision. The main purpose of our study is to evaluate the characteristics of withholding resuscitation efforts upon arrival at the hospital. Methods: We retrospectively analyzed patients with OHCA without pre-hospital return of spontaneous circulation (ROSC) who were sent to our emergency department (ED) between January 2014 and December 2015. Baseline characteristics, pre-hospital course, and causes of cardiopulmonary arrest among patients were compared. Results: In 2 years, a total of 155 arrest patients without pre-hospital ROSC were included. Resuscitation efforts were withheld in the ED for 33 (21.3%) patients, with a mean resuscitation duration of 4.45 ± 7.04 minutes after ED arrival. In the withholding group, all initial arrest rhythms were non-shockable. Nine of these patients received endotracheal intubation before decision-making. None of the patients in the withheld-resuscitation group survived to discharge. There was no significant difference in gender, underlying cardiovascular disease, malignancy, chronic renal disease, or witnessed collapse between the withheld and continued resuscitation groups. Univariate analysis showed a lower percentage of bystander resuscitation (32.3% vs. 50.4%, p=0.071) and a lower percentage of transport via emergency medical services (EMS) (78.8% vs. 91.8%, p=0.054) in the withholding group. Multivariate analysis showed that old age (adjusted odds ratio=1.06, 95% C.I.=[1.02-1.11], p<0.05), underlying respiratory insufficiency (adjusted odds ratio=12.16, 95% C.I.=[3.34-44.29], p<0.05), and living at home compared with a nursing home (adjusted odds ratio=37.75, 95% C.I.=[1.09-1110.70], p<0.05) were associated with a higher likelihood of withholding resuscitation. Transport via EMS was associated with continued resuscitation (adjusted odds ratio=0.11, 95% C.I.=[0.02-0.71], p<0.05). Conclusion: The decision of families and emergency physicians to withhold or continue resuscitation for out-of-hospital cardiac arrest is complex and multi-factorial. The rate of continued resuscitation efforts among nursing home residents is high, and further study of this population is warranted.
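
A hedged sketch of the multivariate step reported above: adjusted odds ratios from a logistic regression. The data here is a random stand-in, not the study's cohort; column names merely mirror the abstract's predictors:

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 155
df = pd.DataFrame({
    "age": rng.normal(70, 12, n),
    "resp_insufficiency": rng.integers(0, 2, n),
    "lives_at_home": rng.integers(0, 2, n),
    "ems_transport": rng.integers(0, 2, n),
})
# synthetic outcome: 1 = resuscitation withheld
logit = df["age"] * 0.05 + df["resp_insufficiency"] * 1.5 - 4.5
df["withheld"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df[["age", "resp_insufficiency", "lives_at_home",
                        "ems_transport"]])
fit = sm.Logit(df["withheld"].astype(float), X).fit(disp=0)
odds = pd.DataFrame({"adjusted OR": np.exp(fit.params),
                     "2.5%": np.exp(fit.conf_int()[0]),
                     "97.5%": np.exp(fit.conf_int()[1])})
print(odds)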

Keywords: cardiopulmonary resuscitation, out-of-hospital cardiac arrest, termination resuscitation, withhold resuscitation

Procedia PDF Downloads 251
1293 Modelling Railway Noise Over Large Areas, Assisted by GIS

Authors: Conrad Weber

Abstract:

The modelling of railway noise over large project areas can be very time consuming in terms of preparing the noise models and calculation time. An open-source GIS program has been utilised to assist with the modelling of operational noise levels for 675 km of railway corridor. A range of GIS algorithms were utilised to break up the noise model area into manageable calculation sizes. GIS was utilised to prepare and filter a range of noise modelling inputs, including building files, land uses, and ground terrain. A spreadsheet was utilised to manage the accuracy of key input parameters, including train speeds, train types, curve corrections, bridge corrections, and engine notch settings. GIS was utilised to present the final noise modelling results. This paper explains the noise modelling process and how the spreadsheet and GIS were utilised to model this massive project accurately and efficiently.
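
A hedged sketch of the tiling step described above: break a large model area into manageable calculation tiles. Coordinates are illustrative, and a real workflow would use GIS geometries (e.g., polygons) rather than plain bounding boxes:

def make_tiles(xmin, ymin, xmax, ymax, tile_size):
    tiles = []
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            tiles.append((x, y, min(x + tile_size, xmax),
                          min(y + tile_size, ymax)))
            x += tile_size
        y += tile_size
    return tiles

# a 675 km corridor bounding box split into 10 km calculation tiles (metres)
tiles = make_tiles(0, 0, 675_000, 20_000, 10_000)
print(len(tiles), "tiles, e.g.", tiles[0])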

Keywords: noise, modeling, GIS, rail

Procedia PDF Downloads 120
1292 Application of Deep Learning in Top Pair and Single Top Quark Production at the Large Hadron Collider

Authors: Ijaz Ahmed, Anwar Zada, Muhammad Waqas, M. U. Ashraf

Abstract:

We demonstrate the performance of a very efficient tagger, based on deep neural network algorithms, applied to hadronically decaying top quark pairs as signal, and compare it with QCD multi-jet background events. A significant enhancement of performance in boosted top quark events is observed with our limited computing resources. We also compare modern machine learning approaches and perform a multivariate analysis of boosted top-pair as well as single top quark production through the weak interaction at √s = 14 TeV at a proton-proton collider. The most relevant known background processes are incorporated. Through the techniques of Boosted Decision Trees (BDT), likelihood, and Multilayer Perceptron (MLP), the analysis is trained to observe the performance in comparison with the conventional cut-based counting approach.
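
A hedged sketch of the multivariate classification step: an MLP trained to separate signal from background events. Synthetic kinematic-style features stand in for real reconstructed jet variables:

import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
signal = rng.normal(loc=1.0, size=(2000, 6))      # stand-in for top-pair signal
background = rng.normal(loc=0.0, size=(2000, 6))  # stand-in for QCD multi-jet
X = np.vstack([signal, background])
y = np.array([1] * 2000 + [0] * 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 64),
                                  max_iter=500, random_state=0))
clf.fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))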

Keywords: top tagger, multivariate, deep learning, LHC, single top

Procedia PDF Downloads 110
1291 Study of Three-Dimensional Computed Tomography of Frontoethmoidal Cells Using International Frontal Sinus Anatomy Classification

Authors: Prabesh Karki, Shyam Thapa Chettri, Bajarang Prasad Sah, Manoj Bhattarai, Sudeep Mishra

Abstract:

Introduction: The frontal sinus is frequently described as the most difficult sinus to access surgically due to its proximity to the cribriform plate, orbit, and anterior ethmoid artery. Frontal sinus surgery requires a detailed understanding of the cellular structure and FSDP unique to each patient, making high-resolution CT scans an indispensable tool to assess the difficulty of planned sinus surgery. The International Frontal Sinus Anatomy Classification (IFAC) was developed to provide a more precise nomenclature for cells in the frontal recess, classifying cells based on their anatomic origin. Objectives: To assess the proportion of frontal cell variants defined by the IFAC and their variation with respect to age and gender. Methods: Fifty-four cases were enrolled after a detailed clinical history, thorough general and physical examinations, and a CT report ordered on film. The presence of frontal cells according to the IFAC was assessed and tabulated. The prevalence of each cell type was calculated, and data were entered into MS Excel and analyzed using the Statistical Package for the Social Sciences (SPSS). Descriptive statistics and frequencies were defined for categorical and numerical variables. Frequency, percentage, mean, and standard deviation were calculated. Results: Among the 54 patients, 30 (55.6%) were male and 24 (44.4%) were female. The patients enrolled ranged from 18 to 78 years of age; the majority, 33.3% (n=18), were in the age group >50 years. According to the IFAC, agger nasi cells (92.6%) were the most common, whereas supraorbital ethmoidal cells were the least common (n=16, 29.6%). The prevalence of the other frontoethmoidal cells was SAC 57.4%, SAFC 38.9%, SBC 74.1%, SBFC 33.3%, and FSC 38.9% of the 54 cases. Conclusion: The IFAC is an international consensus document that describes an anatomically precise nomenclature for classifying frontoethmoidal cell anatomy. This study has defined the prevalence, symmetry, and reliability of frontoethmoidal cells as established by the IFAC system, as in other parts of the world.

Keywords: frontal sinus, frontoethmoidal cells, international frontal sinus anatomy classification

Procedia PDF Downloads 98
1290 Diagnostic Clinical Skills in Cardiology: Improving Learning and Performance with Hybrid Simulation, Scripted Histories, Wearable Technology, and Quantitative Grading – The Assimilate Excellence Study

Authors: M. J. Daly, C. Condron, C. Mulhall, W. Eppich, J. O'Neill

Abstract:

Introduction: In contemporary clinical cardiology, comprehensive and holistic bedside evaluation including accurate cardiac auscultation is in decline despite having positive effects on patients and their outcomes. Methods: Scripted histories and scoring checklists for three clinical scenarios in cardiology were co-created and refined through iterative consensus by a panel of clinical experts; these were then paired with recordings of auscultatory findings from three actual patients with known valvular heart disease. A wearable vest with embedded pressure-sensitive panel speakers was developed to transmit these recordings when examined at the standard auscultation points. RCSI medical students volunteered for a series of three formative long case examinations in cardiology (LC1 – LC3) using this hybrid simulation. Participants were randomised into two groups: Group 1 received individual teaching from an expert trainer between LC1 and LC2; Group 2 received the same intervention between LC2 and LC3. Each participant’s long case examination performance was recorded and blindly scored by two peer participants and two RCSI examiners. Results: Sixty-eight participants were included in the study (age 27.6 ± 0.1 years; 74% female) and randomised into two groups; there were no significant differences in baseline characteristics between groups. Overall, the median total faculty examiner score was 39.8% (35.8 – 44.6%) in LC1 and increased to 63.3% (56.9 – 66.4%) in LC3, with those in Group 1 showing a greater improvement in LC2 total score than that observed in Group 2 (p < .001). Using the novel checklist, intraclass correlation coefficients (ICC) were excellent between examiners in all cases: ICC .994 – .997 (p < .001); correlation between peers and examiners improved in LC2 following peer grading of LC1 performances: ICC .857 – .867 (p < .001). Conclusion: Hybrid simulation and quantitative grading improve learning, standardisation of assessment, and direct comparisons of both performance and acumen in clinical cardiology.
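
A hedged sketch of the inter-rater reliability analysis reported above: intraclass correlation coefficients across examiners, computed with the pingouin library on stand-in scores (the real study data is not reproduced):

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
true_score = rng.normal(60, 10, 68)  # one latent score per participant
rows = []
for rater in ["examiner_1", "examiner_2", "peer_1", "peer_2"]:
    noise = rng.normal(0, 3, 68)     # rater-specific scoring noise
    for student, s in enumerate(true_score + noise):
        rows.append({"student": student, "rater": rater, "score": s})
df = pd.DataFrame(rows)

icc = pg.intraclass_corr(data=df, targets="student", raters="rater",
                         ratings="score")
print(icc[["Type", "ICC", "CI95%"]])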

Keywords: cardiology, clinical skills, long case examination, hybrid simulation, checklist

Procedia PDF Downloads 106
1289 An Improved Genetic Algorithm for Traveling Salesman Problem with Precedence Constraint

Authors: M. F. F. Ab Rashid, A. N. Mohd Rose, N. M. Z. Nik Mohamed, W. S. Wan Harun, S. A. Che Ghani

Abstract:

The traveling salesman problem with precedence constraints (TSPPC) is one of the most complex problems in combinatorial optimization. Existing algorithms to solve the TSPPC require long computation times to find the optimal solution. The purpose of this paper is to present an efficient genetic algorithm that reaches the optimal solution with a smaller number of generations and less iteration time. Unlike existing algorithms that generate priority factors as chromosomes, the proposed algorithm directly generates sequences of solutions as chromosomes. As a result, the proposed algorithm is capable of finding the optimal solution with a smaller number of generations and less iteration time compared to existing algorithms.
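
A hedged sketch of the key idea above: generate chromosomes directly as precedence-feasible city sequences (rather than priority factors), so every individual is a valid TSPPC tour. The precedence pairs and topological-style sampling are illustrative:

import random

random.seed(0)
CITIES = list(range(8))
PRECEDENCE = [(0, 4), (1, 5), (2, 6)]  # (a, b): a must be visited before b

def random_feasible_sequence():
    """Sample a tour by repeatedly picking any city whose predecessors are done."""
    remaining, seq = set(CITIES), []
    while remaining:
        ready = [c for c in remaining
                 if all(a not in remaining for a, b in PRECEDENCE if b == c)]
        city = random.choice(ready)
        seq.append(city)
        remaining.remove(city)
    return seq

population = [random_feasible_sequence() for _ in range(20)]
print(population[0])  # a valid sequence respecting all precedence constraints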

Keywords: traveling salesman problem, sequencing, genetic algorithm, precedence constraint

Procedia PDF Downloads 557
1288 Stochastic Modeling of Secretion Dynamics in Inner Hair Cells of the Auditory Pathway

Authors: Jessica A. Soto-Bear, Virginia González-Vélez, Norma Castañeda-Villa, Amparo Gil

Abstract:

Glutamate release at the cochlear inner hair cell (IHC) ribbon synapse is a fundamental step in transferring sound information along the auditory pathway. Otoferlin is the calcium sensor in the IHC, and its activity has been related to many auditory disorders. In order to simulate the secretion dynamics occurring in the IHC on a few-milliseconds timescale and with high spatial resolution, we propose an active-zone model solved with Monte Carlo algorithms. We include models for calcium buffered diffusion, calcium-binding schemes for vesicle fusion, and L-type voltage-gated calcium channels. Our results indicate that calcium influx and calcium binding govern IHC secretion as a function of voltage depolarization, which in turn means that the IHC response depends on sound intensity.
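
A hedged sketch of the Monte Carlo approach: a Gillespie-style simulation of a vesicle sensor binding calcium ions until fusion. The five-site scheme and rate constants below are illustrative assumptions, not the paper's calibrated model:

import math
import random

random.seed(0)
K_ON, K_OFF, N_SITES = 0.3, 0.05, 5  # per-ms rates and binding sites (assumed)

def simulate_fusion(ca_concentration):
    t, bound = 0.0, 0
    while bound < N_SITES:
        rate_on = K_ON * ca_concentration * (N_SITES - bound)
        rate_off = K_OFF * bound
        total = rate_on + rate_off
        t += -math.log(random.random()) / total  # exponential waiting time
        bound += 1 if random.random() < rate_on / total else -1
    return t  # time (ms) until all sites are bound -> fusion

times = [simulate_fusion(ca_concentration=10.0) for _ in range(1000)]
print("mean time to fusion: %.2f ms" % (sum(times) / len(times)))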

Keywords: inner hair cells, Monte Carlo algorithm, Otoferlin, secretion

Procedia PDF Downloads 219
1287 Resource-Constrained Heterogeneous Workflow Scheduling Algorithms in Heterogeneous Computing Clusters

Authors: Lei Wang, Jiahao Zhou

Abstract:

The development of heterogeneous computing clusters provides a strong computing-power guarantee for large-scale workflows (e.g., scientific computing, artificial intelligence (AI), etc.). However, the tasks within large-scale workflows have also gradually become heterogeneous due to their differing demands on computing resources, which adds a task resource-restriction constraint to the workflow scheduling problem on heterogeneous computing platforms. In this paper, we propose a heterogeneous constrained minimum-makespan scheduling algorithm based on a greedy strategy, which provides an efficient solution to the heterogeneous workflow scheduling problem on a heterogeneous platform. We test the effectiveness of our proposed scheduling algorithm on randomly generated heterogeneous workflows with a heterogeneous computing platform, and the experiments show that our method improves on the state-of-the-art methods by 15.2%.
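
A hedged sketch of a greedy minimum-makespan heuristic in this spirit: schedule each ready task on the machine that finishes it earliest while honouring resource restrictions (some tasks may only run on certain machine types). The workflow, machines, and durations are illustrative, not the paper's algorithm verbatim:

tasks = {  # task: (duration per machine type, allowed types, predecessors)
    "A": ({"cpu": 3, "gpu": 2}, {"cpu", "gpu"}, []),
    "B": ({"cpu": 4}, {"cpu"}, ["A"]),  # resource-restricted to CPU
    "C": ({"gpu": 5}, {"gpu"}, ["A"]),  # resource-restricted to GPU
    "D": ({"cpu": 2, "gpu": 1}, {"cpu", "gpu"}, ["B", "C"]),
}
machines = {"cpu0": "cpu", "cpu1": "cpu", "gpu0": "gpu"}
free_at = {m: 0 for m in machines}
finish = {}

while len(finish) < len(tasks):
    ready = [t for t in tasks if t not in finish
             and all(p in finish for p in tasks[t][2])]
    for t in ready:
        durs, allowed, preds = tasks[t]
        earliest = max([finish[p] for p in preds], default=0)
        # greedy choice: machine of an allowed type with the earliest finish time
        m = min((m for m in machines if machines[m] in allowed),
                key=lambda m: max(free_at[m], earliest) + durs[machines[m]])
        start = max(free_at[m], earliest)
        finish[t] = start + durs[machines[m]]
        free_at[m] = finish[t]
        print(t, "->", m, "start", start, "finish", finish[t])

print("makespan:", max(finish.values()))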

Keywords: heterogeneous computing, workflow scheduling, constrained resources, minimal makespan

Procedia PDF Downloads 32
1286 Adaptive Online Object Tracking via Positive and Negative Models Matching

Authors: Shaomei Li, Yawen Wang, Chao Gao

Abstract:

To mitigate the tracking drift that often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. First, object tracking is posed as a binary classification problem and is modeled by partial least squares (PLS) analysis. Second, the object is tracked frame by frame via particle filtering. Third, tracking reliability is validated by matching against both positive and negative models. Finally, when drift occurs, the object is relocated based on SIFT feature matching and voting, and the object appearance model is updated at the same time. The algorithm can not only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences.
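
A hedged sketch of the relocation step: match SIFT features between the last reliable object template and the current frame, then vote with the matched keypoint positions. File names are illustrative, and this assumes opencv-python >= 4.4 (where SIFT is available in the main package):

import cv2
import numpy as np

template = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(frame, None)

matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test

if good:
    # simple vote: the median matched position relocates the object centre
    pts = np.array([kp2[m.trainIdx].pt for m in good])
    print("relocated centre:", np.median(pts, axis=0))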

Keywords: object tracking, tracking drift, partial least squares analysis, positive and negative models matching

Procedia PDF Downloads 527
1285 Hardware for Genetic Algorithm

Authors: Fariborz Ahmadi, Reza Tati

Abstract:

The genetic algorithm is a soft computing method that works on a set of solutions. These solutions are called chromosomes, and the best one is the final solution of the problem. The main problem of this algorithm is that, after passing through some generations, it may produce chromosomes that had already been produced in earlier generations, which reduces the convergence speed. From another perspective, most genetic algorithms are implemented in software, and less work has been done on hardware implementation. Our work implements a genetic algorithm in hardware that does not produce chromosomes already produced in previous generations. In this work, most genetic operators are implemented without producing repeated chromosomes, and genetic diversity is preserved. Genetic diversity ensures that the algorithm not only avoids converging to a local optimum but also reaches the global optimum. Without any doubt, the proposed approach is much faster than software implementations. Evaluation results also show that the proposed approach is faster than existing hardware ones.
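
A hedged software sketch of the paper's central idea: a GA that refuses to re-admit chromosomes seen in earlier generations, preserving diversity. The problem (maximize the number of 1-bits) and all parameters are illustrative:

import random

random.seed(0)
L, POP, GENS = 16, 20, 40
fitness = lambda c: sum(c)
seen = set()  # in hardware this would be the duplicate-detection memory

def fresh(child):
    key = tuple(child)
    if key in seen:
        return None  # reject chromosomes produced in previous generations
    seen.add(key)
    return child

pop = []
while len(pop) < POP:
    c = fresh([random.randint(0, 1) for _ in range(L)])
    if c:
        pop.append(c)

for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    next_pop = pop[:4]  # elitism: keep the four best as-is
    while len(next_pop) < POP:
        a, b = random.sample(pop[:10], 2)
        cut = random.randrange(1, L)
        child = a[:cut] + b[cut:]   # one-point crossover
        child[random.randrange(L)] ^= 1  # bit-flip mutation
        c = fresh(child)
        if c:
            next_pop.append(c)
    pop = next_pop

print("best fitness:", max(map(fitness, pop)))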

Keywords: hardware, genetic algorithm, computer science, engineering

Procedia PDF Downloads 505
1284 An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem

Authors: Y. Wang

Abstract:

The traveling salesman problem (TSP) is NP-hard in combinatorial optimization. Research shows that algorithms for the TSP on sparse graphs have shorter computation times than those working on complete graphs. We present an improved iterative algorithm to compute sparse graphs for the TSP from frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. The computation time of the algorithm is O(C·Nmax·n²), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the scale of the TSP. The experimental results showed that the computed sparse graphs generally have fewer than 5n edges for most of the Euclidean instances. Moreover, the maximum and minimum degrees of the vertices in the sparse graphs do not differ much. Thus, the computation time of methods to solve the TSP on these sparse graphs will be greatly reduced.

Keywords: frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem

Procedia PDF Downloads 231
1283 Ni-W-P Alloy Coating as an Alternate to Electroplated Hard Cr Coating

Authors: S. K. Ghosh, C. Srivastava, P. K. Limaye, V. Kain

Abstract:

Electroplated hard chromium is widely used in coatings and surface finishing and in the automobile and aerospace industries because of its excellent hardness, wear resistance, and corrosion properties. However, its precursor, Cr⁶⁺, is highly carcinogenic, and a consensus has been adopted internationally to replace this coating technology with an alternative one. The search for alternatives to electroplated hard chrome is continuing worldwide. Various alloys and nanocomposites, such as Co-W alloys and Ni-graphene and Ni-diamond nanocomposites, have already shown promising results in this regard. In this study, electroless Ni-P alloys with excellent corrosion resistance were taken as the base matrix, and the incorporation of tungsten as a third alloying element was considered to improve the hardness and wear resistance of the resultant alloy coating. The present work is focused on the preparation of Ni-W-P coatings by electrodeposition with different phosphorous contents and their effects on electrochemical, mechanical, and tribological performance. The results were also compared with Ni-W alloys. Composition analysis by EDS showed deposition of Ni-32.85 wt% W-3.84 wt% P (designated Ni-W-LP) and Ni-18.55 wt% W-8.73 wt% P (designated Ni-W-HP) alloy coatings from electrolytes containing 0.006 and 0.01 M sodium hypophosphite, respectively. Inhibition of tungsten deposition in the presence of phosphorous was noted. SEM investigation showed cauliflower-like growth along with a few microcracks. The as-deposited Ni-W-P alloy coating was amorphous, as confirmed by XRD investigation, and step-wise crystallization was noticed upon annealing at higher temperatures. For all the coatings, the nanohardness was found to increase after heat treatment, and typical nanohardness values obtained for samples annealed at 400°C were 18.65±0.20 GPa, 20.03±0.25 GPa, and 19.17±0.25 GPa for the alloy coatings Ni-W, Ni-W-LP, and Ni-W-HP, respectively. The nanohardness data are therefore very promising. Wear and coefficient-of-friction data were recorded by applying different normal loads in reciprocating motion using a ball-on-plate geometry. After the experiments, the wear mechanism was established by detailed investigation of the wear-scar morphology. Potentiodynamic measurements showed that the coating with the higher phosphorous content was the most corrosion resistant in 3.5 wt% NaCl solution.

Keywords: corrosion, electrodeposition, nanohardness, Ni-W-P alloy coating

Procedia PDF Downloads 347
1282 Evaluation of Quasi-Newton Strategy for Algorithmic Acceleration

Authors: T. Martini, J. M. Martínez

Abstract:

An algorithmic acceleration strategy based on quasi-Newton (or secant) methods is presented to address the practical problem of accelerating the convergence of the Newton-Lagrange method in the case of convergence to critical multipliers. Since the Newton-Lagrange iteration converges locally at a linear rate, it is natural to conjecture that quasi-Newton methods, based on the so-called secant equation and some minimal variation principle, could converge superlinearly, thus restoring the convergence properties of Newton's method. This strategy can also be applied to accelerate the convergence of algorithms applied to fixed-point problems. Computational experience is reported illustrating the efficiency of this strategy for solving fixed-point problems with a linear convergence rate.
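
A hedged sketch of secant-type acceleration of a linearly convergent fixed-point iteration x = g(x), here via Steffensen's method (a classic secant-flavoured accelerator; the paper's quasi-Newton variants differ in detail). The map g is illustrative:

import math

def g(x):
    return math.cos(x)  # plain iteration converges only linearly

def plain(x, n):
    for _ in range(n):
        x = g(x)
    return x

def steffensen(x, n):
    for _ in range(n):
        g1, g2 = g(x), g(g(x))
        denom = g2 - 2 * g1 + x
        if denom == 0:
            return g1
        x = x - (g1 - x) ** 2 / denom  # secant-style update
    return x

star = 0.7390851332151607  # fixed point of cos(x)
print("plain, 5 iterations: error", abs(plain(1.0, 5) - star))
print("steffensen, 5 iterations: error", abs(steffensen(1.0, 5) - star))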

Keywords: algorithmic acceleration, fixed-point problems, nonlinear programming, quasi-newton method

Procedia PDF Downloads 486
1281 A Hybrid Tabu Search Algorithm for the Multi-Objective Job Shop Scheduling Problems

Authors: Aydin Teymourifar, Gurkan Ozturk

Abstract:

In this paper, a hybrid Tabu Search (TS) algorithm is proposed for multi-objective job shop scheduling problems (MO-JSSPs). The algorithm integrates several shifting-bottleneck-based neighborhood structures with the Giffler & Thompson algorithm, which improves the efficiency of the search. Diversification and intensification are provided by applying local and global left-shift algorithms and by creating new semi-active, active, and non-delay schedules. The proposed algorithm is tested on MO-JSSP benchmarks from the literature based on the Pareto optimality concept. Different performance criteria are used to evaluate the multi-objective algorithm. The proposed algorithm is able to find the Pareto solutions of the test problems in a shorter time than other algorithms from the literature.
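
A hedged sketch of the tabu search skeleton underlying such an algorithm: explore a swap neighbourhood, forbid recently used moves, and keep the best schedule seen. A single-machine sequence with a toy tardiness cost stands in for the full multi-objective job shop:

import random

random.seed(0)
proc = [4, 2, 7, 3, 5, 1]   # processing times of 6 jobs
due = [6, 4, 20, 9, 16, 3]  # due dates

def total_tardiness(seq):
    t, cost = 0, 0
    for j in seq:
        t += proc[j]
        cost += max(0, t - due[j])
    return cost

current = list(range(6))
best, best_cost = current[:], total_tardiness(current)
tabu = {}  # move -> iteration until which it is forbidden

for it in range(100):
    candidates = []
    for i in range(5):
        for j in range(i + 1, 6):
            if tabu.get((i, j), -1) >= it:
                continue  # move is tabu
            nb = current[:]
            nb[i], nb[j] = nb[j], nb[i]
            candidates.append((total_tardiness(nb), (i, j), nb))
    cost, move, nb = min(candidates)  # best non-tabu neighbour
    current = nb
    tabu[move] = it + 7  # tabu tenure of 7 iterations
    if cost < best_cost:
        best, best_cost = nb[:], cost

print(best, "tardiness:", best_cost)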

Keywords: tabu search, heuristics, job shop scheduling, multi-objective optimization, Pareto optimality

Procedia PDF Downloads 441
1280 Citation Analysis of New Zealand Court Decisions

Authors: Tobias Milz, L. Macpherson, Varvara Vetrova

Abstract:

The law is a fundamental pillar of human societies as it shapes, controls and governs how humans conduct business, behave and interact with each other. Recent advances in computer-assisted technologies such as NLP, data science and AI are creating opportunities to support the practice, research and study of this pervasive domain. It is therefore not surprising that there has been an increase in investments into supporting technologies for the legal industry (also known as “legal tech” or “law tech”) over the last decade. A sub-discipline of particular appeal is concerned with assisted legal research. Supporting law researchers and practitioners to retrieve information from the vast amount of ever-growing legal documentation is of natural interest to the legal research community. One tool that has been in use for this purpose since the early nineteenth century is legal citation indexing. Among other use cases, it provided an effective means to discover new precedent cases. Nowadays, computer-assisted network analysis tools allow for new and more efficient ways to reveal the “hidden” information that is conveyed through citation behavior. Unfortunately, access to openly available legal data is still lacking in New Zealand, and access to such networks is only commercially available via providers such as LexisNexis. Consequently, there is a need to create, analyze and provide a legal citation network with sufficient data to support legal research tasks. This paper describes the development and analysis of a legal citation network for New Zealand containing over 300,000 decisions from 125 different courts of all areas of law and jurisdiction. Using Python, the authors assembled web crawlers, scrapers and an OCR pipeline to collect and convert court decisions from openly available sources such as NZLII into uniform, machine-readable text. This facilitated the use of regular expressions to identify references to other court decisions from within the decision text. The data was then imported into a graph-based database (Neo4j) with the courts and their respective cases represented as nodes and the extracted citations as links. Furthermore, additional links between courts of connected cases were added to indicate an indirect citation between the courts. Neo4j, as a graph-based database, allows efficient querying and use of network algorithms such as PageRank to reveal the most influential/most cited courts and court decisions over time. This paper shows that the in-degree distribution of the New Zealand legal citation network resembles a power-law distribution, which indicates a possible scale-free behavior of the network. This is in line with findings for the respective citation networks of the U.S. Supreme Court, Austria and Germany. The authors of this paper provide the database as an openly available data source to support further legal research. The decision texts can be exported from the database to be used for NLP-related legal research, while the network can be used for in-depth analysis. For example, users of the database can specify the network algorithms and metrics to only include specific courts to filter the results to the area of law of interest.
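
A hedged sketch of the network-analysis step: load citation pairs into a directed graph and rank decisions with PageRank. Case identifiers are invented for illustration; the study itself runs such algorithms inside Neo4j:

import networkx as nx

citations = [  # (citing decision, cited decision)
    ("2019_NZHC_100", "2005_NZCA_12"),
    ("2019_NZHC_100", "1998_NZHC_77"),
    ("2020_NZSC_5", "2005_NZCA_12"),
    ("2021_NZCA_44", "2020_NZSC_5"),
    ("2021_NZCA_44", "2005_NZCA_12"),
]
G = nx.DiGraph(citations)

ranks = nx.pagerank(G)
for case, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{case}: {score:.3f}")  # most influential decisions first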

Keywords: case citation network, citation analysis, network analysis, Neo4j

Procedia PDF Downloads 106
1279 Sentiment Analysis of Consumers’ Perceptions on Social Media about the Main Mobile Providers in Jamaica

Authors: Sherrene Bogle, Verlia Bogle, Tyrone Anderson

Abstract:

In recent years, organizations have become increasingly interested in the possibility of analyzing social media as a means of gaining meaningful feedback about their products and services. An aspect-based sentiment analysis approach is used to predict the sentiment of Twitter datasets for Digicel and Lime, the main mobile companies in Jamaica, using supervised learning classification techniques. The results indicate an average accuracy of 82.2 percent in classifying tweets when comparing three separate classification algorithms against the purported baseline of 70 percent, and an average root mean squared error of 0.31. These results indicate that analyzing sentiment on social media to gain customer feedback can be a viable solution for mobile companies looking to improve business performance.
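
A hedged sketch of the supervised classification step: TF-IDF features plus a linear classifier over labelled tweets. The examples are invented stand-ins, not the study's Digicel/Lime dataset:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "love the new data plan, great speed",
    "no signal all day, terrible service",
    "top up was quick and easy",
    "customer support never answers",
]
labels = ["positive", "negative", "positive", "negative"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["the network is so slow today"]))  # -> likely 'negative'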

Keywords: machine learning, sentiment analysis, social media, supervised learning

Procedia PDF Downloads 440
1278 Early Detection of Type 2 Diabetes Using the K-Nearest Neighbor Algorithm

Authors: Ng Liang Shen, Ngahzaifa Abdul Ghani

Abstract:

This research aimed at developing an early warning system for pre-diabetics and diabetics by analyzing simple and easily determinable signs and symptoms of diabetes among people living in Malaysia using Particle Swarm Optimized Artificial Neural Networks. With the skyrocketing prevalence of Type 2 diabetes in Malaysia, the system can be used to encourage affected people to seek further medical attention, to prevent the onset of diabetes or to start managing it early enough to avoid the associated complications. The study sought to find the best predictive variables of Type 2 Diabetes Mellitus, developed a system to diagnose diabetes from these variables using Artificial Neural Networks, and tested the system's accuracy in recognizing the patterns generated from diabetes diagnosis results by machine learning algorithms, even at primary or advanced stages.
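
A hedged sketch of the k-nearest-neighbour classification named in the title (the abstract itself emphasises neural networks), using scikit-learn's synthetic data in place of the Malaysian survey of signs and symptoms:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=400, n_features=8, n_informative=5,
                           random_state=0)  # stand-in: symptoms -> diabetic or not
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_tr, y_tr)
print("held-out accuracy:", knn.score(X_te, y_te))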

Keywords: diabetes diagnosis, Artificial Neural Networks, artificial intelligence, soft computing, medical diagnosis

Procedia PDF Downloads 335
1277 Dream Work: Examining the Effectiveness of Dream Interpretation in Gaining Psychological Insight into Young Adults in Korea

Authors: Ahn Christine Myunghee, Sim Wonjin, Cho Kristina, Ahn Mira, Hong Yeju, Kwok Jihae, Lim Sooyeon, Park Hansol

Abstract:

With a sharp increase in the prevalence of mental health issues in Korea, there is a need for specific and effective intervention strategies in counseling and psychotherapy for use with Korean clients. Given the cultural emphasis on restraining emotional expression and not disclosing personal and familial problems to outsiders, clients often find it difficult to discuss their emotional issues even with therapists. Exploring a client's internal psychological processes while bypassing this culture-specific mode of therapeutic communication often becomes a challenge in the therapeutic setting. Given this socio-cultural context, the purpose of the current study was to investigate the effectiveness of using dream work with individuals in Korea. The current study conducted one 60-90 minute dream session with each of 39 Korean young adults and analyzed the dream content to evaluate the effectiveness of the Hill dream model in accessing intra-psychic materials, determining essential emotional themes, and learning how the individuals interpreted the contents of their dreams. The transcribed data, which included a total of 39 sessions from 39 volunteer university students, were analyzed with the Consensual Qualitative Research (CQR) approach in terms of domains and core ideas. Self-report measures on Dream Salience, Gains from Dream Interpretation, and the Session Evaluation Scale were administered before and after each dream session. The results indicated that dream work appears to be an effective way to understand unconscious motivations, thoughts, and feelings related to a person's sense of self, and also how people relate to others. The current findings need to be replicated with clients referred for counseling and psychotherapy to determine whether dream work is an appropriate and useful intervention in counseling settings. Limitations of the current study and suggestions for future follow-up are included in the discussion.

Keywords: dream work, dream interpretation, Korean, young adults, CQR

Procedia PDF Downloads 445