Search results for: consensus algorithms
735 Fundamental Theory of the Evolution Force: Gene Engineering utilizing Synthetic Evolution Artificial Intelligence
Authors: L. K. Davis
Abstract:
The effects of the evolution force are observable in nature at all structural levels, ranging from small molecular systems to enormous biospheric systems. However, the evolution force and the work associated with the formation of biological structures have yet to be described mathematically or theoretically. In addressing this conundrum, we consider evolution from a unique perspective and introduce the “Fundamental Theory of the Evolution Force” (FTEF). We utilized synthetic evolution artificial intelligence (SYN-AI) to identify genomic building blocks and to engineer 14-3-3 ζ docking proteins by transforming gene sequences into time-based DNA codes derived from protein hierarchical structural levels. These codes served as templates for random DNA hybridizations and genetic assembly. The application of hierarchical DNA codes allowed us to fast-forward evolution while dampening the effect of point mutations. Natural selection was performed at each hierarchical structural level, and mutations were screened using BLOSUM 80 mutation frequency-based algorithms. Notably, SYN-AI engineered a set of three architecturally conserved docking proteins that retained the motion and vibrational dynamics of native Bos taurus 14-3-3 ζ.
Keywords: 14-3-3 docking genes, synthetic protein design, time-based DNA codes, writing DNA code from scratch
Procedia PDF Downloads 112
734 Assessment of Runway Micro Texture Using Surface Laser Scanners: An Explorative Study
Authors: Gerard Van Es
Abstract:
In this study, the use of a high-resolution surface laser scanner to assess the micro texture of runway surfaces was investigated experimentally. Micro texture is one of the important surface components that help provide high braking friction between aircraft tires and a wet runway surface. Algorithms to derive different parameters that characterise micro texture were developed. Surface scans with a high-resolution laser scanner were conducted on 40 different runway (like) surfaces. For each surface, micro texture parameters were calculated from the laser scan data. These results were correlated with results obtained from a British pendulum tester used on the same surfaces. Results obtained with the British pendulum tester are generally considered indicative of micro-texture-related friction characteristics. The results show that a meaningful correlation can be found between the micro texture parameters obtained with the laser scanner and the British pendulum tester results. Surface laser scanners are easier to operate and give more consistent results than a British pendulum tester. Therefore, for airport operators, surface laser scanners can be a useful tool to determine whether a runway becomes slippery when wet due to a smooth micro texture.
Keywords: runway friction, micro texture, aircraft braking performance, slippery runways
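As an illustration of the kind of parameters that can be derived from a laser-scanned height profile, here is a minimal sketch of two standard roughness measures (Ra, Rq); these are generic texture parameters, not necessarily the ones developed in the study, and the sample profile is synthetic.

```python
# Illustrative roughness parameters for a 1-D laser-scanned height profile.
# These are generic measures (not the study's actual algorithms).

def roughness_parameters(profile):
    """Return (Ra, Rq) for a height profile in micrometres.

    Ra = mean absolute deviation from the mean line,
    Rq = root-mean-square deviation from the mean line.
    """
    n = len(profile)
    mean = sum(profile) / n
    deviations = [h - mean for h in profile]
    ra = sum(abs(d) for d in deviations) / n
    rq = (sum(d * d for d in deviations) / n) ** 0.5
    return ra, rq

# Example: a synthetic sawtooth profile
ra, rq = roughness_parameters([0.0, 2.0, 0.0, 2.0, 0.0, 2.0])
print(ra, rq)  # 1.0 1.0
```

In practice such parameters would be computed per scan window and then correlated against British pendulum tester readings.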
Procedia PDF Downloads 119
733 Decentralized Forest Policy for Natural Sal (Shorea robusta) Forests Management in the Terai Region of Nepal
Authors: Medani Prasad Rijal
Abstract:
The study outlines the impacts of decentralized forest policy on natural Sal (Shorea robusta) forests in the Terai region of Nepal. The government has implemented a community forestry program to manage forest resources and improve the livelihood of local people collectively. Forest management authority, including the rights to conserve, manage, develop, and use forest resources, was shifted to local communities; however, ownership of the forestland was retained by the government. Local communities took decisions on the harvesting, distribution, and sale of forest products, fixing prices independently. The communities set low prices for forest products distributed among user households in the name of collective decision-making, and this practice of low valuation devalues the worth of the forest products. Therefore, the study hypothesized that decision-making capacity is equally prominent next to decentralized policy and program formulation. To accomplish the study, individual and group discussions and questionnaire surveys were conducted with executive committee members and user households. The study revealed that the local institution, the Community Forest User Group (CFUG) committee, normally took decisions on a consensus basis. Considering the access and affording capacity of user households with poor economic backgrounds, a low pricing mechanism for forest products has been practiced, even though Sal timber is far more expensive in the local market. The local communities thought that the low pricing mechanism made products accessible to all user households, from poor to better off. However, the analysis of forest product distribution contradicts this assumption, as most of the Sal timber, the most valuable product of the community forest, was purchased only by the limited households in better economic conditions. Since the Terai region is heterogeneous in socio-economic conditions, better-off households always have higher affording capacity and a greater possibility of capturing timber benefits because of the low price mechanism. On the other hand, the minimum price rate of forest products contributes little to community fund collection. Consequently, it offers poor support for carrying out poverty alleviation activities for poor people. The local communities have fixed the Sal timber price at around three times below the normal market price, which is itself strong evidence of forest product devaluation. Finally, the study concluded that capacity building of local executives as decision-makers for natural Sal forests is equally indispensable next to policy and program formulation for effective decentralized forest management. A unilateral decentralized forest policy may devalue forest products rather than devolve power to local communities and empower them.
Keywords: community forestry program, decentralized forest policy, Nepal, Sal forests, Terai
Procedia PDF Downloads 332
732 Emotion Mining and Attribute Selection for Actionable Recommendations to Improve Customer Satisfaction
Authors: Jaishree Ranganathan, Poonam Rajurkar, Angelina A. Tzacheva, Zbigniew W. Ras
Abstract:
In today’s world, business often depends on customer feedback and reviews. Sentiment analysis helps identify and extract information about the sentiment or emotion of a topic or document. Attribute selection is a challenging problem in actionable pattern mining algorithms, especially with large datasets. Action rule mining is one method for discovering actionable patterns from data. Action rules describe specific actions to be taken, in the form of conditions that help achieve the desired outcome; they help change any undesirable or negative state to a more desirable or positive one. In this paper, we present a lexicon-based weighted scheme approach to identify emotions from customer feedback data in the area of manufacturing business. We also use rough sets and explore attribute selection methods for large-scale datasets. We then apply actionable pattern mining to extract possible emotion-change recommendations. These recommendations help business analysts improve their customer service, which leads to customer satisfaction and increased sales revenue.
Keywords: actionable pattern discovery, attribute selection, business data, data mining, emotion
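A lexicon-based weighted scheme of the kind described above can be sketched as follows; the lexicon entries and weights are entirely illustrative, not the paper's actual resources.

```python
# Hypothetical lexicon mapping feedback tokens to weighted emotion scores.
# The entries and weights are made-up placeholders for illustration.
EMOTION_LEXICON = {
    "delayed": {"anger": 0.6, "sadness": 0.2},
    "broken":  {"anger": 0.4, "sadness": 0.5},
    "helpful": {"joy": 0.8},
}

def score_emotions(feedback):
    """Aggregate per-emotion weights over the tokens of a feedback text."""
    scores = {}
    for token in feedback.lower().split():
        for emotion, weight in EMOTION_LEXICON.get(token, {}).items():
            scores[emotion] = scores.get(emotion, 0.0) + weight
    return scores

scores = score_emotions("Delivery delayed and product broken")
print(max(scores, key=scores.get))  # dominant emotion: anger
```

The dominant-emotion labels produced this way would then feed the attribute selection and action rule mining stages.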
Procedia PDF Downloads 199
731 Gaits Stability Analysis for a Pneumatic Quadruped Robot Using Reinforcement Learning
Authors: Soofiyan Atar, Adil Shaikh, Sahil Rajpurkar, Pragnesh Bhalala, Aniket Desai, Irfan Siddavatam
Abstract:
Deep reinforcement learning (deep RL) algorithms leverage the power of complex controllers by automatically mapping sensory inputs to low-level actions, handling complex robot dynamics with minimal engineering. However, deep RL involves high risk when implemented directly in real-world scenarios and is highly sensitive to hyperparameters. Tuning hyperparameters on a pneumatic quadruped robot becomes very expensive through trial-and-error learning. This paper presents automated learning control for a pneumatic quadruped robot using sample-efficient deep Q-learning, enabling minimal tuning and very few trials to train the neural network. Long training hours may degrade the pneumatic cylinders due to jerky actions originating from stochastic weights. We applied this method to the pneumatic quadruped robot, which resulted in a hopping gait. In our process, we eliminated the use of a simulator and acquired a stable gait. This approach evolves so that the resultant gait becomes more robust towards any stochastic changes in the environment. We further show that our algorithm performed very well compared to a programmed gait based on robot dynamics.
Keywords: model-based reinforcement learning, gait stability, supervised learning, pneumatic quadruped
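For readers unfamiliar with Q-learning, the update rule that deep Q-learning approximates with a neural network can be shown in tabular form; the states, actions, and rewards below are toy placeholders, not the robot's actual control space.

```python
# Minimal tabular Q-learning step, illustrating the rule that deep
# Q-learning approximates with a network. States/actions are toy labels.

def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state].values())
    td_target = reward + gamma * best_next
    q[state][action] += alpha * (td_target - q[state][action])
    return q[state][action]

q = {"stance": {"extend": 0.0, "retract": 0.0},
     "flight": {"extend": 1.0, "retract": 0.0}}
print(q_update(q, "stance", "extend", reward=1.0, next_state="flight"))
```

Sample-efficient variants reduce how many such real-world transitions are needed, which matters when each trial stresses the pneumatic hardware.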
Procedia PDF Downloads 314
730 Biomimetic Paradigms in Architectural Conceptualization: Science, Technology, Engineering, Arts and Mathematics in Higher Education
Authors: Maryam Kalkatechi
Abstract:
The application of algorithms in architecture has been realized as geometric forms, which are increasingly being used by architecture firms. The abstraction of ideas into a formulated algorithm is not possible; there is still a gap between design innovation and the final build in prescribed formulas, even in the most aesthetic realizations. This paper presents the application of an erudite design process to conceptualize biomimetic paradigms in architecture. The process is customized to material and tectonics. The first part of the paper outlines the design process elements within four biomimetic pre-concepts. The pre-concepts are chosen from the plant family: the pine leaf, the dandelion flower, the cactus flower, and the sunflower. These choices relate to the material qualities and natural patterns of the tectonics of these plants. The paper then focuses on four versions of tectonic comprehension of one of the biomimetic pre-concepts. The next part discusses the implementation of STEAM in higher education in architecture. This is shown by the relations within the design process and the manifestation of the thinking processes. The A in STEAM, in this case, is only achieved by the design process, an engaging event akin to the performing arts, in which the conceptualization and development are realized in the final build.
Keywords: biomimetic paradigm, erudite design process, tectonic, STEAM (Science, Technology, Engineering, Arts, Mathematics)
Procedia PDF Downloads 209
729 An Improved Convolution Deep Learning Model for Predicting Trip Mode Scheduling
Authors: Amin Nezarat, Naeime Seifadini
Abstract:
Trip mode selection is a behavioral characteristic of passengers with immense importance for travel demand analysis, transportation planning, and traffic management. Identification of trip mode distribution will allow transportation authorities to adopt appropriate strategies to reduce travel time, traffic, and air pollution. The majority of existing trip mode inference models operate on human-selected features and traditional machine learning algorithms. However, human-selected features are sensitive to changes in traffic and environmental conditions and susceptible to personal biases, which can make them inefficient. One way to overcome these problems is to use neural networks capable of extracting high-level features from raw input. In this study, a convolutional neural network (CNN) architecture is used to predict the trip mode distribution based on raw GPS trajectory data. The key innovation of this paper is the design of the layout of the CNN's input layer, together with a normalization operation, in a way that is not only compatible with the CNN architecture but can also represent the fundamental features of motion, including speed, acceleration, jerk, and bearing rate. The highest prediction accuracy achieved with the proposed configuration for the convolutional neural network with batch normalization is 85.26%.
Keywords: predicting, deep learning, neural network, urban trip
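The motion features named above can be derived from raw position samples by successive finite differences; this is a simplified 1-D sketch (real trajectories are 2-D and irregularly sampled, and the paper's exact input layout is not reproduced here).

```python
# Sketch: deriving per-point motion channels (speed, acceleration, jerk)
# from raw positions by finite differences. 1-D positions and a fixed
# sampling interval are simplifying assumptions for illustration.

def motion_channels(positions, dt=1.0):
    """Return speed, acceleration, and jerk series from positions
    sampled every dt seconds."""
    speed = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    accel = [(b - a) / dt for a, b in zip(speed, speed[1:])]
    jerk = [(b - a) / dt for a, b in zip(accel, accel[1:])]
    return speed, accel, jerk

speed, accel, jerk = motion_channels([0.0, 1.0, 3.0, 6.0])
print(speed, accel, jerk)  # [1.0, 2.0, 3.0] [1.0, 1.0] [0.0]
```

Stacking such channels per trajectory point is what makes the raw GPS data consumable by a CNN input layer.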
Procedia PDF Downloads 137
728 A Culture-Contrastive Analysis of the Communication between Discourse Participants in European Editorials
Authors: Melanie Kerschner
Abstract:
Language is our main means of social interaction, and news journalism, especially opinion discourse, holds a powerful position in this context. Editorials can be regarded as encounters of different, partially contradictory relationships between discourse participants, constructed through the editorial voice. Their primary goal is to shape public opinion by commenting on events already addressed by other journalistic genres in the given newspaper. In doing so, the author tries to establish a consensus with the reader over the negotiated matter (i.e. the news event). At the same time, he or she claims authority over the “correct” description and evaluation of an event. Yet how can the relationship and the interaction between the discourse participants, i.e. the journalist, the reader, and the news actors represented in the editorial, best be visualized and studied from a cross-cultural perspective? The present research project attempts to give insights into the role of (media) culture in British, Italian, and German editorials. For this purpose the presenter will propose a basic framework: the so-called “pyramid of discourse participants”, comprising the author, the reader, two types of news actors, and the semantic macro-structure (as a meta-level of analysis). Based on this framework, the following questions will be addressed: • Which strategies does the author employ to persuade the reader and to prompt him to give his opinion (in the comment section)? • In which ways (and with which linguistic tools) is editorial opinion expressed? • Does the author use adjectives, adverbials, and modal verbs to evaluate news actors, their actions, and the current state of affairs, or does he/she prefer nominal labels? • Which influence do language choice and the related media culture have on the representation of news events in editorials? • To what extent does the social context of a given media culture influence the amount of criticism and the way it is mediated so that it remains culturally acceptable? The culture-contrastive study will examine 45 editorials (i.e. 15 per media culture) from six national quality papers that are similar in distribution, importance, and envisaged readership, in order to draw valuable conclusions about culturally motivated similarities and differences in the coverage and assessment of news events. The thematic orientation of the editorials is the NSA scandal and the reactions of various countries, as this topic was and still is relevant to each of the three media cultures. Starting out from the “pyramid of discourse participants” as the underlying framework, eight different criteria will be assigned to the individual discourse participants in the micro-analysis of the editorials. For the purpose of illustration, a single criterion, referring to the salience of authorial opinion, will be selected to demonstrate how the pyramid of discourse participants can be applied as a basis for empirical analysis. Extracts from the corpus shall furthermore enhance the understanding.
Keywords: micro-analysis of editorials, culture-contrastive research, media culture, interaction between discourse participants, evaluation
Procedia PDF Downloads 515
727 Indoor Real-Time Positioning and Mapping Based on Manhattan Hypothesis Optimization
Authors: Linhang Zhu, Hongyu Zhu, Jiahe Liu
Abstract:
This paper investigates a method of indoor real-time positioning and mapping based on the Manhattan world assumption. In indoor environments, relying solely on feature matching techniques or other geometric algorithms for sensor pose estimation inevitably results in cumulative errors, posing a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan world hypothesis to optimize the feature-matching-based camera pose algorithm, which improves the accuracy of camera pose estimation. A special processing method is applied to image data frames that conform to the Manhattan world assumption. When similar data frames appear subsequently, they can be used to eliminate drift in sensor pose estimation, thereby reducing cumulative estimation errors and optimizing mapping and positioning. Experimental verification shows that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of indoor environments. This provides effective technical support for applications such as indoor navigation and robot control.
Keywords: Manhattan world hypothesis, real-time positioning and mapping, feature matching, loop closure detection
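The core intuition of the Manhattan world hypothesis is that dominant directions in man-made indoor scenes are mutually orthogonal, so a drifting orientation estimate can be corrected towards the nearest axis-aligned direction. A deliberately simplified 2-D sketch (real systems correct full 3-D rotations, not a scalar heading):

```python
# Toy Manhattan-world correction: snap a drifting heading estimate to
# the nearest 90-degree multiple. A 2-D scalar heading is a simplifying
# assumption; real pipelines operate on 3-D rotation matrices.

def snap_to_manhattan(heading_deg):
    """Snap a heading (degrees) to the nearest multiple of 90."""
    return (round(heading_deg / 90.0) * 90) % 360

print(snap_to_manhattan(87.4))   # 90
print(snap_to_manhattan(182.9))  # 180
```

Applying such a correction on frames that conform to the Manhattan assumption is what keeps rotational drift from accumulating.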
Procedia PDF Downloads 59
726 Developing Artificial Neural Networks (ANN) for Falls Detection
Authors: Nantakrit Yodpijit, Teppakorn Sittiwanchai
Abstract:
The number of older adults is rising rapidly, and the world’s population is aging. Falls are one of the most common and serious health problems in the elderly, and may lead to acute and chronic injuries and deaths. Fall-prone individuals are at greater risk for decreased quality of life, lowered productivity and poverty, social problems, and additional health problems. A number of studies on falls prevention using fall detection systems have been conducted. Many available technologies for fall detection are laboratory-based and can incur substantial costs; the utilization of alternative technologies can potentially reduce these costs. This paper presents the design and development of a new wearable fall detection system that uses an accelerometer and gyroscope as motion sensors for the detection of body orientation and movement. Algorithms are developed to differentiate between Activities of Daily Living (ADL) and falls by comparing threshold-based values with Artificial Neural Networks (ANN). Results indicate the possibility of using the new threshold-based method with a neural network algorithm to reduce the number of false positives (false alarms) and improve the accuracy of the fall detection system.
Keywords: aging, algorithm, artificial neural networks (ANN), fall detection system, motion sensors, threshold
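The threshold stage of such a detector can be sketched as follows: a fall typically shows a free-fall dip followed by an impact spike in the acceleration magnitude. The thresholds below are illustrative, not the study's tuned values, and the ANN stage that refines these candidate detections is omitted.

```python
# Simplified threshold stage of a wearable fall detector. Thresholds are
# illustrative placeholders; the ANN refinement stage is not shown.

def detect_fall(magnitudes, free_fall_g=0.4, impact_g=2.5):
    """Return True if a sub-threshold dip is followed by an impact spike.

    magnitudes: acceleration magnitude samples in units of g.
    """
    dip_seen = False
    for m in magnitudes:
        if m < free_fall_g:
            dip_seen = True
        elif dip_seen and m > impact_g:
            return True
    return False

print(detect_fall([1.0, 0.3, 0.2, 3.1, 1.0]))  # True  (fall-like pattern)
print(detect_fall([1.0, 1.1, 0.9, 1.2, 1.0]))  # False (walking-like ADL)
```

Candidates flagged by this stage would then be passed to the neural network to suppress false alarms.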
Procedia PDF Downloads 496
725 Digital Publics, Analogue Institutions: Everyday Urban Politics in Gated Neighborhoods in India
Authors: Praveen Priyadarshi
Abstract:
What is the nature of the 'political subjects' in the new urban spaces of Indian cities? How do they become a 'public'? The paper explores these questions by studying gated communities in India's National Capital Region. Even as the 'gated-ness' of these neighborhoods constantly underlines the definitive spatial boundary of the 'public', which is constituted within the walls of a particular gated community, the making of this 'public' occurs as much in digital spaces, in online messaging apps and platforms populated by unique digital identities. It is through constant exchanges among these digital identities that the 'public' is created. However, the institutional framework and the formal rules governing the making of the public are still analogue, because they presume and privilege traditional modes of participation for people to constitute a 'public'. The institutions are designed as rules and norms governing people's behavior when they participate in the traditional, physical mode, whereas rules and norms designed into algorithms regulate people's social and political behavior in the digital domain. In exploring this disjuncture between analogue institutions and digital publics, the paper analytically evaluates the nature of everyday politics in gated neighborhoods in India.
Keywords: gated communities, everyday politics, new urban spaces, digital publics
Procedia PDF Downloads 163
724 Spatiotemporal Analysis of Land Surface Temperature and Urban Heat Island Evaluation of Four Metropolitan Areas of Texas, USA
Authors: Chunhong Zhao
Abstract:
Remotely sensed land surface temperature (LST) is vital to understanding the land-atmosphere energy balance and hydrological cycle, and is thus widely used to describe the urban heat island (UHI) phenomenon. However, due to technical constraints, satellite thermal sensors are unable to provide LST measurements with both high spatial and high temporal resolution, despite various downscaling techniques and algorithms for generating high spatiotemporal resolution LST. Four major metropolitan areas in Texas, USA (Dallas-Fort Worth, Houston, San Antonio, and Austin) all demonstrate UHI effects, and different cities are expected to show varying surface UHI (SUHI) effects along their urban development trajectories. With the help of the Landsat, ASTER, and MODIS archives, this study focuses on the spatial patterns of UHIs and their seasonal and annual variation in these metropolitan areas. Using a Gaussian model, Local Indicators of Spatial Association (LISA), and data fusion methods, this study identifies the hotspots and the trajectory of the UHI phenomenon in the four cities. The comparative analysis can help alleviate the adverse effects of UHI and inform rational urban planning in the long run.
Keywords: spatiotemporal analysis, land surface temperature, urban heat island evaluation, metropolitan areas of Texas, USA
Procedia PDF Downloads 416
723 Examining the Impact of Fake News on Mental Health of Residents in Jos Metropolis
Authors: Job Bapyibi Guyson, Bangripa Kefas
Abstract:
The advent of social media has no doubt provided platforms that facilitate the spread of fake news. The devastating impact of this does not end with the prevalence of rumours and propaganda; it also poses a potential threat to individuals' mental well-being. Therefore, this study on the impact of fake news on the mental health of residents of Jos metropolis interrogates, among other things, the impact of exposure to fake news on residents' mental health. Anchored on cultivation theory, the study adopted a quantitative method and surveyed the opinions of two hundred (200) social media users in Jos metropolis using a purposive sampling technique. The findings reveal that a significant majority of respondents perceive fake news as highly prevalent on social media, with associated feelings of anxiety and stress. The majority of respondents express confidence in identifying fake news, though a notable proportion lack such confidence. Strategies for managing the mental impact of encountering fake news include ignoring it, fact-checking, discussing it with others, reporting it to platforms, and seeking professional support. Based on these insights, recommendations were proposed to address the challenges posed by fake news, including promoting media literacy, integrating fact-checking tools, adjusting algorithms, and fostering digital well-being features.
Keywords: fake news, mental health, social media, impact
Procedia PDF Downloads 53
722 Distribution, Source Apportionment and Assessment of Pollution Level of Trace Metals in Water and Sediment of a Riverine Wetland of the Brahmaputra Valley
Authors: Kali Prasad Sarma, Sanghita Dutta
Abstract:
Deepor Beel (DB) is the lone Ramsar site and an important wetland of the Brahmaputra valley in the state of Assam. The local people from fourteen peripheral villages traditionally utilize the wetland for harvesting vegetables, flowers, aquatic seeds, medicinal plants, fish, molluscs, fodder for domestic cattle, etc. Therefore, it is of great importance to understand the concentration and distribution of trace metals in the water-sediment system of the beel in order to protect its ecological environment. DB lies between 26°05′26′′N to 26°09′26′′N latitudes and 90°36′39′′E to 91°41′25′′E longitudes. Water samples from the surface layer of water up to 40 cm deep and sediment samples from the top 5 cm layer of surface sediments were collected. The trace metals in waters and sediments were analysed using ICP-OES, and the organic carbon was analysed using a TOC analyser. The different minerals present in the sediments were confirmed by X-ray diffraction (XRD). SEM images were recorded with a scanning electron microscope fitted with an energy-dispersive X-ray unit, at an accelerating voltage of 20 kV. All statistical analyses were performed using SPSS 20.0 for Windows. In the present research, the distribution, source apportionment, temporal and spatial variability, extent of pollution, and ecological risk of eight toxic trace metals in the sediments and water of DB were investigated. The average concentrations of chromium (Cr) (both seasons), copper (Cu) and lead (Pb) (pre-monsoon), and zinc (Zn) and cadmium (Cd) (post-monsoon) in sediments were higher than the consensus-based threshold effect concentration (TEC). The persistent presence of toxic trace metals in sediments poses a potential threat, especially to sediment-dwelling organisms. The degree of pollution in DB sediments for Pb, cobalt (Co), Zn, Cd, Cr, Cu, and arsenic (As) was assessed using the Enrichment Factor (EF), Geo-accumulation Index (Igeo), and Pollution Load Index (PLI). The results indicated that contamination of surface sediments in DB is dominated by Pb and Cd and, to a lesser extent, by Co, Fe, Cu, Cr, As, and Zn. A significant positive correlation among the element pairs Co/Fe and Zn/As in water, and Cr/Zn and Fe/As in sediments, indicates a similar source of origin for these metals. The effects of interaction among trace metals between water and sediments show significant variations (F = 94.02, P < 0.001), suggesting maximum mobility of trace metals in DB sediments and water. The source apportionment of the heavy metals was carried out using Principal Component Analysis (PCA). SEM-EDS detected the presence of Cd, Cu, Cr, Zn, Pb, As, and Fe in the sediment samples. The average concentrations of Cd, Zn, Pb, and As in the bed sediments of DB were found to be higher than the crustal abundance. The EF values indicate that Cd and Pb are significantly enriched. Source apportionment of the eight metals using PCA revealed that Cd was anthropogenic in origin; Pb, As, Cr, and Zn had mixed sources; whereas Co, Cu, and Fe were natural in origin.
Keywords: Deepor Beel, enrichment factor, principal component analysis, trace metals
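The three pollution indices named above follow standard definitions, sketched below; the metal and background concentrations in the example are illustrative placeholders, not the study's measured data (Fe is the conventional normalising element for EF).

```python
import math

# Standard sediment-pollution indices. Example concentrations are
# illustrative placeholders, not the study's reference data.

def enrichment_factor(c_metal, c_fe, bg_metal, bg_fe):
    """EF = (C_metal/C_Fe)_sample / (C_metal/C_Fe)_background."""
    return (c_metal / c_fe) / (bg_metal / bg_fe)

def geoaccumulation_index(c_metal, bg_metal):
    """Igeo = log2(C_n / (1.5 * B_n)); 1.5 buffers natural variability."""
    return math.log2(c_metal / (1.5 * bg_metal))

def pollution_load_index(contamination_factors):
    """PLI = n-th root of the product of contamination factors (C/B)."""
    product = math.prod(contamination_factors)
    return product ** (1.0 / len(contamination_factors))

print(enrichment_factor(40.0, 20000.0, 20.0, 40000.0))  # 4.0
print(geoaccumulation_index(60.0, 20.0))                # 1.0
print(pollution_load_index([2.0, 8.0]))                 # 4.0
```

EF values well above 1 and positive Igeo values are what mark Cd and Pb as significantly enriched in the study.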
Procedia PDF Downloads 287
721 Predictive Analytics of Student Performance Determinants
Authors: Mahtab Davari, Charles Edward Okon, Somayeh Aghanavesi
Abstract:
Every institute of learning is usually interested in the performance of its enrolled students. The level of these performances determines the approach an institute may adopt in rendering academic services. The focus of this paper is to evaluate students' academic performance in given courses of study using machine learning methods. This study evaluated various supervised machine learning classification algorithms, such as Logistic Regression (LR), Support Vector Machine (SVM), Random Forest, Decision Tree, K-Nearest Neighbors, Linear Discriminant Analysis (LDA), and Quadratic Discriminant Analysis, using selected features to predict study performance. The accuracy, precision, recall, and F1 score obtained from 5-fold cross-validation were used to determine the best classification algorithm for predicting students’ performance. SVM (with a linear kernel), LDA, and LR were identified as the best-performing machine learning methods. Using the LR model, this study also identified students' educational habits, such as reading and paying attention in class, as strong determinants of above-average performance. Other important features include the academic history of the student and work. Demographic factors such as age, gender, and high school graduation had no significant effect on a student's performance.
Keywords: student performance, supervised machine learning, classification, cross-validation, prediction
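The evaluation protocol described above can be sketched in miniature: k-fold cross-validated accuracy for a classifier. A nearest-centroid classifier on a toy 1-D "study hours" dataset stands in for the SVM/LDA/LR models; the data and feature are illustrative only.

```python
# Pedagogical sketch of 5-fold cross-validated accuracy. A simple
# nearest-centroid classifier and toy data stand in for the paper's
# models and real student features.

def nearest_centroid_predict(train, x):
    """train: list of (feature, label). Predict label of nearest centroid."""
    centroids = {}
    for label in {lab for _, lab in train}:
        values = [f for f, lab in train if lab == label]
        centroids[label] = sum(values) / len(values)
    return min(centroids, key=lambda lab: abs(centroids[lab] - x))

def k_fold_accuracy(data, k=5):
    """Average held-out accuracy over k contiguous folds."""
    fold_size = len(data) // k
    accuracies = []
    for i in range(k):
        test = data[i * fold_size:(i + 1) * fold_size]
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        correct = sum(nearest_centroid_predict(train, f) == lab
                      for f, lab in test)
        accuracies.append(correct / len(test))
    return sum(accuracies) / k

# Toy "study hours -> pass/fail" data, perfectly separable
data = [(h, "fail") for h in (1.0, 1.5, 2.0, 2.5, 3.0)] + \
       [(h, "pass") for h in (7.0, 7.5, 8.0, 8.5, 9.0)]
print(k_fold_accuracy(data, k=5))  # 1.0 on this separable toy set
```

In the study itself, the same protocol is run per algorithm, and accuracy, precision, recall, and F1 are compared across the folds.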
Procedia PDF Downloads 125
720 Developing Digital Twins of Steel Hull Processes
Authors: V. Ložar, N. Hadžić, T. Opetuk, R. Keser
Abstract:
The development of digital twins strongly depends on efficient algorithms and their capability to mirror real-life processes. Nowadays, such efforts are required to establish the factories of the future, which face the new demands of custom-made production. Ship hull processes face these challenges too. Therefore, it is important to implement design and evaluation approaches based on production system engineering. In this study, the recently developed finite state method is employed to describe the steel hull process as a platform for the implementation of digital twinning technology. The application is justified by comparing the finite state method with an analytical approach. The method is employed to rebuild a model of a real shipyard ship hull process using a combination of serial and splitting lines. The key performance indicators, such as the production rate, work in process, and the probabilities of starvation and blockage, are calculated and compared to the corresponding results obtained through a simulation approach using the software tool Enterprise Dynamics. This study confirms that the finite state method is a suitable tool for digital twinning applications. The conclusion highlights the advantages and disadvantages of the methods employed in this context.
Keywords: digital twin, finite state method, production system engineering, shipyard
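For intuition about the KPIs named above, here is a toy discrete-time simulation of a two-machine serial line with an intermediate buffer; the machine reliabilities and buffer size are made-up parameters, and this simulation stands in for (it is not) the finite state method itself.

```python
import random

# Toy two-machine serial line with an intermediate buffer, estimating
# production rate, WIP, starvation, and blockage probabilities.
# Reliabilities and buffer capacity are illustrative assumptions.

def simulate_line(p1=0.9, p2=0.85, buffer_cap=3, steps=10_000, seed=42):
    rng = random.Random(seed)
    buffer, produced, starved, blocked = 0, 0, 0, 0
    wip_total = 0
    for _ in range(steps):
        m1_up = rng.random() < p1
        m2_up = rng.random() < p2
        # Machine 2 pulls from the buffer; it starves when the buffer is empty.
        if buffer == 0:
            starved += 1
        elif m2_up:
            buffer -= 1
            produced += 1
        # Machine 1 fills the buffer; it is blocked when the buffer is full.
        if buffer == buffer_cap:
            blocked += 1
        elif m1_up:
            buffer += 1
        wip_total += buffer
    return {"production_rate": produced / steps,
            "avg_wip": wip_total / steps,
            "p_starved": starved / steps,
            "p_blocked": blocked / steps}

print(simulate_line())
```

The finite state method computes these same quantities analytically from the line's Markov chain, which is what makes it fast enough for digital twinning.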
Procedia PDF Downloads 99
719 Development of International Entry-Level Nursing Competencies to Address the Continuum of Substance Use
Authors: Cheyenne Johnson, Samantha Robinson, Christina Chant, Ann M. Mitchell, Carol Price, Carmel Clancy, Adam Searby, Deborah S. Finnell
Abstract:
Introduction: Substance use along the continuum from at-risk use to a substance use disorder (SUD) contributes substantially to the burden of disease and related harms worldwide. There is a growing body of literature that highlights the lack of substance use related content in nursing curricula. Furthermore, there is also a lack of consensus on key competencies necessary for entry-level nurses. Globally, there is a lack of established nursing competencies related to prevention, health promotion, harm reduction and treatment of at-risk substance use and SUDs. At a critical time in public health, this gap in nursing curricula contributes to a lack of preparation for entry-level nurses to support people along the continuum of substance use. Thus, in practice, early opportunities for screening, support, and interventions may be missed. To address this gap, an international committee was convened to develop international entry-level nursing competencies specifying the knowledge, skills, and abilities that all nurses should possess in order to address the continuum of substance use. Methodology: An international steering committee, including representation from Canada, United States, United Kingdom, and Australia was established to lead this work over a one-year time period. The steering committee conducted a scoping review, undertaken to examine nursing competency frameworks, and to inform a competency structure that would guide this work. The next steps were to outline key competency areas and establish leaders for working groups to develop the competencies. In addition, a larger international committee was gathered to contribute to competency working groups, review the collective work and concur on the final document. Findings: A comprehensive framework was developed with competencies covering a wide spectrum of substance use across the lifespan and in the context of prevention, health promotion, harm reduction and treatment, including special populations. 
The development of this competency-based framework meets an identified need to provide guidance for universities, health authorities, policy makers, nursing regulators, and other organizations that provide and support nursing education focused on care for patients and families with at-risk substance use and SUDs. Conclusion: Utilizing these global competencies as expected outcomes of educational and skill-building curricula for entry-level nurses holds great promise for incorporating evidence-informed training in the care and management of people across the continuum of substance use.
Keywords: addiction nursing, addiction nursing curriculum, competencies, substance use
Procedia PDF Downloads 175
718 Hybrid Bee Ant Colony Algorithm for Effective Load Balancing and Job Scheduling in Cloud Computing
Authors: Thomas Yeboah
Abstract:
Cloud computing is a new paradigm in computing that promises the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet). As a new style of computing on the Internet, it has many merits along with some crucial issues that need to be resolved in order to improve the reliability of the cloud environment. These issues are related to load balancing, fault tolerance, and various security concerns in the cloud environment. In this paper, the main concern is to develop an effective load balancing algorithm that gives satisfactory performance to both cloud users and providers. The proposed algorithm (the hybrid Bee Ant Colony algorithm) is a combination of two dynamic algorithms: Ant Colony Optimization and the Bees Life algorithm. In this hybrid, the Ant Colony algorithm is used to solve load balancing issues, while the Bees Life algorithm is used to optimize job scheduling in the cloud environment. When evaluated in terms of waiting time and response time on the CloudSim simulator, the hybrid Bee Ant Colony algorithm outperforms both the Ant Colony algorithm and the Bees Life algorithm individually.
Keywords: ant colony optimization algorithm, bees life algorithm, scheduling algorithm, performance, cloud computing, load balancing
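The abstract does not spell out the update rules, so the ant-colony half of such a hybrid can only be illustrated hypothetically: pheromone-guided assignment of jobs to virtual machines, reinforcing assignments with a low makespan. All names, parameters, and the makespan objective below are assumptions for the sketch, not the author's implementation:

```python
import random

def aco_assign(jobs, vm_speeds, n_ants=20, n_iters=30, rho=0.1, seed=42):
    """Pheromone-guided job-to-VM assignment (toy ACO sketch)."""
    random.seed(seed)
    n_vms = len(vm_speeds)
    tau = [[1.0] * n_vms for _ in jobs]   # pheromone per (job, VM) pair
    best, best_span = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            load = [0.0] * n_vms
            path = []
            for j, size in enumerate(jobs):
                # desirability: pheromone * heuristic (faster, lighter VMs)
                w = [tau[j][v] * vm_speeds[v] / (1.0 + load[v])
                     for v in range(n_vms)]
                v = random.choices(range(n_vms), weights=w)[0]
                load[v] += size / vm_speeds[v]
                path.append(v)
            span = max(load)              # makespan = most-loaded VM
            if span < best_span:
                best, best_span = path, span
        # evaporate, then reinforce the best assignment found so far
        for j in range(len(jobs)):
            for v in range(n_vms):
                tau[j][v] *= (1 - rho)
            tau[j][best[j]] += 1.0 / best_span
    return best, best_span

best, span = aco_assign([4, 2, 6, 3, 1], vm_speeds=[1.0, 2.0])
```

The same pheromone table could, in a hybrid such as the one described, be seeded or refined by a bee-inspired scheduling phase.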
Procedia PDF Downloads 626
717 Brain Tumor Detection and Classification Using Pre-Trained Deep Learning Models
Authors: Aditya Karade, Sharada Falane, Dhananjay Deshmukh, Vijaykumar Mantri
Abstract:
Brain tumours pose a significant challenge in healthcare due to their complex nature and impact on patient outcomes. The application of deep learning (DL) algorithms in medical imaging has shown promise for accurate and efficient brain tumour detection. This paper explores the performance of various pre-trained DL models (ResNet50, Xception, InceptionV3, EfficientNetB0, DenseNet121, NASNetMobile, VGG19, VGG16, and MobileNet) on a brain tumour dataset sourced from Figshare. The dataset consists of MRI scans categorized into different types of brain tumours, including meningioma, pituitary, glioma, and no tumour. The study involves a comprehensive evaluation of these models’ accuracy and effectiveness in classifying brain tumour images. Data preprocessing, augmentation, and fine-tuning techniques are employed to optimize model performance. Among the evaluated models, ResNet50 emerges as the top performer with an accuracy of 98.86%, followed closely by Xception with a strong accuracy of 97.33%. These models showcase robust capabilities in accurately classifying brain tumour images. At the other end of the spectrum, VGG16 trails with the lowest accuracy, at 89.02%.
Keywords: brain tumour, MRI image, detecting and classifying tumour, pre-trained models, transfer learning, image segmentation, data augmentation
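The transfer-learning recipe underlying these results, freezing a pre-trained backbone and training only a small classification head, can be sketched without the real networks. Below, a fixed random projection stands in for a backbone such as ResNet50 and the labels are synthetic; this illustrates the mechanism only and is not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained backbone: a fixed random projection
# followed by ReLU. Purely illustrative -- not a real feature extractor.
W_backbone = rng.normal(size=(64, 16))

def features(x):
    return np.maximum(x @ W_backbone, 0.0)   # frozen, never updated

# Synthetic 2-class "scans" whose labels are separable in feature space
X = rng.normal(size=(200, 64))
F = features(X)
y = (F @ rng.normal(size=16) > 0).astype(float)

# Fine-tuning = training only a logistic-regression head on frozen features
w, b, lr = np.zeros(16), 0.0, 0.1
for _ in range(300):
    z = np.clip(F @ w + b, -30, 30)
    p = 1.0 / (1.0 + np.exp(-z))             # sigmoid
    g = p - y                                 # log-loss gradient
    w -= lr * F.T @ g / len(y)
    b -= lr * g.mean()

acc = float(((p > 0.5) == (y > 0.5)).mean())
```

In the paper's setting, the frozen projection is replaced by a real convolutional backbone and the head is trained on labelled MRI images, with the later backbone layers optionally unfrozen for fine-tuning.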
Procedia PDF Downloads 73
716 Single Pole-To-Earth Fault Detection and Location on the Tehran Railway System Using ICA and PSO Trained Neural Network
Authors: Masoud Safarishaal
Abstract:
Detecting the location of pole-to-earth faults is essential for the safe operation of a railway's electrical system. This paper uses a combination of evolutionary algorithms and neural networks to increase the accuracy of single pole-to-earth fault detection and location on the Tehran railway power supply system. The Imperialist Competitive Algorithm (ICA) and Particle Swarm Optimization (PSO) are used to train the neural network, improving the accuracy and convergence of the learning process. Owing to the system's nonlinearity, fault detection is an ideal application for the proposed method; the 600 Hz harmonic ripple method is used here for fault detection. The substations were simulated by considering various feeding configurations, the transformer, and typical parameters of the Tehran metro's silicon-rectifier substations. The data required for the network learning process were gathered from the simulation results. The value of the 600 Hz component changes with the location of a single pole-to-earth fault; therefore, 600 Hz components are used as the inputs of the neural network, with the fault location as its output. The simulation results show that the proposed methods can accurately predict the fault location.
Keywords: single pole-to-earth fault, Tehran railway, ICA, PSO, artificial neural network
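The pairing of an evolutionary optimizer with a neural network can be illustrated, in heavily simplified form, by letting plain PSO search the weights of a tiny network on a synthetic harmonic-amplitude-to-distance mapping. The data, network size, and PSO constants below are all assumptions for the sketch, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 600 Hz harmonic amplitude -> fault distance
X = np.linspace(0.1, 1.0, 40).reshape(-1, 1)
y = 2.0 * X + 0.3                     # assumed relation, for the sketch only

def mse(params):
    """One-hidden-layer network (1-4-1), weights flattened into params."""
    w1, b1 = params[:4].reshape(1, 4), params[4:8]
    w2, b2 = params[8:12].reshape(4, 1), params[12]
    h = np.tanh(X @ w1 + b1)
    return float(np.mean((h @ w2 + b2 - y) ** 2))

# Plain PSO over the 13 network parameters
n, dim = 30, 13
pos = rng.normal(size=(n, dim)); vel = np.zeros((n, dim))
pbest = pos.copy(); pbest_f = np.array([mse(p) for p in pos])
g = pbest[pbest_f.argmin()].copy(); g_f = pbest_f.min()
for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    if pbest_f.min() < g_f:
        g, g_f = pbest[pbest_f.argmin()].copy(), pbest_f.min()
```

ICA would replace the velocity update with its empire/colony assimilation moves while keeping the same weight-vector encoding and fitness function.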
Procedia PDF Downloads 122
715 Modified CUSUM Algorithm for Gradual Change Detection in a Time Series Data
Authors: Victoria Siriaki Jorry, I. S. Mbalawata, Hayong Shin
Abstract:
The main objective in a change detection problem is to develop algorithms for the efficient detection of gradual and/or abrupt changes in the parameter distribution of a process or time series data. In this paper, we present a modified cumulative sum (MCUSUM) algorithm to detect the start and end of a time-varying linear drift in the mean of time series data, based on a likelihood ratio test procedure. The design, implementation, and performance of the proposed algorithm for linear drift detection are evaluated and compared to the existing CUSUM algorithm using different performance measures. An approach to accurately approximate the threshold of the MCUSUM is also provided. The performance of the MCUSUM for gradual change-point detection is compared to that of the standard cumulative sum (CUSUM) control chart, designed for abrupt shift detection, using Monte Carlo simulations. In terms of the expected time to detection, the MCUSUM procedure is found to perform better than a standard CUSUM chart for detection of a gradual change in the mean. The algorithm is then applied and tested on randomly generated time series data with a gradual linear trend in the mean to demonstrate its usefulness.
Keywords: average run length, CUSUM control chart, gradual change detection, likelihood ratio test
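For contrast with the modified version, the standard one-sided CUSUM recursion that the MCUSUM extends can be sketched as follows (the allowance k, threshold h, and drift data are illustrative choices, not the paper's):

```python
def cusum(x, mu0, k=0.5, h=5.0):
    """One-sided CUSUM: return the first index at which the upper
    cumulative statistic exceeds the decision threshold h; the
    allowance k absorbs small in-control fluctuations."""
    s = 0.0
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - mu0 - k))
        if s > h:
            return i          # alarm index
    return None               # no change detected

# Noise-free illustration: in-control mean 0, then a linear drift of
# 0.1 per step starting at index 50 (deterministic for reproducibility)
data = [0.0] * 50 + [0.1 * t for t in range(1, 51)]
alarm = cusum(data, mu0=0.0)   # -> 64: alarm 14 steps after drift onset
```

The slow response here (the statistic only starts accumulating once the drift exceeds the allowance k) is exactly the behaviour for gradual drifts that motivates modifying the scheme.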
Procedia PDF Downloads 297
714 Attributes That Influence Respondents When Choosing a Mate in Internet Dating Sites: An Innovative Matching Algorithm
Authors: Moti Zwilling, Srečko Natek
Abstract:
This paper presents an innovative predictive analytics analysis aimed at finding the best match between two consumers seeking a partner on internet dating sites. The methodology is based on an analysis of consumer preferences and involves data mining and machine learning search techniques. The study is composed of two parts. The first part examines, by means of descriptive statistics, the correlations between a set of parameters describing men and women who intend to meet each other through social media, usually the internet. In this part, several hypotheses were examined and statistical analyses were performed. The results show a strong correlation between the affiliated attributes of men and women with respect to how they present themselves on a social medium such as Facebook. One interesting finding is the strong desire of most respondents to develop a serious relationship. In the second part, the authors used common data mining algorithms to search for and classify the attributes that most affect the response rate of the other side. The results show that personal presentation and educational background have the greatest effect on eliciting a positive attitude toward one's profile from a potential mate.
Keywords: dating sites, social networks, machine learning, decision trees, data mining
Procedia PDF Downloads 293
713 Algorithms for Computing of Optimization Problems with a Common Minimum-Norm Fixed Point with Applications
Authors: Apirak Sombat, Teerapol Saleewong, Poom Kumam, Parin Chaipunya, Wiyada Kumam, Anantachai Padcharoen, Yeol Je Cho, Thana Sutthibutpong
Abstract:
This research studies a two-step iteration process defined over a finite family of σ-asymptotically quasi-nonexpansive nonself-mappings. Strong convergence is guaranteed under the framework of Banach spaces with some additional structural properties, including strict and uniform convexity, reflexivity, and smoothness assumptions. Following a projection technique similar to that used for nonself-mappings in Hilbert spaces, we use the generalized projection to construct a point within the corresponding domain. Moreover, we introduce the duality mapping and its inverse to overcome the unavailability of the duality representation exploited by Hilbert space theorists. We then apply our results for σ-asymptotically quasi-nonexpansive nonself-mappings to solve for the ideal efficiency of vector optimization problems composed of finitely many objective functions. We also show that the solution obtained from our process is the closest to the origin, and we give an illustrative numerical example to support our results.
Keywords: asymptotically quasi-nonexpansive nonself-mapping, strong convergence, fixed point, uniformly convex and uniformly smooth Banach space
Procedia PDF Downloads 259
712 Spatio-Temporal Data Mining with Association Rules for Lake Van
Authors: Tolga Aydin, M. Fatih Alaeddinoğlu
Abstract:
People, throughout history, have made estimates and inferences about the future by using their past experiences. Developing information technologies and improvements in database management systems make it possible to extract useful information from the knowledge at hand for strategic decisions, and different methods have been developed for this purpose. Data mining by association rule learning is one such method. The Apriori algorithm, one of the best-known association rule learning algorithms, is not commonly used on spatio-temporal data sets. However, it is possible to embed time and space features into the data sets, making Apriori a suitable data mining technique for learning spatio-temporal association rules. Lake Van, the largest lake in Turkey, is a closed basin, which causes the volume of the lake to increase or decrease as the amount of water it holds changes. In this study, evaporation, humidity, lake altitude, amount of rainfall, and temperature parameters recorded in the Lake Van region over the years are used by the Apriori algorithm, and a spatio-temporal data mining application is developed to identify overflows and newly formed soil regions (underflows) occurring in the coastal parts of Lake Van. Identifying the possible causes of overflows and underflows may alert experts to take precautions and make the necessary investments.
Keywords: apriori algorithm, association rules, data mining, spatio-temporal data
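A minimal version of the level-wise Apriori search described above, applied to toy discretized lake records, might look like the following (the attribute names and support threshold are invented for illustration, not taken from the Lake Van data):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Frequent itemsets by level-wise candidate generation (toy sketch)."""
    def support(c):
        return sum(1 for t in transactions if c <= t) / len(transactions)

    items = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    level = {c for c in items if support(c) >= min_support}
    k = 1
    while level:
        frequent.update({c: support(c) for c in level})
        k += 1
        # join step: unions of frequent (k-1)-itemsets that give k-itemsets
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        # prune step: every (k-1)-subset must itself be frequent
        level = {c for c in candidates
                 if all(frozenset(s) in frequent for s in combinations(c, k - 1))
                 and support(c) >= min_support}
    return frequent

# Discretized toy records, e.g. (high rainfall, high humidity, overflow)
data = [frozenset(t) for t in [
    {"rain_hi", "humid_hi", "overflow"},
    {"rain_hi", "humid_hi", "overflow"},
    {"rain_lo", "humid_lo"},
    {"rain_hi", "humid_hi"},
    {"rain_lo", "overflow"},
]]
freq = apriori(data, min_support=0.4)
```

Embedding time and space, as the study describes, amounts to adding discretized items such as a season or coastal-zone label to each transaction before mining.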
Procedia PDF Downloads 372
711 Pilot Induced Oscillations Adaptive Suppression in Fly-By-Wire Systems
Authors: Herlandson C. Moura, Jorge H. Bidinotto, Eduardo M. Belo
Abstract:
The present work proposes the development of an adaptive control system that enables the suppression of Pilot Induced Oscillations (PIO) in digital fly-by-wire (DFBW) aircraft. The proposed system consists of a Modified Model Reference Adaptive Control (M-MRAC) integrated with the gain scheduling technique. PIO is detected using a Real Time Oscillation Verifier (ROVER) algorithm, which then enables the system to switch between two reference models: one for the PIO condition, with low proneness to the phenomenon, and one for the normal condition, with high (or medium) proneness. The reference models are defined in a closed-loop condition using the Linear Quadratic Regulator (LQR) control methodology for Multiple-Input-Multiple-Output (MIMO) systems. The implemented algorithms are simulated in software with state-space models and commercial flight simulators as the controlled elements, together with pilot dynamics models. A sequence of pitch angles, named the Synthetic Task (Syntask), is considered as the reference signal to be tracked by the pilot models. Initial outcomes show that the proposed system can detect and suppress (or mitigate) PIO in real time before the oscillations reach high amplitudes.
Keywords: adaptive control, digital fly-by-wire, oscillations suppression, PIO
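The abstract does not publish the ROVER thresholds, but the detector's general shape, flagging PIO only when both an oscillation-frequency test and an amplitude test fire, can be sketched as follows (the frequency band and amplitude limit are assumed values, not the authors'):

```python
import math

def rover_like_flags(signal, dt, freq_band=(0.2, 1.2), amp_min=2.0):
    """Flag a PIO-like oscillation when both the estimated oscillation
    frequency and the peak-to-peak amplitude exceed thresholds
    (a simplified stand-in for the ROVER logic)."""
    # estimate frequency from the spacing of zero crossings
    crossings = [i for i in range(1, len(signal))
                 if signal[i - 1] * signal[i] < 0]
    if len(crossings) < 2:
        return False
    half_period = dt * (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    freq = 1.0 / (2.0 * half_period)            # Hz
    amp = max(signal) - min(signal)             # peak-to-peak
    return freq_band[0] <= freq <= freq_band[1] and amp >= amp_min

dt = 0.05
t = [i * dt for i in range(200)]                              # 10 s window
pio = [3.0 * math.sin(2 * math.pi * 0.5 * s) for s in t]      # 0.5 Hz, large
calm = [0.2 * math.sin(2 * math.pi * 0.5 * s) for s in t]     # small amplitude
```

In the proposed system, a positive flag of this kind is what triggers the switch between the two LQR-defined reference models.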
Procedia PDF Downloads 132
710 A Selection Approach: Discriminative Model for Nominal Attributes-Based Distance Measures
Authors: Fang Gong
Abstract:
Distance measures are an indispensable part of many instance-based learning (IBL) and machine learning (ML) algorithms. The value difference metric (VDM) and the inverted specific-class distance measure (ISCDM) are among the top-performing distance measures that address nominal attributes. Owing to its simplicity, VDM performs well in some domains but poorly in others, notably those with missing values and non-class attribute noise; ISCDM, however, typically works better than VDM on such domains. To combine their advantages and avoid their disadvantages, this paper proposes a selection approach: a discriminative model for nominal-attribute-based distance measures. More concretely, VDM and ISCDM are built independently on a training dataset at the training stage, and the more credible of the two is recorded for each training instance. At the test stage, the nearest neighbor of each test instance is first found by either VDM or ISCDM, and the more reliable model recorded for that nearest neighbor is then chosen to predict the test instance's class label. This is simply denoted as the discriminative distance measure (DDM). Experiments conducted on 34 University of California at Irvine (UCI) machine learning repository datasets show that DDM retains the interpretability and simplicity of VDM and ISCDM but significantly outperforms the original VDM and ISCDM, as well as other state-of-the-art competitors, in terms of accuracy.
Keywords: distance measure, discriminative model, nominal attributes, nearest neighbor
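As background for the two measures being selected between, the VDM side can be sketched directly from its definition: the distance between two nominal values is the difference between their conditional class distributions. The toy weather-style data below is invented for illustration:

```python
from collections import Counter, defaultdict

def vdm_tables(X, y):
    """Per-attribute conditional class distributions P(class | value)."""
    tables = []
    for a in range(len(X[0])):
        counts = defaultdict(Counter)        # attribute value -> class counts
        for row, label in zip(X, y):
            counts[row[a]][label] += 1
        tables.append({v: {c: n / sum(cc.values()) for c, n in cc.items()}
                       for v, cc in counts.items()})
    return tables

def vdm(u, v, tables, classes, q=2):
    """Value Difference Metric between two nominal attribute vectors."""
    d = 0.0
    for a, (xu, xv) in enumerate(zip(u, v)):
        pu, pv = tables[a].get(xu, {}), tables[a].get(xv, {})
        d += sum(abs(pu.get(c, 0.0) - pv.get(c, 0.0)) ** q for c in classes)
    return d

X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
y = ["no", "no", "yes", "yes"]
tables = vdm_tables(X, y)
```

ISCDM replaces these value-conditional probabilities with inverted specific-class estimates, which is what makes it more robust to missing values; DDM's contribution is choosing per-instance between the two.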
Procedia PDF Downloads 112
709 Diagnostic Performance of Mean Platelet Volume in the Diagnosis of Acute Myocardial Infarction: A Meta-Analysis
Authors: Kathrina Aseanne Acapulco-Gomez, Shayne Julieane Morales, Tzar Francis Verame
Abstract:
Mean platelet volume (MPV) is the most accurate measure of the size of platelets and is routinely measured by most automated hematology analyzers. Several studies have shown associations between MPV and cardiovascular risks and outcomes. Although its measurement may provide useful data, MPV remains a diagnostic tool that has yet to be included in routine clinical decision making. The aim of this systematic review and meta-analysis is to determine summary estimates of the diagnostic accuracy of mean platelet volume for the diagnosis of myocardial infarction (MI) among adult patients with angina and/or its equivalents, in terms of sensitivity, specificity, diagnostic odds ratio, and likelihood ratios, and to determine the difference in mean MPV values between those with MI and non-MI controls. The primary search was done through the electronic databases PubMed, Cochrane Review CENTRAL, HERDIN (Health Research and Development Information Network), Google Scholar, the Philippine Journal of Pathology, and the Philippine College of Physicians Philippine Journal of Internal Medicine. The reference lists of original reports were also searched. Cross-sectional, cohort, and case-control articles studying the diagnostic performance of mean platelet volume in the diagnosis of acute myocardial infarction in adult patients were included. Studies were included if: (1) the CBC was taken upon presentation to the ER or upon admission (within 24 hours of symptom onset); (2) myocardial infarction was diagnosed with serum markers, ECG, or according to guidelines accepted by the cardiology societies (American Heart Association (AHA), American College of Cardiology (ACC), European Society of Cardiology (ESC)); and (3) outcomes were measured as a significant difference and/or sensitivity and specificity. The authors independently screened all potential studies identified by the search for inclusion.
Eligible studies were appraised using well-defined criteria, and any disagreement between the reviewers was resolved through discussion and consensus. The overall mean MPV value of those with MI (9.702 fL; 95% CI 9.07 – 10.33) was higher than that of the non-MI control group (8.85 fL; 95% CI 8.23 – 9.46). Interpretation of the calculated t-value of 2.0827 showed a significant difference between the mean MPV values of those with MI and those of the non-MI controls. The summary sensitivity (Se) and specificity (Sp) for MPV were 0.66 (95% CI 0.59 – 0.73) and 0.60 (95% CI 0.43 – 0.75), respectively. The pooled diagnostic odds ratio (DOR) was 2.92 (95% CI 1.90 – 4.50). The positive likelihood ratio of MPV in the diagnosis of myocardial infarction was 1.65 (95% CI 1.20 – 2.27), and the negative likelihood ratio was 0.56 (95% CI 0.50 – 0.64). The intended role for MPV in the diagnostic pathway of myocardial infarction would perhaps be best as a triage tool. With a DOR of 2.92, MPV values can discriminate between those who have MI and those who do not. For a patient with angina presenting with an elevated MPV value, the odds of MI are 1.65 times higher. Thus, the decision to treat a patient with angina or its equivalents as a case of MI could be supported by an elevated MPV value.
Keywords: mean platelet volume, MPV, myocardial infarction, angina, chest pain
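The summary statistics above hang together arithmetically, up to rounding and the fact that the paper pools study-level estimates rather than deriving everything from the summary Se and Sp. Computing directly from the pooled point estimates gives values within 0.01 of those reported:

```python
# Point estimates from the meta-analysis above (CIs ignored in this check)
se, sp = 0.66, 0.60

lr_pos = se / (1 - sp)              # positive likelihood ratio
lr_neg = (1 - se) / sp              # negative likelihood ratio
dor = lr_pos / lr_neg               # diagnostic odds ratio

print(round(lr_pos, 2), round(lr_neg, 2), round(dor, 2))
# prints: 1.65 0.57 2.91
```

The small gaps against the reported 0.56 and 2.92 are expected, since the pooled LRs and DOR come from study-level pooling rather than from this back-calculation.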
Procedia PDF Downloads 85
708 Estimation of Optimum Parameters of Non-Linear Muskingum Model of Routing Using Imperialist Competition Algorithm (ICA)
Authors: Davood Rajabi, Mojgan Yazdani
Abstract:
The non-linear Muskingum model is an efficient method for flood routing; however, its efficiency depends on three applied parameters. This study therefore assesses the ability of the Imperialist Competition Algorithm (ICA) to estimate the optimum parameters of the non-linear Muskingum model. In addition to ICA, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) were also used, to provide benchmarks against which to judge ICA. ICA was first applied to the Wilson flood routing problem; then, the routing of two flood events of the DoAab Samsami River was investigated. For the Wilson flood, the objective function was the sum of squared deviations (SSQ) between observed and calculated discharges. For the two other floods, a second objective function, the sum of absolute deviations (SAD) between observed and calculated discharges, was considered in addition to SSQ. For the first floodwater, GA performed best based on SSQ, whereas ICA ranked first based on SAD. For the second floodwater, ICA performed better on both objective functions. According to the results obtained, ICA can be regarded as an appropriate method for estimating the parameters of the non-linear Muskingum model.
Keywords: Doab Samsami river, genetic algorithm, imperialist competition algorithm, meta-heuristic algorithms, particle swarm optimization, Wilson flood
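One common textbook formulation of the non-linear Muskingum model, with storage S = K[xI + (1-x)O]^m and the three parameters K, x, m that the meta-heuristics must estimate, can be sketched together with the SSQ objective they minimize. The routing scheme, parameter values, and hydrograph below are illustrative assumptions, not the Wilson or DoAab Samsami data:

```python
def route(inflow, K, x, m, dt=1.0, S0=None):
    """Route a hydrograph through the non-linear Muskingum model
    S = K * [x*I + (1 - x)*O]**m (a common textbook scheme, not
    necessarily the exact formulation used in the paper)."""
    # start from steady state: O = I[0]  =>  S0 = K * I[0]**m
    S = K * inflow[0] ** m if S0 is None else S0
    out = []
    for I in inflow:
        # invert the storage relation for the current outflow
        O = ((S / K) ** (1.0 / m) - x * I) / (1.0 - x)
        out.append(O)
        S = S + dt * (I - O)          # storage continuity dS/dt = I - O
    return out

def ssq(observed, computed):
    """Objective minimized by ICA/GA/PSO: sum of squared deviations."""
    return sum((o - c) ** 2 for o, c in zip(observed, computed))

# Hypothetical triangular inflow hydrograph (not the Wilson data)
inflow = [22, 35, 55, 70, 60, 48, 38, 30, 25, 22]
outflow = route(inflow, K=0.9, x=0.25, m=1.1)
```

Parameter estimation then amounts to letting ICA search over (K, x, m) to minimize ssq(observed_outflow, route(inflow, K, x, m)).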
Procedia PDF Downloads 502
707 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment
Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang
Abstract:
2016 became the year of the artificial intelligence explosion. AI technologies are maturing to the point that most well-known tech giants are investing heavily to increase their AI capabilities. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to learn features directly from data. Deep learning realizes many machine learning applications and thereby expands the field of AI. At present, deep learning frameworks are widely deployed on servers for deep learning applications in both academia and industry. Although training deep neural networks follows standard processes and algorithms, the performance of different frameworks can differ. In this paper, we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that parallelize training across multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network and analyze the factors that account for the performance differences between the two distributed frameworks. Through the experimental analysis, we identify overheads that could be further optimized. The main contribution is that the evaluation results provide further optimization directions for both performance tuning and algorithmic design.
Keywords: artificial intelligence, machine learning, deep learning, convolutional neural networks
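The mechanism that both distributed frameworks implement, each worker computing a gradient on its own data shard and an all-reduce averaging the gradients before a shared update, can be sketched in a few lines of numpy. This is a toy linear model illustrating synchronous data parallelism, not either framework's code:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model: linear regression y = X @ w, loss = mean squared error
w_true = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(256, 3))
y = X @ w_true

def grad(w, Xb, yb):
    """MSE gradient on one worker's shard (mini-batch)."""
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

w = np.zeros(3)
n_workers, lr = 4, 0.1
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))
for _ in range(100):
    # each worker computes a gradient on its shard in parallel...
    grads = [grad(w, Xs, ys) for Xs, ys in shards]
    # ...then an all-reduce averages them before the shared weight update
    w -= lr * np.mean(grads, axis=0)
```

The communication cost of that averaging step, performed on ResNet-50's millions of parameters every iteration, is one of the main overheads such an evaluation measures.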
Procedia PDF Downloads 210
706 The Algorithm to Solve the Extend General Malfatti’s Problem in a Convex Circular Triangle
Authors: Ching-Shoei Chiang
Abstract:
Malfatti’s problem fits 3 circles into a right triangle such that the 3 circles are tangent to each other and each circle is also tangent to a pair of the triangle’s sides. The problem has been extended to any triangle (called the general Malfatti’s problem) and, furthermore, to placing 1+2+…+n circles inside the triangle with special tangency properties among the circles and the triangle’s sides; we call this the extended general Malfatti’s problem. In the extended general Malfatti’s problem, denoted Tri(Tn), where Tn is the nth triangular number, there are closed-form solutions for the Tri(T₁) (inscribed circle) problem and the Tri(T₂) (3 Malfatti circles) problem. These problems become more complex when n is greater than 2, and algorithms have been proposed to solve them numerically. With a similar idea, this paper proposes an algorithm to find the radii of circles with the same tangency properties when the boundary is not a straight-sided triangle but a convex circular triangle, i.e., Tn circles are placed inside the convex circular triangle with the same tangency properties among the circles and the boundary arcs. We call these the Carc(Tn) problems. The CPU time for the Carc(T₁₆) problem, which finds 136 circles inside a convex circular triangle with the specified tangency properties, is less than one second.
Keywords: circle packing, computer-aided geometric design, geometric constraint solver, Malfatti’s problem
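The Tri(T₁) case mentioned above is the one with a simple closed form: the inscribed circle's radius is the triangle's area divided by its semi-perimeter. A short sketch of that base case:

```python
import math

def inradius(a, b, c):
    """Closed-form Tri(T1) solution: radius of the circle inscribed in a
    triangle with side lengths a, b, c (r = area / semi-perimeter)."""
    s = (a + b + c) / 2.0                               # semi-perimeter
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    return area / s

r = inradius(3.0, 4.0, 5.0)   # 3-4-5 right triangle -> r = 1.0
```

For Tn with n > 2, and for the circular-arc boundaries of the Carc(Tn) problems, no such closed form is available, which is why the tangency conditions must be solved numerically as the paper describes.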
Procedia PDF Downloads 109