Search results for: K-means clustering algorithm
736 Blockchain-Resilient Framework for Cloud-Based Network Devices within the Architecture of Self-Driving Cars
Authors: Mirza Mujtaba Baig
Abstract:
Artificial Intelligence (AI) is evolving rapidly, and automation is one of the areas this field has influenced. The automobile, healthcare, education, and robotic industries deploy AI technologies constantly; the automation of tasks frees up time for knowledge-based tasks and also introduces convenience to everyday human endeavors. The paper reviews the challenges faced with the current implementations of autonomous self-driving cars by exploring the machine learning, robotics, and artificial intelligence techniques employed for the development of this innovation. The controversy surrounding the development and deployment of autonomous machines, e.g., vehicles, begs the need for the exploration of the configuration of the programming modules. This paper seeks to add to the body of knowledge of research, assisting researchers in decreasing the inconsistencies in current programming modules. Blockchain is a technology whose applications are mostly found within the financial, pharmaceutical, manufacturing, and artificial intelligence domains. The registering of events in a secured manner, as well as the application of external algorithms required for data analytics, is especially helpful for integrating, adapting, maintaining, and extending to new domains, especially predictive analytics applications.
Keywords: artificial intelligence, automation, big data, self-driving cars, machine learning, neural networking algorithm, blockchain, business intelligence
Procedia PDF Downloads 120
735 Field-Programmable Gate Arrays Based High-Efficiency Oriented Fast and Rotated Binary Robust Independent Elementary Feature Extraction Method Using Feature Zone Strategy
Authors: Huang Bai-Cheng
Abstract:
When deploying the Oriented FAST and Rotated BRIEF (Binary Robust Independent Elementary Features) (ORB) extraction algorithm on field-programmable gate arrays (FPGA), the access to global storage for the 31×31 pixel patches of the features has become the bottleneck of the system efficiency. Therefore, a feature zone strategy has been proposed. Zones are searched as features are detected. Pixels around the feature zones are extracted from global memory and distributed into patches corresponding to feature coordinates. The proposed FPGA structure is targeted at a Xilinx FPGA development board of the Zynq UltraScale+ series, and multiple datasets are tested. Compared with the streaming pixel patch extraction method, the proposed architecture obtains at least two times acceleration while consuming an extra 3.82% of Flip-Flops (FFs) and 7.78% of Look-Up Tables (LUTs). Compared with the non-streaming one, the proposed architecture saves 22.3% of LUTs and 1.82% of FFs, at the cost of a latency of only 0.2 ms and a frame rate drop of only 1. Compared with related works, the proposed strategy and hardware architecture have the advantage of keeping a balance between FPGA resources and performance.
Keywords: feature extraction, real-time, ORB, FPGA implementation
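As a point of reference for what the hardware pipeline computes, the sketch below uses OpenCV's software ORB extractor, which detects FAST keypoints and builds rotated BRIEF descriptors from 31×31 patches, the access pattern the proposed feature zone strategy accelerates on the FPGA. The input image name and parameter values are illustrative assumptions, not part of the paper.
```python
# Software reference (not the paper's FPGA design): OpenCV's ORB extractor,
# illustrating the keypoint detection + 31x31-patch descriptor step that the
# proposed feature-zone strategy accelerates in hardware.
# "frame.png" is a hypothetical input image; parameter values are illustrative.
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=500, patchSize=31)   # 31x31 patches as in the paper
keypoints, descriptors = orb.detectAndCompute(img, None)

print(f"{len(keypoints)} keypoints, descriptor shape: {descriptors.shape}")
# Each descriptor is a 256-bit binary string stored as 32 uint8 values.
```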
Procedia PDF Downloads 122
734 Hybrid Genetic Approach for Solving Economic Dispatch Problems with Valve-Point Effect
Authors: Mohamed I. Mahrous, Mohamed G. Ashmawy
Abstract:
A hybrid genetic algorithm (HGA) is proposed in this paper to determine the economic scheduling of electric power generation over a fixed time period under various system and operational constraints. The proposed technique can outperform conventional genetic algorithms (CGAs) in the sense that the HGA makes it possible to improve the quality of the solution while reducing the computing expense. In contrast, any carefully designed GA is only able to balance the exploration and the exploitation of the search effort, which means that an increase in the accuracy of a solution can only occur at the sacrifice of convergence speed, and vice versa; it is unlikely that both of them can be improved simultaneously. The proposed hybrid scheme is developed in such a way that a simple GA acts as a base-level search, which makes a quick decision to direct the search towards the optimal region, and a local search method (pattern search technique) is then employed to do the fine tuning. The aim of the strategy is to achieve the cost reduction within a reasonable computing time. The effectiveness of the proposed hybrid technique is verified on two real public electricity supply systems with 13 and 40 generator units, respectively. The simulation results obtained with the HGA for the two real systems are very encouraging with regard to the computational expenses and the cost reduction of power generation.
Keywords: genetic algorithms, economic dispatch, pattern search
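For readers unfamiliar with the valve-point effect, the fuel cost of each unit is commonly written as a quadratic plus a rectified sinusoid, F_i(P_i) = a_i + b_i P_i + c_i P_i^2 + |e_i sin(f_i (P_i,min - P_i))|. The sketch below shows, under assumed coefficients (not the paper's 13- or 40-unit test data), how such a cost could be evaluated as a GA fitness with a power-balance penalty; the paper's hybrid then refines the GA result with a pattern search.
```python
# Minimal sketch of the fuel-cost evaluation used as a GA fitness in valve-point
# economic dispatch. Coefficients and demand below are illustrative only.
import numpy as np

a = np.array([550.0, 309.0, 307.0])       # constant cost terms
b = np.array([8.10, 8.10, 8.10])          # linear terms ($/MW)
c = np.array([0.00028, 0.00056, 0.00056]) # quadratic terms
e = np.array([300.0, 200.0, 150.0])       # valve-point amplitude
f = np.array([0.035, 0.042, 0.042])       # valve-point frequency
p_min = np.array([0.0, 0.0, 0.0])
p_max = np.array([680.0, 360.0, 360.0])
demand = 850.0                            # system demand (MW)

def dispatch_cost(p, penalty=1e6):
    """Total fuel cost plus a penalty for violating the power-balance constraint."""
    fuel = np.sum(a + b * p + c * p**2 + np.abs(e * np.sin(f * (p_min - p))))
    balance = abs(np.sum(p) - demand)
    return fuel + penalty * balance

# A GA would minimize dispatch_cost over candidate vectors p within [p_min, p_max];
# the paper's hybrid then refines the GA's best solution with a pattern search.
p_candidate = np.array([300.0, 300.0, 250.0])
print(dispatch_cost(p_candidate))
```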
Procedia PDF Downloads 446
733 Experimental and Numerical Investigation of Heat Transfer in THTL Test Loop Shell and Tube Heat Exchanger
Authors: M. Moody, R. Mahmoodi, A. R. Zolfaghari, A. Aminottojari
Abstract:
In this study, the flow inside the shell side of a shell-and-tube heat exchanger is simulated numerically for laminar and turbulent flows in both steady-state and transient modes. The governing equations of fluid flow are discretized using the finite volume method with a central difference scheme and solved with the SIMPLE algorithm on a staggered grid, implemented in the MATLAB programming language. The heat transfer coefficient is obtained from the velocity field using the Dittus-Boelter equation. For comparison, the heat exchanger is also simulated with the ANSYS CFX software and compared against experimental data measured in the THTL test loop. Numerical results obtained from the study show good agreement with the experimental data and the ANSYS CFX results. In addition, by considering the effect of the baffle spacing and the baffle cut on the heat transfer rate for turbulent flow, it is shown that the heat transfer rate depends directly on the baffle spacing and the baffle cut. In other words, in spite of large turbulence, if these two parameters are not selected properly in the heat exchanger, the heat transfer rate can decrease.
Keywords: shell-and-tube heat exchanger, flow and heat transfer, laminar and turbulent flow, turbulence model, baffle spacing, baffle cut
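The Dittus-Boelter correlation mentioned above estimates the Nusselt number for fully developed turbulent flow as Nu = 0.023 Re^0.8 Pr^n (n = 0.4 for heating, 0.3 for cooling), from which the heat transfer coefficient follows as h = Nu k / D. A minimal sketch is given below; the property values are illustrative (roughly water near room temperature), not the THTL test loop data.
```python
# Hedged sketch of the Dittus-Boelter correlation used to obtain the heat transfer
# coefficient from the computed velocity field.
def dittus_boelter_h(velocity, diameter, rho, mu, cp, k, heating=True):
    re = rho * velocity * diameter / mu          # Reynolds number
    pr = cp * mu / k                             # Prandtl number
    n = 0.4 if heating else 0.3
    nu = 0.023 * re**0.8 * pr**n                 # valid for turbulent flow (Re > ~1e4)
    return nu * k / diameter                     # heat transfer coefficient, W/(m^2 K)

# Illustrative property values (approximately water at room temperature):
h = dittus_boelter_h(velocity=1.5, diameter=0.02, rho=997.0,
                     mu=8.9e-4, cp=4180.0, k=0.6, heating=True)
print(f"h = {h:.0f} W/(m^2 K)")
```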
Procedia PDF Downloads 538
732 Comparative Study on Manet Using Soft Computing Techniques
Authors: Amarjit Singh, Tripatdeep Singh Dua, Vikas Attri
Abstract:
A Mobile Ad-hoc Network (MANET) is a combination of several nodes that dynamically create a specific network without using any base infrastructure. In such a network, all the mobile nodes can depend upon each other to send data: a mobile host can pick up data and forward it along the path to its destination. Basically, a MANET depends upon its Quality of Service (QoS), which is highly constrained for the user. To give better services, the QoS needs to be improved, and nowadays MANET QoS requirements call for the use of soft computing techniques. These techniques depend upon the specific requirement and are applied using MANET concepts. Using soft computing techniques, various protocols and algorithms may be considered. In this paper, we provide a comparative review of existing work done in MANETs using various kinds of soft computing techniques. Our review is based on the specific protocol or algorithm that provides a solution to the QoS need. We discuss the various protocols through which routing is performed in MANETs. In the second section, we clarify the concepts of soft computing and its types. In the third section, we review previous work on MANETs using different kinds of soft computing techniques. In the fourth section, we examine the QoS requirements that exist in MANETs and present a comparative study of the different protocols used before, and finally we conclude on the purpose of using MANETs with soft computing technique metrics.
Keywords: mobile ad-hoc network, fuzzy improved genetic approach, neural network, routing protocol, wireless mesh network
Procedia PDF Downloads 351
731 Modeling the Philippine Stock Exchange Index Closing Value Using Artificial Neural Network
Authors: Frankie Burgos, Emely Munar, Conrado Basa
Abstract:
This paper aimed at developing an artificial neural network (ANN) model specifically for the Philippine Stock Exchange index closing value. The inputs to the ANN are the US Dollar to Philippine Peso (USD-PHP) exchange rate, GDP growth of the country, quarterly inflation rate, 10-year bond yield, credit rating of the country, previous open, high, low, close values and volume of trade of the Philippine Stock Exchange Index (PSEi), gold price of the previous day, and the previous closing values of the National Association of Securities Dealers Automated Quotations (NASDAQ), Standard and Poor's 500 (S&P 500) and the iShares MSCI Philippines ETF (EPHE). The target is the closing value of the PSEi during the 627 trading days from November 3, 2011, to May 30, 2014. MATLAB's Neural Network toolbox was employed to create, train and simulate the network using a multi-layer feed-forward neural network with the back-propagation algorithm. The results satisfactorily show that the neural network developed has the ability to model the PSEi, which is affected by both internal and external economic factors. It was found that the inputs used are the main factors that influence the movement of the PSEi closing value.
Keywords: artificial neural networks, artificial intelligence, philippine stocks exchange index, stocks trading
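A hedged sketch of an equivalent setup in Python with scikit-learn (standing in for MATLAB's Neural Network Toolbox) is shown below: a multi-layer feed-forward network trained by backpropagation to regress the PSEi close from the listed inputs. The CSV file and column names are hypothetical placeholders.
```python
# Hedged sketch (scikit-learn stand-in for MATLAB's Neural Network Toolbox) of a
# multi-layer feed-forward network trained by backpropagation to model the PSEi close.
# The CSV file and column names are hypothetical placeholders for the paper's inputs.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

df = pd.read_csv("psei_daily.csv")                       # hypothetical dataset
features = ["usd_php", "gdp_growth", "inflation", "bond10y", "credit_rating",
            "prev_open", "prev_high", "prev_low", "prev_close", "volume",
            "gold_prev", "nasdaq_prev", "sp500_prev", "ephe_prev"]
X, y = df[features].values, df["psei_close"].values

X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)
scaler = StandardScaler().fit(X_train)

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)
print("R^2 on held-out days:", r2_score(y_test, model.predict(scaler.transform(X_test))))
```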
Procedia PDF Downloads 298
730 An Application of Meta-Modeling Methods for Surrogating Lateral Dynamics Simulation in Layout-Optimization for Electric Drivetrains
Authors: Christian Angerer, Markus Lienkamp
Abstract:
Electric vehicles offer a high variety of possible drivetrain topologies with up to 4 motors. Multi-motor designs can have several advantages regarding traction, vehicle dynamics, safety and even efficiency. With a rising number of motors, the whole drivetrain becomes more complex. All permutations of gearings, drivetrain layouts, motor types and sizes add up to a very large solution space. Single elements of this solution space can be analyzed by simulation methods. In addition to longitudinal vehicle behavior, to which most optimization approaches are restricted, lateral dynamics are also important for vehicle dynamics, stability and efficiency. In order to cope with large solution spaces and to find an optimal result, genetic algorithm based optimization is state-of-the-art. As lateral dynamics simulation is far more CPU-intensive, optimization takes much more time than in the case of longitudinal-only simulation. Therefore, this paper shows an approach to creating meta-models from a 14-degree-of-freedom vehicle model in order to enable a numerically efficient drivetrain-layout optimization process under consideration of lateral dynamics. Different meta-modelling approaches such as neural networks or DoE are implemented and comparatively discussed.
Keywords: driving dynamics, drivetrain layout, genetic optimization, meta-modeling, lateral dynamics
Procedia PDF Downloads 418
729 Blood Chemo-Profiling in Workers Exposed to Occupational Pyrethroid Pesticides to Identify Associated Diseases
Authors: O. O. Sufyani, M. E. Oraiby, S. A. Qumaiy, A. I. Alaamri, Z. M. Eisa, A. M. Hakami, M. A. Attafi, O. M. Alhassan, W. M. Elsideeg, E. M. Noureldin, Y. A. Hobani, Y. Q. Majrabi, I. A. Khardali, A. B. Maashi, A. A. Al Mane, A. H. Hakami, I. M. Alkhyat, A. A. Sahly, I. M. Attafi
Abstract:
According to the Food and Agriculture Organization (FAO) Pesticides Use Database, pesticide use in agriculture in Saudi Arabia has more than doubled from 4539 tons in 2009 to 10496 tons in 2019. Among pesticides, pyrethroids are commonly used in Saudi Arabia. Pesticides may increase susceptibility to a variety of diseases, particularly among pesticide workers, due to their extensive use, indiscriminate use, and long-term exposure. Therefore, analyzing blood chemo-profiles and evaluating the detected substances as biomarkers for pyrethroid pesticide exposure may assist in identifying and predicting adverse effects of exposure, which may be used for both preventative and risk assessment purposes. The purpose of this study was to (a) analyze chemo-profiling by Gas Chromatography-Mass Spectrometry (GC-MS) analysis, (b) identify the most commonly detected chemicals in an exposure-time-dependent manner using a Venn diagram, and (c) identify the associated diseases among pesticide workers using analyzer tools on the Comparative Toxicogenomics Database (CTD) website. A total of 250 healthy male volunteers (20-60 years old) who deal with pesticides in the Jazan region of Saudi Arabia (exposure intervals: 1-2, 4-6, 6-8, and more than 8 years) were included in the study. A questionnaire was used to collect demographic information, the duration of pesticide exposure, and the existence of chronic conditions. Blood samples were collected for biochemistry analysis and extracted by solid-phase extraction for gas chromatography-mass spectrometry (GC-MS) analysis. Biochemistry analysis reveals no significant changes in response to the exposure period; however, an inverse association between the albumin level and the exposure interval was observed. The blood chemo-profiles were differentially expressed in an exposure-time-dependent manner. This analysis identified the common chemical set associated with each group and the associated significant occupational diseases. While some of these chemicals are associated with a variety of diseases, the distinguishing feature of these chemically associated disorders is their applicability for prevention measures. The most interesting finding was the identification of several chemicals, namely erucic acid, pelargonic acid, alpha-linolenic acid, dibutyl phthalate, diisobutyl phthalate, dodecanol, myristic acid, pyrene, and 8,11,14-eicosatrienoic acid, associated with pneumoconiosis, asbestosis, asthma, silicosis and berylliosis. The chemical-disease association study also found that cancer, digestive system disease, nervous system disease, and metabolic disease were the most often recognized disease categories in the common chemical set. The hierarchical clustering approach was used to compare the expression patterns and exposure intervals of the commonly found chemicals. More study is needed to validate these chemicals as early markers of pyrethroid insecticide-related occupational disease, which might assist in evaluating and reducing risk. The current study contributes valuable data and recommendations to public health.
Keywords: occupational, toxicology, chemo-profiling, pesticide, pyrethroid, GC-MS
Procedia PDF Downloads 104
728 Intrusion Detection in Computer Networks Using a Hybrid Model of Firefly and Differential Evolution Algorithms
Authors: Mohammad Besharatloo
Abstract:
Intrusion detection is an important research topic in network security because of the increasing growth in the use of computer network services. Intrusion detection is done with the aim of detecting the unauthorized use or abuse of networks and systems by intruders. Therefore, the intrusion detection system is an efficient tool to control the user's access through some predefined regulations. Since the data used in intrusion detection systems is high-dimensional, a proper representation is required to reveal the basic structure of this data. Therefore, it is necessary to eliminate the redundant features to create the best representation subset. In the proposed method, a hybrid model of the differential evolution and firefly algorithms was employed to choose the best subset of properties. In addition, a decision tree and a support vector machine (SVM) are adopted to determine the quality of the selected properties. First, the sorted population is divided into two sub-populations, and the two optimization algorithms are applied to these sub-populations, respectively. Then, the sub-populations are merged to create the next iteration's population. The performance evaluation of the proposed method is done based on KDD Cup 99. The simulation results show that the proposed method has better performance than the other methods in this context.
Keywords: intrusion detection system, differential evolution, firefly algorithm, support vector machine, decision tree
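Both the differential evolution and firefly sub-populations need a fitness function that scores a candidate feature subset by the classifier trained on it. The sketch below shows one plausible wrapper fitness using an SVM and cross-validation; it is an illustration, not the paper's exact scoring, and the feature matrix X and labels y (e.g., from KDD Cup 99) are assumed to be available.
```python
# Hedged sketch of a wrapper fitness for the hybrid DE/firefly search: a binary mask
# selects feature columns, and an SVM trained on the selected subset yields the score
# to maximize. Dataset loading (e.g. KDD Cup 99) is omitted; X and y are assumed to be
# a numeric feature matrix and attack/normal labels.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def subset_fitness(mask, X, y, alpha=0.98):
    """Higher is better: cross-validated accuracy, lightly penalized by subset size."""
    selected = np.flatnonzero(mask)
    if selected.size == 0:
        return 0.0
    acc = cross_val_score(SVC(kernel="rbf"), X[:, selected], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - selected.size / X.shape[1])

# Each DE/firefly individual is a real vector thresholded into a mask, e.g.:
# mask = (individual > 0.5).astype(int); fitness = subset_fitness(mask, X, y)
```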
Procedia PDF Downloads 94
727 Implementation of Inference Fuzzy System as a Valuation Subsidiary is Based Particle Swarm Optimization for Solves the Issue of Decision Making in Middle Size Soccer Robot League
Authors: Zahra Abdolkarimi, Naser Zouri
Abstract:
Nowadays, the remarkable growth of robots has created a collection of complex and motivating subjects in robotics and artificial intelligence, and has established a mechatronics-style base of theory and technique in RoboCup. Additionally, RoboCup serves as a provider of standardization and testing methods that are widely discussed in the computing community. The ultimate purpose of RoboCup is to create, by 2050, an autonomous team of robots that, playing according to FIFA rules, can win against the world champion human team. In addition, the decision making of the robots depends on the reactions of the environment, the robot itself, and the rival players, using an inference fuzzy system as a valuation subsidiary to solve the decision-making issue of the robots in the field game. The measure of selection, in comparison with other methods, depends on the percentage of victories against the same team playing randomly. Consequently, the results show that the proposed method, combining Particle Swarm Optimization with a fuzzy system, is the best approach compared to other robotic decision-making algorithms.
Keywords: PSO algorithm, inference fuzzy system, chaos theory, soccer robot league
Procedia PDF Downloads 405
726 Maternal and Newborn Health Care Program Implementation and Integration by Maternal Community Health Workers, Africa: An Integrative Review
Authors: Nishimwe Clemence, Mchunu Gugu, Mukamusoni Dariya
Abstract:
Background: Community health workers and extension workers can play an important role in supporting families to adopt health practices, encouraging delivery in a health care facility, and ensuring timely referral of mothers and newborns if needed. Saving the lives of neonates should, therefore, be a significant health outcome in any maternal and newborn health program that is being implemented. Furthermore, about half a million mothers die from pregnancy-related causes. Maternal and newborn deaths related to the period of postnatal care are neglected. Some authors have emphasized that in developing countries, newborn mortality rates have been reduced much more slowly because of the lack of many necessary facility-based and outreach services. The aim of this review was to critically analyze the implementation and integration process of the maternal and newborn health care program by maternal community health workers, into the health care system, in Africa, with the further aim of reducing maternal and newborn mortality. We addressed the following review question: (1) what process is involved in the implementation and integration of the maternal and newborn health care program by maternal community health workers during antenatal, delivery and postnatal care into health system care in Africa? Methods: The database searched was Health Source: Nursing/Academic Edition through Academic Search Complete via EBSCOhost. An iterative approach was used to go through Google Scholar papers. The reviewers followed the adapted Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidance, and the Mixed Methods Appraisal Tool (MMAT) was used. The synthesis method of the integrative review followed the elements of noting patterns and themes, seeing plausibility, clustering, counting, making contrasts and comparisons, discerning common and unusual patterns, subsuming particulars into the general, noting relations between variables, finding intervening factors and building a logical chain of evidence, using a data-based convergent synthesis design. Results: From the seventeen studies included, results focused on three dimensions inspired by the literature on antenatal, delivery, and postnatal interventions. From this, a further conceptual framework was elaborated. The conceptual framework of the process of implementation and integration of the maternal and newborn health care program by maternal community health workers was elaborated in order to ensure the sustainability of community-based interventions. Conclusions: The review revealed that the implementation and integration of the maternal and newborn health care program require planning. We call upon governments, non-government organizations, the global health community, and all stakeholders including policy makers, program managers, evaluators, educators, and providers to be involved in the implementation and integration of the maternal and newborn health program in updated policy and community-based interventions. Furthermore, emphasis should be placed on the competence, responsibility, and accountability of maternal community health workers, their training and payment, collaboration with health professionals in health facilities, and reinforcement of outreach services. However, the review was limited in focus to the African context, where the process of the maternal and newborn health care program has been poorly implemented.
Keywords: Africa, implementation of integration, maternal, newborn
Procedia PDF Downloads 164
725 Geographic Mapping of Tourism in Rural Areas: A Case Study of Cumbria, United Kingdom
Authors: Emma Pope, Demos Parapanos
Abstract:
Rural tourism has become more visible and prevalent, with tourists increasingly seeking authentic experiences. This movement accelerated post-Covid, putting destinations in danger of reaching levels of saturation called 'overtourism'. Whereas the phenomenon of overtourism has been frequently discussed in the urban context by academics and practitioners over recent years, it has hardly been referred to in the context of rural tourism, where perhaps it is even more difficult to manage. Rural tourism was historically considered small-scale, marked by its traditional character and by having little impact on nature and rural society. The increasing number of rural areas experiencing overtourism, however, demonstrates the need for new approaches, especially as the impacts and enablers of overtourism are context-specific. Cumbria, with approximately 47 million visitors each year and 23,000 operational enterprises, is one of these rural areas experiencing overtourism in the UK. Using the county of Cumbria as an example, this paper aims to explore better planning and management in rural destinations by clustering the area into rural and 'urban-rural' tourism zones. To achieve the aim, this study uses secondary data from a variety of sources to identify variables relating to visitor economy development and demand. These data include census data relating to population and employment, tourism industry-specific data including tourism revenue, visitor activities, and accommodation stock, and big data sources such as TripAdvisor and AllTrails. The combination of these data sources provides a breadth of tourism-related variables. The subsequent analysis of this data draws upon various validated models, for example, tourism and hospitality employment density, territorial tourism pressure, and accommodation density. In addition to these statistical calculations, other data are utilized to further understand the context of these zones, for example, tourist services, attractions, and activities. The data were imported into ArcGIS, where the density of the different variables is visualized on maps. This study aims to provide an understanding of the geographical context of visitor economy development and tourist behavior in rural areas. The findings contribute to an understanding of the spatial dynamics of tourism within the region of Cumbria through the creation of thematized maps. Different zones of tourism industry clusters are identified, which include elements relating to attractions, enterprises, infrastructure, tourism employment and economic impact. These maps visualize hot and cold spots relating to a variety of tourism contexts. It is believed that the strategy used to provide a visual overview of tourism development and demand in Cumbria could provide a strategic tool for rural areas to better plan marketing opportunities and avoid overtourism. These findings can inform future sustainability policy and destination management strategies within the areas through an understanding of the processes behind the emergence of both hot and cold spots. It may mean that 'attract and disperse' needs to be reviewed as a strategic option, in other words, to use sector or zonal policies for the individual hot or cold areas, with transitional zones dependent upon local economic, social and environmental factors.
Keywords: overtourism, rural tourism, sustainable tourism, tourism planning, tourism zones
Procedia PDF Downloads 75
724 Landsat 8-TIRS NEΔT at Kīlauea Volcano and the Active East Rift Zone, Hawaii
Authors: Flora Paganelli
Abstract:
The radiometric performance of remotely sensed images is important for volcanic monitoring. The Thermal Infrared Sensor (TIRS) on board Landsat 8 was designed with specific requirements in regard to the noise-equivalent change in temperature (NEΔT), namely ≤ 0.4 K at 300 K for the two thermal infrared bands B10 and B11. This study investigated the on-orbit NEΔT of the two TIRS bands using a scene-based method on clear-sky images over the volcanic activity of Kīlauea Volcano and the active East Rift Zone (Hawaii), in order to optimize the use of TIRS data. Results showed that the NEΔTs of the two bands exceeded the design specification by an order of magnitude at 300 K. Both the separate bands and the split-window algorithm were examined to estimate the effect of the NEΔT on the land surface temperature (LST) retrieval and the NEΔT contribution to the final LST error. These results are also useful in the current efforts to assess the requirements for a volcanology research campaign using the Hyperspectral Infrared Imager (HyspIRI), whose airborne prototype MODIS/ASTER instrument is planned to be flown by NASA as a single campaign to the Hawaiian Islands in support of volcanology and coastal area monitoring in 2016.
Keywords: landsat 8, radiometric performance, thermal infrared sensor (TIRS), volcanology
Procedia PDF Downloads 242
723 Multi Object Tracking for Predictive Collision Avoidance
Authors: Bruk Gebregziabher
Abstract:
The safe and efficient operation of Autonomous Mobile Robots (AMRs) in complex environments, such as manufacturing, logistics, and agriculture, necessitates accurate multi-object tracking and predictive collision avoidance. This paper presents algorithms and techniques for addressing these challenges using lidar sensor data, emphasizing the ensemble Kalman filter. The developed predictive collision avoidance algorithm employs the data provided by lidar sensors to track multiple objects and predict their velocities and future positions, enabling the AMR to navigate safely and effectively. A modification to the dynamic windowing approach is introduced to enhance the performance of the collision avoidance system. The overall system architecture encompasses object detection, multi-object tracking, and predictive collision avoidance control. The experimental results, obtained from both simulation and real-world data, demonstrate the effectiveness of the proposed methods in various scenarios, which lays the foundation for future research on global planners, other controllers, and the integration of additional sensors. This thesis contributes to the ongoing development of safe and efficient autonomous systems in complex and dynamic environments.
Keywords: autonomous mobile robots, multi-object tracking, predictive collision avoidance, ensemble Kalman filter, lidar sensors
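A minimal sketch of an ensemble Kalman filter update for a single tracked object under a constant-velocity motion model is given below; the ensemble size, noise covariances, and time step are illustrative assumptions rather than the values used in the AMR system.
```python
# Minimal ensemble Kalman filter sketch for one tracked object with a
# constant-velocity model (state = [x, y, vx, vy], measurement = [x, y]).
import numpy as np

rng = np.random.default_rng(0)
N, dt = 100, 0.1                                       # ensemble size, time step (assumed)
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])
Q, R = 0.01 * np.eye(4), 0.05 * np.eye(2)              # process / measurement noise (assumed)

ensemble = rng.normal(0.0, 1.0, size=(N, 4))           # initial state ensemble

def enkf_step(ensemble, z):
    # Forecast: propagate each member with process noise.
    ens_f = ensemble @ F.T + rng.multivariate_normal(np.zeros(4), Q, size=N)
    # Analysis: Kalman gain from the ensemble covariance, perturbed observations.
    P = np.cov(ens_f, rowvar=False)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    z_pert = z + rng.multivariate_normal(np.zeros(2), R, size=N)
    return ens_f + (z_pert - ens_f @ H.T) @ K.T

z = np.array([1.0, 2.0])                               # one lidar centroid measurement
ensemble = enkf_step(ensemble, z)
print("estimated state:", ensemble.mean(axis=0))       # ensemble mean = position/velocity
```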
Procedia PDF Downloads 84
722 Visualization of Corrosion at Plate-Like Structures Based on Ultrasonic Wave Propagation Images
Authors: Aoqi Zhang, Changgil Lee Lee, Seunghee Park
Abstract:
A non-contact nondestructive technique using a laser-induced ultrasonic wave generation method was applied to visualize corrosion damage in aluminum alloy plate structures. The ultrasonic waves were generated by a Nd:YAG pulse laser, and a galvanometer-based laser scanner was used to scan a specific area of the target structure. At the same time, the wave responses were measured at a piezoelectric sensor attached to the target structure. The visualization of structural damage was achieved by calculating logarithmic values of the root mean square (RMS). The damage-sensitive feature was defined as the scattering characteristics of the waves that encounter the corrosion damage. The corrosion damage was artificially formed using hydrochloric acid. To observe the effect of the location where the corrosion was formed, both sides of the plate were scanned over the same area. The effects of the depth and the size of the corrosion were also considered. The results indicated that the damage was successfully visualized in almost all cases, whether it was formed on the front or the back side. However, damage with a shallow corrosion depth could not be clearly detected. In future work, a signal processing algorithm needs to be developed to visualize the damage more clearly by improving the signal-to-noise ratio.
Keywords: non-destructive testing, corrosion, pulsed laser scanning, ultrasonic waves, plate structure
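The imaging step described above can be summarized as computing, for each laser excitation point on the scan grid, the logarithm of the RMS of the recorded waveform; corrosion-induced scattering then appears as local deviations in the resulting map. A hedged sketch follows, with the array shape and random placeholder data as assumptions.
```python
# Hedged sketch of the wavefield imaging step: one log-RMS value per scan point.
# 'signals' is assumed to be shaped (ny, nx, n_samples): one waveform per scan point.
import numpy as np
import matplotlib.pyplot as plt

def log_rms_map(signals):
    rms = np.sqrt(np.mean(signals**2, axis=-1))   # RMS per scan point
    return np.log10(rms + 1e-12)                  # small offset avoids log(0)

signals = np.random.randn(100, 100, 2048)          # placeholder for measured data
damage_map = log_rms_map(signals)
plt.imshow(damage_map, cmap="jet", origin="lower")
plt.colorbar(label="log10 RMS")
plt.title("Ultrasonic wavefield log-RMS map")
plt.show()
```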
Procedia PDF Downloads 300
721 The 10-year Risk of Major Osteoporotic and Hip Fractures Among Indonesian People Living with HIV
Authors: Iqbal Pramukti, Mamat Lukman, Hasniatisari Harun, Kusman Ibrahim
Abstract:
Introduction: People living with HIV have a higher risk of osteoporotic fracture than the general population. The purpose of this study was to predict the 10-year risk of fracture among people living with HIV (PLWH) using FRAX™ and to identify characteristics related to the fracture risk. Methodology: This study consisted of 75 subjects. The ten-year probability of major osteoporotic fractures (MOF) and hip fractures was assessed using the FRAX™ algorithm. A cross-tabulation was used to identify the participants' characteristics related to fracture risk. Results: The overall mean 10-year probability of fracture was 2.4% (1.7) for MOF and 0.4% (0.3) for hip fractures. For the MOF score, participants with a parental hip fracture history, smoking behavior and glucocorticoid use showed a higher MOF score than those without (3.1 vs. 2.5; 4.6 vs. 2.5; and 3.4 vs. 2.5, respectively). For the HF score, participants with a parental hip fracture history, smoking behavior and glucocorticoid use also showed a higher HF score than those without (0.5 vs. 0.3; 0.8 vs. 0.3; and 0.5 vs. 0.3, respectively). Conclusions: The 10-year risk of fracture was higher among PLWH with several factors, including a parental hip fracture history, smoking behavior and glucocorticoid use. Further analysis determining the factors using multivariate regression analysis with a larger sample size is required to confirm the factors associated with the high fracture risk.
Keywords: HIV, PLWH, osteoporotic fractures, hip fractures, 10-year risk of fracture, FRAX
Procedia PDF Downloads 49
720 River Stage-Discharge Forecasting Based on Multiple-Gauge Strategy Using EEMD-DWT-LSSVM Approach
Authors: Farhad Alizadeh, Alireza Faregh Gharamaleki, Mojtaba Jalilzadeh, Houshang Gholami, Ali Akhoundzadeh
Abstract:
This study presented a hybrid pre-processing approach along with a conceptual model to enhance the accuracy of river discharge prediction. In order to achieve this goal, the Ensemble Empirical Mode Decomposition algorithm (EEMD), the Discrete Wavelet Transform (DWT) and Mutual Information (MI) were employed as a hybrid pre-processing approach coupled with a Least Squares Support Vector Machine (LSSVM). A conceptual strategy, namely the multi-station model, was developed to forecast the Souris River discharge more accurately. The strategy used herein was capable of covering the uncertainties and complexities of river discharge modeling. DWT and EEMD were coupled, and feature selection was performed on the decomposed sub-series using MI to be employed in the multi-station model. In the proposed feature selection method, some useless sub-series were omitted to achieve better performance. Results confirmed the efficiency of the proposed DWT-EEMD-MI approach in improving the accuracy of multi-station modeling strategies.
Keywords: river stage-discharge process, LSSVM, discrete wavelet transform, ensemble empirical mode decomposition, multi-station modeling
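A hedged sketch of the pre-processing chain for a single gauge is given below: EEMD splits the series into IMFs (using the PyEMD package), a DWT further decomposes each IMF (PyWavelets), mutual information ranks the sub-series against next-day discharge, and a kernel regressor is fitted on the selected features. Because scikit-learn offers no LSSVM, SVR is substituted here, and the discharge series is a synthetic placeholder.
```python
# Hedged sketch of the EEMD-DWT-MI pipeline for one gauge; SVR stands in for LSSVM.
import numpy as np
import pywt
from PyEMD import EEMD
from sklearn.feature_selection import mutual_info_regression
from sklearn.svm import SVR

def decompose(q, wavelet="db4", level=2):
    """EEMD into IMFs, then DWT of each IMF; sub-series stretched back to series length."""
    subs = []
    for imf in EEMD(trials=20).eemd(q):
        for coeff in pywt.wavedec(imf, wavelet, level=level):
            stretched = np.interp(np.linspace(0, 1, q.size),
                                  np.linspace(0, 1, coeff.size), coeff)
            subs.append(stretched)
    return np.column_stack(subs)

# Synthetic placeholder for a daily discharge series (not the Souris River data).
rng = np.random.default_rng(0)
q = 100 + 50 * np.sin(np.linspace(0, 12 * np.pi, 400)) + rng.normal(0, 5, 400)

X, y = decompose(q)[:-1], q[1:]                  # predict next-day discharge
mi = mutual_info_regression(X, y)
keep = mi.argsort()[-8:]                         # keep the most informative sub-series
model = SVR(kernel="rbf").fit(X[:, keep], y)
print("training R^2:", model.score(X[:, keep], y))
```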
Procedia PDF Downloads 175
719 Consumer Welfare in the Platform Economy
Authors: Prama Mukhopadhyay
Abstract:
From transport to food, today's platform economy and digital markets have taken over almost every sphere of consumers' lives. Sellers and buyers are connected through platforms, which act as intermediaries. This has made consumers' lives easier in terms of time, price, choice and other factors. Having said that, there are several concerns regarding platforms. There are competition law concerns like unfair pricing and deep discounting by the platforms, which affect consumer welfare. Apart from that, the biggest problem is the lack of transparency with respect to the business models, how they operate, price calculation, etc. In most cases, consumers are unaware of how their personal data are being used, or of how algorithms use their personal data to determine the price of a product or even to show relevant products based on their previous searches. Using personal or non-personal data without the consumer's consent is a huge legal concern. In addition to this, another major issue lies with the question of liability. If a dispute arises, who will be responsible? The seller or the platform? For example, if someone ordered food through a food delivery app and the food was bad, who will be liable in this situation: the restaurant or the food delivery platform? In this paper, the researcher examines the legal concerns related to the platform economy from the consumer protection and consumer welfare perspectives. The paper analyses cases from different jurisdictions and the approaches taken by the judiciaries. The author compares the existing legislation of the EU, the US and Asian countries and tries to highlight the best practices.
Keywords: competition, consumer, data, platform
Procedia PDF Downloads 146
718 A Method for Identifying Unusual Transactions in E-commerce Through Extended Data Flow Conformance Checking
Authors: Handie Pramana Putra, Ani Dijah Rahajoe
Abstract:
The proliferation of smart devices and advancements in mobile communication technologies have permeated various facets of life, along with the widespread influence of e-commerce. Detecting abnormal transactions holds paramount significance in this realm due to the potential for substantial financial losses. Moreover, the fusion of data flow and control flow assumes a critical role in the exploration of process modeling and data analysis, contributing significantly to the accuracy and security of business processes. This paper introduces an alternative approach to identifying abnormal transactions through a model that integrates both data and control flows. Referred to as the Extended Data Petri net (DPNE), our model encapsulates the entire process, encompassing user login to the e-commerce platform and concluding with the payment stage, including the mobile transaction process. We scrutinize the model's structure, formulate an algorithm for detecting anomalies in pertinent data, and elucidate the rationale and efficacy of the comprehensive system model. A case study validates the responsive performance of each system component, demonstrating the system's adeptness in evaluating every activity within mobile transactions. Ultimately, the results of anomaly detection are derived through a thorough and comprehensive analysis.
Keywords: database, data analysis, DPNE, extended data flow, e-commerce
Procedia PDF Downloads 57
717 A Development of Holonomic Mobile Robot Using Fuzzy Multi-Layered Controller
Authors: Seungwoo Kim, Yeongcheol Cho
Abstract:
In this paper, a holonomic mobile robot with omnidirectional wheels is designed, and an adaptive fuzzy controller is presented for its precise trajectories. An adaptive controller based on a fuzzy multi-layered algorithm is used to handle the large parametric uncertainty of the motor-controlled dynamic system of the 3-wheel omnidirectional mobile robot. System parameters such as the tracking force are highly time-varying due to the kinematic structure of the omnidirectional wheels. The fuzzy adaptive control method is able to solve the problems of classical adaptive controllers and conventional fuzzy adaptive controllers. The basic idea of the new adaptive control scheme is that an adaptive controller can be constructed as a parallel combination of robust controllers. This new adaptive controller uses a fuzzy multi-layered architecture which has several independent fuzzy controllers in parallel, each with a different robust stability area. Out of the several independent fuzzy controllers, the most suited one is selected by a system identifier which observes variations in the controlled system parameter. This paper proposes a design procedure which can be carried out mathematically and systematically from the model of the controlled system. Finally, the good performance of the holonomic mobile robot is confirmed through live tests of the tracking control task.
Keywords: fuzzy adaptive control, fuzzy multi-layered controller, holonomic mobile robot, omnidirectional wheels, robustness and stability
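Part of the time-varying behavior the fuzzy layers must absorb comes from the omnidirectional wheel kinematics. As a hedged illustration, the sketch below maps a desired body twist (vx, vy, omega) to the three wheel speeds through a fixed geometry matrix; the wheel angles, base radius and wheel radius are assumed values, not the robot's actual geometry.
```python
# Hedged sketch of 3-wheel omnidirectional inverse kinematics.
import numpy as np

L, r = 0.20, 0.05                                    # base radius and wheel radius (m), assumed
angles = np.deg2rad([90, 210, 330])                  # wheel mounting angles, assumed layout

# Row i projects the body twist onto wheel i's rolling direction.
J = np.array([[-np.sin(a), np.cos(a), L] for a in angles])

def wheel_speeds(vx, vy, omega):
    """Angular speed of each wheel (rad/s) for the requested body motion."""
    return (J @ np.array([vx, vy, omega])) / r

print(wheel_speeds(vx=0.3, vy=0.0, omega=0.5))
```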
Procedia PDF Downloads 362
716 Power System Stability Enhancement Using Self Tuning Fuzzy PI Controller for TCSC
Authors: Salman Hameed
Abstract:
In this paper, a self-tuning fuzzy PI controller (STFPIC) is proposed for a thyristor controlled series capacitor (TCSC) to improve power system dynamic performance. In a STFPIC controller, the output scaling factor is adjusted on-line by an updating factor (α). The value of α is determined from a fuzzy rule-base defined on the error (e) and the change of error (Δe) of the controlled variable. The proposed self-tuning controller is designed using a very simple control rule-base and the most natural and unbiased membership functions (MFs) (symmetric triangles with equal base and 50% overlap with neighboring MFs). The comparative performances of the proposed STFPIC and the standard fuzzy PI controller (FPIC) have been investigated on a multi-machine power system (namely, a 4-machine two-area system) through detailed non-linear simulation studies using MATLAB/SIMULINK. From the simulation studies it has been found that, for damping oscillations, the performance of the proposed STFPIC is better than that obtained by the standard FPIC. Moreover, the proposed STFPIC as well as the FPIC have been found to be quite effective in damping oscillations over a wide range of operating conditions and in significantly enhancing the power-carrying capability of the power system.
Keywords: genetic algorithm, power system stability, self-tuning fuzzy controller, thyristor controlled series capacitor
Procedia PDF Downloads 424
715 Characterizing the Spatially Distributed Differences in the Operational Performance of Solar Power Plants Considering Input Volatility: Evidence from China
Authors: Bai-Chen Xie, Xian-Peng Chen
Abstract:
China has become the world's largest energy producer and consumer, and its development of renewable energy is of great significance to global energy governance and the fight against climate change. The rapid growth of solar power in China could help achieve its ambitious carbon peak and carbon neutrality targets early. However, the non-technical costs of solar power in China are much higher than international levels, meaning that inefficiencies are rooted in poor management and improper policy design and that efficiency distortions have become a serious challenge to the sustainable development of the renewable energy industry. Unlike fossil energy generation technologies, the output of solar power is closely related to the volatile solar resource, and the spatial unevenness of solar resource distribution leads to potential spatial differences in efficiency. It is necessary to develop an efficiency evaluation method that considers the volatility of solar resources and to explore the mechanism by which natural geography and the social environment influence the spatially varying characteristics of efficiency distribution, in order to uncover the root causes of management inefficiencies. The study sets solar resources as stochastic inputs, introduces a chance-constrained data envelopment analysis model combined with the directional distance function, and measures the solar resource utilization efficiency of 222 solar power plants in representative photovoltaic bases in northwestern China. Through meta-frontier analysis, we measured the characteristics of different power plant clusters and compared the differences among groups, discussed the mechanism of environmental factors influencing inefficiencies, and performed statistical tests through the system generalized method of moments. Rational siting of power plants is a systematic project that requires careful consideration of the full utilization of solar resources, low transmission costs, and guaranteed power consumption. Suitable temperature, precipitation, and wind speed can improve the working performance of photovoltaic modules, a reasonable terrain inclination can reduce land cost, and proximity to cities strongly guarantees the consumption of electricity. The density of electricity demand and high-tech industries is more important than resource abundance because they trigger the clustering of power plants, resulting in good demonstration and competition effects. To ensure renewable energy consumption, increased support for rural grids and encouraging direct trading between generators and neighboring users will provide solutions. The study provides proposals for improving the full life-cycle operational activities of solar power plants in China in order to reduce high non-technical costs and improve competitiveness against fossil energy sources.
Keywords: solar power plants, environmental factors, data envelopment analysis, efficiency evaluation
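For readers unfamiliar with DEA, the deterministic core behind such efficiency scores is a linear program solved per plant. The sketch below shows the standard input-oriented CCR envelopment model with scipy; the paper's chance-constrained, directional-distance variant with stochastic solar inputs is considerably richer, and the input/output matrices here are tiny illustrative placeholders, not the 222-plant data.
```python
# Hedged sketch of a standard input-oriented CCR DEA envelopment LP, solved per plant.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 5.0]])        # inputs:  rows = input types, cols = plants
Y = np.array([[1.0, 2.0, 2.5, 2.0]])        # outputs: rows = output types, cols = plants

def ccr_efficiency(j0):
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]             # minimize theta; variables = [theta, lambda_1..n]
    # Input constraints:  sum_j lambda_j x_ij - theta * x_i,j0 <= 0
    A_in = np.c_[-X[:, [j0]], X]
    # Output constraints: -sum_j lambda_j y_rj <= -y_r,j0
    A_out = np.c_[np.zeros((s, 1)), -Y]
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                            # efficiency score theta* in (0, 1]

print([round(ccr_efficiency(j), 3) for j in range(X.shape[1])])
```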
Procedia PDF Downloads 92
714 Motion Performance Analyses and Trajectory Planning of the Movable Leg-Foot Lander
Authors: Shan Jia, Jinbao Chen, Jinhua Zhou, Jiacheng Qian
Abstract:
In response to the functional limitations of fixed landers, which currently expand their detection range by using wheeled rovers with unavoidable path repeatability in deep space exploration, a movable lander based on a leg-foot walking mechanism is presented. Firstly, a quadruped landing mechanism based on pushrod damping is proposed. The configuration has bionic characteristics such as hip, knee and ankle joints, and multi-function main/auxiliary buffers based on crumple-energy absorption and a screw-nut mechanism. Secondly, the workspace of the end of the leg-foot mechanism is solved by the Monte Carlo method, and the key points on the desired trajectory of the end of the leg-foot mechanism are fitted by a cubic spline curve. Finally, an optimal time-jerk trajectory based on weight coefficients is planned and analyzed by an adaptive genetic algorithm (AGA). The simulation results prove the rationality and stability of the walking motion of the movable leg-foot lander on the star catalogue. In addition, this research can provide a technical solution integrating soft landing, large-scale inspection and material transfer for future star catalogue exploration, and can even serve as the technical basis for developing reusable landers.
Keywords: motion performance, trajectory planning, movable, leg-foot lander
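A hedged sketch of the trajectory-planning step is shown below: the foot-tip key points are interpolated with a cubic spline, and a weighted time-plus-jerk cost is evaluated for a candidate vector of segment durations, which is the kind of objective the adaptive genetic algorithm would minimize. The key points and weight coefficient are illustrative assumptions, not the lander's data.
```python
# Hedged sketch of a weighted time-jerk objective over a cubic-spline trajectory.
import numpy as np
from scipy.interpolate import CubicSpline

key_points = np.array([0.00, 0.05, 0.12, 0.10, 0.00])   # foot-tip height at key points (m)

def time_jerk_cost(segment_times, w=0.5):
    t_knots = np.r_[0.0, np.cumsum(segment_times)]       # knot times from candidate durations
    spline = CubicSpline(t_knots, key_points)
    t = np.linspace(t_knots[0], t_knots[-1], 500)
    jerk = spline(t, 3)                                   # third derivative along the path
    total_time = t_knots[-1]
    jerk_term = np.trapz(jerk**2, t)                      # integrated squared jerk
    return w * total_time + (1.0 - w) * jerk_term

# An adaptive GA would search over the segment-duration vector to minimize this cost.
print(time_jerk_cost(np.array([0.4, 0.4, 0.4, 0.4])))
```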
Procedia PDF Downloads 142
713 Vision-Based Collision Avoidance for Unmanned Aerial Vehicles by Recurrent Neural Networks
Authors: Yao-Hong Tsai
Abstract:
Owing to advances in sensor technology, video surveillance has become the main means of security control in every big city in the world. Surveillance is usually used by governments for intelligence gathering, the prevention of crime, the protection of a process, person, group or object, or the investigation of crime. Many surveillance systems based on computer vision technology have been developed in recent years. Moving target tracking is the most common task for an Unmanned Aerial Vehicle (UAV) to find and track objects of interest in mobile aerial surveillance for civilian applications. This paper is focused on vision-based collision avoidance for UAVs by recurrent neural networks. First, images from cameras on the UAV were fused based on a deep convolutional neural network. Then, a recurrent neural network was constructed to obtain high-level image features for object tracking and to extract low-level image features for noise reduction. The system distributed the calculation of the whole system between local and cloud platforms to efficiently perform object detection, tracking and collision avoidance based on multiple UAVs. The experiments on several challenging datasets showed that the proposed algorithm outperforms the state-of-the-art methods.
Keywords: unmanned aerial vehicle, object tracking, deep learning, collision avoidance
Procedia PDF Downloads 161
712 Influence of Solenoid Configuration on Electromagnetic Acceleration of Plunger
Authors: Shreyansh Bharadwaj, Raghavendra Kollipara, Sijoy C. D., R. K. Mittal
Abstract:
Utilizing the Lorentz force to propel an electrically conductive plunger through a solenoid represents a fundamental application of electromagnetism. The parameters of the solenoid significantly influence the force exerted on the plunger, impacting its response. A parametric study has been carried out to understand the effect of these parameters on the force acting on the plunger and to determine the optimal combination of parameters for the fastest response. The analysis has been carried out using an algorithm capable of simulating the scenario of a plunger undergoing acceleration within a solenoid. The authors have conducted an analysis focusing on several key configuration parameters of the solenoid. These parameters include the inter-layer gap (in the case of a multi-turn solenoid), different conductor diameters, varying numbers of turns, and diverse numbers of layers. The primary objective of this paper is to discern how alterations in these parameters affect the force applied to the plunger. Through extensive numerical simulations, a dataset has been generated and utilized to construct informative plots. These plots provide visual representations of the relationships between the solenoid configuration parameters and the resulting force exerted on the plunger, which can further be used to deduce scaling laws. This research endeavors to offer valuable insights into optimizing solenoid configurations for enhanced electromagnetic acceleration, thereby contributing to advancements in electromagnetic propulsion technology.
Keywords: Lorentz force, solenoid configuration, electromagnetic acceleration, parametric analysis, simulation
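One common way to express the axial force in such a study is F = 0.5 I^2 dL/dx, the gradient of the coil's inductance with plunger position. The sketch below evaluates this by numerically differentiating an assumed inductance profile; in the actual study L(x) would follow from the solenoid's turns, layers, gaps and wire size, so the profile here is purely illustrative.
```python
# Hedged sketch: plunger force from the inductance gradient, F = 0.5 * I^2 * dL/dx,
# using an *assumed* smooth inductance-vs-position profile L(x).
import numpy as np

def inductance(x, L_min=1e-3, L_max=5e-3, x0=0.02, width=0.01):
    """Illustrative L(x): inductance rises as the plunger enters the coil."""
    return L_min + (L_max - L_min) / (1.0 + np.exp(-(x - x0) / width))

def plunger_force(x, current, dx=1e-5):
    dL_dx = (inductance(x + dx) - inductance(x - dx)) / (2 * dx)   # central difference
    return 0.5 * current**2 * dL_dx

x = np.linspace(0.0, 0.05, 6)               # plunger positions (m)
print(plunger_force(x, current=10.0))        # force (N) at each position for I = 10 A
```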
Procedia PDF Downloads 51
711 Time/Temperature-Dependent Finite Element Model of Laminated Glass Beams
Authors: Alena Zemanová, Jan Zeman, Michal Šejnoha
Abstract:
The polymer foil used for manufacturing laminated glass members behaves in a viscoelastic manner with temperature dependence. This contribution aims at incorporating the time/temperature-dependent behavior of the interlayer into our earlier elastic finite element model for laminated glass beams. The model is based on a refined beam theory: each layer behaves according to the finite-strain shear-deformable formulation by Reissner, and the adjacent layers are connected via Lagrange multipliers ensuring the inter-layer compatibility of a laminated unit. The time/temperature-dependent behavior of the interlayer is accounted for by the generalized Maxwell model and by the time-temperature superposition principle due to Williams, Landel, and Ferry. The resulting system is solved by the Newton method with consistent linearization, and the viscoelastic response is determined incrementally by the exponential algorithm. By comparing the model predictions against available experimental data, we demonstrate that the proposed formulation is reliable and accurately reproduces the behavior of laminated glass units.
Keywords: finite element method, finite-strain Reissner model, Lagrange multipliers, generalized Maxwell model, laminated glass, Newton method, Williams-Landel-Ferry equation
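A hedged sketch of the interlayer material model follows: the WLF equation supplies the temperature shift factor, and a generalized Maxwell (Prony series) expression gives the shear relaxation modulus evaluated in reduced time. All constants below are placeholders, not the calibrated foil parameters.
```python
# Hedged sketch: WLF shift factor and Prony-series relaxation modulus, placeholder values.
import numpy as np

C1, C2, T_ref = 8.86, 101.6, 293.15           # WLF constants and reference temperature (K)

def wlf_shift(T):
    """Temperature shift factor a_T: log10(a_T) = -C1 (T - T_ref) / (C2 + T - T_ref)."""
    return 10.0 ** (-C1 * (T - T_ref) / (C2 + (T - T_ref)))

G_inf = 0.2e6                                  # long-term shear modulus (Pa), placeholder
G_i   = np.array([1.0e6, 0.5e6, 0.2e6])       # Prony moduli (Pa), placeholder
tau_i = np.array([1.0e-2, 1.0, 1.0e2])        # relaxation times (s), placeholder

def relaxation_modulus(t, T):
    t_red = t / wlf_shift(T)                   # reduced time via time-temperature superposition
    return G_inf + np.sum(G_i * np.exp(-t_red / tau_i))

print(relaxation_modulus(t=10.0, T=313.15))    # shear modulus after 10 s at 40 degrees C
```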
Procedia PDF Downloads 433
710 An Automatic Speech Recognition of Conversational Telephone Speech in Malay Language
Authors: M. Draman, S. Z. Muhamad Yassin, M. S. Alias, Z. Lambak, M. I. Zulkifli, S. N. Padhi, K. N. Baharim, F. Maskuriy, A. I. A. Rahim
Abstract:
The performance of a Malay automatic speech recognition (ASR) system for the call centre environment is presented. The system utilizes the Kaldi toolkit as the platform for the entire library and algorithms used in performing the ASR task. The acoustic model implemented in this system uses a deep neural network (DNN) method to model the acoustic signal and the standard (n-gram) model for language modelling. With 80 hours of training data from the call centre recordings, the ASR system can achieve 72% accuracy, corresponding to a 28% word error rate (WER). The testing was done using 20 hours of audio data. Despite the implementation of the DNN, the system shows a low accuracy owing to the variety of noises, accents and dialects that typically occur in the Malaysian call centre environment. This significant variation of speakers is reflected by the large standard deviation of the average word error rate (WERav) (i.e., ~10%). It is observed that the lowest WER (13.8%) was obtained from a recording sample of a native speaker with a standard Malay dialect (central Malaysia), compared to the highest WER of 49% for a sample containing conversation of a speaker using a non-standard Malay dialect.
Keywords: conversational speech recognition, deep neural network, Malay language, speech recognition
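The word error rate quoted above is the standard edit-distance metric: a word-level Levenshtein alignment between the reference transcript and the ASR hypothesis, with WER = (substitutions + deletions + insertions) / number of reference words. A minimal sketch follows; the example sentences are illustrative, not taken from the call-centre corpus.
```python
# Minimal sketch of the word error rate (WER) metric via word-level edit distance.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference and first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[-1][-1] / max(len(ref), 1)

# Illustrative Malay example (not from the call-centre corpus): one substitution -> 0.2
print(wer("saya nak tanya pasal bil", "saya nak tanya pasal beli"))
```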
Procedia PDF Downloads 323
709 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop
Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen
Abstract:
Lenia is a system of cellular automata with continuous states, space and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL Interoperability. We demonstrate how CUDA as a low-level GPU programming paradigm allows optimizing performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids and over longer periods without annoying waiting times. Thereby, they enable the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when we include additional time-consuming algorithms such as computer vision or machine learning to evolve and optimize specific Lenia configurations. We developed a Lenia implementation for GPU using the C++ and CUDA programming languages, and CUDA/OpenGL Interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation compared to the existing ones in terms of speed, memory usage, configurability and scalability. In our comparison we focus on the most important Lenia implementations, selected for their prominence, accessibility and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate the ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy using single versus double precision floating point arithmetic. 
The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the Alife community for further development.
Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis
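For readers unfamiliar with the update rule being accelerated, a hedged NumPy sketch of a single Lenia step is given below (not the paper's CUDA C++ code): an FFT-based convolution of the state with a normalized ring-shaped kernel, followed by an exponential growth mapping and clipping. The parameters (R, mu, sigma, dt) are typical published Lenia values used purely for illustration.
```python
# Hedged NumPy sketch of one Lenia update step with FFT-based convolution.
import numpy as np

N, R, mu, sigma, dt = 256, 13, 0.15, 0.017, 0.1

# Build the smooth ring-shaped kernel once and precompute its FFT.
y, x = np.ogrid[-N // 2:N // 2, -N // 2:N // 2]
dist = np.sqrt(x**2 + y**2) / R
ring = (dist > 0) & (dist < 1)
kernel = np.zeros((N, N))
kernel[ring] = np.exp(4 - 1 / (dist[ring] * (1 - dist[ring])))
kernel_fft = np.fft.fft2(np.fft.fftshift(kernel / kernel.sum()))

def growth(u):
    return 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma**2)) - 1.0

def lenia_step(world):
    u = np.real(np.fft.ifft2(kernel_fft * np.fft.fft2(world)))   # neighborhood potential
    return np.clip(world + dt * growth(u), 0.0, 1.0)

world = np.random.rand(N, N)   # random state just to exercise the step; creatures need seeds
for _ in range(10):
    world = lenia_step(world)
```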
Procedia PDF Downloads 44
708 Evaluating Machine Learning Techniques for Activity Classification in Smart Home Environments
Authors: Talal Alshammari, Nasser Alshammari, Mohamed Sedky, Chris Howard
Abstract:
With the widespread adoption of Internet-connected devices and the prevalence of Internet of Things (IoT) applications, there is an increased interest in machine learning techniques that can provide useful and interesting services in the smart home domain. The areas that machine learning techniques can help advance are varied and ever-evolving. Classifying smart home inhabitants' Activities of Daily Living (ADLs) is one prominent example. The ability of machine learning techniques to find meaningful spatio-temporal relations in high-dimensional data is an important requirement as well. This paper presents a comparative evaluation of state-of-the-art machine learning techniques for classifying ADLs in the smart home domain. Forty-two synthetic datasets and two real-world datasets with multiple inhabitants are used to evaluate and compare the performance of the identified machine learning techniques. Our results show significant performance differences between the evaluated techniques, such as AdaBoost, Cortical Learning Algorithm (CLA), Decision Trees, Hidden Markov Model (HMM), Multi-layer Perceptron (MLP), Structured Perceptron and Support Vector Machines (SVM). Overall, neural network based techniques have shown superiority over the other tested techniques.
Keywords: activities of daily living, classification, internet of things, machine learning, prediction, smart home
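A hedged sketch of the comparative evaluation loop is given below: the same train/test split is scored by several of the classifiers named above using scikit-learn. The Cortical Learning Algorithm and Structured Perceptron have no scikit-learn implementation and are omitted, and the dataset is a synthetic placeholder standing in for the smart-home ADL data.
```python
# Hedged sketch of a classifier comparison on one train/test split; the data below is a
# synthetic placeholder, not the synthetic/real smart-home datasets used in the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, n_classes=5,
                           n_informative=8, random_state=0)

models = {
    "AdaBoost": AdaBoostClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "MLP": MLPClassifier(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
}

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name:15s} accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```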
Procedia PDF Downloads 358
707 Optimal Design of Step-Stress Partially Life Test Using Multiply Censored Exponential Data with Random Removals
Authors: Showkat Ahmad Lone, Ahmadur Rahman, Ariful Islam
Abstract:
The major assumption in accelerated life tests (ALT) is that the mathematical model relating the lifetime of a test unit and the stress is known or can be assumed. In some cases, such life-stress relationships are not known and cannot be assumed, i.e., ALT data cannot be extrapolated to use conditions. In such cases, a partially accelerated life test (PALT) is a more suitable test to perform, in which tested units are subjected to both normal and accelerated conditions. This study deals with estimating information about failure times of items under step-stress partially accelerated life tests using progressive failure-censored hybrid data with random removals. The life data of the units under test is considered to follow an exponential life distribution. The removals from the test are assumed to have binomial distributions. The point and interval maximum likelihood estimations are obtained for the unknown distribution parameters and the tampering coefficient. An optimum test plan is developed using the D-optimality criterion. The performances of the resulting estimators of the developed model parameters are evaluated and investigated by using a simulation algorithm.
Keywords: binomial distribution, d-optimality, multiple censoring, optimal design, partially accelerated life testing, simulation study
Procedia PDF Downloads 322