Search results for: K-means clustering algorithm
1996 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the whole delivery chain and accounts for a large proportion of the total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points have become trends in the development of modern logistics. Because customer demand is discrete and heterogeneous in both its needs and its spatial distribution, leading to higher delivery failure rates and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the users’ experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design/redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD). To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large-neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large-neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we conduct a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impact on the results and suggest helpful managerial insights for courier companies.
Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
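The adaptive large-neighbourhood search named in the abstract can be pictured with a minimal Python sketch. The destroy/repair operators, scoring scheme, and simulated-annealing acceptance below are generic placeholders, not the authors' actual implementation:

```python
# Minimal ALNS skeleton: a destroy/repair loop with adaptive operator weights.
import math
import random

def alns(initial_solution, cost, destroy_ops, repair_ops,
         iters=5000, temp=100.0, cooling=0.999):
    current = best = initial_solution
    weights = {op: 1.0 for op in destroy_ops + repair_ops}

    def pick(ops):
        # roulette-wheel selection proportional to current operator weights
        total = sum(weights[o] for o in ops)
        r, acc = random.uniform(0, total), 0.0
        for o in ops:
            acc += weights[o]
            if r <= acc:
                return o
        return ops[-1]

    for _ in range(iters):
        d, rep = pick(destroy_ops), pick(repair_ops)
        candidate = rep(d(current))          # remove some requests, reinsert them
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):  # SA acceptance
            current = candidate
            score = 5 if cost(candidate) < cost(best) else 2
            if cost(candidate) < cost(best):
                best = candidate
            for o in (d, rep):               # reward operators that just succeeded
                weights[o] = 0.9 * weights[o] + 0.1 * score
        temp *= cooling
    return best
```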
Procedia PDF Downloads 81
1995 A Comparative Analysis of Classification Models with Wrapper-Based Feature Selection for Predicting Student Academic Performance
Authors: Abdullah Al Farwan, Ya Zhang
Abstract:
In today’s educational arena, it is critical to understand educational data and be able to evaluate important aspects, particularly data on student achievement. Educational Data Mining (EDM) is a research area that focuses on uncovering patterns and information in data from educational institutions. Teachers who can predict their students' class performance can use this information to improve their teaching. The knowledge extracted can serve a wide range of objectives; for example, it can inform a strategic plan for delivering high-quality education. Based on historical data, this paper recommends employing data mining techniques to forecast students' final grades. In this study, five data mining methods, Decision Tree, JRip, Naive Bayes, Multi-layer Perceptron, and Random Forest, with wrapper feature selection, were applied to two datasets relating to Portuguese language and mathematics classes. The results showed the effectiveness of data mining methodologies in predicting student academic success. The classification accuracy achieved with the selected algorithms lies in the range of 70-94%. Among all the selected classification algorithms, the lowest accuracy is achieved by the Multi-layer Perceptron algorithm, at about 70.45%, and the highest accuracy is achieved by the Random Forest algorithm, at about 94.10%. This work can assist educational administrators in identifying poor-performing students at an early stage and perhaps implementing motivational interventions to improve their academic success and prevent dropout.
Keywords: classification algorithms, decision tree, feature selection, multi-layer perceptron, Naïve Bayes, random forest, students’ academic performance
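A minimal sketch of wrapper-based feature selection around one of the listed classifiers, assuming scikit-learn in Python (the paper itself used WEKA); the Random Forest choice and the feature count are illustrative:

```python
# Forward wrapper search: greedily add the feature that most improves CV accuracy.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

def wrapper_select(X, y, n_features=10):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    selector = SequentialFeatureSelector(clf, n_features_to_select=n_features,
                                         direction="forward", cv=5,
                                         scoring="accuracy")
    selector.fit(X, y)
    X_sel = selector.transform(X)
    acc = cross_val_score(clf, X_sel, y, cv=5, scoring="accuracy").mean()
    return selector.get_support(), acc   # mask of kept features, CV accuracy
```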
Procedia PDF Downloads 169
1994 Estimation of Genetic Diversity in Sorghum Accessions Using Agro-Morphological and Nutritional Traits
Authors: Maletsema Alina Mofokeng, Nemera Shargie
Abstract:
Sorghum is one of the most important cereal crops, grown as a source of calories for many people in the tropics and sub-tropics of the world. Proper characterisation and evaluation of crop germplasm is an important component of effective management of genetic resources and their utilisation in the improvement of the crop through plant breeding. The objective of the study was to estimate the genetic diversity present in sorghum accessions grown in South Africa using agro-morphological traits and some nutritional contents. The experiment was carried out in Potchefstroom. Data were subjected to correlation, principal component analysis, and hierarchical clustering using the GenStat statistical software. There were highly significant differences among the accessions based on agro-morphological and nutritional quality traits. Grain yield was highly positively correlated with panicle weight. Plant height was highly significantly correlated with internode length, leaf length, leaf number, stem diameter, the number of nodes, and starch content. Principal component analysis revealed three important PCs accounting for 78.6% of the total variation. The protein content ranged from 7.7 to 14.7%, and starch ranged from 58.52 to 80.44%. The accessions with high protein and starch content were AS16cyc and MP4277. Vast genetic diversity was observed among the accessions assessed, which can be used by plant breeders to improve yield and nutritional traits.
Keywords: accessions, genetic diversity, nutritional quality, sorghum
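The PCA step on standardized trait data can be sketched as follows; the trait matrix is a placeholder, not the study's dataset:

```python
# PCA of an accessions-by-traits matrix; explained variance summarizes the PCs.
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_summary(trait_matrix, n_components=3):
    X = StandardScaler().fit_transform(trait_matrix)   # rows: accessions, cols: traits
    pca = PCA(n_components=n_components).fit(X)
    explained = pca.explained_variance_ratio_.sum()    # e.g. ~0.786 for 3 PCs above
    return pca.transform(X), explained
```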
Procedia PDF Downloads 263
1993 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take Off and Landing Aircraft
Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira
Abstract:
In order to improve commute times for short-distance trips and relieve traffic in large cities, a new transport category has been the subject of research and new designs worldwide. The air taxi market promises to change the way people live and commute by using vehicles with the ability to take off and land vertically and to provide passenger transport equivalent to a car, with mobility within and between large cities. Today’s civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. Taking advantage of this scenario, many companies have developed their own Vertical Take Off and Landing (VTOL) designs, seeking to meet comfort, safety, low-cost, and flight-time requirements in a sustainable way. Thus, the use of green power supplies, especially batteries, and fully electric power plants is the most common choice for these emerging aircraft. However, it is still a challenge to find a feasible way to handle the use of batteries rather than conventional petroleum-based fuels. Batteries are heavy and have an energy density still below that of gasoline, diesel, or kerosene. Therefore, despite all the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational cost, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize the energy consumption in a typical mission of an air taxi aircraft. The approach and landing procedure was chosen as the subject of a genetic optimization algorithm, while the final program can be adapted for take-off and flight level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for hover and cruise flight phases. For a given trajectory, the best set of control variables is calculated to provide the time history response for the aircraft's attitude, rotor RPM, and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort, and design constraints are assumed to give representativeness to the solution. Results are highly dependent on these constraints. For the tested cases, performance improvement ranged from 5 to 10%, changing initial airspeed, altitude, flight path angle, and attitude.
Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design
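A toy genetic algorithm over a discretized control schedule, in the spirit of the optimization described; the encoding, bounds, and energy model are assumptions for illustration:

```python
# GA minimizing a user-supplied energy(x) over a vector of control variables
# (e.g. attitude, rotor RPM, thrust direction sampled along the landing path).
import numpy as np

def genetic_minimize(energy, n_vars, bounds, pop=60, gens=200, mut=0.1,
                     rng=np.random.default_rng(0)):
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(pop, n_vars))          # each row = one schedule
    for _ in range(gens):
        fit = np.array([energy(x) for x in P])
        P = P[np.argsort(fit)][: pop // 2]               # truncation selection
        parents = P[rng.integers(0, len(P), size=(pop - len(P), 2))]
        cut = rng.random((pop - len(P), n_vars)) < 0.5   # uniform crossover
        children = np.where(cut, parents[:, 0], parents[:, 1])
        children += rng.normal(0, mut * (hi - lo), children.shape)  # mutation
        P = np.vstack([P, np.clip(children, lo, hi)])
    return P[np.argmin([energy(x) for x in P])]
```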
Procedia PDF Downloads 116
1992 Aerodynamic Optimum Nose Shape Change of High-Speed Train by Design Variable Variation
Authors: Minho Kwak, Suhwan Yun, Choonsoo Park
Abstract:
Nose shape optimizations of high-speed trains are performed to improve aerodynamic characteristics. Based on a commercial train, the KTX-Sancheon, multi-objective optimizations are conducted to improve side-wind stability and the micro-pressure wave, following an optimization for the reduction of aerodynamic drag. 3D nose shapes are modelled by the Vehicle Modeling Function. Aerodynamic drag and side-wind stability are calculated by a three-dimensional compressible Navier-Stokes solver, and the micro-pressure wave by an axisymmetric compressible Navier-Stokes solver. The maximin Latin Hypercube Sampling method is used to extract sampling points for constructing the approximation model. A kriging model is constructed as the approximation model, and the NSGA-II algorithm is used as the multi-objective optimization algorithm. Nose length, nose tip height, and lower surface curvature are the design variables. Because nose length is a dominant variable for the aerodynamic characteristics of a train nose, two optimization processes are carried out, one with and one without the design variable nose length. A Pareto set was obtained for each process, and an optimized nose shape was selected from each, considering the Honam high-speed rail line infrastructure in South Korea. Through the optimization process with the nose length, compared to the KTX-Sancheon, aerodynamic drag was reduced by 9.0%, side-wind stability was improved by 4.5%, and the micro-pressure wave was reduced by 5.4%; without the nose length, aerodynamic drag was reduced by 7.3%, side-wind stability was improved by 3.9%, and the micro-pressure wave was reduced by 3.9%. Comparing the two optimized shapes, similar shapes were obtained, apart from the effect of nose length.
Keywords: aerodynamic characteristics, design variable, multi-objective optimization, train nose shape
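The sampling-plus-surrogate step can be sketched with SciPy's Latin Hypercube sampler and a Gaussian-process (kriging) model; the variable ranges and the run_cfd stand-in are invented for illustration:

```python
# Latin Hypercube samples of the three design variables, fitted with a GP.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def run_cfd(x):  # stand-in for the compressible Navier-Stokes evaluation
    return 0.3 + 0.01 * x[0] - 0.02 * x[1] + 0.05 * x[2] ** 2

sampler = qmc.LatinHypercube(d=3, seed=1)
# columns: nose length, nose tip height, lower surface curvature (fake bounds)
X = qmc.scale(sampler.random(n=40), [10.0, 2.0, 0.1], [15.0, 3.5, 0.9])
drag = np.array([run_cfd(x) for x in X])

kriging = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, drag)
mean, std = kriging.predict(X[:1], return_std=True)  # surrogate prediction + uncertainty
```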
Procedia PDF Downloads 348
1991 Machine Learning Assisted Selective Emitter Design for Solar Thermophotovoltaic System
Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko
Abstract:
Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, a selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four material layers (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of machine learning brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, can analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic
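A sketch of the random-forest surrogate idea: learn a mapping from the four layer thicknesses of the SiC/W/SiO2/W stack to a selectivity score that a genetic algorithm could then search over. The training data and figure of merit here are synthetic placeholders:

```python
# Random-forest regression as a cheap stand-in for full electromagnetic simulation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
thicknesses = rng.uniform(20, 400, size=(500, 4))               # nm, one column per layer
selectivity = np.exp(-((thicknesses - 150) ** 2).sum(1) / 1e5)  # fake figure of merit

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(thicknesses, selectivity)
candidate = rng.uniform(20, 400, size=(1, 4))
print(rf.predict(candidate))   # surrogate evaluation, usable as a GA fitness
```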
Procedia PDF Downloads 62
1990 Genomic and Proteomic Variation in Glycine Max Genotypes towards Salinity
Authors: Faheema Khan
Abstract:
In order to investigate the influence of genetic background on salt tolerance in soybean (Glycine max), ten soybean genotypes released/notified in India were selected (Pusa-20, Pusa-40, Pusa-37, Pusa-16, Pusa-24, Pusa-22, BRAGG, PK-416, PK-1042, and DS-9712). The 10-day-old seedlings were subjected to 0, 25, 50, 75, 100, 125, and 150 mM NaCl for 15 days. Plant growth, leaf osmotic adjustment, and RAPD profiles were studied. In comparison to control plants, growth in all genotypes was decreased by salt stress. Salt stress decreased leaf osmotic potential in all genotypes; the maximum reduction was observed in genotype Pusa-24, followed by PK-416 and Pusa-20. The difference in osmotic adjustment among the genotypes was correlated with the concentration of the ion examined (Na+) and the leaf proline concentration. These results suggest that the genotypic variation for salt tolerance can be partially accounted for by plant physiological measures. The genetic polymorphisms between soybean genotypes differing in response to salt stress were characterized using 25 RAPD primers. These primers generated a total of 1640 amplification products, among which 1615 were found to be polymorphic. A very high degree of polymorphism (98.30%) was observed. UPGMA cluster analysis of genetic similarity indices grouped all the genotypes into two major clusters. Intra-clustering within the two clusters precisely grouped the 10 genotypes into sub-clusters, as expected from the physiological findings. Our results show that the RAPD technique is a sensitive, precise, and efficient tool for genomic analysis in soybean genotypes.
Keywords: glycine max, NaCl, RAPD, proteomics
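The band-scoring and UPGMA steps can be sketched as follows; the 0/1 band matrix is simulated, and the Dice coefficient stands in for whichever similarity index the study used:

```python
# Pairwise distance on binary RAPD band data, then UPGMA (average-linkage).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

bands = np.random.default_rng(1).integers(0, 2, size=(10, 1640))  # genotypes x bands
dist = pdist(bands, metric="dice")        # 1 - Dice similarity coefficient
tree = linkage(dist, method="average")    # UPGMA
clusters = fcluster(tree, t=2, criterion="maxclust")  # the two major clusters
print(clusters)
```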
Procedia PDF Downloads 586
1989 Analysis of Accessibility of Tourism Transportation in Banyuwangi
Authors: Lilla Anjani, Ervina Ahyudanari
Abstract:
Tourism is one of the contributors to regional economic income. Banyuwangi has made rapid progress in the tourism sector, especially since 2010. There are 25 tourist locations that can become tourist destinations. Banyuwangi has tourism transportation to support the ease of reaching tourist places. This transportation operates on six routes, with final destinations of Ijen Crater, Glenmore, Bajangan, Bangsring, Red Island, and Pine Forest. Despite the availability of tourism transportation, tourists tend to choose a private car or a rental car because there is no access to some tourist places by public transportation. Tourism transportation is also a component of sustainable tourism development aligned with the Sustainable Development Goals. The Banyuwangi government has a special program for tourism development that is supported by all sectors in Banyuwangi. To support the development of tourism in Banyuwangi, it is necessary to analyze the existing tourism transportation and to propose new routes to reach all tourism locations in Banyuwangi Regency. This study analyzes accessibility, distance, and travel time to tourism locations. Thirty tourism destination points were selected from 39 ODTW references provided by the transportation service and the tourism office of Banyuwangi Regency; the tourism objects can be divided into six zones based on travel time and distance. The highest accessibility value for Zone A is 51.96, and the lowest is 11.989. The highest accessibility value for Zone B is 33.4269, and the lowest is 21.737. The highest accessibility value for Zone C is 33.407, and the lowest is 14.848. The highest accessibility value for Zone D is 58.967, and the lowest is 14.742. The highest accessibility value for Zone E is 56.401, and the lowest is 14.1. The highest accessibility value for Zone F is 176.14, and the lowest is 44.1. There are two tourist transportation routes with six sessions every day. The resulting new routes group locations that can be reached within one particular area.
Keywords: accessibility, tourism clustering, Banyuwangi tourism, sustainable development goals
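The paper's exact accessibility formula is not reproduced here, so the sketch below assumes a standard Hansen-type gravity index: each zone's score sums destination attractiveness discounted by travel time:

```python
# Hansen-type accessibility: A_i = sum_j O_j * exp(-beta * t_ij).
import numpy as np

def hansen_accessibility(attraction, travel_time, beta=0.1):
    # attraction: weight of each destination; travel_time[i, j]: minutes from zone i
    return (attraction * np.exp(-beta * travel_time)).sum(axis=1)

times = np.array([[10.0, 45.0, 30.0],     # 2 zones x 3 destinations (toy values)
                  [25.0, 15.0, 60.0]])
print(hansen_accessibility(np.array([5.0, 3.0, 8.0]), times))
```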
Procedia PDF Downloads 92
1988 Effect of Baffles on the Cooling of Electronic Components
Authors: O. Bendermel, C. Seladji, M. Khaouani
Abstract:
In this work, we present a numerical study of the thermal and dynamic behaviour of air in a horizontal channel containing electronic components. The influence of baffles on the velocity and temperature profiles is discussed. The finite volume method and the SIMPLE algorithm are used to solve the equations of conservation of mass, momentum, and energy. The results show that baffles improve heat transfer between the cooling air and the electronic components, with the velocity increasing up to three times relative to the initial velocity.
Keywords: electronic components, baffles, cooling, fluids engineering
Procedia PDF Downloads 298
1987 Ecological Ice Hockey Butterfly Motion Assessment Using Inertial Measurement Unit Capture System
Authors: Y. Zhang, J. Perez, S. Marnier
Abstract:
To date, to the authors' best knowledge, no study of the goaltending butterfly motion has been completed in real conditions, during an ice hockey game or training practice. This motion, performed to stop shots, is unnatural, intense, and repeated. The target of this research activity is to identify representative biomechanical criteria for this goaltender-specific movement pattern. Determining specific physical parameters may make it possible to identify the risk of hip and groin injuries sustained by goaltenders. Four professional or academic goalies were instrumented during ice hockey training practices with five inertial measurement units. These devices were inserted in dedicated pockets located on each thigh and shank, with the fifth on the lumbar spine. A camera was also installed close to the ice to observe and record the goaltenders' activities, especially the butterfly motions, in order to synchronize the captured data with the behavior of the goaltender. Each recording began with a calibration of the inertial units and a calibration of the fully equipped goaltender on the ice. Three butterfly motions were recorded outside the training practice to define reference individual butterfly motions. Then, a data processing algorithm based on the Madgwick filter computed hip and knee joint ranges of motion as well as specific angular velocities. The developed software automatically identified and analyzed all the butterfly motions executed by the four goaltenders. It is still too early to show that the analyzed criteria are representative of the trauma generated by the butterfly motion, as the research is only at its beginning. However, this descriptive research activity is promising in its ecological assessment, and once the criteria are found, the tools and protocols defined will allow the prevention of as many injuries as possible. It will thus be possible to build a specific training program for each goalie.
Keywords: biomechanics, butterfly motion, human motion analysis, ice hockey, inertial measurement unit
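The joint-angle step can be sketched from first principles: given segment orientation quaternions (e.g., from a Madgwick filter), the knee angle is the rotation angle of the relative quaternion. The (w, x, y, z) convention is assumed:

```python
# Knee angle from thigh and shank orientation quaternions.
import numpy as np

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def joint_angle_deg(q_thigh, q_shank):
    q_rel = quat_mul(quat_conj(q_thigh), q_shank)   # shank in the thigh frame
    return np.degrees(2.0 * np.arccos(np.clip(abs(q_rel[0]), -1.0, 1.0)))

# 45-degree flexion example: identity thigh, shank rotated 45 deg about x
print(joint_angle_deg(np.array([1.0, 0, 0, 0]), np.array([0.9239, 0.3827, 0, 0])))
```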
Procedia PDF Downloads 125
1986 Optimal Design of Wind Turbine Blades Equipped with Flaps
Authors: I. Kade Wiratama
Abstract:
As a result of the significant growth of wind turbines in size, blade load control has become the main challenge for large wind turbines. Many advanced techniques have been investigated aiming at developing control devices to ease blade loading. Amongst them, trailing edge flaps have been proven effective devices for load alleviation. The present study aims at investigating the potential benefits of flaps in enhancing energy capture capabilities rather than blade load alleviation. A software tool is especially developed for the aerodynamic simulation of wind turbines utilising blades equipped with flaps. As part of the aerodynamic simulation of these wind turbines, the control system must also be simulated. The simulation of the control system is carried out by solving an optimisation problem which gives the best value of the controlling parameter at each wind turbine run condition. By developing a genetic algorithm optimisation tool especially designed for wind turbine blades and integrating it with the aerodynamic performance evaluator, a design optimisation tool for blades equipped with flaps is constructed. The design optimisation tool is employed to carry out design case studies. The results of design case studies on the wind turbine AWT 27 reveal that, as expected, the location of the flap is a key parameter influencing the amount of improvement in power extraction. The best location for placing a flap is at about 70% of the blade span from the root of the blade. The size of the flap also has a significant effect on the amount of enhancement in the average power. This effect, however, reduces dramatically as the size increases. For constant-speed rotors, adding flaps without re-designing the topology of the blade can improve the power extraction capability by as much as about 5%. However, with re-design of the blade pretwist, the overall improvement can reach as high as 12%.
Keywords: flaps, design blade, optimisation, simulation, genetic algorithm, WTAero
Procedia PDF Downloads 337
1985 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images
Authors: Amit Kumar Happy
Abstract:
This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visual image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal radiation (infrared) that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, image fusion algorithms based on multi-scale transforms (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. While the high computational cost and complex processing steps of popular image fusion algorithms provide accurate fused results, they also make those algorithms hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capacity. The methods presented in this paper offer good results with minimum time complexity.
Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform
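A minimal MST fusion sketch in Python (the paper's implementation is in MATLAB): Laplacian pyramids of the two source images fused with a choose-max coefficient rule; the region-based selection and consistency verification steps are omitted:

```python
# Laplacian-pyramid fusion of a visible and a thermal image (same size, grayscale).
import cv2
import numpy as np

def fuse(vis, ir, levels=4):
    def lap_pyramid(img):
        g, pyr = img.astype(np.float32), []
        for _ in range(levels):
            down = cv2.pyrDown(g)
            up = cv2.pyrUp(down, dstsize=(g.shape[1], g.shape[0]))
            pyr.append(g - up)            # detail (Laplacian) level
            g = down
        pyr.append(g)                     # coarsest Gaussian level
        return pyr

    fused = [np.where(np.abs(a) >= np.abs(b), a, b)   # choose-max coefficient rule
             for a, b in zip(lap_pyramid(vis), lap_pyramid(ir))]
    out = fused[-1]
    for lap in reversed(fused[:-1]):      # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```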
Procedia PDF Downloads 115
1984 Procedure to Optimize the Performance of Chemical Laser Using the Genetic Algorithm Optimizations
Authors: Mohammedi Ferhate
Abstract:
This work presents details of the study of the entire flow inside a facility in which the exothermic chemical reaction process in the chemical laser cavity is analyzed. We describe the principles of chemical lasers, in which population inversion is produced by chemical reactions, and explain the device for converting chemical potential energy into laser energy. We see that the phenomenon has an explosive trend. Finally, the feasibility and effectiveness of the proposed method are demonstrated by computer simulation.
Keywords: genetic, lasers, nozzle, programming
Procedia PDF Downloads 95
1983 Impact of Combined Heat and Power (CHP) Generation Technology on Distribution Network Development
Authors: Sreto Boljevic
Abstract:
In the absence of considerable investment in electricity generation, transmission, and distribution network (DN) capacity, the demand for electrical energy will quickly strain the capacity of the existing electrical power network. The anticipated growth and proliferation of electric vehicles (EVs) and heat pumps (HPs) make it likely that the additional load from EV charging and HP operation will require capital investment in the DN. While an area-wide implementation of EVs and HPs will contribute to the decarbonization of the energy system, they represent new challenges for the existing low-voltage (LV) network. Distributed energy resources (DER), operating both as part of the DN and in off-network mode, have been offered as a means to meet growing electricity demand while maintaining and ever-improving DN reliability, resiliency, and power quality. DN planning has traditionally been done by forecasting future growth in demand and estimating the peak load that the network should meet. However, new problems are arising, associated with the high proliferation of EVs and HPs as loads imposed on the DN, together with the promotion of electricity generation from renewable energy sources (RES). High distributed generation (DG) penetration and a large increase in load at low-voltage DNs may have numerous impacts on DNs, creating issues that include energy losses, voltage control, fault levels, reliability, resiliency, and power quality. To mitigate negative impacts and at the same time enhance positive impacts regarding the new operational state of the DN, CHP system integration can be seen as the best action to postpone/reduce the capital investment needed to facilitate and maximize the benefits of EV, HP, and RES integration in the low-voltage DN. The aim of this paper is to generate an algorithm using an analytical approach. The algorithm implementation will provide a way for optimal placement of the CHP system in the DN in order to maximize the integration of RES and support the increasing proliferation of EVs and HPs.
Keywords: combined heat & power (CHP), distribution networks, EVs, HPs, RES
Procedia PDF Downloads 203
1982 Cooperative Agents to Prevent and Mitigate Distributed Denial of Service Attacks of Internet of Things Devices in Transportation Systems
Authors: Borhan Marzougui
Abstract:
Road and Transport Authority (RTA) is moving ahead with the implementation of the leader’s vision in exploring all avenues that may bring better security and safety services to the community. Smart transport means using smart technologies such as the IoT (Internet of Things). This technology continues to affirm its important role in the context of information and transportation systems. In fact, the IoT is a network of Internet-connected objects able to collect and exchange different data using embedded sensors. With the growth of the IoT, Distributed Denial of Service (DDoS) attacks are also growing exponentially. DDoS attacks are a major, real threat to various transportation services. Currently, the defense mechanisms are mainly passive in nature, and there is a need to develop a smart technique to handle them. In fact, new IoT devices are being accumulated into botnets for DDoS attacks. The aim of this paper is to provide a relevant understanding of dangerous types of DDoS attacks related to the IoT and to provide valuable guidance for future IoT security methods. Our methodology is based on the development of a distributed algorithm that coordinates dedicated intelligent and cooperative agents to prevent and mitigate DDoS attacks. The proposed technique ensures preventive action when malicious packets start to be distributed through the connected nodes (the network of IoT devices). In addition, devices such as cameras and radio frequency identification (RFID) are connected within the secured network, and the data they generate are analyzed in real time by intelligent and cooperative agents. The proposed security system is based on a multi-agent system. The results obtained show a significant reduction in the number of infected devices and enhanced capabilities of the different security devices.
Keywords: IoT, DDoS, attacks, botnet, security, agents
Procedia PDF Downloads 145
1981 An Investigation Enhancing E-Voting Application Performance
Authors: Aditya Verma
Abstract:
E-voting using blockchain provides a distributed system in which data are present on every node in the network and are reliable and secure thanks to the chain's immutability. This work compares various blockchain consensus algorithms previously used for e-voting applications, based on performance and node scalability, selects the optimal one, and improves on a previous implementation by proposing solutions for the loopholes of that consensus algorithm in the chosen application, e-voting.
Keywords: blockchain, parallel BFT, consensus algorithms, performance
Procedia PDF Downloads 168
1980 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion
Authors: Ali Kazemi
Abstract:
In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study offers a groundbreaking method for financial market prediction that leverages the synergistic capability of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This era, marked by sizeable volatility and transformation in financial markets, provides a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that accurately reflects market intricacies. We collect daily opening, closing, high, and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate critical macroeconomic indicators, including interest rates, inflation rates, GDP growth, and unemployment rates, into our model. Our GCN algorithm learns the relational patterns among the financial instruments represented as nodes in a comprehensive market graph. Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling our model to grasp the complicated network of influences governing market movements. Complementing this, our LSTM algorithm is trained on sequences of the spatial-temporal representation produced by the GCN, enriched with historical price and volume data, letting the LSTM capture and predict temporal market trends accurately. In a comprehensive assessment of our GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model showed superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting day-by-day price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that predict the direction of price moves. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework.
Our findings promise to revolutionize investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting
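A minimal PyTorch sketch of the GCN-LSTM fusion described above; the layer sizes, the normalized-adjacency handling, and the single-target head are illustrative assumptions, not the authors' exact architecture:

```python
# Spatial aggregation per day via a GCN layer, then an LSTM over the day sequence.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):       # x: (nodes, feats), adj_norm: (nodes, nodes)
        return torch.relu(self.lin(adj_norm @ x))   # aggregate neighbours, transform

class GCNLSTM(nn.Module):
    def __init__(self, n_feats, gcn_dim=32, lstm_dim=64):
        super().__init__()
        self.gcn = GCNLayer(n_feats, gcn_dim)
        self.lstm = nn.LSTM(gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, 1)          # next-day move of one target asset

    def forward(self, seq, adj_norm):               # seq: (time, nodes, feats)
        spatial = torch.stack([self.gcn(x_t, adj_norm) for x_t in seq])
        target_series = spatial[:, 0, :].unsqueeze(0)  # follow node 0 through time
        out, _ = self.lstm(target_series)
        return self.head(out[:, -1, :])

adj = torch.eye(5)                         # placeholder normalized adjacency, 5 assets
model = GCNLSTM(n_feats=6)
pred = model(torch.randn(30, 5, 6), adj)   # 30 days, 5 assets, 6 features each
```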
Procedia PDF Downloads 68
1979 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)
Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula
Abstract:
This contribution focuses on structural optimization in civil engineering using mixed integer non-linear programming (MINLP). MINLP is a versatile method that can handle both continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions that determine the presence or absence of structural elements within a structure, as well as discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure is generated with a variety of topology, material, and dimensional alternatives. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic or mass objective function of the material and labor costs of a structure is subjected to the constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex, and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves subproblems of non-linear programming (NLP) and main problems of mixed-integer linear programming (MILP), gradually refining the solution space towards the optimal solution. The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials, and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, where a new topology, materials, and standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution becomes better than the best NLP solution; otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality in the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as mass optimization of steel buildings, cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insights into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.
Keywords: MINLP, mixed-integer non-linear programming, optimization, structures
Procedia PDF Downloads 48
1978 Design, Analysis and Obstacle Avoidance Control of an Electric Wheelchair with Sit-Sleep-Seat Elevation Functions
Authors: Waleed Ahmed, Huang Xiaohua, Wilayat Ali
Abstract:
Wheelchair users are generally exposed to physical and psychological health problems, e.g., pressure sores and pain in the hip joint, associated with seating posture or being inactive in a wheelchair for a long time. A reclining wheelchair with back, thigh, and leg adjustment helps in daily life activities and health preservation. The seat-elevating function of an electric wheelchair allows the user (e.g., with lower-limb amputation) to reach different heights. An electric wheelchair is expected to ease the lives of elderly and disabled people by giving them mobility support and decreasing the percentage of accidents caused by users’ narrow sight or joystick operation errors. Thus, this paper proposes the design, analysis, and obstacle avoidance control of an electric wheelchair with sit-sleep-seat elevation functions. A 3D model of the wheelchair was designed in SolidWorks and later used for multi-body dynamics (MBD) analysis and to verify the driving control system. The control system uses a fuzzy algorithm to avoid obstacles, taking distance information from the ultrasonic sensor and the user-specified direction from the joystick operation. The proposed fuzzy driving control system focuses on the direction and velocity of the wheelchair. The wheelchair model has been examined and proven in MSC Adams (Automated Dynamic Analysis of Mechanical Systems). The designed fuzzy control algorithm is implemented on the Gazebo robotic 3D simulator using the Robot Operating System (ROS) middleware. The proposed wheelchair design enhances mobility and quality of life by improving the user’s functional capabilities. Simulation results verify the collision-free behavior of the electric wheelchair.
Keywords: fuzzy logic control, joystick, multi body dynamics, obstacle avoidance, scissor mechanism, sensor
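A toy fuzzy-logic sketch of the distance-to-speed mapping; the triangular membership break-points and rule outputs are invented for illustration, not the paper's calibrated values:

```python
# Triangular memberships on the ultrasonic distance, weighted-average defuzzification.
def tri(x, a, b, c):
    # triangular membership function peaking at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(distance_cm):
    near = tri(distance_cm, 0, 20, 60)
    medium = tri(distance_cm, 30, 80, 130)
    far = tri(distance_cm, 100, 200, 300)
    # rule base: near -> slow (0.1), medium -> cruise (0.5), far -> full speed (1.0)
    num = near * 0.1 + medium * 0.5 + far * 1.0
    den = near + medium + far
    return num / den if den else 0.0

print(fuzzy_speed(45.0))   # scaled speed command for the wheelchair motors
```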
Procedia PDF Downloads 129
1977 A Benchmark System for Testing Medium Voltage Direct Current (MVDC-CB) Robustness Utilizing Real Time Digital Simulation and Hardware-In-Loop Theory
Authors: Ali Kadivar, Kaveh Niayesh
Abstract:
The integration of green energy resources is a major focus, and the role of Medium Voltage Direct Current (MVDC) systems is expanding rapidly. However, the protection of MVDC systems against DC faults is a challenge that can have consequences for reliable and safe grid operation. This challenge reveals the need for MVDC circuit breakers (MVDC CBs), which are still in the infancy of their development. There is therefore a lack of MVDC CB standards, including thresholds for acceptable power losses and operation speed. To establish a baseline for comparison purposes, a benchmark system for testing future MVDC CBs is vital. The literature gives only the timing sequence of each switch, with emphasis on the topology and without in-depth study of the DCCB control algorithm, as the circuit breaker control system is not yet systematic. A digital testing benchmark is designed for proof-of-concept simulation studies using software models. It can validate studies based on real-time digital simulators and Transient Network Analyzer (TNA) models. The proposed experimental setup utilizes data acquisition from accurate sensors installed on the tested MVDC CB and through general-purpose inputs/outputs (GPIO) from the microcontroller and PC. Prototype studies on laboratory-based models utilizing Hardware-in-the-Loop (HIL) equipment connected to real-time digital simulators are achieved. The improved control algorithm of the circuit breaker can reduce the peak fault current and avoid arc re-ignition, helping the coordination of DCCBs in relay protection. Moreover, several research gaps are identified regarding case studies and evaluation approaches.
Keywords: DC circuit breaker, hardware-in-the-loop, real time digital simulation, testing benchmark
Procedia PDF Downloads 81
1976 An Evidence-Based Laboratory Medicine (EBLM) Test to Help Doctors in the Assessment of the Pancreatic Endocrine Function
Authors: Sergio J. Calleja, Adria Roca, José D. Santotoribio
Abstract:
Pancreatic endocrine diseases include pathologies such as insulin resistance (IR), prediabetes, and type 2 diabetes mellitus (DM2). Some of them are highly prevalent in the U.S.: 40% of U.S. adults have IR, 38% have prediabetes, and 12% have DM2, as reported by the National Center for Biotechnology Information (NCBI). Building upon this imperative, the objective of the present study was to develop a non-invasive test for the assessment of the patient's pancreatic endocrine function and to evaluate its accuracy in detecting various pancreatic endocrine diseases, such as IR, prediabetes, and DM2. This approach to a routine blood and urine test is based on serum and urine biomarkers and is built from the combination of several independent public algorithms, such as the Adult Treatment Panel III (ATP-III), the triglycerides and glucose (TyG) index, homeostasis model assessment-insulin resistance (HOMA-IR), HOMA-2, and the quantitative insulin-sensitivity check index (QUICKI). Additionally, it incorporates essential measurements such as creatinine clearance, estimated glomerular filtration rate (eGFR), urine albumin-to-creatinine ratio (ACR), and urinalysis, which help to form a full picture of the patient's pancreatic endocrine disease. To evaluate the estimated accuracy of this test, an iterative process was performed by a machine learning (ML) algorithm with a training set of 9,391 patients. The sensitivity achieved was 97.98%, and the specificity was 99.13%. Consequently, the area under the receiver operating characteristic (AUROC) curve, the positive predictive value (PPV), and the negative predictive value (NPV) were 92.48%, 99.12%, and 98.00%, respectively. The algorithm was validated with a randomized controlled trial (RCT) with a target sample size (n) of 314 patients. However, 50 patients were initially excluded from the study because they had ongoing clinically diagnosed pathologies, symptoms, or signs, so the n dropped to 264 patients. Then, 110 patients were excluded because they did not show up at the clinical facility for any of the follow-up visits (a critical point to improve for the upcoming RCT, since the cost per patient is very high and almost a third of the patients already tested were lost), so the new n consisted of 154 patients. After that, 2 patients were excluded because some of their laboratory parameters and/or clinical information were wrong or incorrect. Thus, a final n of 152 patients was achieved. In this validation set, the results obtained were: 100.00% sensitivity, 100.00% specificity, 100.00% AUROC, 100.00% PPV, and 100.00% NPV. These results suggest that this approach to a routine blood and urine test holds promise in providing timely and accurate diagnoses of pancreatic endocrine diseases, particularly among individuals aged 40 and above. Given the current epidemiological state of these types of diseases, these findings underscore the significance of early detection. Furthermore, they advocate for further exploration, prompting the intention to conduct a clinical trial involving 26,000 participants (from March 2025 to December 2026).
Keywords: algorithm, diabetes, laboratory medicine, non-invasive
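The public index formulas named above, in their commonly published forms (fasting glucose in mg/dL, insulin in µU/mL, triglycerides in mg/dL); the proprietary combination logic of the test itself is not shown:

```python
# Standard published forms of the HOMA-IR, TyG, and QUICKI indices.
import math

def homa_ir(glucose_mg_dl, insulin_uU_ml):
    return glucose_mg_dl * insulin_uU_ml / 405.0

def tyg_index(triglycerides_mg_dl, glucose_mg_dl):
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2.0)

def quicki(glucose_mg_dl, insulin_uU_ml):
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

print(homa_ir(100, 10), tyg_index(150, 100), quicki(100, 10))
```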
Procedia PDF Downloads 36
1975 A Trends Analysis of Yacht Simulator
Authors: Jae-Neung Lee, Keun-Chang Kwak
Abstract:
This paper describes an analysis of international trends in yacht simulators and also provides background on yachts. Examples of processing used in yacht simulators include image processing for counting the total number of vehicles, edge/target detection, detection and evasion algorithms, image matching using SIFT (scale-invariant feature transform), and application of median filtering and thresholding.
Keywords: yacht simulator, simulator, trends analysis, SIFT
Procedia PDF Downloads 434
1974 A Framework of Dynamic Rule Selection Method for Dynamic Flexible Job Shop Problem by Reinforcement Learning Method
Authors: Rui Wu
Abstract:
In the volatile modern manufacturing environment, new orders occur randomly at any time, and pre-emptive methods are infeasible. This calls for a real-time scheduling method that can produce a reasonably good schedule quickly. The dynamic Flexible Job Shop problem is an NP-hard scheduling problem that hybridizes the dynamic Job Shop problem with the Parallel Machine problem. A Flexible Job Shop contains different work centres, and each work centre contains parallel machines that can process certain operations. Many algorithms, such as genetic algorithms or simulated annealing, have been proposed to solve static Flexible Job Shop problems. However, the time efficiency of these methods is low, and they are not feasible for a dynamic scheduling problem. Therefore, a dynamic rule-selection scheduling system based on reinforcement learning is proposed in this research, in which the dynamic Flexible Job Shop problem is divided into several parallel machine problems to decrease its complexity. Firstly, features of jobs, machines, work centres, and the flexible job shop are selected to describe the status of the dynamic Flexible Job Shop problem at each decision point in each work centre. Secondly, a reinforcement learning framework using a double-layer deep Q-learning network is applied to select proper composite dispatching rules based on the status of each work centre. Then, based on the selected composite dispatching rule, an available operation is selected from the waiting buffer and assigned to an available machine in each work centre. Finally, the proposed algorithm is compared with well-known dispatching rules on the objectives of mean tardiness, mean flow time, mean waiting time, and mean percentage of waiting time in the real-time Flexible Job Shop problem. The simulation results show that the proposed framework has reasonable performance and time efficiency.
Keywords: dynamic scheduling problem, flexible job shop, dispatching rules, deep reinforcement learning
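The rule-selection step can be sketched with a small Q-network that maps the work-centre state to Q-values over candidate composite dispatching rules, picked epsilon-greedily at each decision point; the feature size and rule list are illustrative assumptions:

```python
# Epsilon-greedy dispatching-rule selection from a Q-network over the state vector.
import random
import torch
import torch.nn as nn

RULES = ["SPT", "EDD", "FIFO", "CR"]   # candidate composite dispatching rules

q_net = nn.Sequential(nn.Linear(12, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, len(RULES)))

def select_rule(state, epsilon=0.1):
    # state: 12-dim feature vector of jobs/machines/work centre at the decision point
    if random.random() < epsilon:
        return random.randrange(len(RULES))           # explore
    with torch.no_grad():
        q = q_net(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax())                        # exploit

rule = RULES[select_rule([0.3] * 12)]
```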
Procedia PDF Downloads 109
1973 Improving Lane Detection for Autonomous Vehicles Using Deep Transfer Learning
Authors: Richard O’Riordan, Saritha Unnikrishnan
Abstract:
Autonomous Vehicles (AVs) are incorporating an increasing number of ADAS features, including automated lane-keeping systems. In recent years, many research papers on lane detection algorithms have been published, ranging from computer vision techniques to deep learning methods. The transition from the lower levels of autonomy defined in the SAE framework to higher autonomy levels requires increasingly complex models and algorithms that must be highly reliable in their operation and functionality. Furthermore, these algorithms have no room for error when operating at high levels of autonomy. The current research details existing computer vision and deep learning algorithms, their methodologies, and individual results, as well as the challenges they face, the resources needed to operate them, and the shortcomings they experience when detecting lanes in certain weather and lighting conditions. This paper explores these shortcomings and attempts to implement a lane detection algorithm that could be used to improve AV lane detection systems. It uses a pre-trained LaneNet model to classify lane and non-lane pixels via binary segmentation, first on the existing BDD100K dataset and then on a custom dataset generated locally. The first selected roads are modern, well-laid roads with up-to-date infrastructure and lane markings, while the second road network is an older one, with infrastructure and lane markings reflecting its age. The performance of the proposed method is evaluated on the custom dataset and compared to its performance on the BDD100K dataset. In summary, this paper uses transfer learning to provide a fast and robust lane detection algorithm that can handle various road conditions and provide accurate lane detection.
Keywords: ADAS, autonomous vehicles, deep learning, LaneNet, lane detection
Procedia PDF Downloads 107
1972 Investigating Spatial Disparities in Health Status and Access to Health-Related Interventions among Tribals in Jharkhand
Authors: Parul Suraia, Harshit Sosan Lakra
Abstract:
Indigenous communities represent some of the most marginalized populations globally, with those in India, labeled as tribals, experiencing particularly pronounced marginalization and a concerning decline in their numbers. These communities often inhabit geographically challenging regions characterized by low population densities, posing significant challenges to providing essential infrastructure services. Jharkhand, a Schedule 5 state, is known for its poor health status owing to disparities in access to health care. The primary objective of this study is to investigate the spatial inequalities in healthcare accessibility among tribal populations within the state and pinpoint critical areas requiring immediate attention. Health indicators were selected based on the tribal perspective and the association of Sustainable Development Goal 3 (Good Health and Wellbeing) with other SDGs. Focus group discussions with tribal people and tribal experts were conducted to finalize the indicators. Employing Principal Component Analysis, two essential indices were constructed: the Tribal Health Index (THI) and the Tribal Health Intervention Index (THII). Index values were calculated based on district-wise secondary data for Jharkhand. The bivariate spatial association technique, Moran's I, was used to assess the spatial pattern of the variables and determine whether there is clustering (positive spatial autocorrelation) or dispersion (negative spatial autocorrelation) of values across Jharkhand. The results help in targeting policy interventions to deprived areas of Jharkhand.
Keywords: tribal health, health spatial disparities, health status, Jharkhand
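Global Moran's I, the spatial-autocorrelation statistic used above, has a compact standard form; the weights and values below are a toy example:

```python
# Global Moran's I: (n / sum(W)) * sum_ij w_ij z_i z_j / sum_i z_i^2, z_i = x_i - mean(x).
import numpy as np

def morans_i(x, W):
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    num = (W * np.outer(z, z)).sum()
    return len(x) / W.sum() * num / (z ** 2).sum()

# toy example: 3 districts with rook-style contiguity weights
W = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(morans_i([0.2, 0.8, 0.3], W))   # negative -> dispersion, positive -> clustering
```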
Procedia PDF Downloads 97
1971 Performance Study of Classification Algorithms for Consumer Online Shopping Attitudes and Behavior Using Data Mining
Authors: Rana Alaa El-Deen Ahmed, M. Elemam Shehab, Shereen Morsy, Nermeen Mekawie
Abstract:
With the growing popularity and acceptance of e-commerce platforms, users face an ever-increasing burden in choosing the right product from the large number of online offers. Thus, techniques for personalization and shopping guides are needed. For a pleasant and successful shopping experience, users need to know easily which products to buy with high confidence. Since selling a wide variety of products has become easier due to the popularity of online stores, online retailers are able to sell more products than a physical store. The disadvantage is that customers might not find the products they need. In this research, customers are able to find the products they are searching for because recommender systems, as used on some e-commerce websites, are employed. A recommender system learns from information about customers and products and provides appropriate personalized recommendations to help customers find the needed product. In this paper, eleven classification algorithms are comparatively tested to find the best classifier for consumer online shopping attitudes and behavior in the experimental dataset. The WEKA knowledge analysis tool, an open-source data mining workbench used for comparing conventional classifiers, was employed in this research. Using WEKA with the tested classifiers, the results show that Decision Table and Filtered Classifier give the highest accuracy, while Classification Via Clustering and Simple CART give the lowest.
Keywords: classification, data mining, machine learning, online shopping, WEKA
Procedia PDF Downloads 352
1970 A Robust Spatial Feature Extraction Method for Facial Expression Recognition
Authors: H. G. C. P. Dinesh, G. Tharshini, M. P. B. Ekanayake, G. M. R. I. Godaliyadda
Abstract:
This paper presents a new spatial feature extraction method based on principal component analysis (PCA) and Fisher Discriminant Analysis (FDA) for facial expression recognition. It not only extracts reliable features for classification but also reduces the feature space dimensions of pattern samples. In this method, each gray-scale image is first considered in its entirety as the measurement matrix. Then, the principal components (PCs) of the row vectors of this matrix and the variance of these row vectors along the PCs are estimated. This ensures the preservation of the spatial information of the facial image. Afterwards, by incorporating the spectral information of the eigen-filters derived from the PCs, a feature vector is constructed for a given image. Finally, FDA is used to define a set of basis vectors in a reduced-dimension subspace such that optimal clustering is achieved. FDA defines an inter-class scatter matrix and an intra-class scatter matrix to enhance the compactness of each cluster while maximizing the distance between cluster marginal points. In order to match the test image with the training set, a cosine-similarity-based Bayesian classification was used. The proposed method was tested on the Cohn-Kanade database and the JAFFE database. It was observed that the proposed method, which incorporates spatial information to construct an optimal feature space, outperforms standard PCA- and FDA-based methods.
Keywords: facial expression recognition, principal component analysis (PCA), fisher discriminant analysis (FDA), eigen-filter, cosine similarity, bayesian classifier, f-measure
Procedia PDF Downloads 426
1969 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices
Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese
Abstract:
Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers’ acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are gaining a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular, and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted to gray-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed with the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular, and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular, and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of coefficient of determination (R²), hypothesis testing, and pattern of residuals. Good regression models were obtained, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement procedure achieved a good fat segmentation, making the visual quantification of the different fat fractions in dry-cured ham slices simple, accurate, and precise. The presented image analysis approach steers towards the development of instruments that can replace destructive, tedious, and time-consuming chemical determinations. As a future perspective, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. The system will then be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis
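As a rough illustration of the image-analysis idea, the sketch below runs a Canny edge detector on a gray-scale slice image, fills the detected contours into a mask, and regresses the image-derived fat percentage against chemical values. The file name and calibration numbers are assumed, and the single-mask segmentation is a simplification of the paper's multi-stage pipeline.

```python
# Hedged sketch: Canny edges -> filled mask -> area percentage -> linear fit
# against Soxhlet data. File name and toy calibration values are assumptions.
import numpy as np
from skimage import io, color, feature
from scipy import ndimage
from sklearn.linear_model import LinearRegression

def fat_percentage(path):
    gray = color.rgb2gray(io.imread(path))   # RGB scan -> gray scale
    edges = feature.canny(gray, sigma=2.0)   # Canny: smoothing, gradient,
                                             # non-max suppression, hysteresis
    mask = ndimage.binary_fill_holes(edges)  # close edge contours into regions
    return 100.0 * mask.sum() / mask.size    # region area as % of image area

# Toy calibration: image-derived percentages vs. chemical (Soxhlet) fat content.
img_pct = np.array([[12.1], [18.4], [9.7], [15.2]])  # assumed values
soxhlet = np.array([11.5, 19.0, 10.2, 14.8])
model = LinearRegression().fit(img_pct, soxhlet)
print("R^2 =", model.score(img_pct, soxhlet))
```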
Procedia PDF Downloads 177
1968 RNA-Seq Based Transcriptomic Analysis of Wheat Cultivars for Unveiling of Genomic Variations and Isolation of Drought Tolerant Genes for Genome Editing
Authors: Ghulam Muhammad Ali
Abstract:
The unveiling of genes involved in drought tolerance and root architecture through transcriptomic analyses has remained fragmented, limiting further improvement of wheat through genome editing. The purpose of this research was to unveil, through RNA-seq data analysis, the variations in genes implicated in drought tolerance and root architecture in wheat. In this study, 8-day-old seedlings of six wheat cultivars, namely Batis, Blue Silver, Local White, UZ888, Chakwal 50, and synthetic wheat S22, were subjected to transcriptomic analysis of root and shoot genes. A total of 12 RNA samples were sequenced by Illumina. Using updated wheat transcripts from the Ensembl and IWGC references with 54,175 gene models, we found that 49,621 of the 54,175 genes (91.5%) are expressed at an RPKM of 0.1 or more in at least one sample. The number of genes expressed was higher in Local White than in Batis, and differentially expressed genes (DEGs) were most numerous in Chakwal 50. Expression-based clustering indicated a conserved function of DRO1 and RPK1 between Arabidopsis and wheat. A dendrogram showed that Local White is sister to Chakwal 50, while Batis is closely related to Blue Silver. This study reveals transcriptomic sequence variations across cultivars, including mutations in drought-associated genes that may directly contribute to drought tolerance. The DRO1 and RPK1 genes were isolated for genome editing and are being edited in wheat through CRISPR-Cas9 for yield enhancement.
Keywords: transcriptomic, wheat, genome editing, drought, CRISPR-Cas9, yield enhancement
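The RPKM-based expression filter described above (a gene counts as expressed if its RPKM reaches 0.1 in at least one of the 12 samples) can be stated compactly; the sketch below uses random placeholder counts and gene lengths, not the study's data.

```python
# Minimal sketch of the RPKM expression filter: a gene is "expressed" if
# RPKM >= 0.1 in at least one sample. Counts and lengths are placeholders.
import numpy as np

rng = np.random.default_rng(1)
counts = rng.integers(0, 500, size=(54175, 12))    # genes x 12 RNA samples
gene_len_kb = rng.uniform(0.5, 10.0, size=54175)   # transcript lengths (kb)
lib_size_millions = counts.sum(axis=0) / 1e6       # reads per sample (millions)

# RPKM = reads / (library size in millions * gene length in kb)
rpkm = counts / (lib_size_millions[None, :] * gene_len_kb[:, None])

expressed = (rpkm >= 0.1).any(axis=1)
print(f"{expressed.sum()} of {len(expressed)} genes expressed (RPKM >= 0.1)")
```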
Procedia PDF Downloads 147
1967 Molecular Comparison of HEV Isolates from Sewage & Humans at Western India
Authors: Nidhi S. Chandra, Veena Agrawal, Debprasad Chattopadhyay
Abstract:
Background: Hepatitis E virus (HEV) is a major cause of acute viral hepatitis in developing countries. It spreads feco-orally, mainly through contamination of drinking water by sewage. There are limited data on the genotypic comparison of HEV isolates from sewage water and humans. The aim of this study was to identify the genotype and conduct a phylogenetic analysis of HEV isolates from sewage water and humans. Materials and Methods: 14 sewage water samples and 60 serum samples from acute sporadic hepatitis E cases (negative for hepatitis A, B, and C) were tested for HEV RNA by reverse transcription nested polymerase chain reaction (RT-nPCR) using primers designed within the RdRp (RNA-dependent RNA polymerase) region of open reading frame 1 (ORF-1). Sequencing was performed on an ABI Prism 310. The sequences (343 nucleotides) were compared with each other and aligned, using the Clustal W software, with previously reported HEV sequences obtained from GenBank. A phylogenetic tree was constructed using PHYLIP version 3.67. Results: HEV RNA was detected in 49/60 (81.67%) serum and 5/14 (35.71%) sewage samples. The sequences obtained from 17 serum and 2 sewage specimens belonged to genotype I, showing 85% similarity and clustering with previously reported human HEV sequences from India. HEV isolates from humans and sewage in North West India are thus genetically closely related to each other. Conclusion: These findings suggest that sewage acts as a reservoir of HEV. It is therefore important that measures be taken for proper waste disposal and treatment of drinking water to prevent HEV outbreaks and epidemics.
Keywords: hepatitis E virus, nested polymerase chain reaction, open reading frame-1, nucleotides
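A hedged sketch of the sequence-comparison step follows. The study used Clustal W for alignment and PHYLIP for tree building; here Biopython's identity-distance neighbor-joining stands in, and the alignment file name is an assumption.

```python
# Hedged sketch of the phylogenetic comparison: build a distance-based tree
# from an existing multiple alignment of the 343-nt ORF-1 fragments.
# Biopython's neighbor-joining is a stand-in for PHYLIP; the Clustal W
# alignment file name below is assumed.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("hev_orf1_343nt.aln", "clustal")    # Clustal W output
dm = DistanceCalculator("identity").get_distance(alignment)  # pairwise identity
tree = DistanceTreeConstructor().nj(dm)                      # neighbor joining
Phylo.draw_ascii(tree)                                       # text dendrogram
```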
Procedia PDF Downloads 379