Search results for: deep Q networks

2826 A Comparative Assessment of Some Algorithms for Modeling and Forecasting Horizontal Displacement of Ialy Dam, Vietnam

Authors: Kien-Trinh Thi Bui, Cuong Manh Nguyen

Abstract:

In order to simulate and reproduce the operational characteristics of a dam visually, it is necessary to capture the displacement at different measurement points and analyze the observed movement data promptly in order to forecast dam safety. The accuracy of forecasts can be further improved by applying machine learning methods to the data analysis process. In this study, the horizontal displacement monitoring data of the Ialy hydroelectric dam (Vietnam) were analyzed with three machine learning algorithms: Gaussian processes (GP), multi-layer perceptron (MLP) neural networks, and the M5-Rules algorithm. The database used in this research was built by collecting time series data from 2006 to 2021 and was divided into two parts: a training dataset and a validation dataset. The final results show that all three algorithms perform well in both training and validation, with the MLP being the best model. Their usability is further investigated by comparison with a benchmark model created by multi-linear regression. The results show that the performance obtained from the GP, MLP, and M5-Rules models is much better; these three models should therefore be used to analyze and predict the horizontal displacement of the dam.
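
As an illustration of the comparison described, the sketch below fits a Gaussian process, an MLP, and a multi-linear baseline to a synthetic displacement series with scikit-learn. The features, data split, and hyperparameters are invented stand-ins for the paper's monitoring records, and the Weka-style M5-Rules learner is omitted because scikit-learn has no equivalent.

```python
# Hypothetical sketch: comparing GP and MLP regressors against a multi-linear
# baseline on a synthetic displacement series (not the paper's data or code).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Synthetic stand-in for 2006-2021 monitoring data: predictors could be
# reservoir level, temperature, and time; target is horizontal displacement.
X = rng.uniform(0, 1, size=(600, 3))
y = 5 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.5 * X[:, 2] + rng.normal(0, 0.1, 600)

# Chronological split into training and validation sets, as in the study.
X_train, X_val, y_train, y_val = X[:450], X[450:], y[:450], y[450:]

models = {
    "GP": GaussianProcessRegressor(),
    "MLP": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    "MLR baseline": LinearRegression(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: validation R^2 = {r2_score(y_val, model.predict(X_val)):.3f}")
```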

Keywords: Gaussian processes, horizontal displacement, hydropower dam, Ialy dam, M5-Rules, multi-layer perceptron neural networks

Procedia PDF Downloads 190
2825 Optrix: Energy-Aware Cross-Layer Routing Using Convex Optimization in Wireless Sensor Networks

Authors: Ali Shareef, Aliha Shareef, Yifeng Zhu

Abstract:

Energy minimization is of great importance in wireless sensor networks (WSNs) for extending battery lifetime. One of the key activities of nodes in a WSN is communication and the routing of their data to a centralized base station or sink. Routing along the shortest path to the sink is not the best solution, since it causes the nodes on this path to fail prematurely. We propose a cross-layer energy-efficient routing protocol, Optrix, that utilizes a convex formulation to maximize the lifetime of the network as a whole. We further propose Optrix-BW, a novel convex formulation with a bandwidth constraint that allows channel conditions to be accounted for in routing. By considering this key channel parameter, we demonstrate that Optrix-BW is capable of congestion control. Optrix is implemented in TinyOS, and we demonstrate that a relatively large topology of 40 nodes can converge to within 91% of the optimal routing solution. We describe the pitfalls and issues related to applying a continuous technique such as convex optimization to the discrete, packet-based communication systems found in WSNs, and we propose a routing controller mechanism that performs this transformation. We compare Optrix against the Collection Tree Protocol (CTP) and find that Optrix performs better than CTP in terms of convergence to an optimal routing solution, load balancing, and network lifetime maximization.
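
The lifetime-maximization idea lends itself to a compact convex program. Below is a minimal sketch in cvxpy, not the authors' Optrix code: node energies, generation rates, and per-unit radio costs are made-up assumptions, and the variables represent the total traffic routed over each link during the network's lifetime.

```python
# Illustrative convex (linear) formulation of WSN lifetime maximization.
import cvxpy as cp
import numpy as np

n = 4                                  # sensor nodes 0..3; index 4 is the sink
nodes, sink = range(n), n
E = np.array([1.0, 1.0, 1.0, 1.0])     # initial battery energy per node (assumed)
gen = np.array([1.0, 1.0, 1.0, 1.0])   # data generation rate per node (assumed)
e_tx, e_rx = 2e-3, 1e-3                # per-unit transmit/receive energy (assumed)

f = cp.Variable((n + 1, n + 1), nonneg=True)  # total traffic on each link
T = cp.Variable(nonneg=True)                  # network lifetime

constraints = [cp.diag(f) == 0]
for i in nodes:
    out_i, in_i = cp.sum(f[i, :]), cp.sum(f[:, i])
    constraints += [
        out_i - in_i == T * gen[i],            # flow conservation over lifetime
        e_tx * out_i + e_rx * in_i <= E[i],    # energy budget per node
    ]
constraints += [cp.sum(f[sink, :]) == 0]       # the sink only receives

prob = cp.Problem(cp.Maximize(T), constraints)
prob.solve()
print("max lifetime:", T.value)
```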

Keywords: wireless sensor network, energy efficient routing

Procedia PDF Downloads 377
2824 Aggregation Scheduling Algorithms in Wireless Sensor Networks

Authors: Min Kyung An

Abstract:

In Wireless Sensor Networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data towards a designated destination, called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption, due to the nodes' limited energy, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute a minimum-latency schedule, that is, a schedule with the minimum number of timeslots, such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For this problem, two interference models, the graph model and the more realistic physical interference model known as Signal-to-Interference-plus-Noise Ratio (SINR), have been adopted with different power models (uniform power and non-uniform power, with or without power control) and different antenna models (omni-directional and directional). In this survey article, as the problem has been proven NP-hard, we present and compare several state-of-the-art approximation algorithms in the various models, using latency as the performance measure.
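
To make the scheduling objective concrete, here is a toy greedy scheduler under the graph interference model; it illustrates the problem setting, not one of the surveyed approximation algorithms. The topology, the BFS aggregation tree, and the collision rule (a transmission fails if another sender in the same slot is adjacent to the receiver) are all assumptions.

```python
# Toy aggregation scheduling on a BFS tree: pack non-conflicting
# child->parent transmissions into timeslots, deepest nodes first.
from collections import deque

adj = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5], 3: [1], 4: [1], 5: [2]}
sink = 0

# Build a BFS aggregation tree rooted at the sink.
parent, seen, order, q = {}, {sink}, [], deque([sink])
while q:
    u = q.popleft()
    order.append(u)
    for v in adj[u]:
        if v not in seen:
            seen.add(v)
            parent[v] = u
            q.append(v)

pending = [v for v in reversed(order) if v != sink]  # leaves first
done, slots = set(), []
while pending:
    slot, receivers = [], set()
    for v in pending:
        children_done = all(c in done for c in adj[v] if parent.get(c) == v)
        collision = (parent[v] in receivers
                     or any(r in adj[v] for r in receivers)
                     or any(parent[v] in adj[s] for s in slot))
        if children_done and not collision:
            slot.append(v)
            receivers.add(parent[v])
    for v in slot:
        done.add(v)
        pending.remove(v)
    slots.append(slot)

print("senders per timeslot:", slots)  # latency = number of slots
```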

Keywords: data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional

Procedia PDF Downloads 215
2823 Hybrid Localization Schemes for Wireless Sensor Networks

Authors: Fatima Babar, Majid I. Khan, Malik Najmus Saqib, Muhammad Tahir

Abstract:

This article provides range-based improvements over a well-known single-hop range-free localization scheme, Approximate Point in Triangulation (APIT), by proposing an energy-efficient, Barycentric-coordinate-based Point-In-Triangulation (PIT) test along with PIT-based trilateration. These improvements result in energy efficiency, reduced localization error, and improved localization coverage compared to APIT and its variants. Moreover, we propose embedding Received Signal Strength Indication (RSSI)-based distance estimation in DV-Hop, which is a multi-hop localization scheme. The proposed localization algorithm achieves energy efficiency and reduced localization error compared to DV-Hop and its available improvements. Furthermore, a hybrid multi-hop localization scheme is proposed that utilizes the Barycentric-coordinate-based PIT test and both range-based (received signal strength) and range-free (hop count) techniques for distance estimation. Our experimental results provide evidence that the proposed hybrid multi-hop localization scheme yields a two- to five-fold reduction in localization error compared to DV-Hop and its variants, at reduced energy requirements.
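
At the heart of the scheme is the barycentric point-in-triangle test. A minimal sketch of the standard barycentric formulation follows; the triangle and query points in the usage lines are illustrative.

```python
# Barycentric point-in-triangle (PIT) test: p is inside triangle (a, b, c)
# iff all three barycentric coordinates are non-negative.
def in_triangle(p, a, b, c):
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    if det == 0:
        return False  # degenerate triangle
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    l3 = 1.0 - l1 - l2
    return l1 >= 0 and l2 >= 0 and l3 >= 0

print(in_triangle((1, 1), (0, 0), (4, 0), (0, 4)))  # True: inside
print(in_triangle((5, 5), (0, 0), (4, 0), (0, 4)))  # False: outside
```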

Keywords: localization, trilateration, triangulation, wireless sensor networks

Procedia PDF Downloads 456
2822 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases

Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar

Abstract:

The farming community in India, as well as in other parts of the world, is highly stressed due to factors such as increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue, in some cases leading to farmer suicides. The lack of an integrated farm advisory system in India adds to the farmers' problems. Farmers need the right information during the early stages of the crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases using images taken by farmers with their smartphones. The research work leads to a smart assistant, built on analytics and big data, which can help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet classification using convolutional neural networks (CNN) has been successfully used to identify diseases at the individual plant level. Our model uses convolutional filters, max pooling, dense layers, and dropout (to avoid overfitting). Models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to adapt the weights learnt on the ImageNet dataset to crop diseases, which reduces the number of epochs needed for training. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift, and blurring are used to improve accuracy on images taken from farms. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated on the tomato crop which, in India, is affected by 10 different diseases; the model achieves an accuracy of more than 95% in correctly classifying them. The main contribution of our research is to create a personal assistant for farmers for managing plant disease; although the model was validated on tomato, it can easily be extended to other crops. Advances in computing technology and the availability of large datasets have made possible the success of deep learning applications in computer vision, natural language processing, image recognition, etc. With these robust models and huge smartphone penetration, the feasibility of implementation is high, resulting in timely advice to farmers, increasing farmers' income and reducing input costs.
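
A minimal Keras sketch of the pipeline described, transfer learning from ImageNet weights plus rotation/zoom/shift augmentation and dropout, is given below. The base network (MobileNetV2), layer sizes, and the ten-class head are assumptions for illustration; the paper does not publish its exact architecture.

```python
# Illustrative transfer-learning classifier for crop disease images.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # the paper reports 10 tomato diseases

augment = models.Sequential([          # rotation/zoom/shift augmentation
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
    layers.RandomTranslation(0.1, 0.1),
])

base = tf.keras.applications.MobileNetV2(  # assumed base; paper uses an ImageNet CNN
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                     # keep pretrained filters fixed

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    augment,
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                   # dropout to avoid overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```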

Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning

Procedia PDF Downloads 109
2821 Wind Power Forecasting Using Echo State Networks Optimized by Big Bang-Big Crunch Algorithm

Authors: Amir Hossein Hejazi, Nima Amjady

Abstract:

In recent years, due to environmental issues, traditional energy sources have increasingly been replaced by renewable ones. Wind energy, the fastest-growing renewable source, accounts for a considerable share of electricity markets. With this fast worldwide growth of wind energy, owners and operators of wind farms, transmission system operators, and energy traders need reliable and secure forecasts of wind energy production. In this paper, a new forecasting strategy is proposed for short-term wind power prediction based on Echo State Networks (ESN). The forecast engine utilizes a state-of-the-art training process, including a dynamical reservoir with a high capability to learn the complex dynamics of wind power or wind vector signals. The study is made more interesting by incorporating the prediction of wind direction into the forecast strategy. The Big Bang-Big Crunch (BB-BC) evolutionary optimization algorithm is adopted for adjusting the free parameters of the ESN-based forecaster. The proposed method is tested on real-world hourly data to show the efficiency of the forecasting engine for predicting both the wind vector and the wind power output of aggregated wind power production.
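
For readers unfamiliar with echo state networks, the sketch below implements a vanilla ESN with a ridge-regression readout on a toy one-step-ahead prediction task. The reservoir size, scaling, and signal are illustrative assumptions; the authors' exact design and the BB-BC tuning loop are not reproduced.

```python
# Minimal echo state network: fixed random reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(1)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)  # reservoir update (leak rate omitted)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy periodic "wind" signal.
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 150) + 0.05 * rng.normal(size=t.size)
u, y = signal[:-1].reshape(-1, 1), signal[1:]

X = run_reservoir(u)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)  # readout
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```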

Keywords: wind power forecasting, echo state network, big bang-big crunch, evolutionary optimization algorithm

Procedia PDF Downloads 556
2820 Expression Level of Dehydration-Responsive Element Binding/DREB Gene of Some Local Corn Cultivars from Kisar Island-Maluku Indonesia Using Quantitative Real-Time PCR

Authors: Hermalina Sinay, Estri L. Arumingtyas

Abstract:

The research objective was to determine the expression level of the dehydration-responsive element binding (DREB) gene in local corn cultivars from Kisar Island, Maluku. The study design was a randomized block design with a single factor consisting of six local corn cultivars obtained from farmers on Kisar Island and one reference variety, which has been released by the government as a drought-tolerant variety and was obtained from the Cereal Crops Research Institute (ICERI), Maros, South Sulawesi. Leaf samples were taken from the second leaf after the flag leaf at 65 days after planting. Isolation of total RNA from the leaf samples was carried out according to the protocol of the R & A-BlueTM Total RNA Extraction Kit, and the RNA was used as a template for cDNA synthesis. cDNA synthesis from total RNA followed the protocol of the One-Step Reverse Transcriptase PCR Premix Kit. Real-time PCR was performed on the reverse-transcribed cDNA following the procedures of the Real MODTM Green Real-Time PCR Master Mix Kit. Data obtained from the real-time PCR were analyzed using the relative quantification method based on the critical point/cycle threshold (CP/CT). The analysis of DREB gene expression showed that the highest expression level was obtained in the Deep Yellow local corn cultivar and the lowest in the Rubby Brown Cob cultivar. It can be concluded that the expression level of the DREB gene in the Deep Yellow local corn cultivar was higher than in the other local corn cultivars and in the Srikandi reference variety.
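
As background, relative quantification from Ct values is commonly computed with the 2^-ddCt method; a worked sketch follows. The Ct values are invented for illustration, not taken from the study, and the paper's CP/CT-based method may differ in detail.

```python
# Worked example of relative quantification by the 2^-ddCt method.
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene versus a calibrator sample,
    normalized to a reference (housekeeping) gene."""
    d_ct_sample = ct_target - ct_ref              # normalize sample to reference gene
    d_ct_calibrator = ct_target_cal - ct_ref_cal  # same for the calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical Ct values: DREB and a housekeeping gene in a cultivar,
# against the Srikandi reference variety as calibrator.
print(relative_expression(24.1, 18.0, 26.5, 18.2))  # ~4.6-fold higher
```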

Keywords: expression, level, DREB gene, local corn cultivars, Kisar Island, Maluku

Procedia PDF Downloads 288
2819 A Constructed Wetland as a Reliable Method for Grey Wastewater Treatment in Rwanda

Authors: Hussein Bizimana, Osman Sönmez

Abstract:

Constructed wetlands are currently among the most widely recognized wastewater treatment options, especially in developing countries, where they have the potential to improve water quality and create valuable wildlife habitat, with relatively simple operation and maintenance requirements and low cost. The lack of grey wastewater treatment facilities at the Kigali Institute of Science and Technology in Rwanda causes pollution in the surrounding localities of Rugunga sector, where poor sanitation is already a problem. In order to treat the grey water produced at the Kigali Institute of Science and Technology, with its high BOD concentration, high nutrient concentration, and high alkalinity, a pilot-scale horizontal sub-surface flow constructed wetland was designed for operation at the institute. The study was carried out with a sedimentation tank of 5.5 m x 1.42 m x 1.2 m deep and a horizontal sub-surface constructed wetland of 4.5 m x 2.5 m x 1.42 m deep. A grey wastewater flow of 2.5 m3/d passes through the vegetated wetland and sandy pilot plant. The filter media consisted of 0.6 to 2 mm coarse sand with a hydraulic conductivity of 3.472 x 10^-5 m/s, and cattails (Typha latifolia spp.) were used as the plant species. The effluent flow rate of the plant is designed to be 1.5 m3/day, and the retention time will be 24 hrs. BOD, COD, and TSS removals of 72% to 79% are estimated, while nutrient (nitrogen and phosphate) removal is estimated to be in the range of 34% to 53%. Every effluent characteristic is expected to meet the Rwanda Utility Regulatory Agency guidelines, primarily because the allowed retention time is sufficient to reduce the contaminants in the raw wastewater. A treated-water reuse system was developed so that the water can be used again for campus irrigation.

Keywords: constructed wetlands, hydraulic conductivity, grey waste water, cattails

Procedia PDF Downloads 591
2818 Modeling and Minimizing the Effects of Ferroresonance for Medium Voltage Transformers

Authors: Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee, Arian Amirnia, Atena Taheri, Mohammadreza Arabi, Mahmud Fotuhi-Firuzabad

Abstract:

Ferroresonance causes overvoltages in the medium voltage transformers and isolators used in electrical networks. Ferroresonance is a nonlinear effect that occurs between the network capacitance and the nonlinear inductance of the voltage transformer during saturation. This phenomenon is unwanted for transformers, since it causes overheating, introduces high dynamic forces in the primary coils, and raises the voltage across the primary coils of the voltage transformer; furthermore, it results in electrical and thermal failure of the transformer. The expansion of distribution lines, the design of transformers in smaller sizes, and the increase of harmonics in distribution networks all increase ferroresonance. There is limited literature available on mitigating the effects of ferroresonance; therefore, suppressing its effects in voltage transformers is of great importance. In this study, comprehensive modeling of a medium voltage block-type voltage transformer is performed. In addition, a new model is proposed to improve the performance of voltage transformers during the occurrence of ferroresonance by damping the oscillations. Transformer design optimization is also presented to show further improvements in the performance of the voltage transformer. The proposed model is experimentally tested and verified on a medium voltage transformer in the laboratory, and simulation results show a large reduction of the effects of ferroresonance.
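
To give a flavour of the modeling involved, the toy simulation below integrates the classic ferroresonant series circuit: a sinusoidal source driving a series capacitance and a saturating inductance with series resistive damping. All per-unit values and the polynomial saturation curve are invented for illustration and are unrelated to the transformer studied in the paper.

```python
# Toy per-unit simulation of a series ferroresonant circuit.
import numpy as np
from scipy.integrate import solve_ivp

w = 2 * np.pi * 50         # 50 Hz supply; electrical quantities in per-unit
E, R, XC = 1.1, 0.05, 2.0  # source amplitude, series resistance, capacitive reactance
a, b = 0.1, 0.3            # i = a*flux + b*flux**7 saturation curve (invented)

def rhs(t, y):
    flux, vc = y                        # normalized core flux and capacitor voltage
    i = a * flux + b * flux**7          # magnetizing current of the saturating core
    dflux = w * (E * np.cos(w * t) - vc - R * i)  # KVL around the loop
    dvc = w * XC * i                    # capacitor charging
    return [dflux, dvc]

sol = solve_ivp(rhs, (0.0, 0.5), [0.0, 0.0], max_step=2e-4)
print("peak flux (pu):", np.abs(sol.y[0]).max())
print("peak capacitor voltage (pu):", np.abs(sol.y[1]).max())
```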

Keywords: optimization, voltage transformer, ferroresonance, modeling, damper

Procedia PDF Downloads 80
2817 Reliable and Error-Free Transmission through Multimode Polymer Optical Fibers in House Networks

Authors: Tariq Ahamad, Mohammed S. Al-Kahtani, Taisir Eldos

Abstract:

Optical communications technology has made enormous and steady progress for several decades, providing the key resource in our increasingly information-driven society and economy. Much of this progress has been in finding innovative ways to increase the data-carrying capacity of a single optical fiber. In this research article, we explore basic issues of security and reliability for secure and reliable information transfer through the fiber infrastructure. Conspicuously, one potentially enormous source of improvement has, however, been left untapped in these systems: fibers can easily support hundreds of spatial modes, but today's commercial systems (single-mode or multi-mode) make no attempt to use these as parallel channels for independent signals. Bandwidth, performance, reliability, cost efficiency, resiliency, redundancy, and security are some of the demands placed on telecommunications today. Since its initial development, fiber optics has had the advantage over copper-based and wireless telecommunications solutions for most of these requirements. The largest obstacle preventing most businesses from implementing fiber optic systems was cost. With recent advancements in fiber optic technology and the ever-growing demand for more bandwidth, the cost of installing and maintaining fiber optic systems has been reduced dramatically, and fiber optic systems will therefore continue to replace copper-based communications. This will also lead to an increase in the expertise and technology needed for intruders to tap into fiber optic networks. As with all technologies before it, fiber optics is subject to hacking and criminal manipulation. Research into fiber optic security vulnerabilities suggests that not everyone responsible for a network's security is aware of the different methods that intruders use to hack, virtually undetected, into fiber optic cables. With millions of miles of fiber optic cables stretching across the globe and carrying information including, but certainly not limited to, government, military, and personal information such as medical records, banking information, driving records, and credit card information, being aware of fiber optic security vulnerabilities is essential and critical. Many articles and research efforts still suggest that fiber optics is expensive, impractical, and hard to tap; others argue that tapping is not only easily done but also inexpensive. This paper briefly discusses the history of fiber optics, explains the basics of fiber optic technologies, and then discusses the vulnerabilities in fiber optic systems and how they can be better protected. Knowing the security risks and the options available may save a company a great deal of embarrassment, time, and, most importantly, money.

Keywords: in-house networks, fiber optics, security risk, money

Procedia PDF Downloads 410
2816 Fish Markets in Sierra Leone: Size, Structure, Distribution Networks and Opportunities for Aquaculture Development

Authors: Milton Jusu, Moses Koroma

Abstract:

Efforts by the Ministry of Fisheries and Marine Resources and its development partners to introduce "modern" aquaculture in Sierra Leone since the 1970s have not been successful. A number of reasons have been hypothesized, including the suggestion that the market infrastructure and demand for farmed fish were inadequate to stimulate large-scale and widespread aquaculture production in the country. We assessed the size, structure, networks, and opportunities of fish markets using a combination of Participatory Rural Appraisals (PRAs) and questionnaire surveys conducted in a sample of 29 markets (urban, weekly, wholesale, and retail) and among two hundred traders. The study showed that the local fish markets were dynamic, with very high variations in demand and supply. The markets sampled supplied between 135.2 and 9947.6 tonnes/year. Mean prices for fresh fish varied between US$1.12 and US$3.89/kg depending on species, with smoked catfish and shrimps commanding prices as high as US$7.4/kg. It is unlikely that marine capture fisheries can increase their current production levels; these may, in fact, already be over-exploited and declining. Marine fish supplies are particularly low between July and September. More careful attention to the timing of harvests (rainy season, not dry season) and to species (catfish, not tilapia) could help in the successful adoption of aquaculture.

Keywords: fisheries and aquaculture, fish market, marine fish supplies, harvests

Procedia PDF Downloads 47
2815 Suicide Conceptualization in Adolescents through Semantic Networks

Authors: K. P. Valdés García, E. I. Rodríguez Fonseca, L. G. Juárez Cantú

Abstract:

Suicide is a global, multidimensional, and dynamic mental health problem that requires constant study for its understanding and prevention. When researching this phenomenon, it is necessary to consider the different characteristics it may have as a result of individual and sociocultural variables; the importance of this consideration lies in the generation of effective treatments and interventions. Adolescents are a vulnerable population due to the characteristics of their developmental stage. The investigation was carried out with the objective of identifying and describing adolescents' conceptualization of suicide and, in the process, finding possible differences between men and women. The study was carried out in Saltillo, Coahuila, Mexico. The sample was composed of 418 volunteer students aged between 11 and 18 years. The ethical aspects of the research were reviewed and considered in all processes of the investigation with the participants, their parents, and the schools to which they belonged; psychological attention was offered to the participants, and preventive workshops were carried out in the educational institutions. Natural semantic networks were the instrument used, since this hybrid method makes it possible to find and analyze the social concept of a phenomenon; in this case, the word suicide was used as the evocative stimulus, and participants were asked to evoke at least five and at most 10 words they thought were related to suicide, and then to rank them according to their closeness to the construct. The subsequent analysis was carried out with Excel, yielding the semantic weights, affective loads, and distances between each of the semantic fields established according to the words reported by the subjects. The results showed similarities in the conceptualization of suicide between adolescent men and women. Seven semantic fields were generated, and the words were related in the discourse analysis: 1) death, 2) possible triggering factors, 3) associated moods, 4) methods used to carry it out, 5) psychological symptomatology, 6) words associated with a rejection of suicide, and, finally, 7) specific objects used to carry it out. One of the necessary aspects to consider in investigations of complex issues such as suicide is having a diversity of instruments and techniques that adjust to the characteristics of the population and allow the phenomena to be understood through social constructs and not only theoretically. The constant study of suicide is a pressing need; the loss of a life to emotional difficulties that can be addressed through psychiatric and psychological methods requires governments and professionals to pay attention and work with the at-risk population.
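
For readers unfamiliar with the technique, a compact sketch of the semantic-weight computation used in natural semantic network studies follows. The scoring scheme (rank 1 earns 10 points, rank 10 earns 1) is one common convention, and the responses are invented; neither is taken from the study.

```python
# Semantic weight (M value): each evoked word scores inversely to its rank,
# summed over participants.
from collections import Counter

# Each participant: list of evoked words, closest to the stimulus first.
responses = [
    ["death", "sadness", "depression"],
    ["death", "pills", "sadness"],
    ["depression", "death", "loneliness"],
]

m_values = Counter()
for words in responses:
    for rank, word in enumerate(words, start=1):
        m_values[word] += 11 - rank        # 10 points for rank 1, 9 for rank 2, ...

for word, m in m_values.most_common():
    print(word, m)
```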

Keywords: adolescents, psychological construct, semantic networks, suicide

Procedia PDF Downloads 99
2814 Real-Time Big-Data Warehouse: A Next-Generation Enterprise Data Warehouse and Analysis Framework

Authors: Abbas Raza Ali

Abstract:

Big Data technology is gradually becoming a dire need of large enterprises. These enterprises generate massively large amounts of off-line and streaming data, in both structured and unstructured formats, on a daily basis. It is a challenging task to effectively extract useful insights from such large-scale datasets, and sometimes it even becomes a technology constraint to manage a transactional data history of more than a few months. This paper presents a framework to efficiently manage massively large and complex datasets. The framework has been tested with a communication service provider producing massively large, complex streaming data in binary format. The communication industry is bound by regulators to manage the history of its subscribers' call records, where every action of a subscriber generates a record. Managing and analyzing transactional data also allows service providers to better understand their customers' behavior; for example, deep packet inspection requires transactional internet usage data to explain the internet usage behaviour of subscribers. However, current relational database systems limit service providers to maintaining history only at a semantic level, aggregated per subscriber. The framework addresses these challenges by leveraging Big Data technology, which optimally manages complex datasets and allows deep analysis of them. The framework has been applied to offload the service provider's existing Intelligent Network Mediation and relational Data Warehouse onto Big Data. The service provider has a subscriber base of 50+ million, with yearly growth of 7-10%. The end-to-end process takes no more than 10 minutes, and involves binary-to-ASCII decoding of call detail records, stitching together all the interrogations of a call (transformations), and aggregation of all the call records of a subscriber.
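
A hedged sketch of the per-subscriber aggregation step in PySpark is shown below. Decoding from binary and the full call "stitching" logic are reduced to a pre-decoded toy DataFrame, and all column names and values are invented for illustration.

```python
# Toy stand-in for the CDR stitching and per-subscriber aggregation step.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdr-offload").getOrCreate()

# Assume CDRs already decoded from binary into rows (interrogation records).
cdrs = spark.createDataFrame(
    [("628111", "c1", 60, 10_000), ("628111", "c1", 30, 5_000),
     ("628222", "c2", 120, 0)],
    ["msisdn", "call_id", "duration_s", "bytes"],
)

# "Stitch" all interrogations of one call, then aggregate per subscriber.
calls = cdrs.groupBy("msisdn", "call_id").agg(
    F.sum("duration_s").alias("call_duration_s"),
    F.sum("bytes").alias("call_bytes"),
)
per_subscriber = calls.groupBy("msisdn").agg(
    F.count("*").alias("calls"),
    F.sum("call_duration_s").alias("total_duration_s"),
    F.sum("call_bytes").alias("total_bytes"),
)
per_subscriber.show()
```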

Keywords: big data, communication service providers, enterprise data warehouse, stream computing, Telco IN Mediation

Procedia PDF Downloads 164
2813 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation

Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong

Abstract:

Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently prone to artefacts due to the image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on the deep neural network framework, in which denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes an input and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme using residual-driven dropout, determined based on the gradient at each layer, is applied. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of the PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
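
The building block described, a denoising auto-encoder trained to map a corrupted input back to its clean form under a squared-error loss, can be sketched in a few lines of Keras. The toy data, layer sizes, and plain dropout are assumptions; the authors' stacked network and residual-driven dropout scheme are not reproduced.

```python
# Minimal denoising auto-encoder: noisy input in, clean reconstruction out.
import numpy as np
from tensorflow.keras import layers, models

# Toy data: flattened 28x28 "images" with additive Gaussian noise.
clean = np.random.rand(1000, 784).astype("float32")
noisy = clean + 0.1 * np.random.randn(1000, 784).astype("float32")

inp = layers.Input(shape=(784,))
h = layers.Dense(128, activation="relu")(inp)     # deterministic non-linear map
h = layers.Dropout(0.2)(h)                        # plain dropout as regularizer
out = layers.Dense(784, activation="sigmoid")(h)  # reconstruction, same size as input

dae = models.Model(inp, out)
dae.compile(optimizer="adam", loss="mse")         # squared error ~ Gaussian residual
dae.fit(noisy, clean, epochs=2, batch_size=64, verbose=0)
print("reconstruction MSE:", dae.evaluate(noisy, clean, verbose=0))
```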

Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation

Procedia PDF Downloads 180
2812 Arabic Light Word Analyser: Roles with Deep Learning Approach

Authors: Mohammed Abu Shquier

Abstract:

This paper introduces a word segmentation method using a novel BP-LSTM-CRF architecture for processing semantic output training. The objective of web morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also encompasses the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), which provide justification for updating this system. Most Arabic word analysis systems are based on the phonotactic morpho-syntactic analysis of a word using lexical rules, as is common in MENA language technology tools, without taking into account contextual or semantic morphological implications. It is therefore necessary to have an automatic analysis tool that takes into account the word sense and not only the morpho-syntactic category. Moreover, such systems are often based on statistical/stochastic models; these models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc. As an extension, we focus on language modeling using Recurrent Neural Networks (RNN); given that morphological analysis coverage is very low for dialectal Arabic, it is important to investigate in depth how dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that dialectal variability can help improve analysis.
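
As a sketch of the neural part of such a pipeline, the snippet below trains a tiny bidirectional LSTM that tags each character with a binary segmentation label. It is a toy stand-in, assuming random data, and the CRF output layer of the BP-LSTM-CRF architecture is replaced by an independent per-character softmax, since core Keras ships no CRF layer.

```python
# Toy BiLSTM character tagger for binary word segmentation labels.
import numpy as np
from tensorflow.keras import layers, models

VOCAB, MAXLEN = 40, 20            # toy character vocabulary and sequence length
x = np.random.randint(1, VOCAB, size=(200, MAXLEN))   # fake character ids
y = np.random.randint(0, 2, size=(200, MAXLEN))       # 1 = segment boundary

model = models.Sequential([
    layers.Input(shape=(MAXLEN,)),
    layers.Embedding(VOCAB, 16),
    layers.Bidirectional(layers.LSTM(32, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(2, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1], verbose=0).argmax(-1))     # predicted per-character tags
```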

Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN

Procedia PDF Downloads 28
2811 A Review on Medical Image Registration Techniques

Authors: Shadrack Mambo, Karim Djouani, Yskandar Hamam, Barend van Wyk, Patrick Siarry

Abstract:

This paper discusses current trends in medical image registration techniques and addresses the need for a solid theoretical foundation for research endeavours in this field. A methodological analysis and synthesis of quality literature was performed, providing a platform for a good research foundation, which is crucial for understanding the existing levels of knowledge. Research on medical image registration techniques assists clinical and medical practitioners in the diagnosis of tumours and lesions in anatomical organs, thereby enabling fast and accurate curative treatment of patients. Out of these considerations, the aim of this paper is to enhance the scientific community's understanding of the current status of research in medical image registration techniques and to communicate the contribution of this research to the field of image processing. The gaps identified in current techniques can be closed by the use of artificial neural networks, which form learning systems designed to minimise an error function. The paper also suggests several areas of future research in image registration.

Keywords: image registration techniques, medical images, neural networks, optimisation, transformation

Procedia PDF Downloads 167
2810 Hybrid Hunger Games Search Optimization Based on the Neural Networks Approach Applied to UAVs

Authors: Nadia Samantha Zuñiga-Peña, Norberto Hernández-Romero, Omar Aguilar-Mejia, Salatiel García-Nava

Abstract:

Using unmanned aerial vehicles (UAVs) for load transport has gained significant importance in various sectors due to their ability to improve efficiency, reduce costs, and access hard-to-reach areas. Although UAVs offer numerous advantages for load transport, several complications and challenges must be addressed to exploit their potential fully. The complexity lies in the fact that UAVs are underactuated, non-linear systems with a high degree of coupling between their variables, subject to forces with uncertainty. One of the biggest challenges is modeling and controlling the system formed by a UAV carrying a load. To solve this control problem, in this work a hybridization of a neural network and the Hunger Games Search (HGS) metaheuristic algorithm is developed and implemented to find the parameters of the super twisting sliding mode controller for an 8-degrees-of-freedom model of a UAV with payload. The optimized controller successfully tracks the UAV through the three-dimensional desired path, demonstrating the effectiveness of the proposed solution. A performance comparison shows the superiority of the neural network HGS (NNHGS) over the plain HGS algorithm, reducing the tracking error by 57.5%.

Keywords: neural networks, hunger games search, super twisting sliding mode controller, UAVs

Procedia PDF Downloads 12
2809 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis

Authors: Mehrnaz Mostafavi

Abstract:

The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
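
The sentence-classification step can be illustrated with generic tools; the sketch below uses TF-IDF features and a linear classifier. The sentences and labels are invented stand-ins for radiology report data, and the study's actual SQL-and-NLP algorithm is more involved.

```python
# Toy report-sentence classifier: does a sentence mention a lung nodule?
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "A 6 mm pulmonary nodule is seen in the right upper lobe.",
    "No focal lung lesion identified.",
    "Stable 4 mm nodule, unchanged from prior examination.",
    "The heart size is normal.",
]
labels = [1, 0, 1, 0]   # 1 = sentence mentions a lung nodule

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["New 8 mm nodule in the left lower lobe."]))  # -> [1]
```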

Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans

Procedia PDF Downloads 69
2808 Genome-Wide Functional Analysis of Phosphatase in Cryptococcus neoformans

Authors: Jae-Hyung Jin, Kyung-Tae Lee, Yee-Seul So, Eunji Jeong, Yeonseon Lee, Dongpil Lee, Dong-Gi Lee, Yong-Sun Bahn

Abstract:

Cryptococcus neoformans causes cryptococcal meningoencephalitis, mainly in immunocompromised patients but also in immunocompetent people, yet therapeutic options for treating cryptococcosis are limited. Several signaling pathways, including the cyclic AMP, MAPK, and calcineurin pathways, play a central role in the regulation of the growth, differentiation, and virulence of C. neoformans. To understand the signaling networks regulating the virulence of C. neoformans, we selected 114 putative phosphatase genes, phosphatases being one of the major components of signaling networks, in the genome of C. neoformans. We identified putative phosphatases based on annotations in the C. neoformans var. grubii genome database provided by the Broad Institute and the National Center for Biotechnology Information (NCBI), and performed a BLAST search of phosphatases from Saccharomyces cerevisiae, Aspergillus nidulans, Candida albicans, and Fusarium graminearum against C. neoformans. We classified the putative phosphatases into 14 groups based on InterPro phosphatase domain annotations. Here, we constructed 170 signature-tagged gene-deletion strains through homologous recombination for 91 putative phosphatases. We examined their phenotypic traits under 30 different in vitro conditions, covering growth, differentiation, stress response, antifungal resistance, and virulence-factor production.

Keywords: human fungal pathogen, phosphatase, deletion library, functional genomics

Procedia PDF Downloads 347
2807 Aire-Dependent Transcripts Have Shortened 3'UTRs and Show Greater Stability by Evading MicroRNA-Mediated Repression

Authors: Clotilde Guyon, Nada Jmari, Yen-Chin Li, Jean Denoyel, Noriyuki Fujikado, Christophe Blanchet, David Root, Matthieu Giraud

Abstract:

Aire induces the ectopic expression of a large repertoire of tissue-specific antigen (TSA) genes in thymic medullary epithelial cells (MECs), driving immunological self-tolerance in maturing T cells. Although important mechanisms of Aire-induced transcription have recently been disclosed through the identification and study of Aire's partners, the fine transcriptional functions that a number of them underlie and confer on Aire are still unknown. Alternative cleavage and polyadenylation (APA) is an essential mRNA processing step regulated by a termination complex consisting of 85 proteins, 10 of which have been related to Aire. We evaluated APA in MECs in vivo by microarray analysis with mRNA-spanning probes and by RNA deep sequencing. We uncovered a preference of Aire-dependent transcripts for short-3'UTR isoforms and for proximal poly(A) site selection, marked by increased binding of the cleavage factor Cstf-64. RNA interference of the 10 Aire-related proteins revealed that Clp1, a member of the core termination complex, exerts a profound effect on the short 3'UTR isoform preference. Clp1 is also significantly upregulated in MECs compared to 25 mouse tissues, in which we found that TSA expression is associated with longer 3'UTR isoforms. Aire-dependent transcripts escape a global 3'UTR lengthening associated with MEC differentiation that would otherwise potentiate the repressive effect of microRNAs, which are globally upregulated in mature MECs. Consistent with these findings, RNA deep sequencing of actinomycin D-treated MECs revealed the increased stability of short-3'UTR Aire-induced transcripts, resulting in the accumulation of TSA transcripts and contributing to their enrichment in MECs.

Keywords: Aire, central tolerance, miRNAs, transcription termination

Procedia PDF Downloads 370
2806 Characteristics and Challenges of Post-Burn Contractures in Adults and Children: A Descriptive Study

Authors: Hardisiswo Soedjana, Inne Caroline

Abstract:

Deep dermal or full-thickness burns inevitably lead to post-burn contractures, which remain one of the most concerning late complications of burn injuries. Surgical management includes releasing the contracture and resurfacing the defect, accompanied by post-operative rehabilitation. The optimal treatment of post-burn contractures depends on the characteristics of the contractures. This study aims to describe the clinical characteristics, problems, and management of post-burn contractures in adults and children. A retrospective analysis was conducted of the medical records of patients suffering from contractures after burn injuries admitted to Hasan Sadikin general hospital between January 2016 and January 2018. A total of 50 patients with post-burn contractures were included in the study: 17 adults and 33 children. Most patients were male, with ages ranging from 15 to 59 years among adults and 5 to 9 years among children. The educational background was mostly senior high school among adults, while only one-third of the children had entered school. The etiology of burns was predominantly flame in adults (82.3%), whereas flame and scald were the leading causes of burn injury in children (11%). By anatomical region, the hands were the most commonly affected, both in adults (35.2%) and children (48.5%). Contractures were identified 6-12 months after the initial burns. Most post-burn hand contractures were resurfaced with full-thickness skin grafts (FTSG) in both adults and children. Eleven patients presented with recurrent contracture after a previous contracture release. Post-operative rehabilitation was conducted for all patients; however, it remains challenging to supervise splinting and exercise once patients are discharged, especially regarding compliance in children. To improve the quality of life of patients with a history of deep burn injuries, prevention of contractures should begin right after acute care has been established, and education on the importance of splinting and exercise should be delivered as comprehensibly as possible to adult patients and the parents of pediatric patients.

Keywords: burn, contracture, education, exercise, splinting

Procedia PDF Downloads 112
2805 Deconstructing Local Area Networks Using MaatPeace

Authors: Gerald Todd

Abstract:

Recent advances in random epistemologies and ubiquitous theory have paved the way for web services. Given the current status of linear-time communication, cyberinformaticians compellingly desire the exploration of link-level acknowledgements. In order to realize this purpose, we concentrate our efforts on disconfirming that DHTs and model checking are mostly incompatible.

Keywords: LAN, cyberinformatics, model checking, communication

Procedia PDF Downloads 386
2804 Tracking of Intramuscular Stem Cells by Magnetic Resonance Diffusion Weighted Imaging

Authors: Balakrishna Shetty

Abstract:

Introduction: Stem cell imaging has been a challenging field since the advent of stem cell treatment in humans. A series of studies on tagging and tracking stem cells has not been very effective. The present study is an effort by the authors to track stem cells injected into calf muscles by magnetic resonance diffusion-weighted imaging. Materials and methods: Deep intramuscular injection of stem cells into the calf muscles of patients with peripheral vascular disease is one of the recent treatment modalities followed in our institution. Five patients who underwent deep intramuscular injection of stem cells as treatment were included in this study. Pre-injection and two-hour post-injection MRI of the bilateral calf regions was performed on a 1.5 T Philips Achieva 16-channel system using 16-channel torso coils. Axial STIR and axial diffusion-weighted images with b=0 and b=1000 values with background suppression (the DWIBS sequence of Philips MR imaging systems) were obtained at 5 mm intervals covering the entire calf, and inverted images were obtained for better visualization. 120 ml of autologous bone-marrow-derived stem cells were processed and enriched under c-GMP conditions and reduced to a 40 ml solution containing a mixture of the above stem cells. Approximately 40 to 50 injections, each containing 0.75 ml of processed stem cells, were given over a marked grid on the calf region. Around 40 injections, each of 1 ml normal saline, were given into the contralateral leg as a control. Results: Significant diffusion hyperintensity was noted at the sites of the injected stem cells; no hyperintensity was noted before the injection or on the control side where saline was injected. Conclusion: This is one of the earliest studies in the literature showing diffusion hyperintensity in intramuscularly injected stem cells. The advantages and deficiencies of this study will be discussed during the presentation.

Keywords: stem cells, imaging, DWI, peripheral vascular disease

Procedia PDF Downloads 61
2803 Identification of Deposition Sequences of the Organic Content of Lower Albian-Cenomanian Age in Northern Tunisia: Correlation between Molecular and Stratigraphic Fossils

Authors: Tahani Hallek, Dhaou Akrout, Riadh Ahmadi, Mabrouk Montacer

Abstract:

The present work is an organic geochemical study of the Fahdene Formation outcrops in the Mahjouba region, belonging to the eastern part of the Kalaat Senan structure in northwestern Tunisia (the Kef-Tedjerouine area). The analytical study of the organic content of the collected samples allowed us to establish that the Formation in question is characterized by average to good petroleum potential. This fossilized organic matter has a mixed origin (types II and III), as indicated by the relatively high values of the hydrogen index. This origin is confirmed by the abundance of C29 steranes and also by the tricyclic terpane C19/(C19+C23) and tetracyclic terpane C24/(C24+C23) ratios, which suggest a marine depositional environment with a contribution from higher plants. We have demonstrated that the heterogeneity of the organic matter, between the marine character confirmed by the presence of foraminifera and the continental contribution, is the result of an episodic anomaly in relation to the sequence stratigraphy. Given that the study area is defined as an outer platform forming a transition zone between a stable continental domain to the south and a deep basin to the north, we explain the continental contribution by successive forced regressions that blocked the Albian transgression and allowed the installation of lowstand systems tracts. This aspect is represented by incised-valley fill in direct contact with the pelagic and deep-sea facies. Consequently, the Fahdene Formation in the Kef-Tedjerouine area consists of transgressive systems tracts (TST) abruptly truncated by episodes of continental progradation, resulting in mixed-influence deposition that retained heterogeneous organic material.

Keywords: molecular geochemistry, biomarkers, forced regression, deposit environment, mixed origin, Northern Tunisia

Procedia PDF Downloads 238
2802 Urban Networks as Model of Sustainable Design

Authors: Agryzkov Taras, Oliver Jose L., Tortosa Leandro, Vicent Jose

Abstract:

This paper aims to demonstrate how considering cities as a special kind of complex network, called an urban network, may lead to the use of design tools coming from network theories, which in fact results in quite a sustainable approach. There is no doubt that the irruption in contemporary thought of Gaia as an essential political agent proposes a narrative that has been extended to the field of creative processes, in which, of course, the activity of urban design is found. The rationalist paradigm is put in crisis, and from the so-called sciences of complexity, its way of describing reality and of intervening in it is questioned. Thus, a new way of understanding reality emerges, which has to do with a redefinition of the human being's own place in what is now understood as a delicate and complex network. In this sense, we know that in these systems of connected and interdependent elements, the influences generated among them give rise to emergent properties and behaviors of the whole that would not make sense if the elements were studied individually. We believe that the design of cities cannot remain oblivious to these principles, and therefore this research aims to demonstrate the potential they have for decision-making in the urban environment. Thus, we present an example of action in the field of public mobility, another in the design of commercial areas, and a third in the field of redensification of sprawl areas, in which different aspects of network theory have been applied to change the urban design. Even though these actions have been developed in European cities, more specifically in the Mediterranean area of Spain, the reflections and tools could have a broader scope of action.

Keywords: graphs, complexity sciences, urban networks, urban design

Procedia PDF Downloads 141
2801 A Framework of Dynamic Rule Selection Method for Dynamic Flexible Job Shop Problem by Reinforcement Learning Method

Authors: Rui Wu

Abstract:

In the volatile modern manufacturing environment, new orders arrive randomly at any time, while pre-emptive scheduling methods are infeasible. This calls for a real-time scheduling method that can produce a reasonably good schedule quickly. The dynamic Flexible Job Shop problem is an NP-hard scheduling problem that hybridizes the dynamic Job Shop problem with the Parallel Machine problem. A Flexible Job Shop contains different work centres, and each work centre contains parallel machines that can process certain operations. Many algorithms, such as genetic algorithms or simulated annealing, have been proposed to solve static Flexible Job Shop problems. However, the time efficiency of these methods is low, and they are not feasible for a dynamic scheduling problem. Therefore, a dynamic rule selection scheduling system based on reinforcement learning is proposed in this research, in which the dynamic Flexible Job Shop problem is divided into several parallel machine problems to decrease its complexity. Firstly, features of the jobs, machines, work centres, and flexible job shop are selected to describe the status of the dynamic Flexible Job Shop problem at each decision point in each work centre. Secondly, a reinforcement learning framework using a double-layer deep Q-learning network is applied to select proper composite dispatching rules based on the status of each work centre. Then, based on the selected composite dispatching rule, an available operation is selected from the waiting buffer and assigned to an available machine in each work centre. Finally, the proposed algorithm is compared with well-known dispatching rules on the objectives of mean tardiness, mean flow time, mean waiting time, and mean percentage of waiting time in the real-time Flexible Job Shop problem. The simulation results show that the proposed framework has reasonable performance and time efficiency.
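
The rule-selection idea can be sketched compactly: a small deep Q-network maps a work-centre state vector to Q-values over candidate dispatching rules, and an epsilon-greedy policy picks one at each decision point. Everything below (state features, the rule set, network sizes, the fake transition, and a single plain Q-network standing in for the paper's double-layer design) is an illustrative assumption, not the paper's implementation.

```python
# Toy deep Q-network for dispatching-rule selection at a work centre.
import numpy as np
from tensorflow.keras import layers, models

RULES = ["SPT", "EDD", "FIFO"]   # candidate dispatching rules (stand-ins)
STATE_DIM = 6                    # e.g. queue length, mean slack, utilization, ...

q_net = models.Sequential([
    layers.Input(shape=(STATE_DIM,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(len(RULES)),    # one Q-value per rule
])
q_net.compile(optimizer="adam", loss="mse")

def select_rule(state, eps=0.1):
    """Epsilon-greedy rule selection at a decision point."""
    if np.random.rand() < eps:
        return np.random.randint(len(RULES))
    return int(np.argmax(q_net.predict(state[None, :], verbose=0)))

# One illustrative Q-learning update from a fabricated transition.
s, a, r, s2, gamma = np.random.rand(STATE_DIM), 0, -1.0, np.random.rand(STATE_DIM), 0.95
target = q_net.predict(s[None, :], verbose=0)
target[0, a] = r + gamma * np.max(q_net.predict(s2[None, :], verbose=0))
q_net.fit(s[None, :], target, verbose=0)
print("chosen rule:", RULES[select_rule(s)])
```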

Keywords: dynamic scheduling problem, flexible job shop, dispatching rules, deep reinforcement learning

Procedia PDF Downloads 91
2800 Closed Incision Negative Pressure Therapy Dressing as an Approach to Manage Closed Sternal Incisions in High-Risk Cardiac Patients: A Multi-Centre Study in the UK

Authors: Rona Lee Suelo-Calanao, Mahmoud Loubani

Abstract:

Objective: Sternal wound infection (SWI) following cardiac operations has a significant impact on patient morbidity and mortality. It also contributes to longer hospital stays and increased treatment costs. SWI management is mainly focused on treatment rather than prevention. This study looks at the effect of a closed incision negative pressure therapy (ciNPT) dressing in helping to reduce the incidence of superficial SWI in high-risk patients after cardiac surgery. The ciNPT dressing was evaluated at 3 cardiac hospitals in the United Kingdom. Methods: All patients who had cardiac surgery from 2013 to 2021 were included in the study. Patients were classed as high risk if they had two or more of the recognised risk factors: obesity, age above 80 years, diabetes, and chronic obstructive pulmonary disease. Patients receiving a standard dressing (SD) and patients receiving ciNPT were propensity matched, and Fisher's exact test (two-tailed) and the unpaired t-test were used to analyse categorical and continuous data, respectively. Results: There were 766 matched cases in each group. The total incidence of SWI was lower in the ciNPT group than in the SD group (43 (5.6%) vs. 119 (15.5%), p=0.0001). There were fewer deep sternal wound infections (14 (1.8%) vs. 31 (4.04%), p=0.0149) and fewer superficial infections (29 (3.7%) vs. 88 (11.4%), p=0.0001) in the ciNPT group compared to the SD group. However, the ciNPT group showed a longer average length of stay (11.23 ± 13 days versus 9.66 ± 10 days; p=0.0083) and a higher mean logistic EuroSCORE (11.143 ± 13 versus 8.094 ± 11; p=0.0001). Conclusion: Utilization of ciNPT may be effective in helping to reduce the incidence of superficial and deep SWI in high-risk patients requiring cardiac surgery.
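
As a quick arithmetic check of the headline comparison, the counts reported in the abstract can be fed to a two-tailed Fisher's exact test; the direction and significance are reproduced, though the exact p-value printed may differ from the rounded figure in the abstract.

```python
# Re-check of the total SWI comparison: 43/766 ciNPT vs. 119/766 standard.
from scipy.stats import fisher_exact

table = [[43, 766 - 43],     # ciNPT: infected, not infected
         [119, 766 - 119]]   # standard dressing: infected, not infected
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.2g}")  # p well below 0.001
```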

Keywords: closed incision negative pressure therapy, surgical wound infection, cardiac surgery complication, high risk cardiac patients

Procedia PDF Downloads 76
2799 Alphabet Recognition Using Pixel Probability Distribution

Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay

Abstract:

Our project topic is "Alphabet Recognition using pixel probability distribution". The project uses techniques of image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten, or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet-recognition-based OCR applications are sometimes used in signature recognition, which is used in banks and other high-security buildings. One popular mobile application reads a visiting card and stores it directly to the contacts. OCRs are also known to be used in radar systems for reading speeding vehicles' license plates, among many other things. The implementation of our project was done using Visual Studio and OpenCV (Open Source Computer Vision). Our algorithm is based on neural networks (machine learning). The project was implemented in three modules: (1) Training: this module aims at database generation. The database was generated using two methods: (a) Run-time generation, in which the database is generated at compilation time using the built-in fonts of the OpenCV library; human intervention is not necessary for generating this database. (b) Contour detection, in which a 'jpeg' template containing different fonts of an alphabet is converted to the weighted matrix using specialized functions (contour detection and blob detection) of OpenCV. The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little memory to be stored (119 kB, precisely). (2) Preprocessing: the input image is pre-processed using image processing concepts such as adaptive thresholding, binarizing, and dilating, and is made ready for segmentation. Segmentation includes the extraction of lines, words, and letters from the processed text image. (3) Testing and prediction: the extracted letters are classified and predicted using the neural networks algorithm. The algorithm recognizes an alphabet based on certain mathematical parameters calculated using the database and the weight matrix of the segmented image.
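
A minimal OpenCV preprocessing pass of the kind described (adaptive thresholding, dilation, contour-based letter extraction) might look as follows; "page.png" is a placeholder input path and the parameter values are illustrative.

```python
# Preprocess a scanned page and extract candidate letter regions (OpenCV 4.x).
import cv2

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 31, 10)
dilated = cv2.dilate(binary, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))

# Each external contour's bounding box is a candidate letter for the classifier.
contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
letters = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 20]
print(f"{len(letters)} candidate letter regions found")
```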

Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix

Procedia PDF Downloads 373
2798 Multiperson Drone Control with Seamless Pilot Switching Using Onboard Camera and OpenPose Real-Time Keypoint Detection

Authors: Evan Lowhorn, Rocio Alba-Flores

Abstract:

Traditional classification Convolutional Neural Networks (CNNs) attempt to classify an image in its entirety. This becomes problematic when trying to perform classification with a drone's camera in real time due to unpredictable backgrounds. Object detectors with bounding boxes can be used to isolate individuals and other items, but the original backgrounds remain within these boxes. These basic detectors have regularly been used to determine what type of object an item is, such as "person" or "dog." A recent advancement in computer vision, particularly with human imaging, is keypoint detection. Human keypoint detection goes beyond bounding boxes to fully isolate humans and plot points, or Regions of Interest (ROIs), on their bodies within an image. ROIs can include shoulders, elbows, knees, heads, etc. These points can then be related to each other and used in deep learning methods such as pose estimation. For drone control based on human motions, poses, or signals using the onboard camera, it is important to have a simple method for pilot identification among multiple individuals while also giving the pilot fine control options for the drone. To achieve this, the OpenPose keypoint detection network was used with body and hand keypoint detection enabled. OpenPose supports combining multiple keypoint detection methods in real time with a single network. Body keypoint detection allows simple poses to act as the pilot identifier. Hand keypoint detection, with ROIs for each finger, can then offer a greater variety of signal options for the pilot once identified. For this work, an individual must raise their non-control arm to be identified as the operator and send commands with the hand of their other arm. The drone ignores all other individuals in the onboard camera feed until the current operator lowers their non-control arm. When another individual wishes to operate the drone, they simply raise their arm once the current operator relinquishes control, and can then begin controlling the drone with their other hand. This is all performed mid-flight, with no landing or script editing required. When using a desktop with a discrete NVIDIA GPU, the drone's 2.4 GHz Wi-Fi connection, combined with restricting OpenPose to only body and hand detection, allows this control method to perform as intended while maintaining the responsiveness required for practical use.
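
The pilot-identification rule can be sketched in a few lines: given OpenPose keypoints for each person, select the one whose wrist is above the shoulder on the non-control arm. The snippet below assumes OpenPose's BODY_25 keypoint ordering and uses fabricated coordinates; it is an illustration of the logic, not the authors' code.

```python
# Pick the pilot: first person whose left wrist is raised above the shoulder.
LEFT_SHOULDER, LEFT_WRIST = 5, 7   # BODY_25 keypoint indices (assumed)

def find_pilot(people, min_conf=0.3):
    """`people` is a list of keypoint arrays, each [[x, y, confidence], ...].
    Image y grows downward, so 'above' means a smaller y value."""
    for idx, kp in enumerate(people):
        shoulder, wrist = kp[LEFT_SHOULDER], kp[LEFT_WRIST]
        if shoulder[2] > min_conf and wrist[2] > min_conf and wrist[1] < shoulder[1]:
            return idx
    return None

people = [
    [[0, 0, 0]] * 5 + [[100, 200, 0.9]] + [[0, 0, 0]] + [[105, 250, 0.9]] + [[0, 0, 0]] * 17,  # arm down
    [[0, 0, 0]] * 5 + [[300, 200, 0.9]] + [[0, 0, 0]] + [[310, 120, 0.9]] + [[0, 0, 0]] * 17,  # arm raised
]
print("pilot index:", find_pilot(people))  # -> 1
```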

Keywords: computer vision, drone control, keypoint detection, openpose

Procedia PDF Downloads 173
2797 Intelligent Minimal Allocation of Capacitors in Distribution Networks Using Genetic Algorithm

Authors: S. Neelima, P. S. Subramanyam

Abstract:

A distribution system is an interface between the bulk power system and the consumers. Among these systems, the radial distribution system is popular because of its low cost and simple design. In distribution systems, the voltage at buses drops as one moves away from the substation, and losses are high. The reason for the voltage decrease and high losses is an insufficient amount of reactive power, which can be provided by shunt capacitors. However, placing capacitors of appropriate size is always a challenge. Thus, the optimal capacitor placement problem is to determine the location and size of the capacitors to be placed in distribution networks in an efficient way to reduce power losses and improve the voltage profile of the system. For this purpose, a two-stage methodology is used in this paper. In the first stage, the load flow of the pre-compensated distribution system is carried out using the dimension reducing distribution load flow algorithm (DRDLFA). On the basis of this load flow, the potential locations for compensation are computed. In the second stage, the Genetic Algorithm (GA) technique is used to determine the optimal location and size of the capacitors such that the combined cost of energy loss and capacitors is minimized. The above method is tested on the IEEE 9 and 34 bus systems and compared with other methods in the literature.
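
To make the second stage concrete, here is a toy genetic algorithm for choosing capacitor sizes at candidate buses. The fitness function is a made-up stand-in: a real implementation would evaluate losses with a distribution load flow such as the paper's DRDLFA, and the bus data and cost weights are invented.

```python
# Toy GA for capacitor sizing at candidate buses (illustration only).
import random

SIZES = [0, 150, 300, 450, 600]               # candidate kVAr sizes per bus
N_BUS, POP, GENS = 5, 30, 60
REACTIVE_DEMAND = [200, 350, 120, 500, 280]   # fictitious per-bus kVAr demand

def cost(genome):
    # Unmet reactive demand stands in for "losses"; installed kVAr has a price.
    loss = sum(abs(d - g) for d, g in zip(REACTIVE_DEMAND, genome))
    return loss + 0.1 * sum(genome)

def crossover(a, b):
    cut = random.randrange(1, N_BUS)          # single-point crossover
    return a[:cut] + b[cut:]

def mutate(g):
    g = list(g)
    g[random.randrange(N_BUS)] = random.choice(SIZES)
    return g

pop = [[random.choice(SIZES) for _ in range(N_BUS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=cost)                        # elitist selection
    elite = pop[: POP // 2]
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = min(pop, key=cost)
print("best placement (kVAr per bus):", best, "cost:", cost(best))
```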

Keywords: dimension reducing distribution load flow algorithm, DRDLFA, genetic algorithm, electrical distribution network, optimal capacitor placement, voltage profile improvement, loss reduction

Procedia PDF Downloads 380