Search results for: total capacity algorithm
13874 Intergenerational Trauma: Patterns of Child Abuse and Neglect Across Two Generations in a Barbados Cohort
Authors: Rebecca S. Hock, Cyralene P. Bryce, Kevin Williams, Arielle G. Rabinowitz, Janina R. Galler
Abstract:
Background: Findings have been mixed regarding whether offspring of parents who were abused or neglected as children have a greater risk of experiencing abuse or neglect themselves. In addition, many studies on this topic are restricted to physical abuse and take place in a limited number of countries, representing a small segment of the world's population. Methods: We examined relationships between childhood maltreatment history assessed in a subset (N=68) of the original longitudinal birth cohort (G1) of the Barbados Nutrition Study and their now-adult offspring (G2) (N=111) using the Childhood Trauma Questionnaire-Short Form (CTQ-SF). We used Pearson correlations to assess relationships between parent and offspring CTQ-SF total and subscale scores (physical, emotional, and sexual abuse; physical and emotional neglect). Next, we ran multiple regression analyses, using the parental CTQ-SF total score and the parental Sexual Abuse score as primary predictors separately in our models of G2 CTQ-SF (total and subscale scores). Results: G1 total CTQ-SF scores were correlated with G2 offspring Emotional Neglect and total scores. G1 Sexual Abuse history was significantly correlated with G2 Emotional Abuse, Sexual Abuse, Emotional Neglect, and Total Score. In fully-adjusted regression models, parental (G1) total CTQ-SF scores remained significantly associated with G2 offspring reports of Emotional Neglect, and parental (G1) Sexual Abuse was associated with offspring (G2) reports of Emotional Abuse, Physical Abuse, Emotional Neglect, and overall CTQ-SF scores. Conclusions: Our findings support a link between parental exposure to childhood maltreatment and their offspring's self-reported exposure to childhood maltreatment. Of note, there was not an exact correspondence between the subcategory of maltreatment experienced from one generation to the next. Compared with other subcategories, G1 Sexual Abuse history was the most likely to predict G2 offspring maltreatment. Further studies are needed to delineate underlying mechanisms and to develop intervention strategies aimed at preventing intergenerational transmission.
Keywords: trauma, family, adolescents, intergenerational trauma, child abuse, child neglect, global mental health, North America
Procedia PDF Downloads 84
13873 Hub Port Positioning and Route Planning of Feeder Lines for Regional Transportation Network
Authors: Huang Xiaoling, Liu Lufeng
Abstract:
In this paper, we seek to determine a reasonable local hub port and optimal routes for a containership fleet performing pick-ups and deliveries between the hub and spoke ports in the same region. The relationship between a hub port and traffic on feeder lines is analyzed. A new network planning method is proposed: an integrated hub port location and route design problem, formulated as a capacitated vehicle routing problem with pick-ups, deliveries, and time deadlines, is solved using an improved genetic algorithm to position the hub port and establish routes for the containership fleet. Results on the performance of the algorithm and the feasibility of the approach show that a relatively small fleet of containerships could provide efficient services within deadlines.
Keywords: route planning, hub port location, container feeder service, regional transportation network
Procedia PDF Downloads 447
13872 Optimal and Critical Path Analysis of State Transportation Network Using Neo4J
Authors: Pallavi Bhogaram, Xiaolong Wu, Min He, Onyedikachi Okenwa
Abstract:
A transportation network is a realization of a spatial network, describing a structure which permits either vehicular movement or the flow of some commodity. Examples include road networks, railways, air routes, pipelines, and many more. The transportation network plays a vital role in maintaining the vigor of the nation's economy. Hence, ensuring the network stays resilient at all times, especially in the face of challenges such as heavy traffic loads and large-scale natural disasters, is of utmost importance. In this paper, we used the Neo4j application to develop the graph. Neo4j is a leading open-source, NoSQL, native graph database that implements an ACID-compliant transactional backend for applications. The Southern California network model is developed using the Neo4j application, and the most critical and optimal nodes and paths in the network are obtained using centrality algorithms. The edge betweenness centrality algorithm calculates the critical or optimal paths using Yen's k-shortest paths algorithm, and the node betweenness centrality algorithm calculates the amount of influence a node has over the network. The preliminary study results confirm that the Neo4j application can be a suitable tool to study the important nodes and the critical paths for a major congested metropolitan area.
Keywords: critical path, transportation network, connectivity reliability, network model, Neo4j application, edge betweenness centrality index
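As a rough illustration of the centrality measures named above, the sketch below computes node and edge betweenness and enumerates k-shortest paths with networkx on a toy road graph; this is a conceptual stand-in, not the authors' Neo4j implementation, and the place names and weights are invented.

```python
# Conceptual stand-in (not the Neo4j implementation used in the paper):
# node/edge betweenness centrality and k-shortest paths on a toy road graph.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("LA", "Anaheim", 42.0),
    ("LA", "LongBeach", 35.0),
    ("Anaheim", "Irvine", 28.0),
    ("LongBeach", "Irvine", 30.0),
    ("Irvine", "SanDiego", 120.0),
])

# Node betweenness: how much influence a node has over shortest paths.
node_bc = nx.betweenness_centrality(G, weight="weight")

# Edge betweenness: identifies critical links carrying many shortest paths.
edge_bc = nx.edge_betweenness_centrality(G, weight="weight")

# k shortest loopless paths between two nodes (Yen-style enumeration).
k_paths = list(nx.shortest_simple_paths(G, "LA", "SanDiego", weight="weight"))[:3]

print(sorted(node_bc.items(), key=lambda kv: -kv[1]))
print(sorted(edge_bc.items(), key=lambda kv: -kv[1]))
print(k_paths)
```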
Procedia PDF Downloads 134
13871 Umbrella Reinforcement Learning – A Tool for Hard Problems
Authors: Egor E. Nuzhin, Nikolay V. Brilliantov
Abstract:
We propose an approach for addressing Reinforcement Learning (RL) problems. It combines the ideas of umbrella sampling, borrowed from the Monte Carlo techniques of computational physics and chemistry, with optimal control methods, and is realized on the basis of neural networks. This results in a powerful algorithm designed to solve hard RL problems – problems with long-delayed rewards, state-trap sticking, and a lack of terminal states. It outperforms prominent algorithms such as PPO, RND, iLQR, and VI, which are among the most efficient for hard problems. The new algorithm deals with a continuous ensemble of agents and an expected return that includes the ensemble entropy. This results in a quick and efficient search for the optimal policy, in terms of the "exploration-exploitation trade-off", in the state-action space.
Keywords: umbrella sampling, reinforcement learning, policy gradient, dynamic programming
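The entropy-augmented objective mentioned above can be written, in a standard (assumed) form, as an expected discounted return plus an entropy bonus; the paper's exact ensemble formulation may differ:

```latex
J(\pi) \;=\; \mathbb{E}_{\tau \sim \pi}\!\left[\sum_{t=0}^{T} \gamma^{t}\, r(s_t, a_t)\right] \;+\; \lambda\, \mathcal{H}(\pi),
\qquad
\mathcal{H}(\pi) \;=\; -\,\mathbb{E}_{s}\!\left[\sum_{a} \pi(a\mid s)\,\log \pi(a\mid s)\right]
```

Here the entropy weight λ controls the balance between exploitation of reward and exploration of the state-action space.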
Procedia PDF Downloads 21
13870 Battery Replacement Strategy for Electric AGVs in an Automated Container Terminal
Authors: Jiheon Park, Taekwang Kim, Kwang Ryel Ryu
Abstract:
Electric automated guided vehicles (AGVs) are becoming popular in many automated container terminals nowadays because they are pollution-free and environmentally friendly vehicles for transporting containers within the terminal. Since efficient operation of AGVs is critical for the productivity of the container terminal, the replacement of AGV batteries must be conducted in a strategic way to minimize undesirable transportation interruptions. While overly frequent replacement may lead to a loss of terminal productivity by delaying container deliveries, missing the right timing of battery replacement can result in a dead AGV that causes a more severe productivity loss due to the extra effort required for post-treatment. In this paper, we propose a strategy for battery replacement based on a scoring function of multiple criteria, taking into account the current battery level, the distances to different battery stations, and the progress of the terminal job operations. The strategy is optimized using a genetic algorithm with the objectives of minimizing the total time spent on battery replacement and maximizing terminal productivity.
Keywords: AGV operation, automated container terminal, battery replacement, electric AGV, strategy optimization
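A minimal sketch of a multi-criteria replacement score of the kind described above; the feature names, normalizations, and weights are assumptions, and in the paper's setting the weights would be the quantities tuned by the genetic algorithm.

```python
# Minimal sketch of a multi-criteria battery-replacement score for an AGV.
# The weights w1..w3 are the kind of parameters the paper tunes with a
# genetic algorithm; feature names and normalizations are assumptions.
def replacement_score(battery_level, dist_to_nearest_station_m,
                      job_progress, w1=0.5, w2=0.3, w3=0.2):
    """Higher score -> send the AGV for battery replacement sooner."""
    urgency = 1.0 - battery_level                      # low charge -> urgent
    proximity = 1.0 / (1.0 + dist_to_nearest_station_m / 1000.0)
    completion = job_progress                          # swap near job completion
    return w1 * urgency + w2 * proximity + w3 * completion

# Example: 20% charge, 300 m from a station, 90% through its current job.
print(replacement_score(0.20, 300.0, 0.90))
```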
Procedia PDF Downloads 389
13869 Optimization of Topology-Aware Job Allocation on a High-Performance Computing Cluster by Neural Simulated Annealing
Authors: Zekang Lan, Yan Xu, Yingkun Huang, Dian Huang, Shengzhong Feng
Abstract:
Jobs on high-performance computing (HPC) clusters can suffer significant performance degradation due to inter-job network interference. The topology-aware job allocation problem (TJAP) decides how to dedicate nodes to specific applications to mitigate inter-job network interference. In this paper, we study the window-based TJAP on a fat-tree network, aiming to minimize the communication hop cost, a defined inter-job interference metric. The window-based approach to scheduling repeats periodically, taking the jobs in the queue and solving an assignment problem that maps jobs to the available nodes. Two special allocation strategies are considered, i.e., the static continuity assignment strategy (SCAS) and the dynamic continuity assignment strategy (DCAS). For the SCAS, a 0-1 integer program is developed. For the DCAS, an approach called neural simulated annealing (NSA) is proposed, which is an extension of simulated annealing (SA) that learns a repair operator and employs it in a guided heuristic search. The efficacy of NSA is demonstrated with a computational study against SA and SCIP. The results of numerical experiments indicate that both the model and the algorithm proposed in this paper are effective.
Keywords: high-performance computing, job allocation, neural simulated annealing, topology-aware
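For orientation, the sketch below shows a plain simulated-annealing search over job-to-node assignments with a toy hop-cost metric; the neural repair operator that distinguishes NSA from SA is omitted, and the cost function is only a stand-in for the paper's interference metric.

```python
# Plain simulated annealing over job-to-node assignments (the learned repair
# operator of NSA is omitted; the hop-cost metric below is a toy stand-in).
import math
import random

def hop_cost(assignment, job_sizes):
    """Toy interference metric: a job spread over distant nodes pays more hops."""
    return sum((max(nodes) - min(nodes)) * job_sizes[j]
               for j, nodes in assignment.items())

def neighbor(assignment, all_nodes):
    """Swap one allocated node of a random job with an unused node."""
    new = {j: set(ns) for j, ns in assignment.items()}
    used = set().union(*new.values())
    free = [n for n in all_nodes if n not in used]
    job = random.choice(list(new))
    old = random.choice(sorted(new[job]))
    if free:
        new[job].remove(old)
        new[job].add(random.choice(free))
    return {j: sorted(ns) for j, ns in new.items()}

def simulated_annealing(assignment, job_sizes, all_nodes,
                        t0=10.0, cooling=0.95, steps=2000):
    best = cur = assignment
    best_c = cur_c = hop_cost(cur, job_sizes)
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, all_nodes)
        c = hop_cost(cand, job_sizes)
        if c < cur_c or random.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t = max(t * cooling, 1e-6)
    return best, best_c

nodes = list(range(16))                         # e.g. 16 fat-tree leaf nodes
jobs = {"A": [0, 7, 9], "B": [3, 12], "C": [5]}
sizes = {"A": 3, "B": 2, "C": 1}
print(simulated_annealing(jobs, sizes, nodes))
```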
Procedia PDF Downloads 118
13868 A Similarity/Dissimilarity Measure to Biological Sequence Alignment
Authors: Muhammad A. Khan, Waseem Shahzad
Abstract:
Analysis of protein sequences is carried out to discover their structural and ancestral relationships. Sequence similarity determines similar protein structures and similar functions and supports homology detection. Biological sequences composed of amino acid residues or nucleotides provide significant information through sequence alignment. In this paper, we present a new similarity/dissimilarity measure for sequence alignment based on the primary structure of a protein. The approach finds the distance between two given sequences using the novel sequence alignment algorithm and a mathematical model. The algorithm runs at a time complexity of O(n²). A distance matrix is generated to construct a phylogenetic tree of different species. The new similarity/dissimilarity measure outperforms other existing methods.
Keywords: alignment, distance, homology, mathematical model, phylogenetic tree
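A hedged stand-in for the dissimilarity computation described above: an O(n²) dynamic-programming edit distance, normalized by sequence length and assembled into a distance matrix of the kind that feeds phylogenetic tree construction; the paper's novel measure is not reproduced here.

```python
# Illustrative stand-in (not the paper's novel metric): an O(n^2)
# dynamic-programming edit distance, normalized and assembled into a
# distance matrix for downstream tree construction.
def edit_distance(a, b):
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[n][m]

def distance_matrix(seqs):
    names = list(seqs)
    return {(x, y): edit_distance(seqs[x], seqs[y]) / max(len(seqs[x]), len(seqs[y]))
            for x in names for y in names}

# Hypothetical short protein fragments.
seqs = {"human": "MKTAYIAKQR", "mouse": "MKTAYVAKQR", "yeast": "MSTNPKPQRK"}
print(distance_matrix(seqs))
```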
Procedia PDF Downloads 178
13867 Urban Impervious and its Impact on Storm Water Drainage Systems
Authors: Ratul Das, Udit Narayan Das
Abstract:
Surface imperviousness in urban areas brings significant changes to storm water drainage systems, and some recent studies reveal that the impervious surface that passes storm water runoff directly to drainage systems through storm water collection systems, called the directly connected impervious area (DCIA), is a more effective parameter than the total impervious area (TIA) for computation of surface runoff. In the present study, the extents of DCIA and TIA were computed for a small sub-urban area of Agartala, the capital of the state of Tripura. The total impervious surfaces covering the study area were identified on the existing storm water drainage map from the land use map of the study area in association with field assessments. Also, the DCIA assessed through field survey was compared to the DCIA computed by empirical relationships provided by other investigators. For the assessment of DCIA in the study area, two methods were adopted. First, the study area was partitioned into four drainage sub-zones based on average basin slope and the layout of existing storm water drainage systems. In the second method, the entire study area was divided into small grids. Each grid or parcel comprised a 20 m × 20 m area. Total impervious surfaces were delineated from the land use map in association with on-site assessments for efficient determination of DCIA within each sub-area and grid. There was a wide variation in percent connectivity of TIA across the sub-drainage zones and grids. In the present study, the total impervious area comprises 36.23% of the study area, of which 21.85% of the total study area is connected to storm water collection systems. The total pervious area (TPA) and others comprise 53.20% and 10.56% of the total area, respectively. TIA recorded by field assessment (36.23%) was considerably higher than that calculated from the available land use map (22%). From the analysis of recorded data, it is observed that the average percentage of connectivity (% DCIA with respect to TIA) is 60.31%. The analysis also reveals that the observed DCIA lies below the line of optimal impervious surface connectivity for a sub-urban area provided by other investigators, which indicates the probable reason for waterlogging conditions in many parts of the study area during the monsoon period.
Keywords: drainage, imperviousness, runoff, storm water
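The reported connectivity figure can be checked directly from the abstract's percentages:

```python
# Worked check of the connectivity figure reported above: percent DCIA
# relative to TIA, using the study-area percentages from the abstract.
tia_pct = 36.23   # total impervious area, % of study area (field assessment)
dcia_pct = 21.85  # directly connected impervious area, % of study area

connectivity = 100.0 * dcia_pct / tia_pct
print(f"connectivity = {connectivity:.2f}% of TIA")   # ~60.31%
```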
Procedia PDF Downloads 351
13866 DEA-Based Variable Structure Position Control of DC Servo Motor
Authors: Ladan Maijama’a, Jibril D. Jiya, Ejike C. Anene
Abstract:
This paper presents a Differential Evolution Algorithm (DEA)-based Variable Structure Position Control (VSPC) of a laboratory DC servomotor (LDCSM). The DEA is employed for optimal tuning of the Variable Structure Control (VSC) parameters for position control of the DC servomotor. The VSC combines the techniques of Sliding Mode Control (SMC), which gives the advantages of small overshoot, improved step-response characteristics, faster dynamic response, adaptability to plant parameter variations, and suppressed influence of disturbances and uncertainties in system behavior. Simulations of the VSC parameter adjustment by the DEA were performed on the Matlab 2010a platform and yielded better dynamic performance compared with the untuned VSC design.
Keywords: differential evolution algorithm, laboratory DC servomotor, sliding mode control, variable structure control
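A hedged sketch of the tuning step: SciPy's differential evolution minimizing a toy step-response cost for an assumed second-order plant. The plant, cost weights, and bounds are illustrative and are not the paper's LDCSM model or its sliding-surface parameters.

```python
# Hedged sketch: tuning two controller gains with differential evolution by
# minimizing a toy tracking cost for an assumed second-order plant (not the
# paper's laboratory DC servomotor or its sliding-mode surface parameters).
from scipy.optimize import differential_evolution

def step_cost(gains, dt=0.001, t_end=1.0):
    kp, kd = gains
    # Assumed plant: theta_ddot = -a*theta_dot + b*u  (a, b illustrative)
    a, b = 5.0, 40.0
    theta, omega, ref = 0.0, 0.0, 1.0
    cost = 0.0
    for _ in range(int(t_end / dt)):
        e = ref - theta
        u = kp * e - kd * omega                 # PD-style control action
        omega += (-a * omega + b * u) * dt
        theta += omega * dt
        cost += (abs(e) + 0.01 * abs(u)) * dt   # tracking error + control effort
    return cost

result = differential_evolution(step_cost, bounds=[(0.1, 50.0), (0.01, 5.0)],
                                maxiter=50, seed=0)
print(result.x, result.fun)
```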
Procedia PDF Downloads 415
13865 Scintigraphic Image Coding of Region of Interest Based on SPIHT Algorithm Using Global Thresholding and Huffman Coding
Authors: A. Seddiki, M. Djebbouri, D. Guerchi
Abstract:
Medical imaging produces human body pictures in digital form. Since these imaging techniques produce prohibitive amounts of data, compression is necessary for storage and communication purposes. Many current compression schemes provide a very high compression rate but with considerable loss of quality. On the other hand, in some areas of medicine, it may be sufficient to maintain high image quality only in the region of interest (ROI). This paper discusses a contribution to lossless compression of the region of interest in scintigraphic images based on the SPIHT algorithm and global transform thresholding, using Huffman coding.
Keywords: global thresholding transform, Huffman coding, region of interest, SPIHT coding, scintigraphic images
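A minimal sketch of the entropy-coding stage (Huffman code construction) referenced above; the SPIHT and ROI-thresholding stages are not reproduced.

```python
# Minimal Huffman code construction; shorter codes go to more frequent
# symbols (e.g. quantized wavelet coefficients). SPIHT/ROI stages omitted.
import heapq
from collections import Counter

def huffman_codes(symbols):
    freq = Counter(symbols)
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol case
        return {s: "0" for s in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes([0, 0, 0, 1, 1, 2, 3, 3, 3, 3])
print(codes)
```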
Procedia PDF Downloads 368
13864 New Test Algorithm to Detect Acute and Chronic HIV Infection Using a 4th Generation Combo Test
Authors: Barun K. De
Abstract:
Acquired immunodeficiency syndrome (AIDS) is caused by two types of human immunodeficiency viruses, collectively designated HIV. HIV infection is spreading globally, particularly in developing countries. Before an individual is diagnosed with HIV, the disease goes through different phases. First there is an acute early phase that is followed by an established or chronic phase. Subsequently, there is a latency period, after which the individual becomes immunodeficient. It is in the acute phase that an individual is highly infectious due to a high viral load. Presently, HIV diagnosis involves the use of tests that do not detect the acute-phase infection, during which both the viral RNA and the p24 antigen are expressed. Instead, these less sensitive tests detect antibodies to viral antigens, which typically sero-convert later in the disease process following acute infection. These antibodies are detected in both asymptomatic HIV-infected individuals and AIDS patients. Studies indicate that early diagnosis and treatment of HIV infection can reduce medical costs, improve survival, and reduce spreading of infection to new uninfected partners. Newer 4th-generation combination antigen/antibody tests are highly sensitive and specific for detection of acute and established HIV infection (HIV-1 and HIV-2), enabling immediate linkage to care. The CDC (Centers for Disease Control and Prevention, USA) recently recommended an algorithm involving three different tests to screen for and diagnose acute and established infections of HIV-1 and HIV-2 in a general population. Initially, a 4th-generation combo test detects the viral p24 antigen and specific antibodies against HIV-1 and HIV-2 envelope proteins. If the test is positive, it is followed by a second test, known as a differentiation assay, which detects antibodies against specific HIV-1 and HIV-2 envelope proteins, confirming an established infection of HIV-1 or HIV-2. However, if it is negative, another test is performed that measures viral load, confirming an acute HIV-1 infection. Screening results of a Phoenix area population detected 0.3% new HIV infections, among which 32.4% were acute cases. Studies in the U.S. indicate that this algorithm effectively reduces HIV infection through immediate treatment and education following diagnosis.
Keywords: new algorithm, HIV, diagnosis, infection
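The three-step testing flow described above, encoded as simple decision logic (terminology simplified; illustrative only, not a clinical tool):

```python
# The three-step testing flow described in the abstract, as decision logic.
def hiv_testing_algorithm(combo_reactive, differentiation_result=None,
                          rna_detected=None):
    """combo_reactive: 4th-generation Ag/Ab combo test result.
    differentiation_result: 'HIV-1', 'HIV-2', or 'negative' (if combo reactive).
    rna_detected: viral-load/NAT result (if the differentiation assay is negative)."""
    if not combo_reactive:
        return "HIV negative (no acute or established infection detected)"
    if differentiation_result in ("HIV-1", "HIV-2"):
        return f"Established {differentiation_result} infection"
    if rna_detected:
        return "Acute HIV-1 infection"
    return "Combo reactive, antibodies and RNA not detected (likely false reactive)"

print(hiv_testing_algorithm(True, "negative", True))   # -> acute HIV-1 infection
```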
Procedia PDF Downloads 411
13863 Contribution of Community-Based House-to-House (H2H) Active Tuberculosis (TB) Case Finding (ACF) to the Increase in TB Notification in Nigeria: Kano State Experience 2012 to 2022
Authors: Ibrahim Umar, S Chindo, A Rajab
Abstract:
Background: TB remains a disease of public health concern in Nigeria, with an estimated incidence rate of 219/100,000. Kano has the second-highest TB burden in Nigeria and is the leading state with the highest consistent yearly TB notification. House-to-house (H2H) active case search in the community was found to make a major contribution to the total TB notification in the state. Aims and Objective: To showcase the impact of H2H community active TB case search (ACF) on yearly TB notification in Kano State, Northern Nigeria, from 2012 to 2022. Methodology: This is a retrospective descriptive study based on the analysis of routine quarterly and yearly TB data collected in the state. Data were analyzed using Power BI with a statistical significance level of <0.05. Results: Between 2012 and 2013 there was no house-to-house active TB case search in Nigeria, and Kano had zero community contribution to TB notification in those years. However, in 2014, with the introduction of H2H active TB case search, Kano notified 6,014 TB cases, of which 113 came from community ACF, translating to a 2% contribution to total TB notification. From 2014 to 2022 there was a progressive increase in the community contribution to TB case notification, from 113 out of 6,014 total TB patients notified (2014) to 11,799 out of 26,371 TB patients notified (2022) in Kano State. This translated to a 45% community contribution to total TB case notification. Discussion: A remarkable increase in the community contribution to total TB case notification in Kano State was achieved in 2022, with 11,799 TB cases notified from community active TB case search out of the total of 26,731 TB cases notified in Kano State, Nigeria. Conclusion: This research has shown that community-based H2H active TB case search through Community TB Workers (CTWs) is an excellent strategy for finding the missing TB cases towards ending TB in the world.
Keywords: tuberculosis (TB), active case search (ACF), house-to-house (H2H), community TB workers (CTWs)
Procedia PDF Downloads 92
13862 Comparative Study between Two Methods for Extracting Pomegranate Juice and Their Effect on Product Quality
Authors: Amani Aljahani
Abstract:
The purpose of the study was to identify the physical and chemical properties of pomegranate juices and to evaluate their sensory quality. The samples were collected from local markets and included four types of pomegranate produced in the western and southern regions of the Kingdom. The juices were extracted by manual squeezing and by centrifugal force. The juices were analyzed periodically for their content of organic acids, total acidity, glucose and fructose, total sugars, and anthocyanins. A panel of 30 judges evaluated the juices for their color, smell, taste, consistency, and general acceptance using a scale prepared for that purpose. Results showed that pomegranate juices were acidic in nature (pH between 3.56 and 4.27). The major organic acids were citric, tartaric, malic, and oxalic acids; total organic acidity was between 596.32 and 763.49 ng/100 ml and increased over storage time; however, total acidity was almost stable over time except for the southern-produced juice. The major monosaccharides in pomegranate juices were glucose and fructose. Their concentration in the juice varied with storage. On average, glucose concentration was between 6.68 and 7.71 g/100 ml, while fructose concentration was between 6.72 and 7.98 g/100 ml. Total sugar content was 16% on average and dropped with storage. Anthocyanin concentration increased after five hours of storage, then dropped and stabilized over time regardless of the method of treatment. In addition, sensory evaluation of the juices showed general acceptance in terms of color, flavor, and consistency, but the preferred juice was that of the western variety extracted by squeezing.
Keywords: extracting, pomegranate, juice, quality
Procedia PDF Downloads 350
13861 Cord Blood Hematopoietic Stem Cell Expansion Ability of Mesenchymal Stem Cells Isolated From Different Sources
Authors: Ana M. Lara, Manuela Llano, Felipe Gaitán, Rosa H. Bustos, Ana Maria Perdomo-Arciniegas, Ximena Bonilla
Abstract:
Umbilical cord blood is used as a source of progenitor and stem cells for regeneration of the hematopoietic and immune system to treat patients with different hematological or non-hematological diseases. This stem cell source represents an advantage over the use of bone marrow or mobilized peripheral blood because it has a lower incidence rate of graft-versus-host disease, probably due to fewer immunological compatibility restrictions. However, its low cellular dose limits its use to pediatric patients. This work proposes the standardization of a cell expansion technique to compensate for the dose of infused cells through the ex-vivo manipulation of hematopoietic progenitor cells from umbilical cord blood before transplantation. The expansion model is carried out through co-cultures with mesenchymal stem cells (MSC) from bone marrow (BM) and from less explored fetal tissues such as Wharton's jelly (WJ) and umbilical cord blood (UCB). Initially, a master cell bank of primary mesenchymal stem cells isolated from different sources was established and characterized following the indications of the International Society for Cellular Therapy (ISCT). Additionally, the effect of a short 25 Gy cycle of gamma irradiation, applied to arrest the mesenchymal cell cycle, on the capacity to support the expansion of hematopoietic stem cells from umbilical cord blood was evaluated. The results show that co-cultures with MSC from WJ and UCB allow the cellular dose of HSPC to be increased between 5 and 16 times, with a support capacity similar to that of BM. In addition, the functionality of the hematopoietic stem and progenitor cells (HSPC) was evaluated through their migration capacity, their differentiation capacity during culture time (assessed by flow cytometry of membrane markers associated with lineage-committed progenitors), their clonogenic potential, and the secretome profile during the expansion process. So far, the treatment with gamma irradiation maintains the hematopoietic support capacity of mesenchymal stem cells from the three sources studied compared to treatments without irradiation, favoring the use of fetal tissues that are generally discarded to obtain mesenchymal cell lines for ex-vivo expansion systems. With the results obtained, a standardized protocol will be achieved that will contribute to the development of ex-vivo expansion with MSC on a larger scale, enabling its clinical use and expanding its application to adults.
Keywords: ex-vivo expansion, hematopoietic stem cells, hematopoietic stem cell transplantation, mesenchymal stem cells, umbilical cord blood
Procedia PDF Downloads 115
13860 The Effects of Continuous and Interval Aerobic Exercises with Moderate Intensity on Serum Levels of Glial Cell Line-Derived Neurotrophic Factor and Aerobic Capacity in Obese Children
Authors: Ali Golestani, Vahid Naseri, Hossein Taheri
Abstract:
Recently, some studies have examined the effect of exercise on neurotrophic factors influencing the growth, protection, plasticity, and function of central and peripheral nerve cells. The aim of this study was to investigate the effects of continuous and interval aerobic exercise of moderate intensity on serum levels of glial cell line-derived neurotrophic factor (GDNF) and aerobic capacity in obese children. Twenty-one obese students with an average age of 13.6 ± 0.5 years, height of 171 ± 5 cm, and BMI of 32 ± 1.2 were divided randomly into control, continuous aerobic, and interval aerobic groups. The training protocol included continuous or interval aerobic exercise at moderate intensity (50-65% MHR), three times per week for 10 weeks. Forty-eight hours before and after execution of the protocol, blood samples were taken from the participants and their GDNF serum levels were measured by ELISA. Aerobic power was estimated using the shuttle-run test. T-test results indicated a small increase in GDNF serum levels, which was not statistically significant (p = 0.11). In addition, the results of ANOVA did not show any significant difference between continuous and interval aerobic training in GDNF serum levels, but aerobic capacity significantly increased (p = 0.012). Although continuous and interval aerobic exercise improve aerobic power in obese children, they had no significant effect on serum levels of GDNF.
Keywords: aerobic power, continuous aerobic training, glial cell line-derived neurotrophic factor (GDNF), interval aerobic training, obese children
Procedia PDF Downloads 177
13859 Computer-Aided Detection of Simultaneous Abdominal Organ CT Images by Iterative Watershed Transform
Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid
Abstract:
Interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis applications. Segmentation of the liver, spleen, and kidneys is regarded as a major primary step in the computer-aided diagnosis of abdominal organ diseases. In this paper, a semi-automated method is presented for abdominal organ segmentation in medical image data using mathematical morphology. Our proposed method is based on hierarchical segmentation and the watershed algorithm. In our approach, a powerful technique has been designed to suppress over-segmentation based on the mosaic image and on the computation of the watershed transform. Our algorithm proceeds in two parts. In the first, we seek to improve the quality of the gradient-mosaic image. In this step, we propose a method for improving the gradient-mosaic image by applying an anisotropic diffusion filter followed by morphological filters. Thereafter, we proceed to the hierarchical segmentation of the liver, spleen, and kidneys. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with a manual segmentation performed by an expert. The experimental results are described in the last part of this work.
Keywords: anisotropic diffusion filter, CT images, morphological filter, mosaic image, simultaneous organ segmentation, watershed algorithm
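A hedged sketch of marker-controlled watershed segmentation with scikit-image on a synthetic image; the anisotropic-diffusion, mosaic-image, and hierarchical steps applied by the authors to CT data are not reproduced here.

```python
# Marker-controlled watershed on a synthetic image (scikit-image); the
# paper's anisotropic diffusion, mosaic-image and hierarchical steps on CT
# volumes are not reproduced.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, sobel
from skimage.segmentation import watershed

# Synthetic "organs": two bright blobs on a dark background, plus noise.
img = np.zeros((128, 128))
img[20:60, 20:60] = 1.0
img[70:110, 70:110] = 0.8
img = gaussian(img, sigma=2) + 0.05 * np.random.default_rng(0).normal(size=img.shape)

gradient = sobel(img)                       # edge-strength map
markers = np.zeros_like(img, dtype=int)     # seeds suppress over-segmentation
markers[img < 0.1] = 1                      # background
markers[40, 40] = 2                         # "organ" 1 seed
markers[90, 90] = 3                         # "organ" 2 seed

labels = watershed(gradient, markers)
print(np.unique(labels), ndi.sum(np.ones_like(labels), labels, index=[2, 3]))
```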
Procedia PDF Downloads 441
13858 Intelligent Rescheduling Trains for Air Pollution Management
Authors: Kainat Affrin, P. Reshma, G. Narendra Kumar
Abstract:
Timetable optimization is essential for the rescheduling and routing of trains in real time. Trains are scheduled in parallel with road transport vehicles to the same destination. As the number of trains is restricted by the single track, customers usually opt for road transport for frequent use. Air pollution increases as the density of vehicles in road transport increases. Use of an alternative mode of transport such as the train helps in reducing air pollution. This paper mainly aims at attracting passengers to train transport by proper rescheduling of trains using a hybrid of the stop-skip algorithm and an iterative convex programming algorithm. Bi-directional rescheduling of trains is achieved on a single track with dynamic dual time and varying stops. The introduction of more trains attracts customers to use rail transport frequently, thereby decreasing pollution. The results are simulated using Network Simulator (NS-2).
Keywords: air pollution, AODV, re-scheduling, WSNs
Procedia PDF Downloads 361
13857 Achievable Average Secrecy Rates over Bank of Parallel Independent Fading Channels with Friendly Jamming
Authors: Munnujahan Ara
Abstract:
In this paper, we investigate the effect of friendly-jamming power allocation strategies on the achievable average secrecy rate over a bank of parallel fading wiretap channels. We investigate the achievable average secrecy rate in parallel fading wiretap channels subject to Rayleigh and Rician fading. The achievable average secrecy rate in the presence of a line-of-sight component in the jammer channel is also evaluated. Moreover, we study the detrimental effect of correlation across the parallel sub-channels and evaluate the corresponding decrease in the achievable average secrecy rate for the various fading configurations. We also investigate the trade-off between the transmission power and the jamming power for a fixed total power budget. Our results, which are applicable to current orthogonal frequency division multiplexing (OFDM) communication systems, shed further light on the achievable average secrecy rates over a bank of parallel fading channels in the presence of friendly jammers.
Keywords: fading parallel channels, wire-tap channel, OFDM, secrecy capacity, power allocation
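Under the usual formulation (notation assumed here), the achievable average secrecy rate over N parallel sub-channels is the expected sum of per-sub-channel rate differences between the main and eavesdropper links, with transmit and jamming powers sharing a total budget; the jamming power acts through the eavesdropper's SNR:

```latex
\bar{C}_s \;=\; \mathbb{E}\!\left[\,\sum_{k=1}^{N}\Big[\log_2\!\big(1+\mathrm{SNR}^{(k)}_{\mathrm{main}}\big)-\log_2\!\big(1+\mathrm{SNR}^{(k)}_{\mathrm{eve}}\big)\Big]^{+}\right],
\qquad
\sum_{k=1}^{N}\big(P^{(k)}_{\mathrm{tx}} + P^{(k)}_{\mathrm{jam}}\big) \;\le\; P_{\mathrm{tot}}
```

where [x]⁺ = max(x, 0) and the expectation is taken over the fading states.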
Procedia PDF Downloads 512
13856 Prevalence of Pretreatment Drug HIV-1 Mutations in Moscow, Russia
Authors: Daria Zabolotnaya, Svetlana Degtyareva, Veronika Kanestri, Danila Konnov
Abstract:
An adequate choice of the initial antiretroviral treatment determines the treatment efficacy. In the clinical guidelines in Russia, non-nucleoside reverse transcriptase inhibitors (NNRTIs) are still considered an option for first-line treatment, while pretreatment drug resistance (PDR) testing is not routinely performed. We conducted a retrospective cohort study in HIV-positive, treatment-naïve patients of the H-clinic (Moscow, Russia) who underwent PDR testing from July 2017 to November 2021. All the information was obtained from the medical records anonymously. We analyzed the mutations in the reverse transcriptase and protease genes. RT sequences were obtained by the AmpliSens HIV-Resist-Seq kit. Drug resistance was defined using the HIVdb Program v. 8.9-1. PDR was estimated using the Stanford algorithm. Descriptive statistics were performed in Excel (Microsoft Office, 2019). A total of 261 HIV-1 infected patients were enrolled in the study, including 197 (75.5%) male and 64 (24.5%) female. The mean age was 34.6 ± 8.3 years. The median CD4 count was 521 cells/µl (IQR 367-687 cells/µl). Data on risk factors for HIV infection were scarce. The total number of strains containing mutations in the reverse transcriptase gene was 75 (28.7%). Of these, 5 (1.9%) mutations were associated with PDR to nucleoside reverse transcriptase inhibitors (NRTIs) and 30 (11.5%) with PDR to NNRTIs. The number of strains with mutations in the protease gene was 43 (16.5%); of these, only 3 (1.1%) mutations were associated with resistance to protease inhibitors. For NNRTIs, the most prevalent PDR mutations were E138A and V106I. Most of the HIV variants exhibited a single PDR mutation; 2 mutations were found in 3 samples. Most HIV variants with a PDR mutation displayed a single drug-class resistance mutation; 2/37 (5.4%) strains had both NRTI and NNRTI mutations. There were no strains identified with PDR mutations to all three drug classes. Though earlier data demonstrated a lower level of PDR in the treatment-naïve HIV population in Russia, and our cohort may not be fully representative as it is taken from a private clinic, it reflects the trend of increasing PDR, especially to NNRTIs. Therefore, we consider it necessary either to perform pretreatment testing or to give priority to other drug classes as first-line treatment.
Keywords: HIV, resistance, mutations, treatment
Procedia PDF Downloads 94
13855 Decision Trees Constructing Based on K-Means Clustering Algorithm
Authors: Loai Abdallah, Malik Yousef
Abstract:
A domain space for the data should reflect the actual similarity between objects, since objects belonging to the same cluster usually share some common traits even though their geometric distance might be relatively large. In general, the Euclidean distance between data points represented by a large number of features does not capture the actual relation between those points. In this study, we propose a new method to construct a different space that is based on clustering to form a new distance metric. The new distance space is based on ensemble clustering (EC). The EC distance space is defined by tracking the membership of the points over multiple runs of a clustering algorithm. Over this distance, we train the decision tree classifier (DT-EC). The results obtained by applying DT-EC on 10 datasets confirm our hypothesis that embedding the EC space as a distance metric improves the performance.
Keywords: ensemble clustering, decision trees, classification, K nearest neighbors
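A hedged sketch of the ensemble-clustering idea: several K-means runs provide membership labels that form a new representation on which a decision tree is trained; the dataset, number of runs, and range of k are illustrative, not the authors' exact construction.

```python
# Ensemble-clustering (EC) sketch: run K-means several times, collect the
# membership labels as a new representation, and train a decision tree on it.
# Parameter choices are illustrative, not the authors' exact construction.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Each clustering run contributes one membership "coordinate".
rng = np.random.default_rng(0)
memberships = []
for run in range(10):
    k = int(rng.integers(2, 8))                # vary k across runs
    km = KMeans(n_clusters=k, n_init=10, random_state=run).fit(X)
    memberships.append(km.labels_)
X_ec = np.stack(memberships, axis=1)           # EC-space representation

X_tr, X_te, y_tr, y_te = train_test_split(X_ec, y, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("DT accuracy in EC space:", clf.score(X_te, y_te))
```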
Procedia PDF Downloads 191
13854 Household Solid Waste Generation per Capita and Management Behaviour in Mthatha City, South Africa
Authors: Vuyayo Tsheleza, Simbarashe Ndhleve, Christopher Mpundu Musampa
Abstract:
Mismanagement of waste is continuously emerging as a rising malpractice in most developing countries, especially in fast-growing cities. Household solid waste in Mthatha has been reported to be one of the problems facing the city and is overwhelming local authorities, as it is beyond the environmental and management capacity of the existing waste management system. This study estimates per capita waste generation, the quantity of different waste types generated by inhabitants of formal and informal settlements in Mthatha, as well as waste management practices in the aforementioned socio-economic strata. A total of 206 households were systematically selected for the study using stratified random sampling, categorized into formal and informal settlements. Data on household waste generation rate, composition, awareness, and household waste management behaviour and practices were gathered through mixed methods. The sampled households from both formal and informal settlements, with a total of 684 people, generated 1,949 kg per week. This translates to 2.84 kg per capita per week. On average, the rate of solid waste generation per capita was 0.40 kg per day for a person living in an informal settlement and 0.56 kg per day for a person living in a formal settlement. When recorded in descending order, the proportion of food waste accounted for the most generated waste at approximately 23.7%, followed by disposable nappies at 15%, papers and cardboard at 13.34%, glass at 13.03%, metals at 11.99%, plastics at 11.58%, residue at 5.17%, textiles at 3.93%, and leather and rubber at 2.28% as the least generated waste type. Different waste management practices were reported in formal and informal settlements, with formal settlements proving to be more concerned about environmental management compared to their informal counterparts. Understanding attitudes and perceptions on waste management, waste types, and per capita solid waste generation rates can help evolve appropriate waste management strategies based on the principles of reduce, re-use, recycle, and environmentally sound disposal, and also assist in projecting future waste generation rates.
Keywords: awareness, characterisation, per capita, quantification
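The per-capita figure follows directly from the reported totals (the small difference from the reported 2.84 kg is rounding):

```python
# Quick check of the per-capita figure from the reported totals.
total_waste_kg_per_week = 1949
people = 684
per_capita_week = total_waste_kg_per_week / people
print(f"{per_capita_week:.2f} kg per capita per week")   # ~2.85 (reported 2.84)
print(f"{per_capita_week / 7:.2f} kg per capita per day")
```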
Procedia PDF Downloads 302
13853 Detecting and Secluding Route Modifiers by Neural Network Approach in Wireless Sensor Networks
Authors: C. N. Vanitha, M. Usha
Abstract:
In real-world scenarios, the viability of sensor networks has been proven by standardizing the technologies. Wireless sensor networks are vulnerable to both electronic and physical security breaches because of their deployment in remote, distributed, and inaccessible locations. Compromised sensor nodes send malicious data to the base station, and thus the total network effectiveness may be compromised. To detect and seclude the route modifiers, a neural network-based Pattern Learning Predictor (PLP) is presented. This algorithm senses data at any node based on present and previous patterns obtained from the en-route nodes. The reputation of any node is updated according to its predicted and reported patterns. This paper propounds a solution not only to detect the route modifiers but also to seclude the malevolent nodes from the network. The simulation results prove the effective performance of the presented methodology in terms of energy level, routing, and various network conditions.
Keywords: neural networks, pattern learning, security, wireless sensor networks
Procedia PDF Downloads 404
13852 Multi-Agent System Based Distributed Voltage Control in Distribution Systems
Authors: A. Arshad, M. Lehtonen. M. Humayun
Abstract:
With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology to tackle the voltage control problem in a distributed manner with the least latency. This paper proposes a multi-agent-based distributed voltage control. In this method, a flat architecture of agents is used, and the agents involved in the whole control procedure are the On-Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining the voltage within statutory limits, as close as possible to nominal. The total loss cost is the sum of the network loss cost, DG curtailment costs, and a voltage damage cost (which is based on a penalty function implementation). The total cost is iteratively calculated for various stricter limits by plotting the voltage damage cost and the loss cost against a varying voltage limit band. The method provides the optimal limits, closer to the nominal value, with minimum total loss cost. In order to achieve the objective of voltage control, the whole network is divided into multiple control regions, each downstream from a controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. At first, a token is generated by the OLTCA at each time step, and it is transferred from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA will contact the respective controlling devices depending on the vicinity of the violated node. If the violated node does not lie in the vicinity of a controller, or the controlling capabilities of all the downstream control devices are at their limits, then the OLTC is considered as a last resort. For a realistic study, simulations are performed for a typical Finnish residential medium-voltage distribution system using Matlab®. These simulations are executed for two cases: simple Distributed Voltage Control (DVC) and DVC with optimized loss cost (DVC + penalty function). A sensitivity analysis is performed based on DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to the DG penetration, while in case 2 there is a significant reduction in total loss. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration, the loss reduction is not very significant. Another observation is that the newer, stricter limits calculated by cost optimization move towards the statutory limits of ±10% of nominal with increasing DG penetration: for 25, 45, and 65% penetration, the calculated limits are ±5%, ±6.25%, and ±8.75%, respectively. The observed results show that the novel voltage control algorithm proposed in case 1 is able to deal with the voltage control problem instantly but with higher losses. In contrast, case 2 reduces network losses gradually over time through the proposed iterative method of loss cost optimization by the OLTCA.
Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids
Procedia PDF Downloads 312
13851 Potential Assessment and Techno-Economic Evaluation of Photovoltaic Energy Conversion System: A Case of Ethiopia Light Rail Transit System
Authors: Asegid Belay Kebede, Getachew Biru Worku
Abstract:
The Earth and its inhabitants have faced an existential threat as a result of severe manmade actions. Global warming and climate change have been the most apparent manifestations of this threat throughout the world, with increasingly intense heat waves, temperature rises, flooding, sea-level rise, ice sheet melting, and so on. One of the major contributors to this disaster is the ever-increasing production and consumption of energy, which is still primarily fossil-based and emits billions of tons of hazardous GHG. The transportation industry is recognized as the biggest actor in terms of emissions, accounting for 24% of direct CO2 emissions and being one of the few worldwide sectors where CO2 emissions are still growing. Rail transportation, which includes everything from light rail transit to high-speed rail services, is regarded as one of the most efficient modes of transportation, accounting for 9% of total passenger travel and 7% of total freight transit. Nonetheless, there is still room for improvement in the transportation sector, which might be achieved by incorporating alternative and/or renewable energy sources. As a result of these rapidly changing global energy situations and rapidly dwindling fossil fuel supplies, we were driven to analyze the potential of renewable energy sources for traction applications. Even a small achievement in energy conservation or harnessing might significantly influence the total railway system and has the potential to transform the railway sector like never before. The paper therefore begins by assessing the potential for photovoltaic (PV) power generation on train rooftops and existing infrastructure such as railway depots, passenger stations, traction substation rooftops, and accessible land along rail lines. A method based on Google Earth (using HelioScope software) is developed to assess the PV potential along rail lines and on train station roofs. As an example, the Addis Ababa light rail transit system (AA-LRTS) is utilized. The case study examines the electricity-generating potential and economic performance of photovoltaics installed on AA-LRTS. As a consequence, the overall capacity of solar systems on all stations, including train rooftops, reaches 72.6 MWh per day, with an annual power output of 10.6 GWh. Over a 25-year lifespan, the overall CO2 emission reduction and total profit from PV-AA-LRTS can reach 180,000 tons and 892 million Ethiopian birr, respectively. The PV-AA-LRTS has a 200% return on investment. All PV stations have a payback time of less than 13 years, and the price of solar-generated power is less than $0.08/kWh, which can compete with the benchmark price of coal-fired electricity. Our findings indicate that PV-AA-LRTS has tremendous potential, with both energy and economic advantages.
Keywords: sustainable development, global warming, energy crisis, photovoltaic energy conversion, techno-economic analysis, transportation system, light rail transit
Procedia PDF Downloads 76
13850 Flexible Arm Manipulator Control for Industrial Tasks
Authors: Mircea Ivanescu, Nirvana Popescu, Decebal Popescu, Dorin Popescu
Abstract:
This paper addresses the control problem of a class of hyper-redundant arms. In order to avoid discrepancy between the mathematical model and the actual dynamics, a dynamic model with uncertain parameters is inferred for this class of manipulators. A procedure to design a feedback controller that stabilizes the uncertain system is proposed. A PD boundary control algorithm is used to control the desired position of the manipulator. This controller is easy to implement from the point of view of measuring techniques and actuation. Numerical simulations verify the effectiveness of the presented methods. In order to verify the suitability of the control algorithm, a platform with a 3D flexible manipulator has been employed for testing. Experimental tests on this platform illustrate the applications of the techniques developed in the paper.
Keywords: distributed model, flexible manipulator, observer, robot control
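For reference, a generic discrete PD position-control step of the family used above; this is not the paper's boundary controller for the distributed-parameter arm model, and the gains are arbitrary.

```python
# Generic discrete PD position-control step (illustration of the control-law
# family, not the paper's boundary controller); gains are arbitrary.
def pd_control(q_desired, q_measured, q_prev, dt, kp=8.0, kd=0.6):
    error = q_desired - q_measured
    velocity = (q_measured - q_prev) / dt      # backward-difference estimate
    return kp * error - kd * velocity          # joint torque/command

# One control step at 1 kHz: target 0.5 rad, currently 0.30 rad, previously 0.29 rad.
print(pd_control(0.5, 0.30, 0.29, dt=0.001))
```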
Procedia PDF Downloads 321
13849 The Effect of Feature Selection on Pattern Classification
Authors: Chih-Fong Tsai, Ya-Han Hu
Abstract:
The aim of feature selection (or dimensionality reduction) is to filter out unrepresentative features (or variables), making the classifier perform better than one built without feature selection. Although there are many well-known feature selection algorithms, and different classifiers based on different selection results may perform differently, very few studies examine the effect of applying different feature selection algorithms on the classification performance of different classifiers over different types of datasets. In this paper, two widely used algorithms, the genetic algorithm (GA) and information gain (IG), are used to perform feature selection. In addition, three well-known classifiers are constructed: the CART decision tree (DT), the multi-layer perceptron (MLP) neural network, and the support vector machine (SVM). Based on 14 different types of datasets, the experimental results show that in most cases IG is a better feature selection algorithm than GA. In addition, the combinations of IG with DT and IG with SVM perform best and second best for small- and large-scale datasets.
Keywords: data mining, feature selection, pattern classification, dimensionality reduction
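A hedged scikit-learn sketch of the filter-style selection (information gain approximated here by mutual information) feeding the three classifier families named above; the GA-based selection is omitted, and the dataset and k are illustrative.

```python
# Filter-style feature selection (mutual information as an information-gain
# proxy) feeding DT, MLP and SVM classifiers; the GA wrapper is omitted.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
selector = SelectKBest(mutual_info_classif, k=10)   # keep 10 features (illustrative)

for name, clf in [("DT", DecisionTreeClassifier(random_state=0)),
                  ("MLP", MLPClassifier(max_iter=2000, random_state=0)),
                  ("SVM", SVC())]:
    pipe = make_pipeline(StandardScaler(), selector, clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```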
Procedia PDF Downloads 669
13848 Adaptive Filtering in Subbands for Supervised Source Separation
Authors: Bruna Luisa Ramos Prado Vasques, Mariane Rembold Petraglia, Antonio Petraglia
Abstract:
This paper investigates MIMO (Multiple-Input Multiple-Output) adaptive filtering techniques for the application of supervised source separation in the context of convolutive mixtures. From the observation that there is correlation among the signals of the different mixtures, an improvement in the NSAF (Normalized Subband Adaptive Filter) algorithm is proposed in order to accelerate its convergence rate. Simulation results with mixtures of speech signals in reverberant environments show the superior performance of the proposed algorithm with respect to the performances of the NLMS (Normalized Least-Mean-Square) and conventional NSAF, considering both the convergence speed and SIR (Signal-to-Interference Ratio) after convergence.
Keywords: adaptive filtering, multi-rate processing, normalized subband adaptive filter, source separation
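For context, a sketch of the full-band NLMS baseline mentioned above, identifying an assumed FIR channel; the subband (NSAF) structure adds an analysis filter bank with per-band normalization, which is omitted here.

```python
# Full-band NLMS baseline (system identification of an assumed FIR channel);
# the subband NSAF variant adds an analysis/synthesis filter bank with
# per-band normalization, omitted here.
import numpy as np

rng = np.random.default_rng(0)
h_true = rng.normal(size=16)                        # unknown FIR "room" response
x = rng.normal(size=5000)                           # excitation signal
d = np.convolve(x, h_true, mode="full")[:len(x)]    # desired (observed) signal

w = np.zeros_like(h_true)                           # adaptive filter taps
mu, eps = 0.5, 1e-6
for n in range(len(w), len(x)):
    u = x[n - len(w) + 1:n + 1][::-1]               # most recent input vector
    e = d[n] - w @ u                                # a-priori error
    w += mu * e * u / (u @ u + eps)                 # normalized LMS update

print("tap error:", np.linalg.norm(w - h_true))
```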
Procedia PDF Downloads 435
13847 Retraction Free Motion Approach and Its Application in Automated Robotic Edge Finishing and Inspection Processes
Authors: M. Nemer, E. I. Konukseven
Abstract:
In this paper, a motion generation algorithm for a six-degrees-of-freedom (DoF) robotic hand in a static environment is presented. The purpose of developing this method is for it to be used in the path generation of the end-effector for edge finishing and inspection processes, utilizing the CAD model of the considered workpiece. Nonetheless, the proposed algorithm may be extended to be applicable to other similar manufacturing processes. A software package programmed in the application programming interface (API) of SolidWorks generates tool path data for the robot. The proposed method significantly simplifies the given problem, resulting in a reduction in the CPU time needed to generate the path, and offers an efficient overall solution. The ABB IRB2000 robot is chosen for executing the generated tool path.
Keywords: CAD-based tools, edge deburring, edge scanning, offline programming, path generation
Procedia PDF Downloads 284
13846 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms
Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov
Abstract:
The article presents a parallel iterative solver for large sparse linear systems which can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on multi-CPU/multi-GPU clusters. For example, most attempts to implement the classical conjugate gradient method at best kept the solution time constant as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration instead of two for standard CG. The standard and pipelined CG methods need the vector entries generated by the current GPU and other GPUs for matrix-vector products, so the communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize the communication between parallel parts of the algorithm. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, asynchronous calculations and communications, and load balancing between the CPU and GPU, scalability can be achieved when solving large linear systems. The algorithm is implemented with the combined use of MPI, OpenMP, and CUDA. We show that an almost optimal speedup may be reached on 8 CPUs/2 GPUs (relative to single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, as compared to one GPU.
Keywords: conjugate gradient, GPU, parallel programming, pipelined algorithm
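A serial NumPy transcription of the (unpreconditioned) pipelined CG recurrences, showing how the two dot products of an iteration are grouped so that a distributed implementation needs a single reduction; the MPI/OpenMP/CUDA overlap described in the paper is not reproduced.

```python
# Serial sketch of unpreconditioned pipelined CG: the two dot products are
# grouped so a distributed run needs one reduction per iteration; the
# communication/computation overlap of the paper is not reproduced here.
import numpy as np

def pipelined_cg(A, b, x0=None, tol=1e-8, maxiter=500):
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    w = A @ r
    z = q = p = np.zeros_like(b)
    alpha_prev = gamma_prev = 1.0
    for k in range(maxiter):
        gamma = r @ r                 # } the single synchronization point:
        delta = w @ r                 # } both reductions can be fused
        if np.sqrt(gamma) < tol:
            break
        n_vec = A @ w                 # this matvec can overlap the reduction
        if k == 0:
            beta, alpha = 0.0, gamma / delta
        else:
            beta = gamma / gamma_prev
            alpha = gamma / (delta - beta * gamma / alpha_prev)
        z = n_vec + beta * z          # z = A @ q
        q = w + beta * q              # q = A @ p
        p = r + beta * p
        x = x + alpha * p
        r = r - alpha * q
        w = w - alpha * z
        gamma_prev, alpha_prev = gamma, alpha
    return x, k

# Small SPD test problem.
rng = np.random.default_rng(0)
M = rng.normal(size=(100, 100))
A = M @ M.T + 100 * np.eye(100)
b = rng.normal(size=100)
x, iters = pipelined_cg(A, b)
print(iters, np.linalg.norm(A @ x - b))
```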
Procedia PDF Downloads 165
13845 Prediction of the Thermodynamic Properties of Hydrocarbons Using Gaussian Process Regression
Authors: N. Alhazmi
Abstract:
Knowing the thermodynamic properties of hydrocarbons is vital when it comes to analyzing the related chemical reaction outcomes and understanding the reaction process, especially in terms of petrochemical industrial applications, combustion, and catalytic reactions. However, measuring the thermodynamic properties experimentally is time-consuming and costly. In this paper, Gaussian process regression (GPR) has been used to directly predict the main thermodynamic properties - standard enthalpy of formation, standard entropy, and heat capacity - for more than 360 cyclic and non-cyclic alkanes, alkenes, and alkynes. A simple workflow has been proposed that can be applied to directly predict the main properties of any hydrocarbon by knowing its descriptors and chemical structure, and it can be generalized to predict the main properties of any material. The model was evaluated by calculating the coefficient of determination R², which was more than 0.9794 for all the predicted properties.
Keywords: thermodynamic, Gaussian process regression, hydrocarbons, regression, supervised learning, entropy, enthalpy, heat capacity
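A hedged sketch of the regression step with scikit-learn's Gaussian process regressor; the descriptors, kernel, and target values below are placeholders, not the paper's hydrocarbon dataset or descriptor set.

```python
# GPR sketch with scikit-learn; descriptors, kernel and targets below are
# placeholders, not the paper's dataset or descriptor set.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

# Toy descriptors: [carbon count, degree of unsaturation]; target: an assumed
# standard enthalpy of formation (kJ/mol) for a few small hydrocarbons.
X = np.array([[1, 0], [2, 0], [3, 0], [2, 1], [3, 1], [2, 2]], dtype=float)
y = np.array([-74.6, -84.0, -103.8, 52.4, 20.0, 227.4])

kernel = ConstantKernel() * RBF(length_scale=[1.0, 1.0]) + WhiteKernel(1e-2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = np.array([[4, 0], [4, 1]], dtype=float)      # e.g. butane, a butene
mean, std = gpr.predict(X_new, return_std=True)
print("R^2 on training data:", gpr.score(X, y))
print("predictions:", mean, "+/-", std)
```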
Procedia PDF Downloads 222