Search results for: machine learning
522 Modeling of a Pilot Installation for the Recovery of Residual Sludge from Olive Oil Extraction
Authors: Riad Benelmir, Muhammad Shoaib Ahmed Khan
Abstract:
The socio-economic importance of olive oil production is significant in the Mediterranean region, both in terms of wealth and tradition. However, the extraction of olive oil generates huge quantities of waste that can severely affect land and water environments because of its high phytotoxicity. Olive mill wastewater (OMWW) in particular is one of the major environmental pollutants of the olive oil industry. This work sets out to design a smart, sustainable, integrated thermochemical catalytic process for olive mill residues, combining hydrothermal carbonization (HTC) of olive mill wastewater (OMWW) with fast pyrolysis of olive mill wastewater sludge (OMWS). The byproducts of OMWW-HTC treatment are a carbon-enriched solid phase, called biochar, and a liquid phase (residual water with less dissolved organic and phenolic compounds). The HTC biochar can be tested as a fuel in combustion systems and also used in high-value applications such as soil bio-fertilizer and as a catalyst and/or catalyst support. The HTC residual water is characterized, treated, and used for soil irrigation, since the organic and toxic compounds are reduced below the permitted limits. The project concept also includes the conversion of OMWS to green diesel through catalytic pyrolysis. The green diesel is then used as a biofuel in an internal combustion engine (IC engine) for clean automotive transportation. In this work, a theoretical study considers using the heat of the non-condensable pyrolysis gases to drive a sorption refrigeration machine that cools the pyrolysis gases and condenses the bio-oil vapors.
Keywords: biomass, olive oil extraction, adsorption cooling, pyrolysis
Procedia PDF Downloads 92
521 Diabetes Mellitus and Blood Glucose Variability Increases the 30-day Readmission Rate after Kidney Transplantation
Authors: Harini Chakkera
Abstract:
Background: Inpatient hyperglycemia is an established independent risk factor for hospital readmission in several patient cohorts, but it has not been studied after kidney transplantation. Nearly one-third of patients who have undergone a kidney transplant reportedly experience 30-day readmission. Methods: Data on first-time solitary kidney transplantations performed between September 2015 and December 2018 were retrieved. Records were linked to the electronic health record to determine a diagnosis of diabetes mellitus and to extract glucometric and insulin therapy data. Univariate logistic regression analysis and the XGBoost algorithm were used to predict 30-day readmission. We report the average performance of the models on the testing set over five bootstrapped partitions of the data to ensure statistical significance. Results: The cohort included 1036 patients who received kidney transplantation, of whom 224 (22%) experienced 30-day readmission. The machine learning algorithm predicted 30-day readmission with an average AUC of 77.3% (95% CI 75.3-79.3%). We observed statistically significant differences in the presence of pretransplant diabetes, inpatient hyperglycemia, inpatient hypoglycemia, and minimum and maximum glucose values among those with higher 30-day readmission rates. The XGBoost model identified the index admission length of stay, the presence of hyper- and hypoglycemia, and the recipient and donor BMI values as the most predictive risk factors for 30-day readmission. Additionally, significant variation in the therapeutic management of blood glucose by providers was observed. Conclusions: Suboptimal glucose metrics during hospitalization after kidney transplantation are associated with an increased risk of 30-day hospital readmission. Optimizing hospital blood glucose management, a modifiable factor, after kidney transplantation may reduce the risk of 30-day readmission.
Keywords: kidney, transplant, diabetes, insulin
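The modeling pipeline this abstract describes (gradient-boosted trees scored by AUC on held-out data) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the features are hypothetical stand-ins, and scikit-learn's GradientBoostingClassifier is used in place of the XGBoost package so the example is self-contained.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical tabular features (e.g. length of stay, glucose extremes, BMI).
X = rng.normal(size=(n, 4))
# Synthetic readmission label loosely tied to the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n) > 1.1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")
```

Because the synthetic label carries real signal, the held-out AUC lands well above the 0.5 chance level, mirroring how the study's 77.3% AUC is interpreted.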
Procedia PDF Downloads 91
520 Remote Sensing through Deep Neural Networks for Satellite Image Classification
Authors: Teja Sai Puligadda
Abstract:
Detailed satellite images serve an important role in geographic study. The quantitative and qualitative information provided by satellite and remote sensing images reduces the complexity and time of the work. Satellite remote sensing systems capture data/images at regular intervals, and the amount of data collected is often enormous and expands rapidly as technology develops. Satellite image classification encompasses interpreting remote sensing images, geographic data mining, and studying distinct vegetation types such as agricultural land and forests. One of the biggest challenges data scientists face when classifying satellite images is finding, among the available classification algorithms, the one best suited to classify the images with the utmost accuracy. To categorize satellite images, which is difficult due to the sheer volume of data, many researchers are turning to deep learning algorithms. Because the CNN algorithm gives high accuracy in image recognition problems and automatically detects the important features without any human supervision, and the ANN algorithm stores information across the entire network (Abhishek Gupta, 2020), these two deep learning algorithms have been used for satellite image classification. This project focuses on remote sensing through deep neural networks, i.e., ANN and CNN, with the DeepSat (SAT-4) Airborne dataset for classifying images. The algorithms ANN and CNN are implemented, evaluated, and compared, and their performance is analyzed through the evaluation metrics of accuracy and loss. Additionally, the neural network that gives the lowest bias and lowest variance in solving multi-class satellite image classification is identified.
Keywords: artificial neural network, convolutional neural network, remote sensing, accuracy, loss
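The evaluation metrics named above, accuracy and (cross-entropy) loss, can be computed directly from predicted class probabilities. The sketch below is a toy illustration; the four class labels are only assumed to stand in for the SAT-4 categories.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean categorical cross-entropy; probs[i] is a distribution over classes."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def accuracy(probs, labels):
    """Fraction of samples whose highest-probability class is correct."""
    return np.mean(np.argmax(probs, axis=1) == labels)

# Toy softmax outputs for four samples over four classes (assumed stand-ins
# for the SAT-4 categories); the last prediction is wrong.
probs = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.20, 0.60, 0.10, 0.10],
    [0.25, 0.25, 0.40, 0.10],
    [0.10, 0.50, 0.20, 0.20],
])
labels = np.array([0, 1, 2, 3])
print(accuracy(probs, labels))                        # 0.75
print(round(float(cross_entropy(probs, labels)), 3))  # 0.848
```

Note that loss keeps falling as the model grows more confident in correct classes even after accuracy saturates, which is why both metrics are tracked.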
Procedia PDF Downloads 161
519 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs
Authors: H. M. Soroush
Abstract:
The problem of scheduling products and services for on-time delivery is of paramount importance in today's competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers must frequently decide whether to schedule a job based on its processing time, due date, and the penalty for tardy delivery in order to improve system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where the processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard; however, we explore three scenarios wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time and introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well, yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single-machine models.
Keywords: number of late jobs, scheduling, single server, stochastic
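The abstract does not specify its heuristics, but the deterministic, unweighted special case it generalizes, minimizing the number of tardy jobs on a single machine, is solved optimally in polynomial time by the classical Moore-Hodgson algorithm, sketched here as a baseline:

```python
import heapq

def moore_hodgson(jobs):
    """Minimize the number of tardy jobs on a single machine.
    jobs: list of (processing_time, due_date) tuples.
    Returns the number of jobs completed on time."""
    jobs = sorted(jobs, key=lambda j: j[1])    # earliest-due-date order
    on_time = []                               # max-heap of processing times
    t = 0
    for p, d in jobs:
        heapq.heappush(on_time, -p)
        t += p
        if t > d:                              # schedule became infeasible:
            t += heapq.heappop(on_time)        # drop the longest job so far
    return len(on_time)

print(moore_hodgson([(2, 3), (3, 5), (2, 6), (4, 8)]))  # 3 on time, 1 tardy
```

The algorithm processes jobs in earliest-due-date order and, whenever the partial schedule turns tardy, ejects the longest job scheduled so far; dropped jobs are appended at the end as the tardy set.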
Procedia PDF Downloads 501
518 A High Content Screening Platform for the Accurate Prediction of Nephrotoxicity
Authors: Sijing Xiong, Ran Su, Lit-Hsin Loo, Daniele Zink
Abstract:
The kidney is a major target for the toxic effects of drugs, industrial and environmental chemicals, and other compounds. Typically, nephrotoxicity is detected late during drug development, and regulatory animal models have not solved this problem. Validated or accepted in silico or in vitro methods for the prediction of nephrotoxicity are not available. We have established the first, and currently only, pre-validated in vitro models for the accurate prediction of nephrotoxicity in humans, and the first predictive platforms based on renal cells derived from human pluripotent stem cells. To further improve the efficiency of our predictive models, we recently developed a high content screening (HCS) platform. This platform employs automated imaging in combination with automated quantitative phenotypic profiling and machine learning methods. 129 image-based phenotypic features were analyzed with respect to their predictive performance in combination with 44 compounds of different chemical structures, including drugs, environmental and industrial chemicals, and herbal and fungal compounds. The nephrotoxicity of these compounds in humans is well characterized. A combination of chromatin and cytoskeletal features resulted in high predictivity with respect to nephrotoxicity in humans. Test balanced accuracies of 82% or 89% were obtained with human primary or immortalized renal proximal tubular cells, respectively. Furthermore, our results revealed that a DNA damage response is commonly induced by different PTC-toxicants with diverse chemical structures and injury mechanisms. Together, the results show that the automated HCS platform allows efficient and accurate nephrotoxicity prediction for compounds with diverse chemical structures.
Keywords: high content screening, in vitro models, nephrotoxicity, toxicity prediction
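A hedged sketch of the kind of classifier such a platform might train: a model over a feature matrix shaped like the 129 phenotypic features, scored by balanced accuracy as in the abstract. The data here are synthetic (44 real compounds would be too few for a stable demo), and the random-forest model choice is illustrative, not the authors'.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic stand-in for the 129 image-based phenotypic features.
X = rng.normal(size=(400, 129))
y = (X[:, 0] > 0).astype(int)   # toy "toxic" label driven by one feature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
bacc = balanced_accuracy_score(y_te, clf.predict(X_te))
print(f"balanced accuracy: {bacc:.2f}")
```

Balanced accuracy averages per-class recall, which is why it is the right score when toxic and non-toxic classes are imbalanced.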
Procedia PDF Downloads 314
517 Characteristics of the Particle Size Distribution and Exposure Concentrations of Nanoparticles Generated from the Laser Metal Deposition Process
Authors: Yu-Hsuan Liu, Ying-Fang Wang
Abstract:
The objectives of the present study are to characterize the nanoparticles generated from the laser metal deposition (LMD) process and to estimate the particle concentrations deposited in the head (H), tracheobronchial (TB), and alveolar (A) regions of the respiratory tract. The studied LMD chamber (3.6 m × 3.8 m × 2.9 m) is equipped with a robotic laser metal deposition machine. A direct-reading scanning mobility particle sizer (SMPS, Model 3082, TSI Inc., St. Paul, MN, USA) was used for static sampling inside the chamber to measure nanoparticle number concentrations and particle size distributions. The SMPS recorded particle number concentrations every 3 minutes over a diameter range of 11-372 nm, with aerosol and sheath flow rates set at 0.6 and 6 L/min, respectively. The resultant size distributions were used to predict nanoparticle deposition in the H, TB, and A regions of the respiratory tract using the UK National Radiological Protection Board's (NRPB's) LUDEP software. Results show that the nanoparticle number concentrations in the indoor background and the LMD chamber were 4.8×10³ and 4.3×10⁵ #/cm³, respectively. The nanoparticles emitted from the LMD process followed a unimodal distribution with a number median diameter (NMD) of 142 nm and a geometric standard deviation (GSD) of 1.86. The fraction of nanoparticles deposited in the alveolar region (A: 69.8%) was higher than in the head (H: 10.9%) and tracheobronchial (TB: 19.3%) regions. This study used static sampling to measure the nanoparticles generated in the LMD process, and the results show that the fraction of particles deposited in the A region was the highest of the three regions. Therefore, the characteristics of nanoparticles emitted from the LMD process can provide valuable science-based evidence for exposure assessments in the future.
Keywords: exposure assessment, laser metal deposition process, nanoparticle, respiratory region
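Given the reported lognormal fit (NMD 142 nm, GSD 1.86), the fraction of particles in any diameter band follows directly from the lognormal cumulative distribution function; the regional deposition fractions themselves come from the LUDEP model, which is not reproduced here. A minimal sketch:

```python
import math

def lognormal_cdf(d, nmd, gsd):
    """Fraction of particles with diameter < d for a lognormal number
    distribution (number median diameter nmd, geometric std. dev. gsd)."""
    z = (math.log(d) - math.log(nmd)) / (math.sqrt(2.0) * math.log(gsd))
    return 0.5 * (1.0 + math.erf(z))

NMD, GSD = 142.0, 1.86  # nm, the SMPS fit reported above
# Fraction of the fitted distribution inside the SMPS range of 11-372 nm:
frac = lognormal_cdf(372.0, NMD, GSD) - lognormal_cdf(11.0, NMD, GSD)
print(f"{frac:.3f}")
```

The computed fraction is well above 0.9, i.e. the instrument's 11-372 nm window captures most of the fitted distribution, with the remainder in the upper tail.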
Procedia PDF Downloads 284
516 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models
Authors: Ethan James
Abstract:
Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNNs) has recently gained high interest in ophthalmology for computer-aided imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases, including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model for analyzing these retinal images that provides a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that uses machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The model is a residual neural network with cyclic pooling, trained on a dataset of over 20,000 real-world OCT images. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, thereby facilitating earlier treatment and improving post-treatment outcomes.
Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina
Procedia PDF Downloads 183
515 Performance Analysis of Pumps-as-Turbine Under Cavitating Conditions
Authors: Calvin Stephen, Biswajit Basu, Aonghus McNabola
Abstract:
Market liberalization in the power sector has led to the emergence of micro-hydropower schemes that depend on the use of pumps-as-turbines in applications that were not considered potential hydropower sites in earlier years. These applications include energy recovery in water supply networks, sewage systems, irrigation systems, alcohol breweries, underground mining, and desalination plants. As a result, there has been accelerated adoption of pumps-as-turbine technology due to the economic advantages it presents over conventional turbines in the micro-hydropower space. The performance of these machines under cavitating conditions, however, is not well understood, as the literature on their turbine mode of operation is scarce. In hydraulic machines, cavitation is a common occurrence that must be understood in order to safeguard them and prolong their operating life. The overall purpose of this study is to investigate the effects of cavitation on the performance of a pumps-as-turbine system over its entire operating range. At various operating speeds, the cavitating region is identified experimentally while monitoring its effects on the power produced by the machine. Initial results indicate the occurrence of cavitation at higher flow rates for lower operating speeds and at lower flow rates for higher operating speeds. This implies that for cavitation-free operation, low-speed pumps-as-turbines should be used for low flow rate conditions, whereas high-speed turbines should be adopted for sites with higher flow rates. Such a complete understanding of pumps-as-turbine suction performance can help avoid cavitation-induced failures and hence improve the reliability of micro-hydropower plants.
Keywords: cavitation, micro-hydropower, pumps-as-turbine, system design
Procedia PDF Downloads 121
514 Applications of Evolutionary Optimization Methods in Reinforcement Learning
Authors: Rahul Paul, Kedar Nath Das
Abstract:
The paradigm of Reinforcement Learning (RL) has become prominent in training intelligent agents to make decisions in environments that are both dynamic and uncertain. The primary objective of RL is to optimize an agent's policy so as to maximize the cumulative reward it receives over a given period. Nevertheless, this optimization presents notable difficulties as a result of the inherent trade-off between exploration and exploitation, the presence of extensive state-action spaces, and the intricate nature of the dynamics involved. Evolutionary Optimization Methods (EOMs) have garnered considerable attention as a complementary approach to tackling these challenges, providing distinct capabilities for optimizing RL policies and value functions. The ongoing advancement of research in both RL and EOMs opens opportunities for significant advances in autonomous decision-making systems, and the convergence of these two fields has the potential to have a transformative impact on various domains of artificial intelligence (AI) applications. This article highlights the considerable influence of EOMs in enhancing the capabilities of RL. Taking advantage of evolutionary principles enables RL algorithms to effectively traverse extensive action spaces and discover optimal solutions within intricate environments. Moreover, this paper emphasizes practical implementations of EOMs in RL, specifically in areas such as robotic control, autonomous systems, inventory problems, and multi-agent scenarios. The article highlights the use of EOMs to help RL agents adapt, evolve, and uncover proficient strategies for complex tasks that may pose challenges for conventional RL approaches.
Keywords: machine learning, reinforcement learning, loss function, optimization techniques, evolutionary optimization methods
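A minimal illustration of the core idea, optimizing a policy parameter purely from reward samples rather than gradients, is a simple (1, λ)-style evolution strategy on a toy one-step task. This is a didactic sketch, not any specific EOM from the literature surveyed above.

```python
import random

def reward(theta):
    """Toy one-step 'environment': reward peaks at theta = 2.0."""
    return -(theta - 2.0) ** 2

def evolve(generations=60, pop=20, sigma=0.3, seed=0):
    """Minimal (1, lambda)-style evolution strategy: each generation samples
    `pop` perturbed policies and keeps the highest-reward one."""
    rng = random.Random(seed)
    theta = 0.0                  # initial policy parameter
    for _ in range(generations):
        candidates = [theta + rng.gauss(0.0, sigma) for _ in range(pop)]
        theta = max(candidates, key=reward)
    return theta

best = evolve()
print(round(best, 2))  # close to 2.0
```

Because selection uses only sampled rewards, the same loop works when the reward is noisy or non-differentiable, which is exactly where EOMs complement gradient-based RL.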
Procedia PDF Downloads 81
513 Energy Production with Closed Methods
Authors: Bujar Ismaili, Bahti Ismajli, Venhar Ismaili, Skender Ramadani
Abstract:
In Kosovo, the electricity supply is a major problem and does not meet consumer demand. Most of the energy is produced by older thermal power plants, which are regarded as major environmental polluters. Our experiment is based on producing electricity by a closed method that causes no environmental pollution, using waste, itself considered a pollutant, as fuel. The experiment was carried out in the village of Godanc, municipality of Shtime, Kosovo. A production line was designed that delivers electricity and central heating at the same time. The benefits are electricity and heat for heating at minimal expense, with zero gas emissions into the atmosphere. Coal, plastic, waste from wood processing, and agricultural waste were used as raw materials. The method allows gas to be released through pipes and filters during the top-to-bottom combustion of the raw material in the boiler, followed by gas filtration through wood-processing waste (sawdust). This process yields the final product, gas, which passes through the carburetor; the carburetor enables the gas combustion process and drives the internal combustion engine and the generator, producing electricity without releasing gases into the atmosphere. The results show that the system provides stable energy without environmental pollution from toxic substances and waste, and at low production cost. From the final results, it follows that coal fuel yielded the most electricity and the highest temperature release, followed by plastic waste, which also gave good results. The results of these experiments prove that the current shortages of electricity and heating can be met at a lower cost while keeping a clean environment and managing waste.
Keywords: energy, heating, atmosphere, waste, gasification
Procedia PDF Downloads 236
512 Analyzing the Influence of Hydrometeorological Extremes, Geological Setting, and Social Demographics on Public Health
Authors: Irfan Ahmad Afip
Abstract:
The main research objective is to accurately identify the possible severity of a Leptospirosis outbreak in a given area based on the input features of a multivariate regression model. The research question is whether the possibility of an outbreak in a specific area is influenced by features such as social demographics and hydrometeorological extremes. If outbreak occurrence depends on these features, then epidemic severity will differ between areas depending on their environmental settings, because the features influence both the possibility and the severity of an outbreak. Specifically, the research objectives were three-fold: (a) to identify the relevant multivariate features and visualize patterns in the data, (b) to develop a multivariate regression model based on the selected features and determine the possibility of a Leptospirosis outbreak in an area, and (c) to compare the predictive ability of the multivariate regression model with that of machine learning algorithms. Secondary data were collected for several locations in the state of Negeri Sembilan, Malaysia, selected for their likely relevance to outbreak severity. The relevant features then become inputs to a multivariate regression model; linear regression is a simple and quick way to create prognostic capability, and a multivariate regression model has proven more precise prognostic capability than univariate models. The expected outcome of this research is to establish a correlation between social demographic and hydrometeorological features and the Leptospirosis bacterium; it will also contribute to understanding the underlying relationship between the pathogen and the ecosystem. The relationships established can help health departments and urban planners inspect and prepare for future outcomes in event detection and system health monitoring.
Keywords: geographical information system, hydrometeorological, leptospirosis, multivariate regression
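A hedged sketch of the multivariate (multiple) linear regression step on synthetic data: the three stand-in predictors are hypothetical (the study's actual features are not listed), and ordinary least squares is solved with NumPy.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
# Hypothetical stand-in predictors (e.g. rainfall, temperature, density).
X = rng.normal(size=(n, 3))
true_beta = np.array([1.5, -0.8, 0.4])
y = 2.0 + X @ true_beta + rng.normal(scale=0.1, size=n)  # "severity" outcome

# Ordinary least squares with an intercept column, via NumPy's lstsq.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 2))  # ~ [2.0, 1.5, -0.8, 0.4]
```

The fitted coefficients recover the generating intercept and slopes, which is the sense in which a multivariate model attributes severity to each feature while controlling for the others.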
Procedia PDF Downloads 117
511 An Approach to Building a Recommendation Engine for Travel Applications Using Genetic Algorithms and Neural Networks
Authors: Adrian Ionita, Ana-Maria Ghimes
Abstract:
A lack of features, weak design, and the absence of promotion of integrated booking applications are some of the reasons why most online travel platforms only automate old booking processes, limiting themselves to integrating a small number of services without addressing the user experience. This paper is a practical study of how to improve travel applications by creating user profiles through data mining based on neural networks and genetic algorithms. Choices made by users and by their 'friends' in the 'social' network context can be considered input data for a recommendation engine. The purpose of using these algorithms and this design is to improve the user experience and to deliver more features to users. The paper aims to highlight a broader range of improvements that could be applied to travel applications in terms of design and service integration, while the main scientific contribution remains the technical implementation of the neural network solution. The choice of technologies is also motivated by the initiative of some online booking providers that have made public the fact that they use neural-network-related designs. These companies use similar big data technologies to provide recommendations for hotels, restaurants, and cinemas through a neural-network-based recommendation engine that builds a user 'DNA profile'. This implementation of the 'profile', a collection of neural networks trained on previous user choices, can improve the usability and design of any type of application.
Keywords: artificial intelligence, big data, cloud computing, DNA profile, genetic algorithms, machine learning, neural networks, optimization, recommendation system, user profiling
Procedia PDF Downloads 164
510 Automation of Pneumatic Seed Planter for System of Rice Intensification
Authors: Tukur Daiyabu Abdulkadir, Wan Ishak Wan Ismail, Muhammad Saufi Mohd Kassim
Abstract:
Seed singulation and accuracy in seed spacing are the major challenges associated with the adoption of mechanical seeders for the system of rice intensification (SRI). In this research, the metering system of a pneumatic planter was modified and automated for increased precision to meet the demands of SRI. The chain-and-sprocket mechanism of a conventional vacuum planter was replaced with an electromechanical system comprising a set of servo motors, a limit switch, a microcontroller, and a wheel divided into 10 equal angles. The circumference of the planter wheel was determined, from which the seed spacing was computed and mapped to the angles of the metering wheel. A program was then written and uploaded to an Arduino microcontroller so that the system automatically turns the seed plates for seeding upon covering the required distance. The servo motor was calibrated with the aid of LabVIEW. The machine was then calibrated using a grease belt, varying the servo speed through voltage variation between 37 and 47 rpm, until an optimum value of 40 rpm was obtained at a forward speed of 5 kilometers per hour. A pressure of 1.5 kPa was found to be optimum, under which no skips or doubles were recorded. Precision in spacing (coefficient of variation), miss index, multiple index, doubles, and skips were investigated. No skips or doubles were recorded at either the laboratory or field level, and the operational parameters under consideration were evaluated both in the laboratory and in the field. Even though there was little variation between the laboratory and field values of precision in spacing, multiple index, and miss index, the difference is not significant, as both laboratory and field values fall within the acceptable range.
Keywords: automation, calibration, pneumatic seed planter, system of rice intensification
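The mapping from forward speed and target seed spacing to metering-wheel speed follows from the 10-cell wheel: one cell must pass the drop point per seed spacing travelled. The sketch below assumes a hypothetical 25 cm SRI spacing; only the 5 km/h forward speed and the 10 equal angles are taken from the abstract.

```python
def metering_rpm(forward_kmh, seed_spacing_cm, cells_per_rev=10):
    """Required metering-wheel speed (rev/min) so that one seed cell
    passes the drop point per seed spacing travelled."""
    speed_cm_s = forward_kmh * 100_000 / 3600    # km/h -> cm/s
    seeds_per_s = speed_cm_s / seed_spacing_cm   # seeds dropped per second
    return seeds_per_s / cells_per_rev * 60      # wheel revolutions per minute

# Hypothetical 25 cm SRI spacing at the reported 5 km/h forward speed:
rpm = metering_rpm(5.0, 25.0)
print(round(rpm, 1))
```

In firmware, this target speed would be held by stepping the servo one 36° cell (360°/10) each time the measured travel equals the spacing.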
Procedia PDF Downloads 644
509 Optimizing CNC Production Line Efficiency Using NSGA-II: Adaptive Layout and Operational Sequence for Enhanced Manufacturing Flexibility
Authors: Yi-Ling Chen, Dung-Ying Lin
Abstract:
In the manufacturing process, computer numerical control (CNC) machining plays a crucial role. CNC enables precise machinery control through computer programs, automating the production process and significantly enhancing production efficiency. However, traditional CNC production lines often require manual intervention for loading and unloading, which limits the line's operational efficiency and production capacity. Additionally, existing CNC automation systems frequently lack sufficient intelligence and fail to achieve optimal configuration efficiency, so substantial time is needed to reconfigure production lines when producing different products, impacting overall production efficiency. Using the NSGA-II algorithm, we generate production line layout configurations that consider field constraints and select robotic arm specifications from an arm list. This allows us to calculate loading and unloading times for each job order, perform demand allocation, and assign processing sequences. The NSGA-II algorithm is further employed to determine the optimal processing sequence, with the aim of minimizing demand completion time and maximizing average machine utilization. These objectives are used to evaluate the performance of each layout, ultimately determining the optimal layout configuration. This method enhances the configuration efficiency of CNC production lines and establishes an adaptive capability that allows the production line to respond promptly to changes in demand, minimizing the production losses caused by layout reconfiguration and ensuring that the CNC production line maintains optimal efficiency even when adjustments are required by fluctuating demand.
Keywords: evolutionary algorithms, multi-objective optimization, pareto optimality, layout optimization, operations sequence
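The core of NSGA-II used above is fast non-dominated sorting of candidate solutions by their objective vectors. A minimal sketch for the two objectives named in the abstract, with both cast as minimization (average utilization is negated so that higher utilization is better):

```python
def non_dominated_sort(points):
    """NSGA-II style fast non-dominated sorting for minimization objectives.
    Returns a list of Pareto fronts, each a list of indices into `points`."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    n = len(points)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)             # Pareto-optimal front
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# Objectives per layout: (completion time, -average utilization).
pts = [(10, -0.9), (12, -0.95), (11, -0.8), (15, -0.7)]
print(non_dominated_sort(pts))  # [[0, 1], [2], [3]]
```

The first front contains the trade-off layouts (faster vs. better utilized) among which NSGA-II then selects by crowding distance; that selection step is omitted here.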
Procedia PDF Downloads 24
508 A Study of the Problems and Needs of the Garment Industries in the Nonthaburi and Bangkok Areas
Authors: Thepnarintra Praphanphat
Abstract:
The purposes of this study were to investigate the garment industry's conditions, problems, and needs for assistance. The population of the study comprised 504 managers or managing directors of finished-apparel establishments licensed by the Department of Industrial Works (category 28), Ministry of Industry, as of January 1, 2012. The sample size, determined using the Taro Yamane formula at a 95% confidence level with ±5% deviation, was 224 managers. Questionnaires were used to collect the data. Percentage, frequency, arithmetic mean, standard deviation, t-test, ANOVA, and LSD were used to analyze the data. It was found that most establishments were large, had operated as limited companies for more than 15 years, and mostly produced garments for working women. All investment was made by Thai people. The products were made to order and distributed domestically and internationally. Total sales in 2010, 2011, and 2012 were almost the same. With respect to the problems of operating the business, the study indicated that, as a whole, by aspect, and by item, they were at a high level. A comparison of the level of problems in operating a garment business, classified by general condition, showed that the problems occurring in businesses of different sizes were, as a whole, not different. Considering individual aspects, the level of problems in production differed: medium-sized establishments had more production problems than small and large ones. According to the by-item analysis, several problems differed, including those concerning employees, machine maintenance, the number of designers, and price competition; these problems were at a higher level in medium establishments than in small and large ones. Regarding business age, the examination yielded no differences as a whole, by aspect, or by item. The statistical significance level of this study was set at .05.
Keywords: garment industry, garment, fashion, competitive enhancement project
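The Taro Yamane finite-population formula used above is n = N / (1 + N e²); for N = 504 and e = 0.05 it reproduces the reported sample of 224 managers when rounded up:

```python
import math

def yamane_sample_size(population, margin=0.05):
    """Taro Yamane finite-population formula: n = N / (1 + N * e^2),
    rounded up to a whole respondent."""
    return math.ceil(population / (1 + population * margin ** 2))

print(yamane_sample_size(504))  # 224, matching the study's sample
```

For 504 managers the denominator is 1 + 504 × 0.0025 = 2.26, giving 504 / 2.26 ≈ 223.01, which rounds up to 224.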
Procedia PDF Downloads 188
507 The Associations between Ankle and Brachial Systolic Blood Pressures with Obesity Parameters
Authors: Matei Tudor Berceanu, Hema Viswambharan, Kirti Kain, Chew Weng Cheng
Abstract:
Background - Obesity parameters, particularly visceral obesity as measured by the waist-to-height ratio (WHtR), correlate with insulin resistance. The metabolic and microvascular changes associated with insulin resistance cause increased peripheral arteriolar resistance, primarily in the lower limb vessels. We hypothesize that ankle systolic blood pressures (SBPs) are more significantly associated with visceral obesity than brachial SBPs. Methods - 1098 adults, enriched in South Asians or Europeans with diabetes (T2DM), were recruited from a primary care practice in West Yorkshire. Their medical histories, including T2DM and cardiovascular disease (CVD) status, were gathered from an electronic database. The brachial, dorsalis pedis, and posterior tibial SBPs were measured using a Doppler machine. Body mass index (BMI) and WHtR were calculated from measurements of weight, height, and waist circumference. Linear regressions were performed between the six SBPs and both obesity parameters, after adjusting for covariates. Results - Overall, the left posterior tibial SBP (P=4.559×10⁻¹⁵) and right posterior tibial SBP (P=1.114×10⁻¹³) were the pressures most significantly associated with BMI, including among South Asians (P < 0.001) and Europeans (P < 0.001) separately. In South Asians, although the left (P=0.032) and right brachial SBPs (P=0.045) were associated with the WHtR, the left posterior tibial SBP (P=0.023) showed the strongest association. Conclusion - Regardless of ethnicity, ankle SBPs are more significantly associated with generalized obesity than brachial SBPs, suggesting their potential for screening for early detection of T2DM and CVD. A combination of ankle SBPs with WHtR is proposed for South Asians.
Keywords: ankle blood pressures, body mass index, insulin resistance, waist-to-height-ratio
Procedia PDF Downloads 142
506 Similar Script Character Recognition on Kannada and Telugu
Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy
Abstract:
This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities in their characters. Exhaustive datasets are required to recognize the characters, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it with the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on images with noise and varied lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations. Manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a Convolutional Neural Network (CNN) and a Visual Attention neural network (VAN), were used to experiment with the dataset. A VAN architecture incorporating additional channels for Canny edge features was adopted, as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge features applied than with a model that used only the original grayscale images. When tested on each language separately, the model's accuracy was 80.11% for Telugu characters and 98.01% for Kannada characters. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy when identifying and categorizing characters from these scripts.
Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN
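The Canny pipeline the abstract relies on begins with gradient estimation; a minimal pure-Python Sobel gradient magnitude (one stage of Canny, on a toy 5x5 image, not the paper's pipeline) looks like this:

```python
# Hedged sketch: Sobel gradient magnitude, the first stage of Canny edge
# detection. The 5x5 image and kernels are illustrative only.
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal Sobel kernel
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical Sobel kernel

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: left columns dark (0), right columns bright (255).
img = [[0, 0, 255, 255, 255]] * 5
mag = sobel_magnitude(img)
print(mag[2])  # strongest response straddles the brightness step
```

A full Canny implementation adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of this gradient map.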
Procedia PDF Downloads 53
505 Studies on the Proximate Composition and Functional Properties of Extracted Cocoyam Starch Flour
Authors: Adebola Ajayi, Francis B. Aiyeleye, Olakunke M. Makanjuola, Olalekan J. Adebowale
Abstract:
Cocoyam, a generic term for both Xanthosoma and Colocasia, is a traditional staple root crop in many developing countries in Africa, Asia and the Pacific. It is mostly cultivated as a food crop and is very rich in vitamin B6, magnesium and dietary fiber. Cocoyam starch is easily digested and often used for baby food. Drying is a method of food preservation that removes enough moisture from food so that bacteria, yeast and molds cannot grow; it is one of the oldest methods of preserving food. The effect of drying methods on the proximate composition and functional properties of extracted cocoyam starch flour was studied. Freshly harvested, mature cocoyam cultivars were washed with potable water, peeled, washed and grated. The starch in the grated cocoyam was extracted and dried using sun drying, oven and cabinet dryers. The extracted starch was milled into flour using an Apex mill, packed and sealed in low-density polyethylene film (LDPE) of 75 micron thickness with a Nylon sealing machine QN5-3200HI, and kept for three months at ambient temperature before analysis. The results showed that the moisture content, ash, crude fiber, fat, protein and carbohydrate ranged from 6.28% to 12.8%, 2.32% to 3.2%, 0.89% to 2.24%, 1.89% to 2.91%, 7.30% to 10.2% and 69% to 83%, respectively. The functional properties of the cocoyam starch flour ranged from 2.65 ml/g to 4.84 ml/g water absorption capacity, 1.95 ml/g to 3.12 ml/g oil absorption capacity, 0.66 ml/g to 7.82 ml/g bulk density and 3.82 ml/g to 5.30 ml/g swelling capacity. No significant difference (P≥0.05) was obtained across the various drying methods used. The drying methods extend the shelf-life of the extracted cocoyam starch flour.
Keywords: cocoyam, extraction, oven dryer, cabinet dryer
Procedia PDF Downloads 295
504 Prevalence of Diabetes Mellitus Among Human Immune Deficiency Virus-Positive Patients Under Anti-retroviral Attending in Rwanda, a Case Study of University Teaching Hospital of Butare
Authors: Venuste Kayinamura, V. Iyamuremye, A. Ngirabakunzi
Abstract:
Anti-retroviral therapy (ART) for HIV patients can cause a deficiency in glucose metabolism by promoting insulin resistance, glucose intolerance, and diabetes. Diabetes mellitus keeps increasing among HIV-infected patients worldwide, but there are limited data on blood glucose levels and their relationship with antiretroviral drugs (ARVs) and HIV infection, particularly in Rwanda. A convenience sampling strategy was used in this study, which involved 323 HIV-positive patients under ARVs (n=323). The patients' blood glucose was analyzed using an automated machine or glucometer (COBAS C 311). Data were analyzed using Microsoft Excel and SPSS V. 20.0 and presented as percentages. The highest diabetes mellitus prevalence, 93.33%, was in people aged >40 years, while the lowest, 6.67%, was in people aged between 21 and 40 years (P=0.021); thus, there is a significant association between age and diabetes occurrence. Diabetes mellitus prevalence was highest (28.2%) in patients under ART treatment for more than 10 years, compared with 16.7% in patients treated for <5 years and 20% in patients on ART treatment between 5 and 10 years (P=0.03); thus, the incidence of diabetes is associated with long-term ART use in HIV-infected patients. This study assessed the prevalence of diabetes among HIV-infected patients under ARVs attending the University Teaching Hospital of Butare (CHUB) and shows that the prevalence of diabetes is high in HIV-infected patients under ART. The study found no significant relationship between gender and diabetes mellitus. Therefore, regular assessment of diabetes mellitus, especially among HIV-infected patients under ARVs, is highly recommended to control other health issues caused by diabetes mellitus.
Keywords: anti-retroviral, diabetes mellitus, antiretroviral therapy, human immune deficiency virus
Procedia PDF Downloads 114
503 Experimental Investigations on the Mechanical Properties of Spiny (Kawayan Tinik) Bamboo Layers
Authors: Ma. Doreen E. Candelaria, Ma. Louise Margaret A. Ramos, Dr. Jaime Y. Hernandez, Jr
Abstract:
Bamboo has been introduced as a possible alternative to some construction materials nowadays. Its potential use in the field of engineering, however, is still not widely practiced due to insufficient engineering knowledge of the material's properties and characteristics. Although there are researches and studies proving its advantages, these are still not enough to say that bamboo can sustain and provide the strength and capacity required of common structures. In line with this, a more detailed analysis was made to observe the layered structure of the bamboo, particularly the species Kawayan Tinik. It is the main intent of this research to provide the necessary experiments to determine the tensile strength of dried bamboo samples. The tests include tensile strength parallel to the fibers, with samples taken at internodes only. Throughout the experiment, methods suggested by the International Organization for Standardization (ISO) were followed. The specimens were tested using a 3366 INSTRON Universal Testing Machine, with the rate of loading set to 0.6 mm/min. It was observed from the results of these experiments that dried bamboo samples recorded high layered tensile strengths, as high as 600 MPa. Likewise, along the culm's length and across its cross section, higher tensile strengths were observed at the top part and at the outer layers. Overall, the top part recorded the highest tensile strength per layer, with its outer layers having tensile strength as high as 600 MPa. The recorded tensile strengths of its middle and inner layers, on the other hand, were approximately 450 MPa and 180 MPa, respectively. From this variation in tensile strength across the cross section, it may be concluded that tensile strength increases towards the outer periphery of the bamboo.
With these preliminary investigations on the layered tensile strength of bamboo, it is highly recommended to conduct experimental investigations on the layered compressive strength properties as well. It is also suggested to conduct investigations evaluating the perpendicular layered tensile strength of the material.
Keywords: bamboo strength, layered strength tests, strength test, tensile test
Procedia PDF Downloads 420
502 Comparison of Mechanical Properties of Three Different Orthodontic Latex Elastic Bands Leached with NaOH Solution
Authors: Thipsupar Pureprasert, Niwat Anuwongnukroh, Surachai Dechkunakorn, Surapich Loykulanant, Chaveewan Kongkaew, Wassana Wichai
Abstract:
Background: Orthodontic elastic bands made from natural rubber continue to be commonly used due to their favorable characteristics. However, there are concerns associated with cytotoxicity due to harmful components released during conventional vulcanization (the sulfur-based method). With the co-operation of The National Metal and Materials Technology Center (MTEC) and the Faculty of Dentistry, Mahidol University, a method was introduced to reduce toxic components by leaching the orthodontic elastic bands with NaOH solution. Objectives: To evaluate the mechanical properties of Thai and commercial orthodontic elastic brands (Ormco and W&H) leached with NaOH solution. Materials and methods: Three elastic brands (N=30, size ¼ inch, 4.5 oz.) were tested for mechanical properties in terms of initial extension force, residual force, force loss, breaking strength and maximum displacement using a Universal Testing Machine. Results: Force loss significantly decreased in Thai-LEACH and W&H-LEACH, whereas the values increased in Ormco-LEACH (P < 0.05). The data exhibited a significant decrease in breaking strength with Thai-LEACH and Ormco-LEACH, whereas all 3 brands revealed a significant decrease in maximum displacement with the leaching process (P < 0.05). Conclusion: Leaching with NaOH solution is a new method that can remove toxic components from orthodontic latex elastic bands. However, this process can affect their mechanical properties. Leached elastic bands from the Thai brand had properties comparable to Ormco and have the potential to be developed into a promising product.
Keywords: leaching, orthodontic elastics, natural rubber latex, orthodontic
Procedia PDF Downloads 271
501 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane
Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo
Abstract:
Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns caused by two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical productions. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models inherently would not be able to obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction, with accuracy similar to the RF model: R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining
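The preprocessing step the abstract names, DBSCAN outlier removal, can be sketched in pure Python; the 1-D data, `eps`, and `min_pts` below are illustrative, not the study's experimental values:

```python
# Hedged DBSCAN sketch: dense points form clusters, sparse points are labeled
# noise (-1) and dropped before model fitting. Illustrative 1-D data.
def dbscan(points, eps, min_pts):
    labels = [None] * len(points)          # None = unvisited, -1 = noise
    cluster = 0
    def neighbors(i):
        return [j for j, p in enumerate(points) if abs(p - points[i]) <= eps]
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                 # provisional noise
            continue
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster        # border point reclaimed
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:         # core point: expand the cluster
                queue.extend(jn)
        cluster += 1
    return labels

data = [1.0, 1.1, 1.2, 1.05, 9.5]          # 9.5 is an obvious outlier
labels = dbscan(data, eps=0.3, min_pts=3)
clean = [x for x, lab in zip(data, labels) if lab != -1]
print(labels, clean)
```

In practice the study's data are multi-dimensional; replacing the absolute difference with a Euclidean distance generalizes the sketch.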
Procedia PDF Downloads 88
500 Bridging the Gap Between Student Needs and Labor Market Requirements in the Translation Industry in Saudi Arabia
Authors: Sultan Samah A Almjlad
Abstract:
The translation industry in Saudi Arabia is experiencing significant shifts driven by Vision 2030, which aims to diversify the economy and enhance international engagement. This change highlights the need for translators who are skilled in various languages and cultures, playing a crucial role in the nation's global integration efforts. However, there is a notable gap between the skills taught in academic institutions and what the job market demands. Many translation programs in Saudi universities do not align well with industry needs, resulting in graduates who may not meet employer expectations. To tackle this challenge, it is essential to thoroughly analyze the market to identify the key skills required, especially in sectors like legal, medical, technical, and audiovisual translation. At the same time, existing translation programs need to be evaluated to see if they cover necessary topics and provide practical training. Involving stakeholders such as translation agencies, professionals, and students is crucial to gather diverse perspectives. Identifying discrepancies between academic offerings and market demands will guide the development of targeted strategies. These strategies may include enriching curricula with industry-specific content, integrating emerging technologies like machine translation and CAT tools, and establishing partnerships with industry players to offer practical training opportunities and internships. Industry-led workshops and seminars can provide students with valuable insights, and certification programs can validate their skills. By aligning academic programs with industry needs, Saudi Arabia can build a skilled workforce of translators, supporting its economic diversification goals under Vision 2030. This alignment benefits both students and the industry, contributing to the growth of the translation sector and the overall development of the country.
Keywords: translation industry, bridging the gap, labor market, requirements
Procedia PDF Downloads 41
499 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures for modeling time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that removes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)
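The quantity the model targets, the cumulative incidence function under competing risks, has a simple empirical form when there is no censoring; the follow-up times and causes below are hypothetical, and this is not the paper's estimator:

```python
# Hedged sketch: empirical cumulative incidence function (CIF) for competing
# risks, assuming complete follow-up (no censoring). Data are hypothetical.
def empirical_cif(times, causes, cause, t):
    """P(an event of the given cause occurs by time t)."""
    n = len(times)
    hits = sum(1 for ti, ci in zip(times, causes) if ci == cause and ti <= t)
    return hits / n

# Hypothetical follow-up times with two competing causes (1 and 2).
times = [2.0, 3.5, 5.0, 6.0, 8.0, 9.0]
causes = [1, 2, 1, 1, 2, 1]
print(empirical_cif(times, causes, cause=1, t=6.0))  # 3 of 6 subjects
```

With censoring, the Aalen-Johansen estimator replaces this simple fraction; the paper's RIW component then learns weights over such cause-specific incidences.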
Procedia PDF Downloads 91
498 Generalized Additive Model for Estimating Propensity Score
Authors: Tahmidul Islam
Abstract:
The Propensity Score Matching (PSM) technique has been widely used for estimating the causal effect of treatment in observational studies. One major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms for the covariates is the technique most used in many studies. Logistic regression is also used with cubic splines to retain flexibility in the model. However, choosing the functional form of the logistic regression model has been an open question, since the effectiveness of PSM depends on how accurately the PS has been estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be appropriate. One can estimate the PS using machine learning techniques such as random forests or neural networks for more accuracy in non-linear situations. In this study, an attempt has been made to compare the efficacy of the Generalized Additive Model (GAM) in various linear and non-linear settings and to compare its performance with the usual logistic regression. GAM is a non-parametric technique in which the functional form of the covariates can be left unspecified and a flexible regression model can be fitted. In this study, various simple and complex models have been considered for treatment under several situations (small/large sample, low/high number of treatment units), and we examined which method leads to more covariate balance in the matched dataset. It is found that the logistic regression model is impressively robust against the inclusion of quadratic and interaction terms and reduces the mean difference between treatment and control sets as efficiently as GAM does. GAM provided no significantly better covariate balance than logistic regression in either simple or complex models.
The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.
Keywords: accuracy, covariate balances, generalized additive model, logistic regression, non-linearity, propensity score matching
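Once the propensity score is estimated, by logistic regression, GAM, or anything else, the matching step itself is simple; a greedy 1:1 nearest-neighbour sketch on illustrative scores:

```python
# Hedged sketch: greedy 1:1 nearest-neighbour matching on propensity scores.
# Scores are illustrative; the study's estimation step is not reproduced here.
def nn_match(treated, controls):
    """Match each treated unit to the closest unused control by PS distance."""
    pool = dict(enumerate(controls))       # remaining control units
    pairs = []
    for t_idx, t_ps in enumerate(treated):
        c_idx = min(pool, key=lambda j: abs(pool[j] - t_ps))
        pairs.append((t_idx, c_idx, abs(pool[c_idx] - t_ps)))
        del pool[c_idx]                    # each control used at most once
    return pairs

treated_ps = [0.62, 0.41]
control_ps = [0.30, 0.60, 0.45, 0.90]
for t, c, d in nn_match(treated_ps, control_ps):
    print(t, c, round(d, 2))
```

Covariate balance is then assessed on the matched pairs, e.g., via standardized mean differences, which is the criterion the study compares across methods.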
Procedia PDF Downloads 368
497 Gearbox Defect Detection in the Semi Autogenous Mills Using the Vibration Analysis Technique
Authors: Mostafa Firoozabadi, Alireza Foroughi Nematollahi
Abstract:
Semi autogenous mills are designed for grinding primary crushed ore and are the most widely used mills in concentrators globally. Any defect occurring in a semi autogenous mill can stop the production line. A gearbox is a significant part of a rotating machine or a mill, so gearbox monitoring is a necessary process to prevent unwanted defects. When a defect happens in a gearbox bearing, it can be transferred to the other parts of the equipment, such as the inner ring, outer ring, balls, and the bearing cage. Vibration analysis is one of the most effective and common ways to detect bearing defects in mills. The vibration signal in a mill can be generated by different parts of the mill, including the electromotor, pinion, girth gear, various rolling bearings, and tire. When vibration signals made by the aforementioned parts are added to the gearbox vibration spectrum, accurate and timely defect detection in the gearbox becomes difficult. In this paper, a new method is proposed to detect gearbox bearing defects in a semi autogenous mill accurately and on time, using vibration signal analysis. In this method, if the vibration values in the vibration curve increase, the probability of defect occurrence is investigated by comparing the equipment's vibration values with standard ones. Then, all vibration frequencies are extracted from the vibration signal and the equipment defect is detected using the vibration spectrum curve. This method is implemented on the semi autogenous mills of the Golgohar mining and industrial company in Iran. The results show that the proposed method can detect bearing looseness accurately and on time. After defect detection, the bearing is opened before equipment failure and predictive maintenance actions are implemented on it.
Keywords: condition monitoring, gearbox defects, predictive maintenance, vibration analysis
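The frequency-extraction step the method relies on is a Fourier transform of the sampled vibration signal; a minimal discrete-Fourier-transform sketch with an illustrative 25 Hz tone (not the mill's measured data):

```python
# Hedged sketch: a discrete Fourier transform recovers the dominant frequency
# of a sampled vibration signal. Signal parameters are illustrative only.
import math

def dft_peak_hz(signal, sample_rate):
    """Return the frequency bin (in Hz) with the largest DFT magnitude."""
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):             # skip DC, positive bins only
        re = sum(s * math.cos(2 * math.pi * k * i / n)
                 for i, s in enumerate(signal))
        im = sum(-s * math.sin(2 * math.pi * k * i / n)
                 for i, s in enumerate(signal))
        mag = (re * re + im * im) ** 0.5
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate / n

# A 25 Hz tone sampled at 200 Hz: the kind of spectral line a bearing defect
# leaves in the vibration spectrum.
rate, n = 200, 64
sig = [math.sin(2 * math.pi * 25 * i / rate) for i in range(n)]
print(dft_peak_hz(sig, rate))
```

In practice an FFT is used instead of this O(n²) loop, and the peak frequencies are compared against the characteristic defect frequencies of each bearing element.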
Procedia PDF Downloads 466
496 Taxonomic Study and Environmental Ecology of Parrot (Rose Ringed) in City Mirpurkhas, Sindh, Pakistan
Authors: Aisha Liaquat Ali, Ghulam Sarwar Gachal, Muhammad Yusuf Sheikh
Abstract:
The rose-ringed parakeet (Psittacula krameri), commonly known as Tota, belongs to the order Psittaciformes and family Psittacidae. Its sub-species inhabiting Pakistan is Psittacula krameri borealis. The rose-ringed parakeet has been categorized as a least-concern species; the core aim of the present study is to investigate its ecology and taxonomy. Samples for taxonomic identification were obtained from various adjoining areas of City Mirpurkhas by a non-random method, from February to June 2017. The different parameters were measured with the help of a vernier caliper, a foot scale, and a digital weighing machine. The body parameters measured were length of body, length of the wings, length of tail, and mass in grams. During the present study, a total of 36 specimens were collected from different localities of City Mirpurkhas; 38.2% were male and 62.7% were female. The maximum population density of Psittacula krameri borealis (52.9%) was recorded at the Sindh Horticulture Research Station (fruit farm), Mirpurkhas, while the minimum (5.5%) was recorded in urban parks. It was observed that Psittacula krameri borealis was most abundant during the months of May and June, when the temperature ranged between 20°C and 45°C. Female Psittacula krameri borealis were found to be the heaviest in body weight. The rose-ringed parakeets captured during the study had green plumage, gray coverts, a red upper beak and a black lower beak, and a shorter tail in females and a longer tail in males, which is consistent with Psittacula krameri borealis.
Keywords: Mirpurkhas Sindh Pakistan, environmental ecology, parrot, rose-ringed, taxonomy
Procedia PDF Downloads 175
495 ACO-TS: An ACO-Based Algorithm for Optimizing Cloud Task Scheduling
Authors: Fahad Y. Al-dawish
Abstract:
A large number of organizations and individuals currently tend to use cloud computing, and many consider it a significant shift in the field of computing. Cloud computing environments are distributed and parallel systems consisting of a collection of interconnected physical and virtual machines. With the increasing demand for and profitability of cloud computing infrastructure, diverse computing processes can be executed in the cloud environment. Many organizations and individuals around the world depend on cloud computing infrastructure to carry their applications, platforms, and infrastructure. One of the major and essential issues in this environment is allocating incoming tasks to suitable virtual machines (cloud task scheduling). Cloud task scheduling is classified as an optimization problem, and several meta-heuristic algorithms have been proposed to solve and optimize it. A good task scheduler should adapt its scheduling technique to the changing environment and to the types of incoming task sets. In this research project, a cloud task scheduling methodology based on the ant colony optimization (ACO) algorithm, which we call ACO-TS (Ant Colony Optimization for Task Scheduling), has been proposed and compared with different scheduling algorithms (Random; First Come First Serve, FCFS; and Fastest Processor to the Largest Task First, FPLTF). ACO is a random optimization search method that is used here for assigning incoming tasks to available virtual machines (VMs). The main role of the proposed algorithm is to minimize the makespan of a given task set and to maximize resource utilization by balancing the load among virtual machines. The proposed scheduling algorithm was evaluated using the CloudSim toolkit framework.
Finally, after analyzing and evaluating the experimental results, we find that the proposed ACO-TS algorithm performs better than the Random, FCFS, and FPLTF algorithms in both makespan and resource utilization.
Keywords: cloud task scheduling, ant colony optimization (ACO), CloudSim, cloud computing
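The core ACO idea the abstract describes, ants building task-to-VM assignments guided by pheromone, with trails on the best assignment reinforced, can be sketched as follows; the task lengths, VM speeds, and all parameters are illustrative, not the authors' CloudSim configuration:

```python
# Hedged ACO sketch for task-to-VM scheduling (not the authors' ACO-TS code):
# each ant samples a full assignment from per-task pheromone weights; the
# incumbent best assignment receives pheromone proportional to 1/makespan.
import random

def aco_schedule(lengths, speeds, ants=20, iters=30, rho=0.1, seed=1):
    random.seed(seed)
    n_tasks, n_vms = len(lengths), len(speeds)
    tau = [[1.0] * n_vms for _ in range(n_tasks)]     # pheromone matrix
    best_assign, best_span = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            loads = [0.0] * n_vms
            assign = []
            for t in range(n_tasks):
                vm = random.choices(range(n_vms), weights=tau[t])[0]
                loads[vm] += lengths[t] / speeds[vm]  # execution time on vm
                assign.append(vm)
            span = max(loads)                          # makespan of this ant
            if span < best_span:
                best_assign, best_span = assign, span
        for t in range(n_tasks):                       # evaporate + deposit
            for v in range(n_vms):
                tau[t][v] *= 1 - rho
            tau[t][best_assign[t]] += 1.0 / best_span
    return best_assign, best_span

lengths = [400, 300, 200, 100]      # hypothetical task sizes (MI)
speeds = [100, 50]                  # hypothetical VM speeds (MIPS)
assign, span = aco_schedule(lengths, speeds)
print(assign, span)
```

The real ACO-TS additionally uses heuristic desirability alongside pheromone and runs inside CloudSim; this sketch keeps only the pheromone-sampling loop.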
Procedia PDF Downloads 422
494 Fluid Structure Interaction Study between Ahead and Angled Impact of AGM 88 Missile Entering Relatively High Viscous Fluid for K-Omega Turbulence Model
Authors: Abu Afree Andalib, Rafiur Rahman, Md Mezbah Uddin
Abstract:
The main objective of this work is to analyze the various parameters of the AGM 88 missile using the FSI module in Ansys. Computational fluid dynamics is used to study the fluid flow pattern and fluidic phenomena such as drag, pressure force, energy dissipation and shockwave distribution in water. Using the finite element analysis module of Ansys, structural parameters such as stress and stress density, localization point, deflection, and force propagation are determined. A separate analysis of the structural parameters is done in Abaqus. A state-of-the-art coupling module is used for the FSI analysis. A fine mesh is considered in every case for better results during simulation, within the limits of the available computational power. The results for the above-mentioned parameters are analyzed and compared for the two phases using graphical representation, and the results of Ansys and Abaqus are also shown. Computational fluid dynamics and finite element analyses, and subsequently the fluid-structure interaction (FSI) technique, are considered. The finite volume method and the finite element method are used for modelling the fluid flow and for the structural parameter analysis, respectively. Feasible boundary conditions are also utilized in the research. A significant change in the interaction and interference pattern was found during impact. Both theoretically and according to the simulation, the angled condition was found to produce the higher impact.
Keywords: FSI (fluid-structure interaction), impact, missile, high viscous fluid, CFD (computational fluid dynamics), FEM (finite element analysis), FVM (finite volume method), fluid flow, fluid pattern, structural analysis, AGM-88, Ansys, Abaqus, meshing, k-omega, turbulence model
Procedia PDF Downloads 468
493 Functional Connectivity Signatures of Polygenic Depression Risk in Youth
Authors: Louise Moles, Steve Riley, Sarah D. Lichenstein, Marzieh Babaeianjelodar, Robert Kohler, Annie Cheng, Corey Horien, Abigail Greene, Wenjing Luo, Jonathan Ahern, Bohan Xu, Yize Zhao, Chun Chieh Fan, R. Todd Constable, Sarah W. Yip
Abstract:
Background: Risks for depression are myriad and include both genetic and brain-based factors. However, relationships between these systems are poorly understood, limiting understanding of disease etiology, particularly at the developmental level. Methods: We use a data-driven machine learning approach, connectome-based predictive modeling (CPM), to identify functional connectivity signatures associated with polygenic risk scores for depression (DEP-PRS) among youth from the Adolescent Brain and Cognitive Development (ABCD) study across diverse brain states, i.e., during resting state, affective working memory, response inhibition, and reward processing. Results: Using 10-fold cross-validation with 100 iterations and permutation testing, CPM identified connectivity signatures of DEP-PRS across all examined brain states (rho's=0.20-0.27, p's<.001). Across brain states, DEP-PRS was positively predicted by increased connectivity between frontoparietal and salience networks, increased motor-sensory network connectivity, decreased salience-to-subcortical connectivity, and decreased subcortical-to-motor-sensory connectivity. Subsampling analyses demonstrated that model accuracies were robust across random subsamples of N=1,000, N=500, and N=250 but became unstable at N=100. Conclusions: These data, for the first time, identify neural networks of polygenic depression risk in a large sample of youth before the onset of significant clinical impairment. Identified networks may be considered potential treatment targets or vulnerability markers for depression risk.
Keywords: genetics, functional connectivity, pre-adolescents, depression
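The core CPM loop, correlate each connectivity edge with the phenotype, keep edges past a threshold, and summarise each subject by the sum of selected edges, can be sketched on synthetic data (the dimensions, threshold, and values below are illustrative, not ABCD data):

```python
# Hedged CPM sketch: edge selection by phenotype correlation, then a per-subject
# summary score from the selected edges. Synthetic 4-subject x 3-edge data.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def cpm_select(edges, phenotype, r_thresh):
    """edges[s][e]: connectivity edge e of subject s. Returns the selected
    edge indices and one summary score (sum of selected edges) per subject."""
    n_edges = len(edges[0])
    selected = [e for e in range(n_edges)
                if abs(pearson([s[e] for s in edges], phenotype)) >= r_thresh]
    scores = [sum(s[e] for e in selected) for s in edges]
    return selected, scores

# Edge 0 tracks the phenotype; edges 1-2 are noise / constant.
edges = [[0.1, 0.9, 0.4], [0.2, 0.1, 0.4], [0.3, 0.8, 0.4], [0.4, 0.2, 0.4]]
phenotype = [1.0, 2.0, 3.0, 4.0]
sel, scores = cpm_select(edges, phenotype, r_thresh=0.9)
print(sel, scores)
```

In full CPM the summary scores are computed on training folds only, a linear model maps score to phenotype, and significance is established by permutation testing, as the abstract describes.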
Procedia PDF Downloads 60