Search results for: HVOF (High Velocity Oxygen Fuel)
3862 A Hybrid-Evolutionary Optimizer for Modeling the Process of Obtaining Bricks
Authors: Marius Gavrilescu, Sabina-Adriana Floria, Florin Leon, Silvia Curteanu, Costel Anton
Abstract:
Natural sciences provide a wide range of experimental data whose related problems require study and modeling beyond the capabilities of conventional methodologies. Such problems have solution spaces whose complexity and high dimensionality require correspondingly complex regression methods for proper characterization. In this context, we propose an optimization method which consists of a hybrid dual-optimizer setup: a global optimizer based on a modified variant of the popular Imperialist Competitive Algorithm (ICA), and a local optimizer based on a gradient descent approach. The ICA is modified such that intermediate solution populations are more quickly and efficiently pruned of low-fitness individuals by appropriately altering the assimilation, revolution and competition phases, which, combined with an initialization strategy based on low-discrepancy sampling, allows for a more effective exploration of the corresponding solution space. Subsequently, gradient-based optimization is used locally to seek the optimal solution in the neighborhoods of the solutions found through the modified ICA. We use this combined approach to find the optimal configuration and weights of a fully-connected neural network, resulting in regression models used to characterize the process of obtaining bricks using silicon-based materials. Installations in the raw ceramics industry, i.e., brick-making, are characterized by significant energy consumption and large quantities of emissions. Thus, the purpose of our approach is to determine by simulation the working conditions, including the manufacturing mix recipe with the addition of different materials, to minimize the emissions represented by CO and CH4.
Our approach determines regression models which perform significantly better than those found using the traditional ICA for the aforementioned problem, resulting in better convergence and a substantially lower error.
Keywords: optimization, biologically inspired algorithm, regression models, bricks, emissions
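The two-stage scheme described above (a population-based global search seeded with low-discrepancy samples, followed by gradient-based local refinement) can be sketched as follows. This is a minimal illustrative stand-in, not the authors' modified ICA: elite selection with Gaussian perturbation replaces the imperialist/colony mechanics, a Halton sequence stands in for the low-discrepancy sampler, and a toy quadratic replaces the neural-network training loss.

```python
import numpy as np

def low_discrepancy_init(n, dim):
    # Halton-sequence initialization over [-2, 2]^dim, standing in for the
    # paper's low-discrepancy sampling strategy (an assumption).
    def halton(i, base):
        f, r = 1.0, 0.0
        while i > 0:
            f /= base
            r += f * (i % base)
            i //= base
        return r
    bases = [2, 3, 5, 7, 11, 13][:dim]
    return np.array([[halton(i + 1, b) * 4 - 2 for b in bases]
                     for i in range(n)])

def hybrid_optimize(f, dim, pop=32, global_iters=50, local_iters=200, lr=0.05):
    # Stage 1: global search. Low-fitness individuals are pruned each
    # iteration and the elite quarter is perturbed to refill the population
    # (a heavily simplified stand-in for the modified ICA's assimilation,
    # revolution and competition phases).
    rng = np.random.default_rng(0)
    X = low_discrepancy_init(pop, dim)
    for _ in range(global_iters):
        fitness = np.array([f(x) for x in X])
        elite = X[np.argsort(fitness)[:pop // 4]]
        X = np.concatenate([elite + rng.normal(0, 0.1, elite.shape)
                            for _ in range(4)])
    x = min(X, key=f).copy()
    # Stage 2: local refinement with finite-difference gradient descent
    # in the neighborhood of the best solution found globally.
    eps = 1e-6
    for _ in range(local_iters):
        grad = np.array([(f(x + eps * np.eye(dim)[j]) - f(x)) / eps
                         for j in range(dim)])
        x -= lr * grad
    return x

# Toy objective standing in for a neural network's training loss.
target = np.array([1.0, -0.5])
sol = hybrid_optimize(lambda x: float(np.sum((x - target) ** 2)), dim=2)
```

In the paper's setting, stage 2 would use backpropagated gradients of the network loss rather than finite differences.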
Procedia PDF Downloads 82
3861 Views and Experiences of Medical Students of Kerman University of Medical Sciences on Facilitators and Inhibitors of Quality of Education in the Clinical Education System in 2021
Authors: Hossein Ghaedamini, Salman Farahbakhsh, Alireza Amirbeigi, Zahra Saghafi, Salman Daneshi, Alireza Ghaedamini
Abstract:
Background: Assessing the challenges of clinical education of medical students is one of the most important and sensitive parts of medical education. The aim of this study was to investigate the views and experiences of Kerman medical students on the factors that facilitate and inhibit the quality of clinical education. Materials and Methods: This research was qualitative and used a phenomenological approach. The study population included medical interns of Kerman University of Medical Sciences in 1400 (2021). The method of data collection was in-depth interviews with participants. Data were coded and analyzed using Colaizzi's stepwise model. Results: About 540 primary codes were extracted and categorized into two main themes (facilitators and inhibitors) and 10 sub-themes, including: providing motivational models and creating interest in interns; the high scientific level of professors and the appropriate quality of their teaching; the use of technology in the clinical education process; delegating authority, freedom of action and more responsibilities to interns; inappropriate treatment of interns by some officials, professors, assistants and department staff; inadequate educational programming; lack of necessary cooperation and inappropriate treatment of interns by clinical training experts; inadequate methods of evaluating interns in clinical training; the poor quality of morning reports; the inefficiency of grand rounds; the lack of facilities and conditions suitable to the position of a medical intern; and the heavy workload of some departments. Conclusion: Clinical education is always mixed with special principles and subtleties, and special attention to facilitators and inhibitors in this process has an important role in improving its quality.
Keywords: clinical education, medical students, qualitative study, education
Procedia PDF Downloads 98
3860 Impact of Sociocultural Factors on Management and Utilization of Solid Waste in Ibadan Metropolis, Nigeria
Authors: Olufunmilayo Folaranmi
Abstract:
This research was carried out to examine the impact of socio-cultural factors on the management and utilization of solid waste in Ibadan Metropolis. A descriptive survey research design was adopted for the study, while a systematic and stratified random sampling technique was used to select 300 respondents, categorized into high-, middle- and low-density areas. Four hypotheses were tested using the chi-square test on the variables of unavailability of waste disposal facilities and waste management; negligence of contractors to liaise with community members; lack of adequate environmental education and waste management and utilization; low level of motivation of sanitation workers; and lack of full community participation in solid waste management and utilization. Results showed a significant effect of the unavailability of waste disposal facilities on solid waste management and utilization (χ² = 16.6, p < .05). There is also a significant relationship between the negligence of contractors to liaise with community members and improper disposal (χ² = 87.5, p < .05). The motivation of sanitation workers is significantly related to solid waste management (χ² = 70.4, p < .05). Adequate environmental education and awareness influenced solid waste management, and there was a significant relationship between lack of community participation in waste management and improper waste disposal. Based on the findings, it was recommended that the quality of life in urban centers be improved, the social welfare of the populace enhanced, and the environment adequately attended to. Poverty alleviation programmes should be intensified and made to live beyond the life of a particular administration, and micro-credit facilities should be made available to community members to promote their welfare.
Lastly, sustained environmental education programmes should be provided for citizens at all levels of education, formal and informal, through agencies like the Ethical and Attitudinal Reorientation Commission (EARCOM) and the National Orientation Agency (NOA).
Keywords: management, social welfare, socio-cultural factors, solid waste
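The chi-square statistics reported above compare observed contingency-table counts against the counts expected under independence. A minimal sketch of how such a statistic is computed, using purely hypothetical counts rather than the study's data:

```python
import numpy as np

def chi_square_independence(table):
    # Pearson chi-square test of independence for an r x c contingency
    # table of observed counts (the kind of test the study reports, e.g.
    # facility availability vs. waste management practice).
    obs = np.asarray(table, dtype=float)
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    expected = row * col / obs.sum()          # counts expected under H0
    chi2 = float(((obs - expected) ** 2 / expected).sum())
    dof = (obs.shape[0] - 1) * (obs.shape[1] - 1)
    return chi2, dof

# Hypothetical 2x2 table: disposal facilities (available / unavailable)
# vs. proper / improper waste disposal. Illustrative counts only.
chi2, dof = chi_square_independence([[90, 40], [50, 120]])
```

The statistic is then compared against the chi-square critical value for the computed degrees of freedom at the chosen significance level (here .05).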
Procedia PDF Downloads 230
3859 Attenuation of Endotoxin Induced Hepatotoxicity by Dexamethasone, Melatonin and Pentoxifylline in White Albino Mice: A Comparative Study
Authors: Ammara Khan
Abstract:
Sepsis is characterized by an overwhelming surge of cytokines and oxidative stress attributable to one of many factors, with gram-negative bacteria commonly implicated. Despite major expansion and elaboration of sepsis pathophysiology and therapeutic approaches, the death rate remains very high in septic patients due to multiple organ damage, including hepatotoxicity. The present study aimed to ascertain the efficacy of three different drugs delivered separately and in combination: a low-dose steroid, dexamethasone (3 mg/kg i.p.); an antioxidant, melatonin (10 mg/kg i.p.); and a phosphodiesterase inhibitor, pentoxifylline (75 mg/kg i.p.), in endotoxin-induced hepatotoxicity in mice. Endotoxin/lipopolysaccharide-induced hepatotoxicity was reproduced in mice by giving lipopolysaccharide of an E. coli serotype intraperitoneally. The preventive role was examined by giving the experimental agent half an hour prior to LPS injection, whereas the therapeutic potential of the experimental agent was investigated via post-LPS delivery. The extent of liver damage was judged via serum alanine aminotransferase (ALT) and aspartate aminotransferase (AST) estimation along with a histopathological examination of liver tissue. Dexamethasone given before (Group 3) and after LPS (Group 4) significantly attenuated LPS-generated liver injury. Pentoxifylline generated similar results, and serum ALT, AST and histological alterations abated considerably (p ≤ 0.05) in animals subjected to both pentoxifylline pre- (Group 5) and post-treatment (Group 6). Melatonin was also successful in the prevention (Group 7) and treatment (Group 8) of LPS-invoked hepatotoxicity, as evident from the lessening of augmented ALT (p ≤ 0.01) and AST (p ≤ 0.01) along with restoration of pathological changes in liver sections (p ≤ 0.05).
Combination therapies with dexamethasone in conjunction with melatonin (Group 9), dexamethasone together with pentoxifylline (Group 10), and pentoxifylline along with melatonin (Group 11) after LPS administration attenuated LPS-evoked hepatic dysfunction to a statistically significant degree. In conclusion, both melatonin and pentoxifylline produced promising results in endotoxin-induced hepatotoxicity and can be used as therapeutic adjuncts to conventional treatment strategies in sepsis-induced liver failure.
Keywords: endotoxin/lipopolysaccharide, dexamethasone, hepatotoxicity, melatonin, pentoxifylline
Procedia PDF Downloads 280
3858 Identification of Blood Biomarkers Unveiling Early Alzheimer's Disease Diagnosis Through Single-Cell RNA Sequencing Data and Autoencoders
Authors: Hediyeh Talebi, Shokoofeh Ghiam, Changiz Eslahchi
Abstract:
Traditionally, Alzheimer’s disease research has focused on genes with significant fold changes, potentially neglecting subtle but biologically important alterations. Our study introduces an integrative approach that highlights genes crucial to underlying biological processes, regardless of their fold change magnitude. Alzheimer's single-cell RNA-seq data related to peripheral blood mononuclear cells (PBMC) were extracted from the Gene Expression Omnibus (GEO). After quality control, normalization, scaling, batch effect correction, and clustering, differentially expressed genes (DEGs) were identified with adjusted p-values less than 0.05. These DEGs were categorized based on cell type, resulting in four datasets, each corresponding to a distinct cell type. To distinguish between cells from healthy individuals and those with Alzheimer's, an adversarial autoencoder with a classifier was employed. This allowed for the separation of healthy and diseased samples. To identify the most influential genes in this classification, the weight matrices of the network's encoder and classifier components were multiplied, and the analysis focused on the top 20 genes. The analysis revealed that while some of these genes exhibit a high fold change, others do not. These genes, which may be overlooked by previous methods due to their low fold change, were shown to be significant in our study. The findings highlight the critical role of genes with subtle alterations in diagnosing Alzheimer's disease, a facet frequently overlooked by conventional methods. These genes demonstrate remarkable discriminatory power, underscoring the need to integrate biological relevance with statistical measures in gene prioritization.
This integrative approach enhances our understanding of the molecular mechanisms in Alzheimer’s disease and provides a promising direction for identifying potential therapeutic targets.
Keywords: Alzheimer's disease, single-cell RNA-seq, neural networks, blood biomarkers
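The gene-scoring step described above, multiplying the encoder and classifier weight matrices and ranking genes by their end-to-end contribution to the class output, can be sketched as follows. The dimensions, weights, and "influential" genes here are synthetic stand-ins with a planted signal, not the study's trained network:

```python
import numpy as np

rng = np.random.default_rng(42)
n_genes, n_latent, n_classes = 100, 8, 2

# Stand-ins for trained weights: encoder (genes -> latent) and classifier
# (latent -> healthy / Alzheimer's). Small random noise, plus a planted
# strong pathway through latent unit 0 for three "influential" genes.
W_enc = rng.normal(0, 0.01, (n_genes, n_latent))
W_clf = rng.normal(0, 0.01, (n_latent, n_classes))
influential = [3, 17, 42]
W_enc[influential, 0] += 1.0   # these genes load heavily on latent unit 0
W_clf[0, 0] += 1.0             # latent unit 0 drives the class logit

# Multiply the weight matrices end-to-end, then score each gene by the
# magnitude of its total contribution to the class outputs.
contribution = W_enc @ W_clf                # (n_genes, n_classes)
scores = np.abs(contribution).sum(axis=1)   # one importance score per gene
top20 = np.argsort(scores)[::-1][:20]
```

Crucially, a gene can receive a high score this way even if its fold change is modest, which is the point the abstract makes.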
Procedia PDF Downloads 66
3857 Adsorption of Chlorinated Pesticides in Drinking Water by Carbon Nanotubes
Authors: Hacer Sule Gonul, Vedat Uyak
Abstract:
Intensive use of pesticides in agricultural activity causes these compounds to mix into water sources via surface flow. Especially after the 1970s, a number of limitations were imposed on the use of chlorinated pesticides, which carry a carcinogenic risk potential, and regulatory limits were established. The discharge of these chlorinated pesticides into water resources, their transport through the water and land environment, and their accumulation in the human body through the food chain raise serious health concerns. Carbon nanotubes (CNTs) have attracted considerable attention because of their excellent mechanical, electrical, and environmental characteristics. Due to the highly hydrophobic surfaces of CNT particles, these nanoparticles play a critical role in the removal of water contaminants such as natural organic matter, pesticides and phenolic compounds from water sources. Health concerns associated with chlorinated pesticides require the removal of such contaminants from the aquatic environment. Although the use of aldrin and atrazine was restricted in Turkey, illegal entry and widespread use of such chemicals in agricultural areas increase the concentrations of these chemicals in the water supply. In this study, chlorinated pesticide compounds such as aldrin and atrazine were removed from drinking water with a carbon nanotube adsorption method. Two different types of CNT were used: single-wall (SWCNT) and multi-wall (MWCNT) carbon nanotubes. For the adsorption isotherms within the scope of this work, the parameters affecting the adsorption of chlorinated pesticides in water were taken to be pH, contact time, CNT type, CNT dose and initial concentration of pesticides. As a result, under neutral pH conditions with MWCNT, the adsorption capacities obtained for atrazine and aldrin were determined as 2.24 µg/mg and 3.84 µg/mg, respectively.
On the other hand, the adsorption capacities determined for SWCNT for aldrin and atrazine were 3.91 µg/mg and 3.92 µg/mg, respectively. Overall, SWCNT particles provided the superior performance in removing each type of pesticide.
Keywords: pesticide, drinking water, carbon nanotube, adsorption
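Adsorption capacities like those reported are typically obtained by fitting equilibrium data to an isotherm model. A sketch using the common linearized Langmuir form on synthetic data follows; the abstract does not state which isotherm the authors fitted, and the values and units below are illustrative assumptions only:

```python
import numpy as np

def langmuir(Ce, qmax, K):
    # Langmuir isotherm: equilibrium adsorption capacity qe as a function
    # of the equilibrium pesticide concentration Ce.
    return qmax * K * Ce / (1 + K * Ce)

# Synthetic equilibrium data, loosely on the scale of the reported
# capacities (a few ug/mg). Not the study's measurements.
qmax_true, K_true = 3.9, 0.8            # ug/mg and L/ug (assumed units)
Ce = np.array([0.5, 1, 2, 4, 8, 16.0])  # equilibrium concentrations
qe = langmuir(Ce, qmax_true, K_true)

# Linearized Langmuir fit:  Ce/qe = Ce/qmax + 1/(K*qmax)
# so the slope gives 1/qmax and slope/intercept gives K.
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax_fit = 1 / slope
K_fit = slope / intercept
```

With real, noisy data a nonlinear least-squares fit of the untransformed isotherm is usually preferred over the linearized form.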
Procedia PDF Downloads 171
3856 A Nutritional Wellness Program for Overweight Health Care Providers in Hospital Setting: A Randomized Controlled Trial Pilot Study
Authors: Kim H. K. Choy, Oliva H. K. Chu, W. Y. Keung, B. Lim, Winnie P. Y. Tang
Abstract:
Background: The prevalence of workplace obesity is rising worldwide; therefore, the workplace is an ideal venue to implement weight control interventions. This pilot randomized controlled trial aimed to develop, implement, and evaluate a nutritional wellness program for obese health care providers working in a hospital. Methods: This hospital-based nutritional wellness program was an 8-week pilot randomized controlled trial for obese health care providers. The primary outcomes were body weight and body mass index (BMI). The secondary outcomes were serum fasting glucose, fasting cholesterol, triglyceride, high-density (HDL) and low-density (LDL) lipoprotein, body fat percentage, and body mass. Participants were randomly assigned to the intervention (n = 20) or control (n = 22) group. Participants in both groups received individual nutrition counselling and nutrition pamphlets, whereas only participants in the intervention group were given mobile phone text messages. Results: 42 participants completed the study. In comparison with the control group, the intervention group showed approximately 0.98 kg weight reduction after two months. Participants in the intervention group also demonstrated clinically significant improvement in BMI, serum cholesterol level, and HDL level. There was no improvement in body fat percentage or body mass in either group. Conclusion: The nutritional wellness program for obese health care providers was feasible in hospital settings. Health care providers demonstrated short-term weight loss, a decrease in serum fasting cholesterol level, and improvement in HDL level after completing the program.
Keywords: weight management, weight control, health care providers, hospital
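BMI, the primary outcome above, is computed from the measured weight and height. A minimal sketch, using the standard WHO adult cutoffs for classification (the study's own enrolment threshold may have differed):

```python
def bmi(weight_kg, height_m):
    # Body mass index from measured weight (kg) and height (m).
    return weight_kg / height_m ** 2

def classify(b):
    # Standard WHO adult categories; illustrative, not necessarily the
    # exact cutoffs the study used to define "obese" participants.
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    return "obese"
```

For example, a 1.70 m participant weighing 80 kg has a BMI of about 27.7 and falls in the overweight band.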
Procedia PDF Downloads 243
3855 New Test Algorithm to Detect Acute and Chronic HIV Infection Using a 4th Generation Combo Test
Authors: Barun K. De
Abstract:
Acquired immunodeficiency syndrome (AIDS) is caused by two types of human immunodeficiency viruses, collectively designated HIV. HIV infection is spreading globally, particularly in developing countries. Before an individual is diagnosed with HIV, the disease goes through different phases. First there is an acute early phase, which is followed by an established or chronic phase. Subsequently, there is a latency period, after which the individual becomes immunodeficient. It is in the acute phase that an individual is highly infectious due to a high viral load. Presently, HIV diagnosis involves tests that do not detect the acute phase of infection, during which both the viral RNA and the p24 antigen are expressed. Instead, these less sensitive tests detect antibodies to viral antigens, which typically sero-convert later in the disease process, following acute infection. These antibodies are detected in both asymptomatic HIV-infected individuals and AIDS patients. Studies indicate that early diagnosis and treatment of HIV infection can reduce medical costs, improve survival, and reduce spreading of infection to new, uninfected partners. Newer 4th generation combination antigen/antibody tests are highly sensitive and specific for detection of acute and established HIV infection (HIV-1 and HIV-2), enabling immediate linkage to care. The CDC (Centers for Disease Control and Prevention, USA) recently recommended an algorithm involving three different tests to screen for and diagnose acute and established infections of HIV-1 and HIV-2 in a general population. Initially, a 4th generation combo test detects the viral antigen p24 and specific antibodies against HIV-1 and HIV-2 envelope proteins. If the test is positive, it is followed by a second test, known as a differentiation assay, which detects antibodies against specific HIV-1 and HIV-2 envelope proteins, confirming established infection with HIV-1 or HIV-2.
However, if the differentiation assay is negative, a third test measuring viral load is performed; a detectable viral load confirms an acute HIV-1 infection. Screening of a Phoenix-area population detected 0.3% new HIV infections, among which 32.4% were acute cases. Studies in the U.S. indicate that this algorithm effectively reduces HIV transmission through immediate treatment and education following diagnosis.
Keywords: new algorithm, HIV, diagnosis, infection
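The three-step decision logic of the algorithm described above can be sketched as simplified code. The boolean inputs are an abstraction: real laboratory reporting distinguishes reactive, non-reactive, and indeterminate results and includes repeat testing.

```python
def classify_hiv(combo_positive, differentiation, viral_load_detected):
    # Step 1: 4th generation antigen/antibody combo screen.
    if not combo_positive:
        return "HIV negative (no antigen/antibody detected)"
    # Step 2: antibody differentiation assay for HIV-1 vs HIV-2.
    if differentiation in ("HIV-1", "HIV-2"):
        return f"Established {differentiation} infection"
    # Step 3: combo positive but differentiation negative -- resolve with
    # a nucleic acid (viral load) test, which catches acute infection
    # before antibodies have sero-converted.
    if viral_load_detected:
        return "Acute HIV-1 infection"
    return "Discordant results; retest"
```

The key property of the algorithm is visible in the third branch: an antigen-positive, antibody-negative specimen is not dismissed but escalated to a viral load test.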
Procedia PDF Downloads 410
3854 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning
Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan
Abstract:
The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of different machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then subsequently applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance with predictions primarily being within the experimental uncertainty of the test data. 
Furthermore, machine learning can predict the behavior of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass
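As a sketch of the kind of ensemble regression of viscosity on temperature and composition described above, the following fits a small bagged ensemble of least-squares models to synthetic Arrhenius-like data. The data, features, and bagging-of-linear-models choice are illustrative assumptions; the study compared more powerful ensemble learners (e.g. tree-based methods) on real industrial datasets.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "industrial" dataset, purely illustrative: log-viscosity
# follows an Arrhenius-like law in temperature plus a linear
# composition effect, with measurement noise.
n = 200
T = rng.uniform(1100, 1500, n)      # temperature, K
sio2 = rng.uniform(0.40, 0.60, n)   # SiO2 mass fraction
log_visc = -8.0 + 12000 / T + 6.0 * sio2 + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), 1 / T, sio2])

# Bagging ensemble: fit least-squares models on bootstrap resamples
# and average their predictions.
models = []
for _ in range(25):
    idx = rng.integers(0, n, n)
    coef, *_ = np.linalg.lstsq(X[idx], log_visc[idx], rcond=None)
    models.append(coef)

def predict(T_new, sio2_new):
    x = np.array([1.0, 1 / T_new, sio2_new])
    return float(np.mean([x @ c for c in models]))

pred = predict(1300.0, 0.50)
true = -8.0 + 12000 / 1300.0 + 6.0 * 0.50
```

Using 1/T as a feature encodes the Arrhenius temperature dependence directly, which is one reason viscosity prediction benefits from including temperature alongside composition.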
Procedia PDF Downloads 116
3853 Bronchoscopy and Genexpert in the Diagnosis of Pulmonary Tuberculosis in the Indian Private Health Sector: A Short Case Series
Authors: J. J. Mathew
Abstract:
Pulmonary tuberculosis is highly prevalent in the Indian subcontinent. Most cases of pulmonary tuberculosis are diagnosed with sputum examinations, and the vast majority of these are undertaken by government-run establishments. However, mycobacterial cultures are not routinely done unless drug resistance is suspected based on clinical response. Modern diagnostic tests like bronchoscopy and Genexpert are not routinely employed in government institutions for the diagnosis of pulmonary tuberculosis, but have been widely accepted by good private institutions. The utility of these investigations in the private sector is not yet well recognized. This retrospective study aims to assess the usefulness of bronchoscopy and Genexpert in the diagnosis of pulmonary tuberculosis in a quaternary-care private hospital in India. 30 patients with respiratory symptoms raising the possibility of tuberculosis based on clinical and radiological features, but without any significant sputum production, were subjected to bronchoscopy, and BAL samples were taken for microbiological studies, including Genexpert. 6 of the 30 patients were found to be Genexpert positive, and none of them showed rifampicin resistance. All 6 cases had upper-zone-predominant disease. One of the 6 cases of tuberculosis had a co-existent bacterial infection according to the routine culture studies. 6 other cases were proven to be due to other bacterial infections alone, 2 had a malignant diagnosis, and the remaining cases were thought to be non-infective pathologies. The Genexpert results were made available within 48 hours in the 6 positive cases. All of them were commenced on a standard anti-tuberculous regimen with excellent clinical response. The other infective cases were also managed successfully based on the drug susceptibilities. The study has shown the usefulness of these investigations, as early intervention enabled diagnosis, facilitating treatment and preventing clinical deterioration.
The study lends support to early bronchoscopy and Genexpert testing in suspected cases of pulmonary tuberculosis without significant sputum production, in a high-prevalence country which normally relies on sputum examination for the diagnosis of pulmonary tuberculosis.
Keywords: pulmonary, tuberculosis, bronchoscopy, genexpert
Procedia PDF Downloads 245
3852 Response of Yield and Morphological Characteristic of Rice Cultivars to Heat Stress at Different Growth Stages
Authors: Mohammad Taghi Karbalaei Aghamolki, Mohd Khanif Yusop, Fateh Chand Oad, Hamed Zakikhani, Hawa Zee Jaafar, Sharifh Kharidah, Mohamed Hanafi Musa, Shahram Soltani
Abstract:
High temperatures during sensitive growth phases change rice morphology as well as influencing yield. In this glasshouse study, the treatments were: growing conditions [normal growing (32 °C ± 2) and heat stress (38 °C ± 2) during the day, with 22 °C ± 2 at night], growth stages (booting, flowering and ripening) and four cultivars (Hovaze, Hashemi and Fajr as exotic, and MR219 as indigenous). The heat chamber was covered with plastic, and an automatic heater was set to 38 °C ± 2 (day) and 22 °C ± 2 (night) for two weeks at each growth stage. Rice morphology and yield under the influence of heat stress during the various growth stages showed taller plants in Hashemi due to its tall character. The total tillers per hill were significantly higher in Fajr receiving heat stress during the booting stage. In all growing conditions and growth stages, Hashemi recorded higher panicle exertion and flag leaf length. The flag leaf width was in all situations found to be higher in Hovaze. The total tillers per hill were more numerous in Fajr, even though heat stress was imposed during the booting and flowering stages. The indigenous MR219 recorded higher grain yield in all growing conditions and growth stages, although its grain yield slightly decreased when heat stress was imposed during booting and flowering. Similar results were found in all the other, exotic cultivars, which recorded lower grain yield under heat stress during booting and flowering. However, heat stress during the ripening stage had no effect on the plants.
Keywords: rice, growth, heat, temperature, stress, morphology, yield
Procedia PDF Downloads 276
3851 Relationships between Screen Time, Internet Addiction and Other Lifestyle Behaviors with Obesity among Secondary School Students in the Turkish Republic of Northern Cyprus
Authors: Ozen Asut, Gulifeiya Abuduxike, Imge Begendi, Mustafa O. Canatan, Merve Colak, Gizem Ozturk, Lara Tasan, Ahmed Waraiet, Songul A. Vaizoglu, Sanda Cali
Abstract:
Obesity among children and adolescents is one of the critical public health problems worldwide. Internet addiction is one of the sedentary behaviors that cause obesity through excessive screen time and reduced physical activity. We aimed to examine the relationships of screen time, internet addiction and other lifestyle behaviors with obesity among high school students at the Near East College in Nicosia, Northern Cyprus. A cross-sectional study was conducted among 469 secondary school students, mean age 11.95 (SD, 0.81) years. A self-administered questionnaire was applied to assess screen time and lifestyle behaviors. The Turkish-adapted version of the short form of the internet addiction test was used to assess internet addiction problems. Height and weight were measured to calculate BMI, which was classified based on the BMI percentiles for sex and age. Descriptive analysis, the chi-square test, and multivariate regression analysis were performed. Of all participants, 17.2% were overweight or obese, and 18.1% had internet addiction, while 40.7% reported screen time of more than two hours. After adjusting the analysis for age and sex, eating snacks while watching television (OR, 3.04; 95% CI, 1.28-7.21), self-perceived body weight (OR, 24.9; 95% CI, 9.64-64.25) and having a play station in the room (OR, 4.6; 95% CI, 1.85-11.42) were significantly associated with obesity. Screen time (OR, 4.68; 95% CI, 2.61-8.38; p=0.000) and having a computer in the bedroom (OR, 1.7; 95% CI, 1.01-2.87; p=0.046) were significantly associated with internet addiction, whereas parents' complaints regarding lengthy technology use (OR, 0.23; 95% CI, 0.11-0.46; p=0.000) were found to be a protective factor against internet addiction. Prolonged screen time, internet addiction, sedentary lifestyles, and reduced physical and social activities are interrelated, multi-dimensional factors that lead to obesity among children and adolescents.
A family- and school-based integrated approach should be implemented to tackle obesity problems.
Keywords: adolescents, internet addiction, lifestyle, Northern Cyprus, obesity, screen time
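The adjusted odds ratios reported above come from multivariate regression. The following shows, on hypothetical counts, how an unadjusted odds ratio and its 95% Wald confidence interval are formed from a 2x2 table; the study's adjusted estimates additionally control for age and sex.

```python
import math

def odds_ratio(a, b, c, d):
    # Odds ratio and 95% Wald CI from a 2x2 exposure/outcome table:
    #   a = exposed & outcome,    b = exposed & no outcome
    #   c = unexposed & outcome,  d = unexposed & no outcome
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts for obesity by "play station in the room";
# illustrative only, not the study's data.
or_, ci = odds_ratio(20, 40, 30, 240)
```

An interval that excludes 1 (as here) indicates a statistically significant association at the 5% level.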
Procedia PDF Downloads 142
3850 Design of an Active Compression System for Treating Vascular Disease Using a Series of Silicone Based Inflatable Mini Bladders
Authors: Gayani K. Nandasiri, Tilak Dias, William Hurley
Abstract:
Venous disease of the human lower limb ranges from minor asymptomatic incompetence of venous valves to chronic venous ulceration. The sheer prevalence of varicose veins and the significant costs of treating late complications such as chronic ulcers place a high burden on health care resources. In most western countries with developed health care systems, treatment costs associated with venous disease account for a considerable portion of the total health care budget, and they have become a high-cost burden to the National Health Service (NHS), UK. The established gold standard of treatment for venous disease is graduated compression, with the pressure highest at the ankle and decreasing towards the knee and thigh. Currently, medical practitioners use two main methods to treat venous disease: compression bandaging and compression stockings. Both these systems have their own disadvantages, which led to the current programme of research. The aim of the present study is to revolutionize compression therapy by using a novel active compression system to deliver controllable and more accurate pressure profiles using a series of inflatable mini bladders. Two types of commercially available silicones were tested for the application. The mini bladders were designed with a special fabrication procedure to provide the required pressure profiles, and a series of experiments was conducted to characterise them. The inflation/deflation heights of these mini bladders were investigated experimentally and using a finite element model (FEM), and the experimental data were compared to the results obtained from FEM simulations, which showed 70-80% agreement. Finally, the mini bladders were tested for their pressure transmittance characteristics, and the results showed that 70-80% of the inlet air pressure was transmitted onto the treated surface.
Keywords: finite element analysis, graduated compression, inflatable bladders, venous disease
Procedia PDF Downloads 185
3849 Spatial Differentiation of Elderly Care Facilities in Mountainous Cities: A Case Study of Chongqing
Abstract:
In this study, a web crawler was used to collect POI sample data from the 38 districts and counties of Chongqing in 2022, and ArcGIS was used to perform coordinate and projection conversion and data visualization. Kernel density analysis and spatial correlation analysis were used to explore the spatial distribution characteristics of elderly care facilities in Chongqing, and K-means cluster analysis was carried out with GeoDa to study the spatial concentration of elderly care resources in the 38 districts and counties. Finally, the driving forces behind the spatial differentiation of elderly care facilities across the districts and counties of Chongqing were studied using the geographical detector method. The results show that: (1) In terms of spatial distribution structure, the distribution of elderly care facilities in Chongqing is unbalanced, showing a pattern of ‘large dispersion and small agglomeration’, and the asymmetric pattern of ‘dense in the west and sparse in the east, dense in the north and sparse in the south’ is prominent. (2) In terms of the spatial matching between elderly care resources and the elderly population, there is weak coordination between the input of elderly care resources and the distribution of the elderly population at the county level in Chongqing. (3) The geographical detector analysis shows that the strongest single-factor influences are the size of the elderly population, public financial revenue, and district and county GDP. The influence of these factors on the spatial distribution of elderly care facilities is not simply additive; instead, factors show nonlinear enhancement or two-factor enhancement effects.
It is necessary to strengthen the synergistic effect of pairs of factors and promote the synergy of multiple factors.
Keywords: aging, elderly care facilities, spatial differentiation, geographical detector, driving force analysis, mountain city
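The factor detector at the heart of the geographical detector method scores a driving factor by the q statistic, q = 1 - Σ_h N_h σ_h² / (N σ²): the share of the spatial variance of the target variable explained by stratifying on that factor. A minimal sketch on toy county-level data (illustrative values only):

```python
import numpy as np

def geodetector_q(values, strata):
    # Factor-detector q statistic: 1 minus the ratio of within-stratum
    # variance to total variance. `values` might be facility counts per
    # county; `strata` the county's class on a driving factor (e.g. GDP
    # band). q = 1 means the factor fully explains the spatial pattern.
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    N, total_var = len(values), values.var()
    within = sum(
        (strata == s).sum() * values[strata == s].var()
        for s in np.unique(strata)
    )
    return 1 - within / (N * total_var)

# Toy data: perfectly stratified counts give q = 1; adding
# within-stratum spread lowers q.
strata = np.array([0, 0, 0, 1, 1, 1])
q_perfect = geodetector_q([10, 10, 10, 50, 50, 50], strata)
q_mixed = geodetector_q([10, 20, 10, 50, 60, 50], strata)
```

The interaction (two-factor enhancement) effects the abstract mentions are assessed by computing q for the cross-classification of two factors and comparing it with the individual q values.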
Procedia PDF Downloads 38
3848 Authentication and Traceability of Meat Products from South Indian Market by Species-Specific Polymerase Chain Reaction
Authors: J. U. Santhosh Kumar, V. Krishna, Sebin Sebastian, G. S. Seethapathy, G. Ravikanth, R. Uma Shaanker
Abstract:
Food is one of the basic needs of human beings, required for normal bodily function and healthy growth. Recently, food adulteration has been increasing day by day as a means of increasing quantity and profit. Animal-source foods can provide a variety of micronutrients that are difficult to obtain in adequate quantities from plant-source foods alone. In the meat industry in particular, animal products are susceptible targets for fraudulent labeling, owing to the economic profit that results from selling cheaper meat as meat from more profitable and desirable species. This work presents an overview of the main PCR-based techniques applied to date to verify the authenticity of beef and beef products. We analyzed 25 market beef samples from South India, using PCR methods based on the sequence of the cytochrome b gene for source-species identification. All samples sold as beef were confirmed to be Bos taurus. However, male meat commands a higher price than female meat, so market samples are susceptible to sex-based mislabeling. We therefore used a cattle sex-determination gene, TSPY (Y-encoded, testis-specific protein), a Y-specific gene. TSPY homologs exist in several mammalian species, including humans, horses, and cattle. As this gene is Y-linked, it amplifies only in males. Multiple PCR products form species-specific 'fingerprints' on gel electrophoresis, which may be useful for meat authentication. Amplicons were obtained only by the cattle-specific PCR. Thirteen of the market meat samples were found to be female beef. These results suggest that the species-specific PCR methods established in this study would be useful for simple and easy detection of adulteration of meat products. Keywords: authentication, meat products, species-specific, TSPY
Procedia PDF Downloads 375
3847 Lithuanian Sign Language Literature: Metaphors at the Phonological Level
Authors: Anželika Teresė
Abstract:
In order to address issues in sign language linguistics, help maintain high quality in sign language (SL) translation, contribute to dispelling misconceptions about SL and deaf people, and raise awareness and understanding of the deaf community's heritage, this presentation discusses literature in Lithuanian Sign Language (LSL) and its inherent metaphors, which are created using the phonological parameters: handshape, location, movement, palm orientation, and nonmanual features. The study covered in this presentation is twofold, involving both the micro-level analysis of metaphors in terms of phonological parameters as sub-lexical features and the macro-level analysis of the poetic context. Cognitive theories underlie research on metaphors in sign language literature across a range of SLs, and this study follows that practice. The presentation covers the qualitative analysis of 34 pieces of LSL literature, employing the ELAN software widely used in SL research. The aim is to examine how specific types of each phonological parameter are used to create metaphors in LSL literature and what metaphors are created. The results show that LSL literature employs a range of metaphors created by using classifier signs and by modifying established signs. The study also reveals that LSL literature tends to create reference metaphors indicating status and power. LSL poets metaphorically encode status by encoding another meaning in the same sign, which results in double metaphors. A metaphor of identity has also been identified; notably, the poetic context reveals that the latter can also be read as a metaphor for life. The study further notes that deaf poets create metaphors related to the significance of various phenomena for the lyrical subject. Notably, the study has detected locations, nonmanual features, and other parameter types never mentioned in previous SL research as being used for the creation of metaphors. Keywords: Lithuanian sign language, sign language literature, sign language metaphor, metaphor at the phonological level, cognitive linguistics
Procedia PDF Downloads 136
3846 Low Pricing Strategy of Forest Products in Community Forestry Program: Subsidy to the Forest Users or Loss of Economy?
Authors: Laxuman Thakuri
Abstract:
Community-based forest management is often glorified as one of the best forest management alternatives in developing countries like Nepal. It is also believed that transferring forest management authority to local communities is decisive for taking efficient decisions, maximizing forest benefits, and improving people's livelihoods. The community forestry of Nepal likewise aims to maximize forest benefits, share them among user households, and improve their livelihoods. However, how local communities fix the price of forest products, and how the pricing set by forest user groups affects equitable benefit-sharing among user households and the livelihood improvement objective, remain questions on which researchers and policy-makers alike are largely silent. This study examines the local pricing system of forest products in lowland community forestry and its effects on the equitable benefit-sharing and livelihood improvement objectives. The study found that forest user groups fixed the price of forest products based on three criteria: i) costs incurred in harvesting, ii) office operation costs, and iii) livelihood improvement costs for community development and income-generating activities. Since user households have heterogeneous socio-economic conditions, the forest user groups have applied a low-pricing strategy even for high-value forest products, so that access for socio-economically worse-off households can be increased. However, the results of forest product distribution showed that, as a result of the low-pricing strategy, access for socio-economically better-off households has been increasing at a higher rate than for worse-off households, creating inequality. The low-pricing strategy was also found to be counterproductive to the livelihood improvement objectives. The study suggests revising the forest product pricing system in community forest management and reforming community forestry policy accordingly. Keywords: community forestry, forest products pricing, equitable benefit-sharing, livelihood improvement, Nepal
Procedia PDF Downloads 299
3845 The Effects of Arginine, Glutamine and Threonine Supplementation in the Starting Phase on Subsequent Performance of Male Broilers
Authors: Jalal Fazli Amiri, Mohammad Hossein Shahir, Mohammad Hossein Nemati, Afshin Heidarinia
Abstract:
The current study was performed to investigate the effects of arginine, threonine, and glutamine supplementation in excess of requirements during the starter period (17 days) on the performance, intestinal morphology, and immune response of broilers. Four hundred and sixteen male day-old chicks were assigned in a 2×2×2 factorial arrangement to a completely randomized design with four replicates (13 birds per replicate). Treatments were: a control group receiving the basal diet; the basal diet plus 1% glutamine; the basal diet plus 0.2% threonine; the basal diet plus 0.75% arginine; and the combinations of these three amino acids (glutamine + arginine, glutamine + threonine, arginine + threonine, and arginine + glutamine + threonine). Glutamine supplementation significantly reduced feed intake in week 4 (p < 0.05), week 6 (p < 0.001), and overall (p < 0.05). No significant effects of glutamine addition were observed on intestinal morphology (villus height, crypt depth, villus height to crypt depth ratio, villus width). Threonine supplementation increased weight gain in weeks 2 (p < 0.001) and 3 and decreased total feed intake (p < 0.05); duodenal and jejunal villus height, crypt depth, villus height to crypt depth ratio, and villus width were not affected. Arginine supplementation increased breast percentage (p < 0.05) and decreased jejunal villus height (p < 0.05) and jejunal crypt depth (p < 0.05). Supplementation of arginine, threonine, and glutamine had no significant effects on blood antibody titers against Newcastle disease, infectious bronchitis, or avian influenza. Overall, it seems that supplementation of arginine, threonine, and glutamine in excess of requirements during the starter period had no effect on performance in subsequent periods or on intestinal morphology. Keywords: intestinal morphology, immunity, broiler chickens, glutamine, arginine, threonine
Procedia PDF Downloads 137
3844 Evaluating Robustness of Conceptual Rainfall-runoff Models under Climate Variability in Northern Tunisia
Authors: H. Dakhlaoui, D. Ruelland, Y. Tramblay, Z. Bargaoui
Abstract:
To evaluate the impact of climate change on water resources at the catchment scale, not only future climate projections are necessary but also robust rainfall-runoff models that remain fairly reliable under changing climate conditions. This study assesses the robustness of three conceptual rainfall-runoff models (GR4J, HBV, and IHACRES) on five basins in Northern Tunisia under long-term climate variability. Their robustness was evaluated with a differential split-sample test based on a climate classification of the observation period with respect to both precipitation and temperature conditions. The studied catchments are situated in a region where climate change is likely to have significant impacts on runoff, and they already suffer from scarce water resources. They cover the main hydrographic basins of Northern Tunisia (High Medjerda, Zouaraâ, Ichkeul, and Cap Bon), which produce the majority of surface water resources in Tunisia. The streamflow regime of the basins can be considered natural, since these basins are located upstream of storage dams and in areas where withdrawals are negligible. A 30-year common period (1970‒2000) was considered to capture a wide spread of hydro-climatic conditions. Calibration was based on the Kling-Gupta Efficiency (KGE) criterion, while model transferability was evaluated using the Nash-Sutcliffe efficiency criterion and the volume error. The three hydrological models showed similar behaviour under climate variability: they simulate the runoff pattern better when transferred toward wetter periods than toward drier ones. The limits of transferability lie around -20% in precipitation and +1.5 °C in temperature relative to the calibration period. The deterioration of model robustness could in part be explained by the climate dependency of some parameters. Keywords: rainfall-runoff modelling, hydro-climate variability, model robustness, uncertainty, Tunisia
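The two criteria named above can be written out directly. A minimal NumPy sketch of the Kling-Gupta Efficiency used for calibration and the Nash-Sutcliffe efficiency used for transferability evaluation (illustrative only; the study's own implementation details are not reproduced here):

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency: KGE = 1 - sqrt((r-1)^2 + (a-1)^2 + (b-1)^2),
    where r is the linear correlation, a the ratio of standard deviations,
    and b the ratio of means between simulated and observed flows."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(kge(obs, obs), nse(obs, obs))  # a perfect simulation scores 1.0 on both
```

Both criteria equal 1 for a perfect simulation; unlike NSE, KGE separates correlation, variability, and bias errors, which is why it is often preferred for calibration.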
Procedia PDF Downloads 292
3843 Transverse Momentum Dependent Factorization and Evolution for Spin Physics
Authors: Bipin Popat Sonawane
Abstract:
Since the 1988 European Muon Collaboration (EMC) announcement of the measurement of the spin-dependent structure function, understanding the spin structure of the hadron has become a pressing need. To study the three-dimensional spin structure of the proton, we need to understand the foundations of quantum field theory in terms of the electroweak and strong theories, using rigorous mathematical theories and models, and in particular the formalism of perturbative quantum chromodynamics (pQCD). In QCD processes such as high-energy proton-proton collisions, cross sections are calculated using conventional collinear factorization schemes. In these calculations, parton distribution functions (PDFs) and fragmentation functions (FFs) are used, which provide, respectively, the probability density of finding quarks and gluons (partons) inside the proton and the probability density of obtaining the final hadronic state from the initial partons. Transverse momentum dependent PDFs and FFs, collectively called TMDs, additionally account for the intrinsic transverse motion of partons. TMD factorization in the calculation of cross sections provides a scheme of hadronic and partonic states in a given QCD process. In this study we review the TMD factorization scheme in the Collins-Soper-Sterman (CSS) formalism. The CSS formalism considers the transverse momentum dependence of the partons; in this formalism the cross section is written as a Fourier transform over a transverse position variable, which has the physical interpretation of an impact parameter. We compare this formalism with the improved CSS formalism, and we study TMD evolution schemes and their comparison with other schemes. This would provide a description for the measurement of the transverse single-spin asymmetry (TSSA) in hadro-production and electro-production of the J/psi meson at RHIC, LHC, and ILC energy scales, and would help us understand the J/psi production mechanism, which is an appropriate test of QCD. Procedia PDF Downloads 69
3842 Modelling of Heat Transfer during Controlled Cooling of Thermo-Mechanically Treated Rebars Using Computational Fluid Dynamics Approach
Authors: Rohit Agarwal, Mrityunjay K. Singh, Soma Ghosh, Ramesh Shankar, Biswajit Ghosh, Vinay V. Mahashabde
Abstract:
Thermo-mechanical treatment (TMT) of rebars is a critical process to impart sufficient strength and ductility to the rebar. TMT rebars are produced by the Tempcore process, which involves an 'in-line' heat treatment in which the hot rolled bar (at around 1080 °C) is passed through water boxes where it is quenched under high-pressure water jets (at around 25 °C). The quenching rate dictates a composite microstructure consisting of four non-homogeneously distributed phases, from core to rim: pearlite-ferrite, bainite, and tempered martensite. The ferrite and pearlite phases at the core provide ductility, while the martensitic rim provides the required strength. The TMT process is difficult to model, as it combines a multitude of complex physics in the control volume: heat transfer, highly turbulent fluid flow, and multicomponent, multiphase flow. Additionally, the presence of a film-boiling regime (above the Leidenfrost point) due to steam formation adds complexity to the domain. A coupled heat transfer and fluid flow model based on computational fluid dynamics (CFD) has been developed at the product technology division of Tata Steel, India, which efficiently predicts the temperature profile and the percentage martensite rim thickness of the rebar during quenching. The model has been validated against 16 mm rolling at the New Bar Mill (NBM) plant of Tata Steel Limited, India. Furthermore, scenario analyses identified an optimal nozzle configuration, which helped in a subsequent increase in rolling speed. Keywords: boiling, critical heat flux, nozzles, thermo-mechanical treatment
Procedia PDF Downloads 216
3841 Mathematical Modeling of the AMCs Cross-Contamination Removal in the FOUPs: Finite Element Formulation and Application in FOUP’s Decontamination
Authors: N. Santatriniaina, J. Deseure, T. Q. Nguyen, H. Fontaine, C. Beitia, L. Rakotomanana
Abstract:
Nowadays, with increasing wafer sizes and decreasing critical dimensions in modern high-tech integrated circuit manufacturing, the microelectronics industry must pay maximum attention to the challenge of contamination control. The move to 300 mm is accompanied by the use of Front Opening Unified Pods (FOUPs) for wafer transport and storage. In these pods, airborne cross-contamination may occur between the wafers and the pods. A predictive approach using modeling and computational methods is a very powerful way to understand and qualify AMC cross-contamination processes. This work investigates the numerical tools required to study AMC cross-contamination transfer phenomena between wafers and FOUPs. Numerical optimization and a finite element formulation in transient analysis were established. An analytical solution of the one-dimensional problem was developed, and the physical constants were calibrated by minimizing the least-squares distance between the model (the analytical 1D solution) and the experimental data. The transient behavior of the AMCs was determined. The model framework preserves the classical forms of the diffusion and convection-diffusion equations and yields a consistent form of Fick's law. The adsorption process and the surface roughness effect were also translated into boundary conditions, using a Dirichlet-to-Neumann switch condition and an interface condition. The methodology is applied, first, using optimization methods with the analytical solution to determine the physical constants, and second, using the finite element method including adsorption kinetics and the Dirichlet-to-Neumann switch condition. Keywords: AMCs, FOUP, cross-contamination, adsorption, diffusion, numerical analysis, wafers, Dirichlet to Neumann, finite elements methods, Fick's law, optimization
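The calibration idea, minimizing the least-squares distance between an analytical 1D diffusion solution and measurements, can be sketched for the simplest case of Fickian diffusion into a semi-infinite medium with constant surface concentration. This is a hedged illustration only: the study's actual model includes adsorption kinetics and Dirichlet-to-Neumann boundary conditions, which are omitted here, and all numbers below are synthetic.

```python
import math

def conc(x, t, D, c_s=1.0):
    """Analytical 1D solution of Fick's second law for diffusion into a
    semi-infinite medium with constant surface concentration c_s:
    C(x, t) = c_s * erfc(x / (2*sqrt(D*t)))."""
    return c_s * math.erfc(x / (2.0 * math.sqrt(D * t)))

def fit_D(xs, t, measured, candidates):
    """Least-squares calibration of the diffusion constant D: pick the
    candidate minimizing the squared model-data distance."""
    def sse(D):
        return sum((conc(x, t, D) - m)**2 for x, m in zip(xs, measured))
    return min(candidates, key=sse)

# Synthetic 'measurements' generated with D_true = 2e-9 m^2/s, then recovered
xs, t, D_true = [0.0, 1e-4, 2e-4, 4e-4], 3600.0, 2e-9
data = [conc(x, t, D_true) for x in xs]
grid = [5e-10, 1e-9, 2e-9, 4e-9, 8e-9]
print(fit_D(xs, t, data, grid))  # 2e-09
```

In practice a continuous optimizer would replace the candidate grid, but the structure, an analytical forward model inside a least-squares objective, is the same.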
Procedia PDF Downloads 506
3840 Contrasting Infrastructure Sharing and Resource Substitution Synergies Business Models
Authors: Robin Molinier
Abstract:
Industrial symbiosis (IS) relies on two modes of cooperation, infrastructure sharing and resource substitution, to obtain economic and environmental benefits. The former consists in intensifying the use of an asset, while the latter is based on the use of waste, fatal energy, and utilities as alternatives to standard inputs. Both modes in fact rely on a shift from business-as-usual functioning towards an alternative production system structure, so that from a business point of view the distinction is not clear. To investigate how these cooperation modes can be distinguished, we consider the stakeholders' interplay in the business model structure with regard to their resources and requirements. For infrastructure sharing (following the economic engineering literature), the capacity cost function induces economies of scale, so that demand pooling reduces global expenses. Grassroots investment sizing decisions and ex-post pricing depend strongly on the design optimization phase for capacity sizing, whereas ex-post operational cost sharing to minimize budgets is less dependent on production rates. Value is then mainly design-driven. For resource substitution, synergy value stems from availability and is at risk with regard to both supplier and user load profiles and the market price of the standard input. The reduction in baseline input purchasing cost is thus driven more by the operational phase of the symbiosis and must be analyzed within the whole sourcing policy (including diversification strategies and expensive back-up replacement). Moreover, while resource substitution involves a chain of intermediate processors to match quality requirements, the infrastructure model relies on a single operator whose competencies allow the production of non-rival goods. Transaction costs appear higher in resource substitution synergies due to the high level of customization, which induces asset specificity and non-homogeneity, following transaction cost economics arguments. Keywords: business model, capacity, sourcing, synergies
Procedia PDF Downloads 174
3839 Experimental Study of an Isobaric Expansion Heat Engine with Hydraulic Power Output for Conversion of Low-Grade-Heat to Electricity
Authors: Maxim Glushenkov, Alexander Kronberg
Abstract:
The isobaric expansion (IE) process is an alternative to conventional gas/vapor expansion, which is accompanied by the pressure decrease typical of all state-of-the-art heat engines. Eliminating the expansion stage that produces useful work means that the most critical and expensive parts of ORC systems (turbine, screw expander, etc.) are also eliminated. In many cases, IE heat engines can be more efficient than conventional expansion machines. In addition, IE machines have a very simple, reliable, and inexpensive design. They can also perform all the known operations of existing heat engines and provide usable energy in a very convenient hydraulic or pneumatic form. This paper reports measurements made with the engine operating as a heat-to-shaft-power or electricity converter, and compares the experimental results to a thermodynamic model. Experiments were carried out at heat source temperatures in the range 30-85 °C and a heat sink temperature around 20 °C; refrigerant R134a was used as the engine working fluid. The pressure difference generated by the engine varied from 2.5 bar at a heat source temperature of 40 °C to 23 bar at 85 °C. Using a differential piston, the generated pressure was quadrupled to pump hydraulic oil through a hydraulic motor that generates shaft power and is connected to an alternator. At a frequency of about 0.5 Hz, the engine operates with useful powers up to 1 kW and an oil pumping flow rate of 7 L/min. Depending on the heat source temperature, the obtained efficiency was 3.5-6%. This efficiency is remarkably high considering such low temperature differences (10-65 °C) and low power (< 1 kW). The engine's observed performance is in good agreement with the predictions of the model. The results are very promising, showing that the engine is a simple and low-cost alternative to ORC plants and other known energy conversion systems, especially in the low-temperature (< 100 °C) and low-power (< 500 kW) range where other known technologies are not economical. Thus, low-grade solar and geothermal energy, biomass combustion, and waste heat with a temperature above 30 °C can be harnessed in various energy conversion processes. Keywords: isobaric expansion, low-grade heat, heat engine, renewable energy, waste heat recovery
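The reported operating point is internally consistent with the basic hydraulic power relation P = Δp·Q. A quick check using the abstract's own figures (23 bar quadrupled by the differential piston, 7 L/min of oil):

```python
def hydraulic_power(delta_p_bar, flow_l_min):
    """Hydraulic power P = delta_p * Q, with unit conversions:
    1 bar = 1e5 Pa, 1 L/min = 1/60000 m^3/s. Returns watts."""
    return delta_p_bar * 1e5 * (flow_l_min / 60000.0)

# 23 bar engine pressure quadrupled to 92 bar, pumping 7 L/min of oil
p_watts = hydraulic_power(4 * 23, 7)
print(round(p_watts))  # 1073 W, consistent with the reported ~1 kW
```

The ~1.07 kW obtained matches the stated "useful powers up to 1 kW" within rounding, before motor and alternator losses.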
Procedia PDF Downloads 226
3838 Expression of DNMT Enzymes-Regulated miRNAs Involving in Epigenetic Event of Tumor and Margin Tissues in Patients with Breast Cancer
Authors: Fatemeh Zeinali Sehrig
Abstract:
Background: miRNAs play an important role in the post-transcriptional regulation of genes, including genes involved in DNA methylation (DNMTs), and are also important regulators of oncogenic pathways. The study of miRNAs and DNMTs in breast cancer enables the development of targeted treatments and early detection of this cancer. Methods and Materials: Clinical patients and samples: Institutional guidelines, including ethical approval and informed consent, were followed per the Ethics Committee (ethics code: IR.IAU.TABRIZ.REC.1401.063) of Tabriz Azad University, Tabriz, Iran. In this study, tissues from 100 patients with breast cancer and from 100 healthy women were collected from Noor Nejat Hospital in Tabriz. The baseline characteristics of the patients with breast cancer were: 1) tumor grade (grade 3 = 5%, grade 2 = 87.5%, grade 1 = 7.5%); 2) lymph node involvement (yes = 87.5%, no = 12.5%); 3) family cancer history (yes = 47.5%, no = 41.3%, unknown = 11.2%); 4) abortion history (yes = 36.2%). In silico methods (data gathering, processing, and network building): Gene Expression Omnibus (GEO), a high-throughput genomics database, was queried for miRNA expression profiles in breast cancer. For the experimental protocol, tissue processing, total RNA isolation, complementary DNA (cDNA) synthesis, and quantitative real-time PCR (qRT-PCR) analysis were performed. Results: We found significant (p < 0.05) changes in the expression levels of miRNAs and DNMTs in patients with breast cancer. In the bioinformatics analysis, the GEO microarray dataset, consistent with the qPCR results, showed decreased expression of miRNAs and increased expression of DNMTs in breast cancer. Conclusion: Given the decreased expression of the miRNAs and the increased expression of the DNMTs in breast cancer, these genes may serve as important diagnostic and therapeutic biomarkers in breast cancer. Keywords: gene expression omnibus, microarray dataset, breast cancer, miRNA, DNMT (DNA methyltransferases)
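For qRT-PCR data such as these, relative expression is conventionally quantified by the 2^-ΔΔCt method. A minimal sketch with hypothetical Ct values (the study's actual reference genes and cycle thresholds are not given here, so the numbers below are purely illustrative):

```python
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the standard 2^-ddCt method:
    dCt  = Ct(target) - Ct(reference gene), per sample group;
    ddCt = dCt(tumor) - dCt(margin/control);
    fold change = 2 ** -ddCt."""
    d_case = ct_target_case - ct_ref_case
    d_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_case - d_ctrl)

# Hypothetical Ct values: a miRNA crosses threshold 2 cycles later in
# tumor tissue than in margin tissue (same reference gene), i.e. it is
# 4-fold down-regulated in the tumor:
print(fold_change(28.0, 18.0, 26.0, 18.0))  # 0.25
```

A fold change below 1 (here 0.25) corresponds to the decreased miRNA expression reported in the results, while values above 1 would indicate up-regulation such as that seen for the DNMTs.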
Procedia PDF Downloads 35
3837 Problems and Prospects of Agricultural Biotechnology in Nigeria’s Developing Economy
Authors: Samson Abayomi Olasoju, Olufemi Adekunle, Titilope Edun, Johnson Owoseni
Abstract:
Science offers opportunities for revolutionizing human activities, enriched by input from scientific research and technology. Biotechnology is a major force for development in developing countries such as Nigeria; wherever it is applied, it contributes to solving human problems such as water and food insecurity that impede national development and threaten peace. This review identified the problems of agricultural biotechnology in Nigeria. On the part of rural farmers, there is a lack of adequate knowledge or awareness of biotechnology, despite the fact that they constitute the bulk of Nigerian farmers. On the part of the government, the problems include a lack of adequate implementation of government policy on biosafety and genetically modified products, and inadequate funding of education as well as of research and development of biotechnology-related products. Other problems include inadequate infrastructure (including laboratories), poor funding, and a lack of the national strategies needed for developing and running agricultural biotechnology. In spite of all these challenges, the prospects of agricultural biotechnology remain great if Nigeria is to meet the food needs of its ever-increasing population. The introduction of genetically engineered products will lead to the high productivity needed for commercialization and food security. Crops and livestock resistant to insects, viruses, and related diseases are another viable area of biotechnology's contribution to agricultural production. In conclusion, agricultural biotechnology will not only ensure food security but will also ensure that local farmers utilize the appropriate technology needed for large-scale production, leading to the prosperity of farmers and national economic growth, provided the government plays its role of adequate funding and good policy implementation. Keywords: biosafety, biotechnology, food security, genetic engineering, genetic modification
Procedia PDF Downloads 174
3836 Reasonable Adjustment for Students with Disabilities - Opportunities and Limits in Social Work Education
Authors: Bartelsen-Raemy Annabelle, Gerber Andrea
Abstract:
Objectives: The adoption of the UN Convention on the Rights of Persons with Disabilities means that higher education institutions in Switzerland are called upon to promote inclusive university education. In this context, our School of Social Work aims to provide fair participation and the removal of barriers in our study programmes at the bachelor's and master's levels. In 2015 we developed a concept of reasonable adjustments for students with disabilities and chronic illness as an instrument to provide equal opportunities for these students, and we reviewed its implementation as part of our quality management process. Using a qualitative research design, we explored how affected students and lecturers experience the processes and measures taken and which barriers they still perceive. Methods: We captured the subjective perspectives and experiences of the measures by conducting 15 problem-centred interviews with affected students and three focus groups with lecturers. The data was processed using structured qualitative content analysis and summarised into key categories. Results: All respondents evaluated the concept of reasonable adjustment very positively and emphasised its importance for equal opportunities. Our analysis revealed differences in usage and perception between the two groups and showed that the students interviewed were a heterogeneous group with different needs. Overall, the students described the adjustments, particularly those relating to examinations and other assignments, as a great relief. The lecturers expressed high standards for their own teaching and supervision of students and, at the same time, wished for more support from the university. However, despite the positive evaluation by the lecturers, the limits of reasonable adjustment became evident: these limits must be considered in terms of the professional skills students are required to attain. Conclusion: Reasonable adjustments should therefore be seen as one element of an inclusive university culture that must be complemented by further measures. Taking this into account, we have planned further research as a basis for developing a diversity and inclusion policy. Keywords: opportunities and limits, reasonable adjustment, social work education, students with disabilities
Procedia PDF Downloads 132
3835 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The 1 MWth PUSPATI TRIGA Reactor (RTP) at the Malaysia Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the Feedback Control Algorithm (FCA). It is technically challenging to keep the core power output stable and within acceptable error bands to meet the safety demands of the RTP. The current power tracking performance can be considered unsatisfactory, with significant room for improvement. Hence, a new core power control design is very important to improve tracking and regulation of the reactor power by controlling the movement of the control rods, suiting the demands of highly sensitive nuclear reactor power control. In this paper, a Model Predictive Control (MPC) law is applied to control the core power. The model for core power control was based on mathematical models of the reactor core, MPC, and a control rod selection algorithm. The mathematical models of the reactor core comprised a point kinetics model, thermal-hydraulic models, and reactivity models. The proposed MPC was formulated on a transfer function model of the reactor core derived via perturbation theory. The transfer function model-based predictive control (TFMPC) was developed to design the core power control, with predictions based on a T-filter, towards real-time implementation of MPC on hardware. This paper introduces sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal, and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. The tracking and regulation performance of the conventional controller and the TFMPC were compared and analysed using MATLAB. In conclusion, the proposed TFMPC shows satisfactory performance in tracking and regulating core power, for controlling a nuclear reactor with high reliability and safety. Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC
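The receding-horizon idea behind MPC can be sketched on a toy first-order transfer-function model: predict the output over a horizon, choose the input minimizing the predicted tracking error, apply only the first move, and repeat. This is purely illustrative; the actual TFMPC uses the reactor core transfer function, a T-filter, and sensitivity shaping, none of which are reproduced here, and the model coefficients below are arbitrary.

```python
def predict(y0, u_seq, a=0.9, b=0.1):
    """Outputs of a toy first-order discrete transfer-function model
    y[k+1] = a*y[k] + b*u[k] over the given control sequence."""
    y, out = y0, []
    for u in u_seq:
        y = a * y + b * u
        out.append(y)
    return out

def mpc_step(y0, setpoint, horizon=5, candidates=None, a=0.9, b=0.1):
    """One receding-horizon step: pick the (constant) control input that
    minimizes the predicted squared tracking error over the horizon,
    then apply only its first move."""
    candidates = candidates or [i / 10.0 for i in range(0, 101)]
    def cost(u):
        return sum((y - setpoint)**2 for y in predict(y0, [u] * horizon, a, b))
    return min(candidates, key=cost)

# Closed-loop simulation: drive the output from 0 toward a setpoint of 1.0
y, traj = 0.0, []
for _ in range(30):
    u = mpc_step(y, 1.0)
    y = 0.9 * y + 0.1 * u       # plant happens to match the model here
    traj.append(y)
print(round(traj[-1], 3))  # settles near the setpoint
```

A real TFMPC would optimize a full input sequence subject to rod-movement constraints and filter the predictions, but the apply-first-move-and-re-plan loop is the same.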
Procedia PDF Downloads 241
3834 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures
Authors: Mariem Saied, Jens Gustedt, Gilles Muller
Abstract:
We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures, with Ordered Read-Write Locks (ORWL) as the execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we couple the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern, and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks. Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments
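The Jacobi traversal that Dido now handles can be illustrated with a minimal sequential sketch of a 2D 5-point stencil. Dido itself generates distributed, ORWL-based code; this NumPy version only shows the computation pattern and the Jacobi/Gauss-Seidel distinction mentioned above.

```python
import numpy as np

def jacobi_step(grid):
    """One Jacobi sweep of the 2D 5-point stencil: every interior cell
    becomes the average of its four neighbours, reading only the
    previous iterate (unlike Gauss-Seidel, which updates in place and
    so reads a mix of old and new values)."""
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

# Heat-diffusion style toy problem: hot top edge, iterate until smooth
g = np.zeros((6, 6))
g[0, :] = 100.0          # fixed boundary row, never overwritten
for _ in range(200):
    g = jacobi_step(g)
print(round(g[1, 2], 1))  # interior settles into a boundary-driven profile
```

Because each Jacobi sweep depends only on the previous iterate, the iteration space tiles cleanly across nodes, which is what makes temporal blocking and the communication-reducing optimizations in the paper applicable.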
Procedia PDF Downloads 127
3833 Root System Architecture Analysis of Sorghum Genotypes and Its Effect on Drought Adaptation
Authors: Hailemariam Solomon, Taye Tadesse, Daniel Nadew, Firezer Girma
Abstract:
Sorghum is an important crop in semi-arid regions and has shown resilience to drought stress. However, recurrent drought is affecting its productivity. It is therefore necessary to explore genes that contribute to drought stress adaptation in order to increase sorghum productivity. The aim of this study is to evaluate and determine the effect of root system traits, specifically root angle, on drought stress adaptation and grain yield performance in sorghum genotypes. A total of 428 sorghum genotypes from the Ethiopian breeding program were evaluated in three drought-stress environments. Field trials were conducted using a row-column design with three replications. Root system traits were phenotyped on a high-throughput phenotyping platform and analyzed using a row-column design with two replications. Data analysis was performed using R software and regression analysis. The study found significant variations in root system architecture among the sorghum genotypes. Non-stay-green genotypes had grain yields ranging from 1.63 to 3.1 tons/ha, while stay-green genotypes had grain yields ranging from 2.4 to 2.9 tons/ha. Root angles of non-stay-green genotypes ranged from 8.0 to 30.5 degrees, while those of stay-green genotypes ranged from 12.0 to 29.0 degrees; improved varieties exhibited angles between 14.04 and 19.50 degrees. Positive and significant correlations were observed between leaf area and shoot dry weight, as well as between leaf width and shoot dry weight. Negative correlations were observed between root angle and leaf area, as well as between root angle and root length. This research highlights the importance of root system architecture, particularly root angle traits, in enhancing grain yield production under drought-stressed conditions. It also establishes an association between root angle and grain yield traits for maximizing sorghum productivity.Keywords: root system architecture, root angle, narrow root angle, wider root angle, drought
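The correlations the abstract reports were computed in R; a stdlib-Python sketch of the same kind of Pearson correlation, using made-up example values (the study's phenotyping data are not reproduced in the abstract):

```python
# Illustrative sketch only: Pearson correlation as used in the study's
# trait analysis. The angle/area values below are hypothetical.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical root angles (degrees) and leaf areas showing a negative
# association, as the abstract reports for root angle vs. leaf area.
angles = [8.0, 12.0, 18.0, 24.0, 30.5]
areas = [95.0, 90.0, 80.0, 72.0, 60.0]
print(round(pearson(angles, areas), 3))  # negative, close to -1 here
```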
Procedia PDF Downloads 75