Search results for: social innovation process
1155 Students’ Speech Anxiety in Blended Learning
Authors: Mary Jane B. Suarez
Abstract:
Public speaking anxiety (PSA), also known as speech anxiety, is persistent in traditional communication classes, especially for students who learn English as a second language. This anxiety intensifies when communication skills assessments are moved to an online or remote mode of learning because of the perils of the COVID-19 virus. Both teachers and students have faced considerable uncertainty about how to teach and learn speaking skills effectively amidst the pandemic. Communication skills assessments like public speaking, oral presentations, and student reporting have taken on new forms through Google Meet, Zoom, and other online platforms. Though such technologies have paved the way for more creative ways for students to acquire and develop communication skills, the effectiveness of these assessment tools remains in question. This mixed-methods study aimed to determine the factors that affected the public speaking skills of students in a communication class, to probe the gaps in assessing the speaking skills of students attending online classes under remote and blended modalities of learning, and to recommend ways to address students’ public speaking anxiety in online speaking tasks and to bridge the assessment gaps identified, in order to achieve a smooth transition from online to on-ground instruction towards a better post-pandemic academic milieu. Using a convergent parallel design, quantitative and qualitative data were reconciled by examining the public speaking anxiety of students and the potential assessment gaps encountered in an online English communication class under remote and blended learning. The convergent parallel design was applied in four phases. The first phase was data collection, in which quantitative and qualitative data were gathered through document reviews and focus group discussions. The second phase was data analysis, in which quantitative data were treated with descriptive statistics, particularly frequency, percentage, and mean, using Microsoft Excel and the IBM Statistical Package for the Social Sciences (SPSS) version 19, and qualitative data were examined using thematic analysis. The third phase merged the results of both analyses to compare the desired learning competencies with the actual learning competencies of students. Finally, the fourth phase interpreted the merged data, which revealed a significantly high percentage of students experiencing public speaking anxiety whenever they delivered speaking tasks online. Assessment gaps were also identified by comparing the desired learning competencies of the formative and alternative assessments implemented with the actual speaking performances of students, showing that students’ public speaking anxiety was not properly identified and addressed.
Keywords: blended learning, communication skills assessment, public speaking anxiety, speech anxiety
Procedia PDF Downloads 102
1154 A Bayesian Approach for Health Workforce Planning in Portugal
Authors: Diana F. Lopes, Jorge Simoes, José Martins, Eduardo Castro
Abstract:
Health professionals are the keystone of any health system, delivering health services to the population. Given the time and cost involved in training new health professionals, the planning process of the health workforce is particularly important, as it ensures a proper balance between the supply and demand of these professionals, and it plays a central role in the Health 2020 policy. In the past 40 years, the planning of the health workforce in Portugal has been conducted in a reactive way, lacking a prospective vision based on an integrated, comprehensive and valid analysis. This situation may compromise not only productivity and overall socio-economic development but also the quality of the healthcare services delivered to patients. This is even more critical given the expected shortage of the health workforce in the future. Furthermore, Portugal is facing the aging of some professional classes (physicians and nurses): in 2015, 54% of physicians in Portugal were over 50 years old, and 30% were over 60 years old. This phenomenon, associated with the increasing emigration of young health professionals and changes in citizens’ illness profiles and expectations, must be considered when planning healthcare resources. The prospect of the sudden retirement of large groups of professionals within a short time is also a major problem to address. Another challenge is the health workforce imbalance: Portugal has one of the lowest nurse-to-physician ratios, 1.5, below the European Region and OECD averages (2.2 and 2.8, respectively). Within the scope of the HEALTH 2040 project – which aims to estimate the ‘Future needs of human health resources in Portugal till 2040’ – the present study intends to develop a comprehensive dynamic approach to the problem, by (i) estimating the needs of physicians and nurses in Portugal, by specialty and by quinquennium, till 2040; (ii) identifying the training needs of physicians and nurses, in the medium and long term, till 2040; and (iii) estimating the number of students that must be admitted into medicine and nursing training systems each year, considering the different categories of specialties. The development of such an approach is significantly more critical in the context of limited budget resources and changing health care needs. In this context, this study presents the drivers of the evolution of healthcare needs (such as demographic and technological change and the future expectations of health system users) and proposes a Bayesian methodology, combining the best available data with experts’ opinion, to model that evolution. Preliminary results considering different plausible scenarios are presented. The proposed methodology will be integrated into a user-friendly decision support system so it can be used by politicians, with the potential to measure the impact of health policies at both the regional and the national level.
Keywords: Bayesian estimation, health economics, health workforce planning, human health resources planning
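As an editorial illustration of how data and expert opinion can be combined in such a Bayesian projection, the Python sketch below performs a minimal Normal-Normal update of an assumed annual growth rate and projects the stock per quinquennium to 2040; the prior, the observed rates and the starting headcount are invented placeholders, not figures from the HEALTH 2040 project.

```python
import numpy as np

# Illustrative Normal-Normal update of the annual growth rate (%) of the
# physician stock; all numbers are assumed, not HEALTH 2040 data.
prior_mean, prior_sd = 0.5, 1.0            # expert-opinion prior
obs = np.array([0.8, 1.1, 0.6, 0.9])       # observed annual growth rates
obs_sd = 0.7                               # assumed observation noise

post_prec = 1 / prior_sd**2 + len(obs) / obs_sd**2
post_mean = (prior_mean / prior_sd**2 + obs.sum() / obs_sd**2) / post_prec
post_sd = (1 / post_prec) ** 0.5

# Project the stock per quinquennium until 2040 with posterior uncertainty
rng = np.random.default_rng(0)
headcount_2015 = 46_000                    # assumed starting stock
years = np.arange(2020, 2045, 5)
draws = rng.normal(post_mean, post_sd, size=(10_000, 1))
proj = headcount_2015 * (1 + draws / 100) ** (years - 2015)
lo, med, hi = np.percentile(proj, [5, 50, 95], axis=0)
for y, a, b, c in zip(years, lo, med, hi):
    print(f"{y}: {a:,.0f} / {b:,.0f} / {c:,.0f}")
```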
Procedia PDF Downloads 251
1153 Option Pricing Theory Applied to the Service Sector
Authors: Luke Miller
Abstract:
This paper develops an option pricing methodology to value strategic pricing strategies in the service sector. More specifically, this study provides a unifying taxonomy of current service sector pricing practices, frames these pricing decisions as strategic real options, demonstrates accepted option valuation techniques to assess service sector pricing decisions, and suggests future research areas where pricing decisions and real options overlap. Enhancing revenue in the service sector requires proactive decision making in a world of uncertainty. In an effort to strategically price service products, revenue enhancement necessitates a careful study of the service costs, customer base, competition, legalities, and shared economies with the market. Pricing decisions involve the quality of inputs, manpower, and best practices to maintain superior service. These decisions further hinge on identifying relevant pricing strategies and understanding how these strategies impact a firm’s value. A relatively new area of research applies option pricing theory to investments in real assets and is commonly known as real options. The real options approach is based on the premise that many corporate decisions to invest in or divest assets are simply options wherein the firm has the right to make an investment without any obligation to act. The decision maker, therefore, has more flexibility, and the value of this operating flexibility should be taken into consideration. The real options framework has already been applied to numerous areas including manufacturing, inventory, natural resources, research and development, strategic decisions, technology, and stock valuation. Additionally, numerous surveys have identified a growing need for the real options decision framework within all areas of corporate decision-making. Despite the wide applicability of real options, no study has been carried out linking service sector pricing decisions and real options. This is surprising given that the service sector accounts for 80% of US employment and Gross Domestic Product (GDP). Identifying real options as a practical tool to value different service sector pricing strategies is believed to have a significant impact on firm decisions. This paper identifies and discusses four distinct pricing strategies available to the service sector from an options perspective: (1) Cost-based profit margin, (2) Increased customer base, (3) Platform pricing, and (4) Buffet pricing. Within each strategy lie several pricing tactics available to the service firm. These tactics can be viewed as options the decision maker has to best manage a strategic position in the market. To demonstrate the effectiveness of including flexibility in the pricing decision, a series of pricing strategies were developed and valued using a real options binomial lattice structure. The option pricing approach discussed in this study allows service firms to directly incorporate market-driven perspectives into the decision process and thus synchronize service operations with organizational economic goals.
Keywords: option pricing theory, real options, service sector, valuation
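The valuation step mentioned above rests on a binomial lattice. The following sketch shows a generic Cox-Ross-Rubinstein lattice for a deferral (call-type) real option; the project value, investment cost, volatility and horizon are placeholder assumptions rather than numbers from the paper.

```python
import numpy as np

def binomial_real_option(V0, I, r, sigma, T, steps):
    """Value a deferral (call-type) real option on a project worth V0
    with investment cost I, using a CRR binomial lattice."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))         # up factor
    d = 1 / u                               # down factor
    p = (np.exp(r * dt) - d) / (u - d)      # risk-neutral probability
    disc = np.exp(-r * dt)

    # Terminal project values and option payoffs
    j = np.arange(steps + 1)
    V = V0 * u**j * d**(steps - j)
    opt = np.maximum(V - I, 0.0)

    # Backward induction with early exercise (invest now vs. wait)
    for n in range(steps - 1, -1, -1):
        j = np.arange(n + 1)
        V = V0 * u**j * d**(n - j)
        opt = np.maximum(disc * (p * opt[1:] + (1 - p) * opt[:-1]), V - I)
    return opt[0]

# Placeholder inputs: project worth 10, investment cost 9, two-year deferral window
print(binomial_real_option(V0=10.0, I=9.0, r=0.05, sigma=0.35, T=2.0, steps=200))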
Procedia PDF Downloads 354
1152 The Effect of Elapsed Time on the Cardiac Troponin-T Degradation and Its Utility as a Time Since Death Marker in Cases of Death Due to Burn
Authors: Sachil Kumar, Anoop K. Verma, Uma Shankar Singh
Abstract:
Studying the postmortem interval in different causes of death is extremely important, since it often assists in forming an opinion on the exact cause of death following such an incident. With reliable knowledge of the interval, an expert can state that the cause of death is not feigned; hence, when evaluating such a death, there is a great need to have been at the crime scene before performing an autopsy on the body. The approach described here is based on analyzing the degradation or proteolysis of a cardiac protein in cases of death due to burns as a marker of time since death. Cardiac tissue samples were collected from medico-legal autopsies (n=6) at the Department of Forensic Medicine and Toxicology, King George’s Medical University, Lucknow, India, after informed consent from the relatives, and post-mortem degradation was studied by incubating the cardiac tissue at room temperature (20±2 °C) for different time periods (~7.30, 18.20, 30.30, 41.20, 41.40, 54.30, 65.20, and 88.40 hours). The cases included were burn victims without any prior history of disease who died in the hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE) and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using Gel Doc. As time postmortem progresses, the intact cTnT band degrades to fragments that are easily detected by the monoclonal antibodies. A decreasing trend in the level of cTnT (% of intact) was found as the PM hours increased. A significant difference was observed between <15 h and other PM hours (p<0.01). A significant difference in cTnT level (% of intact) was also observed between 16-25 h and 56-65 h and >75 h (p<0.01). Western blot data clearly showed the intact protein at 42 kDa, three major fragments (28 kDa, 30 kDa, 10 kDa), three additional minor fragments (12 kDa, 14 kDa, and 15 kDa) and the formation of low molecular weight fragments. Overall, both PMI and the cardiac tissue of the burned corpses had a statistically significant effect; the greatest amount of protein breakdown was observed within the first 41.40 hours, after which the intact protein slowly disappears. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, then the percent intact cTnT shows a pseudo-first-order relationship when plotted against the time postmortem. A strong significant positive correlation was found between cTnT and PM hours (r=0.87, p=0.0001). The regression analysis explained a good share of the variability (R2=0.768). The post-mortem Troponin-T fragmentation observed in this study reveals a sequential, time-dependent process with the potential for use as a predictor of PMI in cases of burning.
Keywords: burn, degradation, postmortem interval, troponin-T
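For readers who want to see the pseudo-first-order fit in code, the sketch below regresses the natural log of percent-intact cTnT on postmortem hours and inverts the fit to estimate a PMI; the data points are illustrative values, not the study's measurements.

```python
import numpy as np
from scipy.stats import linregress

# Illustrative data: postmortem hours and % intact cTnT (not the study's values)
pm_hours = np.array([7.3, 18.2, 30.3, 41.2, 54.3, 65.2, 88.4])
pct_intact = np.array([92.0, 78.0, 61.0, 47.0, 33.0, 24.0, 12.0])

# Pseudo-first-order decay: ln(%intact) = intercept - k * t
fit = linregress(pm_hours, np.log(pct_intact))
k = -fit.slope                       # apparent degradation rate constant (1/h)
print(f"k = {k:.4f} 1/h, r = {fit.rvalue:.3f}, R^2 = {fit.rvalue**2:.3f}")

# Invert the fit to estimate PMI from an observed % intact value
pct_obs = 40.0
pmi_est = (fit.intercept - np.log(pct_obs)) / k
print(f"Estimated PMI for {pct_obs}% intact cTnT: {pmi_est:.1f} h")
```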
Procedia PDF Downloads 448
1151 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City
Authors: Berhanu Keno Terfa
Abstract:
Flash floods are among the most dangerous natural disasters that pose a significant threat to human existence. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries due to limited financial resources, inadequate drainage systems, substandard housing options, lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study has been undertaken to analyze and map flood inundation using Geographic Information System (GIS) techniques, considering various factors that contribute to flash flood resilience, and to develop effective mitigation strategies. Key factors considered in the analysis include slope, drainage density, elevation, Curve Number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed using ArcGIS software platforms, and data from the Sentinel-2 satellite image (with a 10-meter resolution) were utilized for land-use/cover classification. Additionally, slope, elevation, and drainage density data were generated from the 12.5-meter resolution ALOS PALSAR DEM, while other relevant data were obtained from the Ethiopian Meteorological Institute. By integrating and regularizing the collected data through GIS and employing the analytic hierarchy process (AHP) technique, the study successfully delineated flash flood hazard zones (FFHs) and generated a land suitability map for urban agriculture. The FFH model identified four levels of risk in Bishoftu City: very high (2106.4 ha), high (10464.4 ha), moderate (1444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the central areas that have been previously developed. Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings of this study emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design.
Keywords: remote sensing, flash flood hazards, Bishoftu, GIS
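A minimal sketch of the AHP weighting and weighted-overlay steps described above is given below; the pairwise judgements, the four factors retained and the toy raster values are assumptions for illustration, not the study's inputs.

```python
import numpy as np

# Illustrative AHP pairwise-comparison matrix for four of the factors
# (slope, drainage density, rainfall, land use); judgements are assumed.
A = np.array([[1,   3,   2,   4],
              [1/3, 1,   1/2, 2],
              [1/2, 2,   1,   3],
              [1/4, 1/2, 1/3, 1]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                    # AHP priority weights

# Consistency ratio: CI = (lambda_max - n) / (n - 1), RI = 0.90 for n = 4
n = A.shape[0]
CR = ((eigvals[k].real - n) / (n - 1)) / 0.90
print("weights:", np.round(w, 3), "CR:", round(CR, 3))

# Weighted overlay of reclassified hazard layers (toy 2x2 rasters, classes 1-4)
layers = np.array([[[4, 3], [2, 1]],            # slope class
                   [[3, 3], [1, 2]],            # drainage density class
                   [[4, 2], [2, 1]],            # rainfall class
                   [[3, 4], [1, 1]]])           # land-use class
hazard_index = np.tensordot(w, layers, axes=1)  # per-cell weighted hazard score
print(hazard_index)
```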
Procedia PDF Downloads 34
1150 Missed Opportunities for Immunization of under Five Children in Calabar South County, Cross River State, Nigeria, the Way Forward
Authors: Celestine Odigwe, Epoke Lincoln, Rhoda-Dara Ephraim
Abstract:
Background: Immunization against the childhood killer diseases is the cardinal strategy for the prevention of these diseases in under-five children all over the world. These diseases include tuberculosis, measles, polio, tetanus, diphtheria, pertussis, yellow fever, hepatitis B, and Haemophilus influenzae type b. 6.9 million children die before their fifth birthday; 80% of the world's deaths in children under 5 years occur in 25 countries, most in Africa and Asia, and 2 million children could be saved each year with routine immunization. Therefore, failure to achieve total immunization coverage puts many children at risk. Aim: The aim of the study was to ascertain the prevalence of missed opportunities, to investigate the various reasons and causes why many under-five children in a suburb of Calabar municipal county fail to get the required immunizations as and when due, and to examine the possible consequences, so that efforts can be redirected towards solving the problems identified. Methods: The study was a community-based cross-sectional study. The respondents were the mothers/guardians of the sampled children, who were all aged 0-59 months. To be eligible for recruitment into the study, the parent or guardian was required to give informed consent and reside within Calabar South County with his/her children aged 0-59 months. We calculated our sample size using the Leslie-Kish formula and used a two-stage sampling method, first balloting for the wards to be involved and then selecting four of the most populated ones in the wards chosen. Data collection was by interviewer-administered structured questionnaire (Appendix I). The data collected were entered and analyzed using the Statistical Package for the Social Sciences (SPSS) Version 20, and percentages were calculated and presented using charts and tables. Results: The number of children sampled was 159. We found that 150 were fully immunized and 9 were not; the prevalence of missed opportunity in the study was 32%. The reasons for missed opportunities were varied, ranging from false contraindications to logistical problems such as very poor access roads to health facilities, poor organization of health centers, and negative health worker attitudes. Some of the consequences of these missed opportunities were increased susceptibility to vaccine-preventable diseases, resurgence of these diseases, and increased morbidity and mortality of children aged less than 5 years. Conclusion: We found that ignorance on the part of both parents/guardians and health care staff, together with infrastructural inadequacies in the county such as poor roads and unreliable electric power supply for the storage of vaccines, was largely responsible for most missed opportunities for immunization. The details of these findings and suggestions for improvement and the way forward are discussed.
Keywords: missed opportunity, immunization, under five, Calabar south
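The Leslie-Kish sample-size calculation mentioned in the methods can be sketched as follows; the expected prevalence, precision and confidence level are assumed inputs, not the values used by the authors.

```python
import math

def kish_sample_size(p, d, z=1.96, population=None):
    """Leslie Kish formula n0 = z^2 * p * (1 - p) / d^2,
    with an optional finite-population correction."""
    n0 = z**2 * p * (1 - p) / d**2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# Assumed inputs: 50% expected prevalence, +/-8% absolute precision, 95% confidence
print(kish_sample_size(p=0.5, d=0.08))
```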
Procedia PDF Downloads 324
1149 Stability of a Biofilm Reactor Able to Degrade a Mixture of the Organochlorine Herbicides Atrazine, Simazine, Diuron and 2,4-Dichlorophenoxyacetic Acid to Changes in the Composition of the Supply Medium
Authors: I. Nava-Arenas, N. Ruiz-Ordaz, C. J. Galindez-Mayer, M. L. Luna-Guido, S. L. Ruiz-López, A. Cabrera-Orozco, D. Nava-Arenas
Abstract:
Among the most important herbicides, the organochlorine compounds are of considerable interest due to their recalcitrance to chemical, biological, and photolytic degradation, their persistence in the environment, their mobility, and their bioaccumulation. The most widely used herbicides in North America are primarily 2,4-dichlorophenoxyacetic acid (2,4-D), the triazines (atrazine and simazine), and, to a lesser extent, diuron. The contamination of soils and water bodies frequently occurs through mixtures of these xenobiotics. For this reason, in this work, the operational stability of an aerobic biofilm reactor to changes in the composition of the supplied medium was studied. The reactor was packed with fragments of volcanic rock that retained a complex microbial film able to degrade a mixture of the organochlorine herbicides atrazine, simazine, diuron and 2,4-D, and whose members carry the microbial genes encoding the main catabolic enzymes atzABCD, tfdACD and puhB. To acclimate the attached microbial community, the biofilm reactor was fed continuously with a mineral minimal medium containing the herbicides (in mg·L-1): diuron, 20.4; atrazine, 14.2; simazine, 11.4; and 2,4-D, 59.7, as carbon and nitrogen sources. Throughout the bioprocess, removal efficiencies of 92-100% for herbicides, 78-90% for COD, 92-96% for TOC and 61-83% for dehalogenation were reached. In the microbial community, the genes encoding catabolic enzymes for the different herbicides, tfdACD and puhB, and, occasionally, the genes atzA and atzC were detected. After the acclimatization, the triazine herbicides were eliminated from the mixture formulation. Volumetric loading rates of the 2,4-D and diuron mixture were continuously supplied to the reactor (1.9-21.5 mg herbicides·L-1·h-1). Throughout this bioprocess, the removal efficiencies obtained were 86-100% for the mixture of herbicides, 63-94% for COD, and 90-100% for TOC, with dehalogenation values of 63-100%. It was also observed that the genes encoding the enzymes involved in the catabolism of both herbicides, tfdACD and puhB, were consistently detected and, occasionally, atzA and atzC. Subsequently, the triazine herbicides atrazine and simazine were restored to the medium supply. Different volumetric loads of this mixture were continuously fed to the reactor (2.9 to 12.6 mg herbicides·L-1·h-1). During this new treatment process, removal efficiencies of 65-95% for the mixture of herbicides, 63-92% for COD, 66-89% for TOC and 73-94% dehalogenation were observed. In this last case, the genes tfdACD, puhB and atzABC, encoding the enzymes involved in the catabolism of the distinct herbicides, were consistently detected. The atzD gene, encoding the cyanuric hydrolase enzyme, could not be detected, though it was determined that there was partial degradation of cyanuric acid. In general, the community in the biofilm reactor showed some catabolic stability, adapting to changes in the loading rates and composition of the herbicide mixture and preserving its ability to degrade the four herbicides tested, although there was a significant delay in the time needed to recover degradation of the herbicides.
Keywords: biodegradation, biofilm reactor, microbial community, organochlorine herbicides
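For reference, the removal efficiencies and volumetric loading rates reported above follow from simple mass-balance arithmetic, sketched below with assumed flow and concentration values rather than the study's operating data.

```python
# Illustrative bookkeeping for the performance metrics reported above;
# the flow, reactor volume and outlet concentration are assumed values.
def removal_efficiency(c_in, c_out):
    """Percent removal of a constituent across the reactor."""
    return 100.0 * (c_in - c_out) / c_in

def volumetric_loading_rate(c_in_mg_per_l, flow_l_per_h, reactor_volume_l):
    """Herbicide mass supplied per litre of reactor per hour (mg/(L*h))."""
    return c_in_mg_per_l * flow_l_per_h / reactor_volume_l

# Assumed: 80.1 mg/L of 2,4-D + diuron fed at 0.27 L/h into a 2 L packed bed
print(volumetric_loading_rate(80.1, 0.27, 2.0))   # ~10.8 mg/(L*h)
print(removal_efficiency(c_in=80.1, c_out=6.4))   # ~92 % removal
```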
Procedia PDF Downloads 435
1148 The Politics of Fantasy Meet Precarity of Place
Authors: Claudia Popescu, Adriana Mihaela Soaita
Abstract:
Within the EU accession process, Romania, like other CEE countries, has embarked on a post-1990 urbanization wave aiming to reduce the gaps between ‘older’ and ‘new’ EU member states. While post-socialist urban transitions have been extensively scrutinized, little is known about the development trajectories of these new towns across the CEE region. To start addressing this knowledge gap, we wish to bring to the fore one of the most humble expressions of urbanism, that of the small, new towns of Romania. Despite rural-to-urban reclassification, urbanization levels have remained persistently low over the last three decades. In this context, it is timely and legitimate to ask about the prospects of new towns for a ‘successful’ socioeconomic performance within the urban network and for the avoidance of precarity and marginalization, about an adequate measure of place performance within the urban/settlement network, and about the drivers that shape towns’ socioeconomic performance. To answer these questions, we first created a socioeconomic index of place in order to compare the profile of the 60 new towns with that of large cities, old small towns and rural areas. We conceive ‘successful’ and ‘precarious’ performance in terms of a locality’s index value being above or below the average index of all small towns. Second, we performed logistic regression to interrogate the relevance of some key structural factors to the new towns’ socioeconomic performance (i.e. population size, urban history, regional location, connectivity and the political determination of their local governments). Related to the first research question, our findings highlight the precarity of place as a long-standing condition of living and working in the new towns of Romania, particularly evident through our cross-comparative analysis across key categories along the rural-urban continuum. We have substantiated the socioeconomic condition of precarity in rural places, with the new towns still maintaining features of ‘rurality’ rather than ‘urbanity’, except for a few successful satellites of economically thriving large cities, particularly the country capital of Bucharest, which benefited from spillover effects. Related to our second research question, we found that the new towns of Romania have significantly higher odds of being characterized by precarity as a socioeconomic condition than all other small towns and urban places, but less so compared to the even more marginalized rural areas. Many new towns contain resource-dependent rural communities with a poor response to the context of change. Therefore, issues pertaining to local capacity building to adapt to the new urban environment should be addressed by spatial planning policy. Our approach allowed us to bring to the fore the idea of precarity as a condition of whole localities. Thinking of precarity of place is important as it brings the whole institutional and political apparatus of spatial planning, urban and regional, into conversation with other causative or substantive axes of precarity developed in the literature. We recommend future research on the new towns in Romania and elsewhere.
Keywords: politics of fantasy, precarity of place, urbanization, Romania
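One possible coding of the index-based precarity measure and the logistic regression on structural factors is sketched below; the town-level data and variable names are invented for illustration and do not reproduce the authors' dataset.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative town-level data; variable names follow the text, values are made up.
df = pd.DataFrame({
    "socio_index":   [0.42, 0.61, 0.35, 0.55, 0.48, 0.30, 0.52, 0.38],
    "population_k":  [8.2, 15.4, 6.1, 12.8, 9.9, 5.4, 11.2, 7.0],
    "new_town":      [1, 0, 1, 0, 1, 1, 0, 1],   # 1 = post-1990 rural-to-urban reclassification
    "near_big_city": [0, 1, 0, 1, 1, 0, 0, 0],   # crude connectivity proxy
})

# 'Precarious' = socioeconomic index below the all-small-towns average
df["precarious"] = (df["socio_index"] < df["socio_index"].mean()).astype(int)

X, y = df[["new_town", "population_k", "near_big_city"]], df["precarious"]
clf = LogisticRegression().fit(X, y)
for name, coef in zip(X.columns, clf.coef_[0]):
    print(f"{name}: log-odds coefficient {coef:+.2f}")
```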
Procedia PDF Downloads 13
1147 Factors Affecting the Success of Premarital Screening Services in Middle Eastern Countries
Authors: Wafa Al Jabri
Abstract:
Background: In Middle Eastern Countries (MECs), there is a high prevalence of genetic blood disorders (GBDs), particularly sickle cell disease and thalassemia. GBDs are considered a major public health concern that places a huge burden on individuals, families, communities, and health care systems. The high rates of consanguineous marriages, along with the unacceptability of terminating at-risk pregnancies in MECs, reduce the possible solutions for controlling the high prevalence of GBDs. Since the early 1970s, most MECs have introduced premarital screening services (PSS) as a preventive measure to identify asymptomatic carriers of GBDs and to provide genetic counseling to help couples plan for healthy families; yet, the success rate of PSS is very low. Purpose: This paper aims to highlight the factors that affect the success of PSS in MECs. Methods: An integrative review of articles located in CINAHL, PubMed, SCOPUS, and MedLine was carried out using the following terms: “premarital screening,” “success,” “effectiveness,” and “genetic blood disorders”. Second, a hand search of the reference lists and Google searches were conducted to find studies that did not appear in the primary database searches. Only studies conducted in MECs and published after 2010 were included. Studies that were not published in English were excluded. Results: Eighteen articles were included in the review. The results showed that PSS in most of the MECs was successful in achieving its objective of identifying high-risk marriages; however, the service failed to meet its ultimate goal of reducing the prevalence of GBDs. Various factors seem to hinder the success of PSS, including poor public awareness, late timing of the screening, culture and social stigma, lack of prenatal diagnosis services and therapeutic abortion, emotional factors, religious beliefs, and lack of genetic counseling services. However, poor public awareness, late timing of the screening, religious misbeliefs, and the lack of adequate counseling services were the most common barriers identified. Conclusion and Implications: The review helps provide a framework for an effective preventive measure to reduce the prevalence of GBDs in MECs. This framework focuses primarily on overcoming the identified barriers by providing effective health education programs in collaboration with religious leaders, offering the screening test to young adults at an earlier stage, and tailoring genetic counseling to consider people’s values, beliefs, and preferences.
Keywords: premarital screening, Middle East, genetic blood disorders, factors
Procedia PDF Downloads 81
1146 Roads and Agriculture: Impacts of Connectivity in Peru
Authors: Julio Aguirre, Yohnny Campana, Elmer Guerrero, Daniel De La Torre Ugarte
Abstract:
A well-developed transportation network is a necessary condition for a country to derive full benefits from good trade and macroeconomic policies. Road infrastructure plays a key role in the economic development of rural areas of developing countries, where agriculture is the main economic activity. The ability to move agricultural production from the place of production to the market, and then to the place of consumption, greatly influences the economic value of farming activities and of the resources involved in the production process, i.e., labor and land. Consequently, investment in transportation networks contributes to enhancing or overcoming the natural advantages or disadvantages that topography and location have imposed on the agricultural sector. This is of particular importance when dealing with countries, like Peru, with great topographic diversity. The objective of this research is to estimate the impacts of road infrastructure on the performance of the agricultural sector. Specific variables of interest are changes in travel time, shifts from production for self-consumption to production for the market, changes in farmers’ income, and impacts on the diversification of the agricultural sector. In the study, a cross-section model with instrumental variables is the central methodological instrument. The data are obtained from agricultural and transport geo-referenced databases, and the instrumental variable specification utilized is based on the Kruskal algorithm. The results show that the expansion of road connectivity reduced farmers’ travel time by an average of 3.1 hours and increased the proportion of output sold in the market by up to 40 percentage points. The increase in connectivity also led to an unexpected increase in the districts’ index of diversification of agricultural production. The results are robust to the inclusion of year and region fixed effects, and to controls for geography (i.e., slope and altitude), population variables, and mining activity. Other results are also revealing. For example, a clear positive impact can be seen in access to local markets, but this does not necessarily correlate with an increase in the production of the sector. This can be explained by the fact that agricultural development not only requires the provision of roads but also additional complementary infrastructure and investments intended to provide the necessary conditions so that producers can offer quality products (improved management practices, timely maintenance of irrigation infrastructure, transparent management of water rights, among other factors). Therefore, complementary public goods are needed to enhance the effects of roads on the welfare of the population, beyond enabling them to increase their access to markets.
Keywords: agricultural development, market access, road connectivity, regional development
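A schematic version of the instrumental-variable estimation is sketched below as a manual two-stage least squares on synthetic district data; the instrument here merely stands in for the Kruskal-algorithm-based instrument used in the study, and the coefficients are not the paper's estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500

# Synthetic district data: 'instrument' stands in for the Kruskal-based
# network instrument described in the text; 'unobserved' is a confounder.
instrument = rng.normal(size=n)
unobserved = rng.normal(size=n)
connectivity = 0.8 * instrument + 0.5 * unobserved + rng.normal(size=n)
share_sold = 10 + 4.0 * connectivity + 6.0 * unobserved + rng.normal(size=n)

# Stage 1: regress endogenous connectivity on the instrument
stage1 = sm.OLS(connectivity, sm.add_constant(instrument)).fit()
conn_hat = stage1.fittedvalues

# Stage 2: regress the outcome on fitted connectivity
# (manual 2SLS gives the right point estimate; standard errors need adjustment)
stage2 = sm.OLS(share_sold, sm.add_constant(conn_hat)).fit()
print("naive OLS slope:", sm.OLS(share_sold, sm.add_constant(connectivity)).fit().params[1])
print("2SLS slope:     ", stage2.params[1])   # closer to the true effect of 4.0
```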
Procedia PDF Downloads 205
1145 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation
Authors: Miguel Contreras, David Long, Will Bachman
Abstract:
Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, there are challenges with these imaging experiments, which can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and limitations on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular component morphology for virtual-cell generation from fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image has a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted light microscopy cell images, was trained using this set of normalized z-stacks on a single CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground truth fluorescence nuclei images. Results: After one week of training, using one cell membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality (a clear outline and shape of the membrane, clearly showing the boundaries of each cell) proportionally improved nuclei predictions, reducing errors relative to the ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need to use multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models
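The per-image normalization to a mean pixel intensity of 0.5 could be implemented as in the sketch below; the scaling rule and array shapes are assumptions about the pipeline, not its actual code.

```python
import numpy as np

def normalize_stack(zstack, target_mean=0.5, eps=1e-8):
    """Scale each slice of a confocal z-stack so its mean pixel intensity
    equals target_mean (one simple way the normalization described above
    could be implemented)."""
    zstack = zstack.astype(np.float32)
    means = zstack.mean(axis=(1, 2), keepdims=True)
    return zstack * (target_mean / (means + eps))

# Assumed shape: a 20-slice stack of 512x512 membrane images, 12-bit intensities
stack = np.random.default_rng(0).integers(0, 4096, size=(20, 512, 512))
norm = normalize_stack(stack)
print(norm.mean(axis=(1, 2))[:3])   # each slice now has mean ~0.5
```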
Procedia PDF Downloads 203
1144 Nano-MFC (Nano Microbial Fuel Cell): Utilization of Carbon Nano Tube to Increase Efficiency of Microbial Fuel Cell Power as an Effective, Efficient and Environmentally Friendly Alternative Energy Sources
Authors: Annisa Ulfah Pristya, Andi Setiawan
Abstract:
Electricity is the primary requirement of today's world, including Indonesia, because it is a flexible form of energy to use. Fossil fuels are the major energy source used in power plants. Unfortunately, this conversion process depletes fossil fuel reserves and increases the amount of CO2 in the atmosphere, harming health and contributing to ozone depletion and the greenhouse effect. Solutions that have been applied include solar cells, ocean wave power, wind, water, and so forth. However, low efficiency and complicated operation mean that most people and industries in Indonesia still use fossil fuels. In response, the fuel cell was developed. Fuel cells are electrochemical devices that continuously convert the chemical energy of a fuel and an oxidizer into electrical energy, with an efficiency of 40-60%, considerably higher than that of earlier sources of electrical energy. However, fuel cells still have some weaknesses, notably the use of an expensive platinum catalyst that is scarce and not environmentally friendly. A source of electrical energy that is at once sustainable and environmentally friendly is therefore required. On the other hand, Indonesia is a country rich in marine sediments with an organic content that is never exhausted. This accumulated organic matter can serve as an alternative energy source, and a further development of the fuel cell that exploits it is the Microbial Fuel Cell (MFC). A Microbial Fuel Cell is a device that uses bacteria to generate electricity from organic and inorganic compounds. Like a conventional fuel cell, an MFC is composed of an anode, a cathode and an electrolyte. Its main advantages are that the catalyst is a microorganism and that operation takes place in a neutral solution at low temperatures, making it more environmentally friendly than previous (chemical) fuel cells. However, compared to chemical fuel cells, MFCs only have an efficiency of 40%. Therefore, the authors provide a solution in the form of Nano-MFC (Nano Microbial Fuel Cell): utilization of carbon nanotubes to increase the efficiency of microbial fuel cell power as an effective, efficient and environmentally friendly alternative energy source. Nano-MFC has the advantages of being effective, highly efficient, cheap and environmentally friendly. Related stakeholders include government ministers, especially the Energy Minister, research institutes, and industry as a production facilitator. The strategic steps undertaken to achieve this begin with preliminary research, followed by lab-scale testing, dissemination and building cooperation with related parties (MOU), final research and its application in the field, and then licensing, industrial-scale production of Nano-MFC and publication to the public.
Keywords: CNT, efficiency, electric, microorganisms, sediment
Procedia PDF Downloads 407
1143 The Valuable Triad of Adipokine Indices to Differentiate Pediatric Obesity from Metabolic Syndrome: Chemerin, Progranulin, Vaspin
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Obesity is associated with cardiovascular disease risk factors and metabolic syndrome (MetS). In this study, associations between adipokines and adipokine indices as well as obesity indices were evaluated. Plasma adipokine levels may exhibit variations according to body adipose tissue mass. Besides, when obesity is considered an inflammatory disease, adipokines may play some role in this process. The ratios of proinflammatory adipokines to adiponectin may act as highly sensitive indicators of body adipokine status. The aim of the study is to present some adipokine indices, which are thought to be helpful for the evaluation of childhood obesity, and also to determine the best discriminators in the diagnosis of MetS. 80 prepubertal children (aged between 6-9.5 years) included in the study were divided into three groups: 30 children with normal weight (NW), 25 morbidly obese (MO) children and 25 MO children with MetS. Physical examinations were performed. Written informed consent forms were obtained from the parents. The study protocol was approved by the Ethics Committee of Namik Kemal University Medical Faculty. Anthropometric measurements, such as weight, height, waist circumference (C), hip C, head C, and neck C, were recorded. Values for body mass index (BMI), diagnostic obesity notation model assessment Index-II (D2 index) as well as waist-to-hip and head-to-neck ratios were calculated. Adiponectin, resistin, leptin, chemerin, vaspin, and progranulin assays were performed by ELISA. Adipokine-to-adiponectin ratios were obtained. SPSS Version 20 was used for the evaluation of data. p values ≤ 0.05 were accepted as statistically significant. Values of BMI and D2 index, waist-to-hip, and head-to-neck ratios did not differ between MO and MetS groups (p ≥ 0.05). Except for progranulin (p ≤ 0.01), similar patterns were observed for the plasma levels of each adipokine. There was no difference in vaspin or resistin levels between the NW and MO groups. Significantly increased leptin-to-adiponectin, chemerin-to-adiponectin and vaspin-to-adiponectin values were noted in MO in comparison with those of NW. The most valuable adipokine index was progranulin-to-adiponectin (p ≤ 0.01). This index was strongly correlated with the vaspin-to-adiponectin ratio in all groups (p ≤ 0.05). There was no correlation between vaspin-to-adiponectin and chemerin-to-adiponectin in the NW group. However, a correlation existed in the MO group (r = 0.486; p ≤ 0.05). A much stronger correlation (r = 0.609; p ≤ 0.01) was observed in the MetS group between these two adipokine indices. No correlations were detected between vaspin and progranulin or between vaspin and chemerin levels. Correlation analyses showed a unique profile confined to MetS children. Adiponectin was found to be correlated with waist-to-hip (r = -0.435; p ≤ 0.05) as well as head-to-neck (r = 0.541; p ≤ 0.05) ratios only in MetS children. In this study, it was investigated whether adipokine indices have priority over adipokine levels. In conclusion, vaspin-to-adiponectin, progranulin-to-adiponectin, and chemerin-to-adiponectin, along with waist-to-hip and head-to-neck ratios, were the optimal combinations. Adiponectin, waist-to-hip, head-to-neck, vaspin-to-adiponectin, and chemerin-to-adiponectin ratios had appropriate discriminatory capability for MetS children.
Keywords: adipokine indices, metabolic syndrome, obesity indices, pediatric obesity
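The adipokine-to-adiponectin indices and their correlations can be computed as in the following sketch; the plasma values are made up solely to show the index logic and do not correspond to the study's measurements.

```python
import pandas as pd
from scipy.stats import pearsonr

# Made-up plasma levels for a handful of children; only the index logic matters.
df = pd.DataFrame({
    "adiponectin": [9.1, 6.2, 5.4, 7.8, 4.9, 8.3],
    "leptin":      [3.2, 14.5, 18.1, 6.0, 21.3, 4.1],
    "chemerin":    [110, 160, 172, 128, 181, 115],
    "vaspin":      [0.21, 0.45, 0.52, 0.30, 0.61, 0.24],
    "progranulin": [38, 61, 70, 44, 75, 40],
})

# Adipokine-to-adiponectin indices, as used in the study
for a in ["leptin", "chemerin", "vaspin", "progranulin"]:
    df[f"{a}_to_adiponectin"] = df[a] / df["adiponectin"]

r, p = pearsonr(df["vaspin_to_adiponectin"], df["progranulin_to_adiponectin"])
print(f"vaspin/adiponectin vs progranulin/adiponectin: r = {r:.3f}, p = {p:.4f}")
```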
Procedia PDF Downloads 204
1142 Embracing the Uniqueness and Potential of Each Child: Moving Theory to Practice
Authors: Joy Chadwick
Abstract:
This Study of Teaching and Learning (SoTL) research focused on the experiences of teacher candidates involved in an inclusive education methods course within a four-year direct entry Bachelor of Education program. The placement of this course within the final fourteen-week practicum semester is designed to facilitate deeper theory-practice connections between effective inclusive pedagogical knowledge and the real life of classroom teaching. The course focuses on supporting teacher candidates to understand that effective instruction within an inclusive classroom context must be intentional, responsive, and relational. Diversity is situated not as exceptional but rather as expected. This interpretive qualitative study involved the analysis of twenty-nine teacher candidate reflective journals and six individual teacher candidate semi-structured interviews. The journal entries were completed at the start of the semester and at the end of the semester with the intent of having teacher candidates reflect on their beliefs of what it means to be an effective inclusive educator and how the course and practicum experiences impacted their understanding and approaches to teaching in inclusive classrooms. The semi-structured interviews provided further depth and context to the journal data. The journals and interview transcripts were coded and themed using NVivo software. The findings suggest that instructional frameworks such as universal design for learning (UDL), differentiated instruction (DI), response to intervention (RTI), social emotional learning (SEL), and self-regulation supported teacher candidate’s abilities to meet the needs of their students more effectively. Course content that focused on specific exceptionalities also supported teacher candidates to be proactive rather than reactive when responding to student learning challenges. Teacher candidates also articulated the importance of reframing their perspective about students in challenging moments and that seeing the individual worth of each child was integral to their approach to teaching. A persisting question for teacher educators exists as to what pedagogical knowledge and understanding is most relevant in supporting future teachers to be effective at planning for and embracing the diversity of student needs within classrooms today. This research directs us to consider the critical importance of addressing personal attributes and mindsets of teacher candidates regarding children as well as considering instructional frameworks when designing coursework. Further, the alignment of an inclusive education course during a teaching practicum allows for an iterative approach to learning. The practical application of course concepts while teaching in a practicum allows for a deeper understanding of instructional frameworks, thus enhancing the confidence of teacher candidates. Research findings have implications for teacher education programs as connected to inclusive education methods courses, practicum experiences, and overall teacher education program design.Keywords: inclusion, inclusive education, pre-service teacher education, practicum experiences, teacher education
Procedia PDF Downloads 68
1141 Synthesis of Methanol through Photocatalytic Conversion of CO₂: A Green Chemistry Approach
Authors: Sankha Chakrabortty, Biswajit Ruj, Parimal Pal
Abstract:
Methanol is one of the most important chemical products and intermediates. It can be used as a solvent, intermediate or raw material for a number of higher-value products, fuels or additives. Over the last decade, the total global demand for methanol has increased drastically, which forces scientists to produce large amounts of methanol from renewable sources to meet global demand in a sustainable way. Different types of non-renewable raw materials have been used for the large-scale synthesis of methanol, which makes the process unsustainable. In these circumstances, the photocatalytic conversion of CO₂ into methanol under solar/UV excitation becomes a viable, sustainable production approach, which not only addresses the environmental crisis by recycling CO₂ into fuels but also reduces the amount of CO₂ in the atmosphere. Developing such a sustainable production approach for CO₂ conversion into methanol still remains a major challenge in current research compared with conventional, energy-expensive processes. Against this backdrop, the development of environmentally friendly materials, such as photocatalysts, has taken on great importance for methanol synthesis. Scientists in this field are always concerned with finding an improved photocatalyst to enhance photocatalytic performance. Graphene-based hybrid and composite materials with improved properties could be better nanomaterials for the selective conversion of CO₂ to methanol under visible light (solar energy) or UV light. The present work concerns the synthesis of an improved heterogeneous graphene-based photocatalyst with enhanced catalytic activity and surface area. Graphene with enhanced surface area is used as a coupling material for copper-loaded titanium oxide to improve the electron capture and transport properties, which substantially increases the photoinduced charge transfer and extends the lifetime of photogenerated charge carriers. A fast reduction method through H₂ purging has been adopted to synthesize the improved graphene, whereas an ultrasonication-based sol-gel method has been applied for the preparation of graphene-coupled, copper-loaded titanium oxide with enhanced properties. The prepared photocatalysts were exhaustively characterized using different characterization techniques. The effects of catalyst dose, CO₂ flow rate, reaction temperature and stirring time on the efficacy of the system, in terms of methanol yield and productivity, have been studied in the present work. The study showed that the newly synthesized photocatalyst, with its enhanced surface, resulted in a sustained methanol productivity and yield of 0.14 g/Lh and 0.04 g/gcat, respectively, after 3 h of illumination under UV (250 W) at an optimum catalyst dosage of 10 g/L with a 1:2:3 (graphene:TiO₂:Cu) weight ratio.
Keywords: renewable energy, CO₂ capture, photocatalytic conversion, methanol
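As a rough consistency check, the reported productivity and yield follow from simple mass balances; the sketch below assumes a 1 L basis and an inferred methanol mass, neither of which is stated in the abstract.

```python
# Back-of-the-envelope check of the reported performance metrics.
# Assumed basis: 1 L of reaction medium, 10 g/L catalyst, 3 h of illumination,
# and an inferred methanol mass consistent with the reported figures.
methanol_mass_g = 0.42
volume_l = 1.0
time_h = 3.0
catalyst_g = 10.0 * volume_l

productivity = methanol_mass_g / (volume_l * time_h)   # g per litre per hour
yield_per_cat = methanol_mass_g / catalyst_g           # g per gram of catalyst

print(f"productivity = {productivity:.2f} g/(L*h)")     # ~0.14, as reported
print(f"yield        = {yield_per_cat:.3f} g/g_cat")    # ~0.04, as reported
```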
Procedia PDF Downloads 107
1140 Molecular Modeling and Prediction of the Physicochemical Properties of Polyols in Aqueous Solution
Authors: Maria Fontenele, Claude-Gilles Dussap, Vincent Dumouilla, Baptiste Boit
Abstract:
Roquette Frères is a producer of plant-based ingredients that employs many processes to extract relevant molecules and often transforms them through chemical and physical processes to create desired ingredients with specific functionalities. In this context, Roquette encounters numerous multi-component complex systems in their processes, including fibers, proteins, and carbohydrates, in an aqueous environment. To develop, control, and optimize both new and old processes, Roquette aims to develop new in silico tools. Currently, Roquette uses process modelling tools which include specific thermodynamic models and is willing to develop computational methodologies such as molecular dynamics simulations to gain insights into the complex interactions in such complex media, and especially hydrogen bonding interactions. The issue at hand concerns aqueous mixtures of polyols with high dry matter content. The polyols mannitol and sorbitol molecules are diastereoisomers that have nearly identical chemical structures but very different physicochemical properties: for example, the solubility of sorbitol in water is 2.5 kg/kg of water, while mannitol has a solubility of 0.25 kg/kg of water at 25°C. Therefore, predicting liquid-solid equilibrium properties in this case requires sophisticated solution models that cannot be based solely on chemical group contributions, knowing that for mannitol and sorbitol, the chemical constitutive groups are the same. Recognizing the significance of solvation phenomena in polyols, the GePEB (Chemical Engineering, Applied Thermodynamics, and Biosystems) team at Institut Pascal has developed the COSMO-UCA model, which has the structural advantage of using quantum mechanics tools to predict formation and phase equilibrium properties. In this work, we use molecular dynamics simulations to elucidate the behavior of polyols in aqueous solution. Specifically, we employ simulations to compute essential metrics such as radial distribution functions and hydrogen bond autocorrelation functions. Our findings illuminate a fundamental contrast: sorbitol and mannitol exhibit disparate hydrogen bond lifetimes within aqueous environments. This observation serves as a cornerstone in elucidating the divergent physicochemical properties inherent to each compound, shedding light on the nuanced interplay between their molecular structures and water interactions. We also present a methodology to predict the physicochemical properties of complex solutions, taking as sole input the three-dimensional structure of the molecules in the medium. Finally, by developing knowledge models, we represent some physicochemical properties of aqueous solutions of sorbitol and mannitol.Keywords: COSMO models, hydrogen bond, molecular dynamics, thermodynamics
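A minimal version of the hydrogen-bond autocorrelation analysis mentioned above is sketched below using a boolean bond-existence matrix; the stand-in "trajectory", breaking probability and frame spacing are assumptions, not output of the actual simulations.

```python
import numpy as np

def hbond_autocorrelation(h, max_lag):
    """C(t) = <h(0) h(t)> / <h(0) h(0)> for a boolean bond-existence matrix h
    of shape (n_frames, n_pairs)."""
    h = h.astype(float)
    c = np.array([np.mean(h[: h.shape[0] - lag] * h[lag:]) for lag in range(max_lag)])
    return c / c[0]

# Stand-in "trajectory": 50 donor-acceptor pairs whose bonds break stochastically
rng = np.random.default_rng(2)
breaks = rng.random((2000, 50)) < 0.01            # 1 % breaking chance per frame
h = np.logical_and.accumulate(~breaks, axis=0)    # once broken, stays broken

C = hbond_autocorrelation(h, max_lag=500)
dt_ps = 0.1                                       # assumed frame spacing in ps
print("approximate H-bond lifetime:", round(C.sum() * dt_ps, 1), "ps")
```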
Procedia PDF Downloads 41
1139 Evaluation of Groundwater Quality and Contamination Sources Using Geostatistical Methods and GIS in Miryang City, Korea
Authors: H. E. Elzain, S. Y. Chung, V. Senapathi, Kye-Hun Park
Abstract:
Groundwater is considered a significant source for drinking and irrigation purposes in Miryang city, which is attributed to the limited number of surface water reservoirs and the high seasonal variation in precipitation. Population growth, in addition to the expansion of agricultural land use and industrial development, may affect the quality and management of groundwater. This research utilized multidisciplinary geostatistical approaches, such as multivariate statistics, factor analysis, cluster analysis and kriging, in order to identify the hydrogeochemical processes and characterize the factors controlling the distribution of groundwater geochemistry and to develop risk maps, exploiting data obtained from the chemical investigation of groundwater samples in the study area. A total of 79 samples were collected and analyzed for major and trace elements using an atomic absorption spectrometer (AAS). Two-dimensional spatial chemical maps of groundwater produced in a Geographic Information System (GIS) provided a powerful tool for detecting potential sites of groundwater contamination. The GIS-based maps showed that the highest rates of contamination were observed in the central and southern areas, with a relatively smaller extent in the northern and southwestern parts. This could be attributed to the effect of irrigation, residual saline water, municipal sewage and livestock wastes. At well elevations above 85 m, the scatter diagram indicates that the groundwater of the research area was mainly influenced by saline water and NO3. The pH measurements revealed slightly acidic conditions due to atmospheric CO2 dissolved in the soil, while saline water had a major impact on the higher values of TDS and EC. Based on the cluster analysis results, the groundwater was categorized into three groups: the CaHCO3 type of fresh water, the NaHCO3 type slightly influenced by seawater, and the Ca-Cl and Na-Cl types, which are heavily affected by saline water. The predominant water type in the study area was CaHCO3. Contamination sources and chemical characteristics were identified from the interrelationships revealed by factor analysis and cluster analysis. The chemical elements belonging to factor 1 were related to the effect of seawater, while the elements of factor 2 were associated with agricultural fertilizers. The degree, distribution, and location of groundwater contamination were mapped using kriging methods. Thus, the geostatistical model provided more accurate results for identifying the sources of contamination and evaluating the groundwater quality. GIS was also an effective tool to visualize and analyze the issues affecting water quality in Miryang city.
Keywords: groundwater characteristics, GIS chemical maps, factor analysis, cluster analysis, Kriging techniques
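The multivariate part of the workflow (standardization, factor analysis and clustering into three water types) might look like the sketch below; the hydrochemical table is synthetic, and the two-factor, three-cluster choices simply mirror the text.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

# Synthetic hydrochemical table standing in for the 79 well samples (mg/L)
rng = np.random.default_rng(3)
cols = ["Ca", "Na", "Cl", "HCO3", "NO3", "EC"]
data = pd.DataFrame(rng.lognormal(mean=3, sigma=0.6, size=(79, len(cols))), columns=cols)

X = StandardScaler().fit_transform(data)

# Factor analysis: two factors, e.g. a "seawater" axis and a "fertilizer" axis
fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
loadings = pd.DataFrame(fa.components_.T, index=cols, columns=["F1", "F2"])
print(loadings.round(2))

# Cluster samples into three hydrochemical groups (CaHCO3 / NaHCO3 / Ca-Cl, Na-Cl)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(pd.Series(labels).value_counts())
```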
Procedia PDF Downloads 167
1138 Accumulated Gender-Diverse Co-signing Experience, Knowledge Sharing, and Audit Quality
Authors: Anxuan Xie, Chun-Chan Yu
Abstract:
Survey evidence provides support for the idea that auditors can gain professional knowledge not only from client firms but also from the teammates they work with. Furthermore, given that knowledge is cumulative in nature, along with the reality that auditors today must work in an environment of increased diversity, whether the attributes of teammates influence the effects of knowledge sharing and accumulation, and ultimately an audit partner’s audit quality, is an interesting research question. We test whether the gender of co-signers moderates the effect of a lead partner’s cooperative experiences on financial restatements. If the answer is “yes”, we further investigate the underlying reasons. We use data from Taiwan because, according to Taiwan’s law, engagement partners, who are basically two certified public accountants from the same audit firm, have been required to disclose (i.e., sign) their names in the audit reports of public companies since 1983. Therefore, we can trace each engagement partner’s historic direct cooperative (co-signing) records and obtain large-sample data. We find that the benefits of knowledge sharing manifest primarily via co-signing audit reports with audit partners of a different gender from the lead engagement partner, supporting the argument that, in an audit setting, accumulated gender-diverse working relationships are positively associated with knowledge sharing and therefore improve the lead partner’s audit quality. This study contributes to the extant literature in the following ways. First, we provide evidence that, in the auditing setting, the experience accumulated from cooperating with teammates of a different gender from the lead partner can improve audit quality. Given that most studies find evidence of negative effects of surface-level diversity on team performance, the results of this study support the prior literature arguing that the association between diversity and knowledge sharing actually hinges on the context (e.g., organizational culture, task complexity) and on a “bridge” (a pre-existing commonality among team members that can smooth the process of diversity toward favorable results) among diverse team members. Second, this study also provides practical insights with respect to audit firms’ policies on knowledge sharing and the deployment of engagement partners. For example, for audit firms that appreciate the merits of knowledge sharing, the deployment of auditors of different genders within an audit team can help auditors accumulate audit-related knowledge, which will further benefit the future performance of those audit firms. Moreover, nowadays, client firms also attach importance to the diversity of their engagement partners, and lawmakers and regulators continue to promote a gender-diverse working environment as a policy goal. The findings of this study indicate that, for audit firms, gender diversity need not be just a means to cater to those groups. Third, audit committees and other stakeholders can evaluate the quality of existing (or potential) lead partners by tracking their co-signing experiences, especially whether they have gender-diverse co-signing experiences.
Keywords: co-signing experiences, audit quality, knowledge sharing, gender diversity
Procedia PDF Downloads 84
1137 In-Situ Formation of Particle Reinforced Aluminium Matrix Composites by Laser Powder Bed Fusion of Fe₂O₃/AlSi12 Powder Mixture Using Consecutive Laser Melting+Remelting Strategy
Authors: Qimin Shi, Yi Sun, Constantinus Politis, Shoufeng Yang
Abstract:
In-situ preparation of particle-reinforced aluminium matrix composites (PRAMCs) by laser powder bed fusion (LPBF) additive manufacturing is a promising strategy to strengthen traditional Al-based alloys. The laser-driven thermite reaction can be a practical mechanism for in-situ synthesis of PRAMCs. However, introducing oxygen by adding Fe₂O₃ makes the powder mixture highly prone to forming porosity and Al₂O₃ films during LPBF, which makes it challenging to produce dense Al-based materials. Therefore, this work develops a processing strategy that combines consecutive high-energy laser melting scanning and low-energy laser remelting scanning to prepare PRAMCs from a Fe₂O₃/AlSi12 powder mixture. The powder mixture consists of 5 wt% Fe₂O₃, with the remainder AlSi12 powder; the addition of 5 wt% Fe₂O₃ aims to achieve balanced strength and ductility. A high relative density (98.2 ± 0.55%) was obtained by optimizing the laser melting surface energy density (Emelting) and the laser remelting surface energy density (Eremelting) to Emelting = 35 J/mm² and Eremelting = 5 J/mm². The results further reveal the necessity of increasing Emelting to improve the spreading/wetting of the liquid metal by breaking up the Al₂O₃ films surrounding the molten pools; however, the high-energy laser melting produced considerable porosity, including H₂-, O₂- and keyhole-induced pores. The subsequent low-energy laser remelting could close the resulting internal pores, backfill open gaps, and smoothen solidified surfaces. As a result, the material was densified by repeating laser melting and laser remelting layer by layer. Even with the two laser scans, the microstructure still shows fine cellular Si networks with Al grains inside (grain size of about 370 nm) and in-situ nano-precipitates (Al₂O₃, Si, and Al-Fe(-Si) intermetallics). Finally, the fine microstructure, nano-structured dispersion strengthening, and high level of densification strengthened the in-situ PRAMCs, which reached a yield strength of 426 ± 4 MPa and a tensile strength of 473 ± 6 MPa. Furthermore, the results are expected to provide valuable information for processing other powder mixtures with a strong tendency toward porosity and oxide-film formation, given the demonstrated contribution of the laser melting/remelting strategy to densifying the material and obtaining good mechanical properties during LPBF. Keywords: densification, laser powder bed fusion, metal matrix composites, microstructures, mechanical properties
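As a small worked example of the energy densities quoted above, the sketch below uses the common LPBF definition of surface (areal) energy density, E = P / (v × h), i.e., laser power divided by scan speed times hatch spacing. Only the target values of 35 J/mm² and 5 J/mm² come from the abstract; the power, speed, and hatch spacing are assumed parameter sets that would reproduce them.

```python
# Surface (areal) energy density for LPBF, E = P / (v * h); all process parameters are hypothetical.
def surface_energy_density(power_w: float, speed_mm_s: float, hatch_mm: float) -> float:
    return power_w / (speed_mm_s * hatch_mm)   # J/mm^2

# assumed parameter sets that reproduce the reported target energy densities
print(surface_energy_density(power_w=350, speed_mm_s=100, hatch_mm=0.10))   # 35.0 J/mm^2 (melting)
print(surface_energy_density(power_w=100, speed_mm_s=200, hatch_mm=0.10))   # 5.0 J/mm^2 (remelting)
```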
Procedia PDF Downloads 155
1136 Modeling of Anisotropic Hardening Based on Crystal Plasticity Theory and Virtual Experiments
Authors: Bekim Berisha, Sebastian Hirsiger, Pavel Hora
Abstract:
Advanced material models involving several sets of model parameters require a large experimental effort. As models become more and more complex, e.g. the so-called “Homogeneous Anisotropic Hardening” (HAH) model for describing yielding behavior in the 2D/3D stress space, the number and complexity of the required experiments also increase continuously. In the context of sheet metal forming, these requirements are even more pronounced because of the anisotropic behavior of sheet materials. In addition, some of the experiments are very difficult to perform, e.g. the plane-stress biaxial compression test. Accordingly, tensile tests in at least three directions, biaxial tests, and tension-compression or shear-reverse shear experiments are performed to determine the parameters of the macroscopic models. Therefore, determining the macroscopic model parameters from virtual experiments is a very promising strategy to overcome these difficulties. For this purpose, in the framework of multiscale material modeling, a dislocation-density-based crystal plasticity model in combination with an FFT-based spectral solver is applied to perform virtual experiments. Modeling the plastic behavior of metals based on crystal plasticity theory is a well-established methodology. In general, however, the computation time is very high, and the computations are therefore restricted to simplified microstructures as well as simple polycrystal models. In this study, a dislocation-density-based crystal plasticity model, including an implementation of the backstress, is used in a spectral solver framework to generate virtual experiments for three deep drawing materials: DC05 steel and the AA6111-T4 and AA4045 aluminum alloys. For this purpose, uniaxial as well as multiaxial loading cases, including various pre-strain histories, have been computed and validated against real experiments. These investigations showed that crystal plasticity modeling in the framework of Representative Volume Elements (RVEs) can be used to replace most of the expensive real experiments. Further, model parameters of advanced macroscopic models like the HAH model can be determined from virtual experiments, even for multiaxial deformation histories. It was also found that crystal plasticity modeling can describe anisotropic hardening more accurately by considering the backstress, similar to well-established macroscopic kinematic hardening models. It can be concluded that an efficient coupling of crystal plasticity models and the spectral solver leads to a significant reduction in the number of real experiments needed to calibrate macroscopic models. This advantage also leads to a significant reduction in the computational effort needed to optimize metal forming processes. Further, due to the time-efficient spectral solver used in the computation of the RVE models, detailed modeling of the microstructure is possible. Keywords: anisotropic hardening, crystal plasticity, micro structure, spectral solver
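The role of the backstress in kinematic hardening can be illustrated with a minimal 1D sketch using an Armstrong-Frederick evolution law; this is not the dislocation-density crystal plasticity model used in the study, and all material parameters are invented. Under a tension-then-compression strain path, the shifted yield surface reproduces the Bauschinger effect that the virtual experiments aim to capture.

```python
# 1D kinematic hardening with an Armstrong-Frederick backstress (illustrative parameters only).
import numpy as np

E, sigma_y = 70e3, 120.0        # Young's modulus, initial yield stress (MPa), assumed
C, gamma, H = 5e3, 50.0, 200.0  # AF backstress parameters and linear isotropic hardening, assumed

def simulate(strain_path):
    eps_p = X = p = 0.0
    stresses = []
    for eps in strain_path:
        sig = E * (eps - eps_p)                    # elastic trial stress
        f = abs(sig - X) - (sigma_y + H * p)       # yield function
        if f > 0.0:                                # plastic correction (explicit tangent step)
            n = np.sign(sig - X)
            dp = f / (E + C + H - gamma * X * n)
            eps_p += dp * n
            X += (C * n - gamma * X) * dp          # Armstrong-Frederick backstress evolution
            p += dp
            sig = E * (eps - eps_p)
        stresses.append(sig)
    return np.array(stresses)

# tension to 2% strain, then reversal to -2%: reverse yielding starts earlier than the
# forward flow stress because the backstress shifts the yield surface (Bauschinger effect)
path = np.concatenate([np.linspace(0, 0.02, 200), np.linspace(0.02, -0.02, 400)])
sig = simulate(path)
print(round(sig.max(), 1), round(sig.min(), 1))
```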
Procedia PDF Downloads 313
1135 Fabrication of Highly Conductive Graphene/ITO Transparent Bi-Film through Chemical Vapor Deposition (CVD) and Organic Additives-Free Sol-Gel Techniques
Authors: Bastian Waduge Naveen Harindu Hemasiri, Jae-Kwan Kim, Ji-Myon Lee
Abstract:
Indium tin oxide (ITO) remains the industry-standard transparent conducting oxide owing to its superior performance. Recently, graphene has emerged as a strong candidate material with unique properties to replace ITO; however, the graphene/ITO hybrid composite is a newly born field in the electronics world. In this study, a graphene/ITO composite bi-film was synthesized by a two-step process. 10 wt.% tin-doped ITO thin films were produced by an environmentally friendly aqueous sol-gel spin-coating technique using the economical salts In(NO3)3.H2O and SnCl4, without organic additives. Oxygen-plasma-treated glass substrates with enhanced wettability and surface free energy (97.6986 mJ/m²) were used to form a void-free, continuous ITO film. The spin-coated samples were annealed at 600 °C for 1 hour under low vacuum to obtain a crystallized ITO film. The crystal structure and crystalline phases of the ITO films were analyzed by X-ray diffraction (XRD), and the Scherrer equation was used to determine the crystallite size. Detailed information on the chemical and elemental composition of the ITO film was obtained by X-ray photoelectron spectroscopy (XPS) and by energy-dispersive X-ray spectroscopy (EDX) coupled with FE-SEM, respectively. Graphene was synthesized by chemical vapor deposition (CVD) on Cu foil at 1000 °C for 1 min. The quality of the synthesized graphene was characterized by Raman spectroscopy (532 nm excitation laser), with data collected at room temperature under ambient atmosphere. Surface and cross-sectional observations were carried out using FE-SEM. The optical transmission and sheet resistance were measured by UV-Vis spectroscopy and a four-point probe head at room temperature, respectively, and the electrical properties were also measured from I-V characteristics. The XRD patterns reveal that the films contain only the In2O3 phase and are polycrystalline with a cubic structure and a dominant (222) peak. The positions of In3d5/2 (444.28 eV) and Sn3d5/2 (486.7 eV) in the XPS results indicate that indium and tin are present only in oxide form. The UV-visible transmittance is 91.35% at 550 nm, with a specific resistance of 5.88 x 10⁻³ Ωcm. The G and 2D bands of the synthesized CVD graphene on SiO2/Si appear at 1582.52 cm⁻¹ and 2690.54 cm⁻¹, respectively, and the intensity ratios of 2D to G (I2D/IG) and D to G (ID/IG) were 1.531 and 0.108, respectively. However, when the CVD graphene is on the ITO-coated glass, the G and 2D peaks appear at 1573.57 cm⁻¹ and 2668.14 cm⁻¹, i.e., red-shifted by 8.948 cm⁻¹ and 22.396 cm⁻¹, respectively. This graphene/ITO bi-film shows modified electrical properties compared with the sol-gel-derived ITO film: its sheet resistance is 12.03% lower than that of the ITO film. Further, the fabricated graphene/ITO bi-film shows 88.66% transmittance at 550 nm. Keywords: chemical vapor deposition, graphene, ITO, Raman Spectroscopy, sol-gel
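As a brief illustration of the Scherrer analysis mentioned above, the sketch below estimates crystallite size as D = Kλ / (β cos θ) from an XRD peak; the FWHM and peak position are placeholder values (roughly where the In2O3 (222) reflection appears with Cu Kα radiation), not the measured data of this study.

```python
# Scherrer crystallite-size estimate from an XRD peak; inputs are illustrative placeholders.
import math

def scherrer_size_nm(two_theta_deg: float, fwhm_deg: float,
                     wavelength_nm: float = 0.15406, K: float = 0.9) -> float:
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)      # FWHM in radians; instrumental broadening ignored here
    return K * wavelength_nm / (beta * math.cos(theta))

print(round(scherrer_size_nm(two_theta_deg=30.6, fwhm_deg=0.35), 1), "nm")  # ~23.5 nm for these assumed inputs
```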
Procedia PDF Downloads 259
1134 A Public Health Perspective on Deradicalisation: Re-Conceptualising Deradicalisation Approaches
Authors: Erin Lawlor
Abstract:
In 2008, Time magazine named terrorist rehabilitation one of the best ideas of the year. The term deradicalisation has since become synonymous with rehabilitation within security discourse. The allure of a “quick fix” when managing terrorist populations (particularly within prisons) has led to a focus on prescriptive programmes, with a distinct lack of exploration into the drivers for a person to disengage or deradicalise from violence. It has been argued that, in tackling a snowballing issue, interventions have moved too quickly for both theory development and methodological structure. This overly quick acceptance of a term that lacks rigorous testing, measuring, and monitoring means that there is a distinct lack of an evidence base for deradicalisation being a genuine process or phenomenon, leading academics to retrospectively attempt to design frameworks and interventions around a concept that is not truly understood. The UK Home Office has openly acknowledged the lack of empirical data on this subject. This lack of evidence has a direct impact on policy and intervention development. Extremism and deradicalisation are issues that affect public health outcomes on a global scale, to the point that terrorism has now been added to the list of causes of trauma, both directly, for victims of an attack, and indirectly, for witnesses, children, and ordinary citizens who live in daily fear. This study critiques current deradicalisation discourses to establish whether public health approaches offer opportunities for development. The research begins by exploring the theoretical constructs of deradicalisation and of public health issues, asking: What does deradicalisation involve? Is there an evidential base on which deradicalisation theory has established itself? What theory are public health interventions devised from? What does success look like in both fields? From this base, current deradicalisation practices are then explored through examples of work already being carried out. The critique is organised around three discussion points: language, the difficulties of conducting empirical studies, and the issues around outcome measurement that deradicalisation interventions face. This study argues that a public health approach to deradicalisation offers the opportunity to bring clarity to the definitions of radicalisation, to identify what could be modified through intervention, and to offer insights into the evaluation of interventions. Rather than focusing on one element of deradicalisation and analysing it in isolation, a public health approach allows for what the literature has pointed out is missing: a comprehensive analysis of current interventions and information on creating efficacy monitoring systems. Interventions, policies, guidance, and practices in both the UK and Australia will be compared and contrasted, owing to the joint nature of this research between Sheffield Hallam University and La Trobe, Melbourne. Keywords: radicalisation, deradicalisation, violent extremism, public health
Procedia PDF Downloads 66
1133 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor Under Scour, and Anchor Transportation and Installation (T&I)
Authors: Vinay Kumar Vanjakula, Frank Adam
Abstract:
The generation of electricity through wind power is one of the leading renewable energy methods. Because of the abundant, higher wind speeds far from shore, the construction of offshore wind turbines began in recent decades. However, the installation of bottom-founded (monopile) offshore wind turbines in deep water is often associated with technical and financial challenges. To overcome such challenges, the concept of floating wind turbines has been developed, building on experience from the oil and gas industry. For such a floating system, stabilization in harsh conditions is a challenging task, and a robust heavy-weight gravity anchor is needed. Transporting such an anchor requires a heavy vessel, which increases the cost. To lower the cost, the gravity anchor is designed with ballast chambers that allow the anchor to float while being towed and are filled with water when the anchor is lowered to the planned seabed location. The presence of such a large structure may influence the flow field around it; the changes include the formation of vortices, turbulence generation, the breaking of wave or current flow, and pressure differentials around the seabed sediment. These changes influence the installation process. Also, after installation and under operating conditions, the flow around the anchor may carry off local seabed sediment, resulting in scour (erosion), which threatens the structure's stability. In recent decades, research on, and knowledge of, scour around fixed structures (bridges and monopiles) in rivers and oceans has developed rapidly, but very little work addresses scour around a bluff-shaped gravity anchor. The objective of this study is to apply different numerical models to simulate anchor towing under waves and in calm water. The anchor lowering part investigates anchor movements at certain water depths under waves and current, with special focus on the drift, heave, and pitch motions. A further part of the study addresses anchor scour, where the anchor is installed on the seabed and the underwater current flowing around it induces vortices, mainly at the front and corners, that develop soil erosion. Scour on a submerged gravity anchor is an interesting research question since the flow passes not only around the anchor but also over the structure, forming different flow vortices. The results and the numerical model will be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will be built in OpenFOAM and other similar software. Keywords: anchor lowering, anchor towing, gravity anchor, computational fluid dynamics, scour
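As a hedged, order-of-magnitude illustration of the scour problem described above, the sketch below evaluates the Keulegan-Carpenter number, KC = Um·T/D, and the Sumer-Fredsøe-Christiansen (1992) equilibrium scour relation for a slender vertical pile in waves. A bluff gravity anchor is not a slender pile, so the relation is indicative only, and the flow and geometry values are assumed rather than taken from the study.

```python
# Indicative wave-scour estimate using pile relations; all inputs are assumed values.
import math

def kc_number(u_max: float, period: float, width: float) -> float:
    return u_max * period / width

def pile_scour_depth(width: float, kc: float) -> float:
    # Sumer, Fredsoe & Christiansen (1992): S/D = 1.3*(1 - exp(-0.03*(KC - 6))) for KC > 6
    if kc <= 6.0:
        return 0.0                       # below the threshold of this empirical relation
    return 1.3 * width * (1.0 - math.exp(-0.03 * (kc - 6.0)))

D, Um, T = 10.0, 1.5, 9.0                # assumed anchor width (m), near-bed orbital velocity (m/s), wave period (s)
kc = kc_number(Um, T, D)
print(f"KC = {kc:.2f}, indicative wave-induced equilibrium scour depth = {pile_scour_depth(D, kc):.2f} m")
# KC << 6 for such a wide body, so wave-induced scour is predicted to be negligible by this relation;
# current-induced scour around the bluff anchor must be assessed separately, e.g. with the planned CFD.
```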
Procedia PDF Downloads 168
1132 Erasmus+ Program in Vocational Education: Effects of European International Mobility in Portuguese Vocational Schools
Authors: José Carlos Bronze, Carlinda Leite, Angélica Monteiro
Abstract:
The creation of the Erasmus Program in 1987 represented a milestone in promoting and funding international mobility in higher education in Europe. Its effects were so significant that they influenced the creation of the European Higher Education Area through the Bologna Process and ensured the program’s continuation and maintenance. Over the last decades, the escalating numbers of participants and funds have prompted significant scientific studies of the program's effects on higher education. More recently, in 2014, the program was renamed “Erasmus+” when it expanded into other fields of education, namely Vocational Education and Training (VET). Despite now having run in this field for a decade (2014-2024), its effects on VET remain less studied and less well known, while the higher education field keeps attracting researchers’ attention. Given this gap, it becomes relevant to study the effects of E+ on VET, particularly in the priority domains of the program: “Inclusion and Diversity,” “Participation in Democratic Life, Common Values and Civic Engagement,” “Environment and Fight Against Climate Change,” and “Digital Transformation.” The latter has recently been emphasized owing to the COVID-19 pandemic, which forced the so-called emergency remote teaching and led schools to transform and adapt quickly to a new reality regardless of the preparedness of teachers and students. Together with the remaining E+ priorities, it relates directly to an emancipatory perspective of education sustained by soft skills such as critical thinking, intercultural awareness, autonomy, active citizenship, teamwork, and problem-solving, among others. Against this background, it is relevant to know the effects of E+ on the VET field, namely how international mobility instigates digitalization processes and supports emancipatory aims therein. Since VET is an education field more directly connected to hard skills and to an instrumental approach oriented to the labor market's needs, a study was conducted to determine the effects of international mobility on the development of digital literacy and soft skills in this field. In methodological terms, the study used semi-structured interviews with teaching and non-teaching staff from three VET schools that are strongly active in the E+ Program. The interviewees were three headmasters, four mobility project managers, and eight teachers experienced in international mobility. The data were subjected to qualitative content analysis using the NVivo 14 application. The results show that E+ international mobility promotes and facilitates the use of digital technologies as a pedagogical resource at VET schools and enhances and generates students’ soft skills. In conclusion, E+ mobility in the VET field supports the adoption of the program's priorities by increasing teachers’ knowledge and use of digital resources and by amplifying and generating participants’ soft skills. Keywords: Erasmus international mobility, digital literacy, soft skills, vocational education and training
Procedia PDF Downloads 31
1131 The Conflict of Grammaticality and Meaningfulness of the Corrupt Words: A Cross-lingual Sociolinguistic Study
Authors: Jayashree Aanand, Gajjam
Abstract:
The grammatical tradition in Sanskrit literature emphasizes the importance of the correct use of Sanskrit words or linguistic units (sādhu śabda), which brings religious merit, and denies the same merit to the incorrect use of Sanskrit words (asādhu śabda), i.e., the vernacular or corrupt forms (apa-śabda or apabhraṁśa), even though they may help in communication. The current research, the culmination of doctoral research on sentence definition, studies the difference in the comprehension of correct and incorrect word forms in the Sanskrit and Marathi languages in India. Based on a total of 19 experiments (both web-based and classroom-controlled) with approximately 900 Indian readers, it is found that incorrect forms in Sanskrit are comprehended with lower accuracy than correct word forms, whereas no such difference is seen for Marathi. It is interpreted that incorrect word forms in a native language that is spoken daily (such as Marathi) impose a smaller cognitive load than those in a language that is not spoken daily but only read (such as Sanskrit). The theoretical basis of the research problem is as follows: among the three main schools of language science in ancient India, the Vaiyākaraṇas (Grammarians) hold that corrupt word forms have their own expressive power since they convey meaning, whereas the Mimāṁsakas (the Exegetes) and the Naiyāyikas (the Logicians) believe that corrupt forms can only convey meaning indirectly, by recalling their association and similarity with the correct forms. The grammarians regard the vernaculars, born of speakers' inability to speak proper Sanskrit, as degenerate or fallen forms of the 'divine' Sanskrit language, while speakers who used proper Sanskrit, the standard language, were considered Śiṣṭa ('elite'). In recent years, sociolinguists have agreed that no variety of language is inherently better than any other; all varieties are equal as long as they serve the needs of the people who use them. Although the standard form of a language may offer speakers some advantages, the non-standard variety is considered the most natural style of speaking. This is visible in the results. If, as the theory suggests, incorrect word forms trigger the recall of the correct word forms in the reader, this would add an extra step to the process of sentential cognition, leading to a higher cognitive load and lower accuracy; this has not been the case for the Marathi language. Although speaking and listening to the vernaculars is common practice while reading the vernacular is not, Marathi readers readily and accurately comprehended the incorrect word forms in the sentences, in contrast to the Sanskrit readers. The primary reason is that Sanskrit is spoken and read only in its standard form, and vernacular forms of Sanskrit are not found in conversational data. Keywords: experimental sociolinguistics, grammaticality and meaningfulness, Marathi, Sanskrit
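As an illustrative sketch of how the reported accuracy contrast could be tested statistically, the snippet below runs a chi-square test on comprehension counts for correct versus corrupt word forms, separately for each language; the counts are invented for illustration and are not the data from the 19 experiments.

```python
# Chi-square test of comprehension accuracy by stimulus type; counts are hypothetical.
from scipy.stats import chi2_contingency

# rows: stimulus type (correct form, corrupt form); columns: responses (accurate, inaccurate)
sanskrit = [[410, 90], [330, 170]]   # hypothetical: accuracy drops for corrupt forms
marathi = [[400, 100], [395, 105]]   # hypothetical: essentially no difference

for name, table in [("Sanskrit", sanskrit), ("Marathi", marathi)]:
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{name}: chi2 = {chi2:.2f}, p = {p:.4f}")
```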
Procedia PDF Downloads 125
1130 Biocultural Biographies and Molecular Memories: A Study of Neuroepigenetics and How Trauma Gets under the Skull
Authors: Elsher Lawson-Boyd
Abstract:
In the wake of the Human Genome Project, the life sciences have undergone some fascinating changes. In particular, conventional beliefs about gene expression are being challenged by advances in the postgenomic sciences, especially by the field of epigenetics. Epigenetics is the modification of gene expression without changes in the DNA sequence. In other words, epigenetics dictates that gene expression, the process by which the instructions in DNA are converted into products like proteins, is not solely controlled by DNA itself. Unlike the gene-centric theories of heredity that characterized much of the 20th century (in which genes were regarded as having an almost god-like power to create life), epigenetics insists on the role of environmental 'signals' or 'exposures' in gene expression, a point that radically deviates from gene-centric thinking. Science and Technology Studies (STS) scholars have shown that epigenetic research is having vast implications for the ways in which chronic, non-communicable diseases are conceptualized, treated, and governed. However, to the author's knowledge, there have not yet been any in-depth sociological engagements with neuroepigenetics that examine how the field is affecting mental health and trauma discourse. In this paper, the author discusses preliminary findings from a doctoral ethnographic study on neuroepigenetics, trauma, and embodiment. Specifically, this study investigates the kinds of causal relations neuroepigenetic researchers are making between experiences of trauma and the development of mental illnesses like complex post-traumatic stress disorder (PTSD), both throughout a human lifetime and across generations. Using qualitative interviews and non-participant observation, the author focuses on two public-facing research centers based in Melbourne: the Florey Institute of Neuroscience and Mental Health (FNMH) and the Murdoch Children's Research Institute (MCRI). Preliminary findings indicate that a great deal of ambiguity characterizes this infant field, particularly when animal-model experiments are employed and the results are translated into human frameworks. Nevertheless, researchers at the FNMH and MCRI strongly suggest that adverse and traumatic life events have a significant effect on gene expression, especially when experienced during early development. Furthermore, they predict that neuroepigenetic research will have substantial implications for the ways in which mental illnesses like complex PTSD are diagnosed and treated. These preliminary findings shed light on why medical and health sociologists have good reason to be engaging with, and de-black-boxing, ideas emerging from the postgenomic sciences, which may have significant effects for vulnerable populations not only in Australia but also in developing countries of the Global South. Keywords: genetics, mental illness, neuroepigenetics, trauma
Procedia PDF Downloads 124
1129 Organ Donation after Medical Aid in Dying: A Critical Study of Clinical Processes and Legal Rules in Place
Authors: Louise Bernier
Abstract:
In some jurisdictions (including Canada), eligible patients can request and receive medical assistance in dying (MAiD) through lethal injection, inducing cardiocirculatory death. The same patients may also wish to donate their organs in the process. If they qualify as organ donors, a clinical and ethical rule called the 'dead donor rule' (DDR) requires transplant teams to wait until cardiocirculatory death is confirmed, followed by a 'no touch' period (5 minutes in Canada), before they can proceed with organ removal. The medical procedure (lethal injection) as well as the delays associated with the DDR can damage organs (mostly thoracic organs) due to prolonged anoxia. Yet strong scientific evidence demonstrates that operating differently and reconsidering the DDR would result in more organs of better quality being available for transplant. This idea generates discomfort and resistance, but it is also worth considering, especially in a context of chronic shortage of available organs. One option that could be examined for MAiD patients who wish to be, and can be, organ donors would be to remove vital organs while the patients are still alive (and under sedation). This would imply accepting that the patient's death would occur through organ donation instead of the lethal injection required under MAiD legal rules. It would also mean that patients requesting MAiD and wishing to be organ donors could aspire to donate better-quality organs, including their heart, an altruistic gesture that carries important symbolic value for many donors and their families. Following a patient-centered approach, our hypothesis is that preventing vital organ donation from a living donor in all circumstances is neither fully coherent with how legal mentalities have evolved lately in the field of fundamental rights nor compatible with the clinical and ethical frameworks that shape the landscape in which those complex medical decisions unfold. Through a study of the legal, ethical, and clinical rules in place at both the national and international levels, this analysis raises questions about the numerous inconsistencies associated with respecting the DDR for patients who have chosen to die through MAiD. We begin with an assessment of the erosion of certain national legal frameworks pertaining to the sacred nature of the right to life, which now also includes the right to choose how one wishes to die. We then study recent innovative clinical protocols tested in different countries to help address acute organ shortage problems in creative ways. We conclude the analysis with an ethical assessment of the situation, referring to principles such as justice, autonomy, altruism, beneficence, and non-maleficence. This study builds a strong argument in favor of starting to allow vital organ donation from living donors in countries where MAiD is already permitted. Keywords: altruism, autonomy, dead donor rule, medical assistance in dying, non-maleficence, organ donation
Procedia PDF Downloads 176
1128 Stability Study of Hydrogel Based on Sodium Alginate/Poly (Vinyl Alcohol) with Aloe Vera Extract for Wound Dressing Application
Authors: Klaudia Pluta, Katarzyna Bialik-Wąs, Dagmara Malina, Mateusz Barczewski
Abstract:
Hydrogel networks, due to their unique properties, are highly attractive materials for wound dressings. The three-dimensional structure of hydrogels provides tissues with optimal moisture, which supports the wound healing process. Moreover, a characteristic feature of hydrogels is their absorption capacity, which allows them to take up wound exudates. For the fabrication of biomedical hydrogels, a combination of natural polymers, which ensure biocompatibility, and synthetic ones, which provide adequate mechanical strength, is often used. Sodium alginate (SA) is one of the polymers widely used in wound dressing materials because it exhibits excellent biocompatibility and biodegradability. However, due to their poor mechanical strength, alginate-based hydrogel materials are often enhanced by the addition of another polymer such as poly(vinyl alcohol) (PVA). This paper concentrates on the preparation of a sodium alginate/poly(vinyl alcohol) hydrogel system incorporating Aloe vera extract and glycerin as a wound healing material, with particular focus on the role of composition in structure, thermal properties, and stability. Briefly, the hydrogel preparation is based on chemical cross-linking using poly(ethylene glycol) diacrylate (PEGDA, Mn = 700 g/mol) as a crosslinking agent and ammonium persulfate as an initiator. In vitro degradation tests of the SA/PVA/AV hydrogels were carried out in phosphate-buffered saline (PBS, pH 7.4) as well as in distilled water. Hydrogel samples were first cut into half-gram pieces (in triplicate) and immersed in the immersion fluid; all specimens were then incubated at 37°C, and the pH and conductivity values were measured at time intervals. The post-incubation fluids were analyzed using SEC/GPC to check the content of oligomers. The separation was carried out at 35°C on a poly(hydroxy methacrylate) column (300 x 8 mm); a 0.1 M NaCl solution at a flow rate of 0.65 ml/min was used as the mobile phase, and three 50 µl injections were made for each sample. The thermogravimetric data of the prepared hydrogels were collected using a Netzsch TG 209 F1 Libra apparatus. Samples of about 10 mg were weighed separately into Al2O3 crucibles and heated from 30°C to 900°C at a scanning rate of 10 °C∙min−1 under a nitrogen atmosphere. Based on the research conducted, a fast and simple method was developed to produce a potential wound dressing material containing sodium alginate, poly(vinyl alcohol), and Aloe vera extract. As a result, transparent and flexible SA/PVA/AV hydrogels were obtained. The degradation experiments indicated that most of the samples immersed in PBS as well as in distilled water did not degrade throughout the whole incubation time. Keywords: hydrogels, wound dressings, sodium alginate, poly(vinyl alcohol)
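A small, hedged helper for reporting the kind of stability result described above is sketched below: percent mass remaining of hydrogel specimens after incubation in PBS or distilled water at 37 °C. The masses and time points are hypothetical, not the measured SA/PVA/AV data.

```python
# Percent mass remaining of incubated hydrogel specimens; all values are hypothetical.
import numpy as np

days = np.array([0, 1, 3, 7, 14])                     # sampling time points (days)
mass_pbs = np.array([0.50, 0.49, 0.49, 0.48, 0.47])   # g, hypothetical half-gram specimens in PBS
mass_h2o = np.array([0.50, 0.50, 0.49, 0.49, 0.48])   # g, hypothetical specimens in distilled water

def mass_remaining_pct(masses: np.ndarray) -> np.ndarray:
    return 100.0 * masses / masses[0]

for label, m in [("PBS", mass_pbs), ("distilled water", mass_h2o)]:
    print(label, np.round(mass_remaining_pct(m), 1))   # values near 100% indicate limited degradation
```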
Procedia PDF Downloads 164
1127 Repair of Thermoplastic Composites for Structural Applications
Authors: Philippe Castaing, Thomas Jollivet
Abstract:
As a result of their advantages, i.e., recyclability, weldability, and environmental compatibility, long (continuous) fiber thermoplastic composites (LFTPC) are increasingly used in many industrial sectors (mainly automotive and aeronautics) for structural applications. Indeed, over the next ten years, environmental rules will put pressure on the adoption of new structural materials like composites. In aerospace, more than 50% of damage is due to impact, and 85% of the damage repaired is on the fuselage (fuselage skin panels and around doors). With the arrival of airplanes made mainly of composite materials, replacing sections or panels is economically difficult, and repair becomes essential. The objective of the present study is to propose a repair solution that avoids replacing the damaged part in thermoplastic composites while recovering the initial mechanical properties. The classification of impact damage is not easy: speaking of low-energy impact (less than 35 J) can be quite wrong when high speeds, small thicknesses, or thermoplastic resins are considered. Crash and perforation at higher energy create severe damage, and such structures are replaced without repair, so we consider here only damage due to low-energy impacts, which for laminates comprises transverse cracking, delamination, and fiber rupture. At low energy, the damage is barely visible but can nevertheless significantly reduce the mechanical strength of the part because of resin cracks, while little fiber rupture is observed. That is the reason why we investigate the repair of thermoplastic composites impacted at low energy. Indeed, thermoplastic resins are interesting as they absorb impact energy through plastic strain. The methodology is as follows: impact tests at low energy on thermoplastic composites; identification of the damage by micrographic observations; evaluation of the harmfulness of the damage; repair by reconsolidation according to the extent of the damage; validation of the repair by mechanical characterization (compression). In this study, impact tests are performed at various energy levels on thermoplastic composites (PA/C, PEEK/C and PPS/C, woven 50/50 and unidirectional) to determine the level of impact energy that creates damage in the resin without fiber rupture. We identify the extent of the damage by ultrasonic (US) inspection and micrographic observations through the part thickness. The samples were additionally characterized in compression to evaluate the loss of mechanical properties. The repair strategy then consists in reconsolidating the damaged parts by thermoforming, and after reconsolidation the laminates are characterized in compression for validation. To conclude, the study demonstrates the feasibility of repair for low-energy impacts on thermoplastic composites, as the samples recover their properties. In the first step of the study, the “repair” is made by reconsolidation on a thermoforming press, but an in-situ process to reconsolidate the damaged parts could be envisaged. Keywords: aerospace, automotive, composites, compression, damages, repair, structural applications, thermoplastic
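As a brief worked example of the quantities used above, the sketch below computes a drop-weight impact energy as E = m·g·h and expresses the compressive strength after repair as a percentage of the undamaged strength; all numbers are assumed for illustration and are not measurements from the study.

```python
# Drop-weight impact energy and residual strength retention; all inputs are assumed.
G = 9.81  # gravitational acceleration, m/s^2

def impact_energy_j(mass_kg: float, drop_height_m: float) -> float:
    return mass_kg * G * drop_height_m

def strength_retention_pct(repaired_mpa: float, pristine_mpa: float) -> float:
    return 100.0 * repaired_mpa / pristine_mpa

print(f"{impact_energy_j(2.0, 1.0):.1f} J")                      # ~19.6 J, i.e. a 'low energy' impact (< 35 J)
print(f"{strength_retention_pct(255.0, 270.0):.1f} % retained")  # hypothetical compression results
```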
Procedia PDF Downloads 303
1126 Sustainable Harvesting, Conservation and Analysis of Genetic Diversity in Polygonatum Verticillatum Linn.
Authors: Anchal Rana
Abstract:
The Indian Himalayas, with their diverse climatic conditions, are home to many rare and endangered medicinal plants. One such species is Polygonatum verticillatum Linn., popularly known as King Solomon's Seal or Solomon's Seal. Its mention as a remarkable medicinal herb dates back 5000 years in the Indian Materia Medica as a component of Ashtavarga, a poly-herbal formulation comprising eight herbs described as the world's first revitalizing and rejuvenating nutraceutical food, now commercialised under the name 'Chaywanprash'. It is an erect, tall (60 to 120 cm) perennial herb with sessile, linear leaves and white pendulous flowers. The species grows well in an altitude range of 1600 to 3600 m amsl and propagates mostly through rhizomes. The rhizomes are a potential source of significant phytochemicals such as flavonoids, phenolics, lectins, terpenoids, allantoin, diosgenin, β-sitosterol, and quinine. The presence of such phytochemicals makes the species valuable for its antioxidant, cardiotonic, demulcent, diuretic, energizing, emollient, aphrodisiac, appetizing, galactagogue, and other properties. With its high concentrations of macro- and micronutrients, the species also has good prospects as a dietary supplement. However, due to unscientific, large-scale uprooting, it has been assigned the status of 'vulnerable' and 'endangered' in the Conservation Assessment and Management Plan (CAMP) process conducted by the Foundation for Revitalisation of Local Health Traditions (FRLHT) in 2010, according to IUCN Red List criteria. Further, destructive harvesting, land-use disturbance, heavy livestock grazing, climatic change, and habitat fragmentation have substantially contributed to the decline of the species. It therefore became imperative to conserve the diversity of the species and make judicious use of it in future research and commercial programmes and schemes. A gene bank was therefore established at the High Altitude Herbal Garden of the Forest Research Institute, Dehradun, India, situated at Chakarata (30°42'52.99''N, 77°51'36.77''E, 2205 m amsl), consisting of 149 accessions collected from thirty-one geographical locations spread over the three Himalayan states of Jammu and Kashmir, Himachal Pradesh, and Uttarakhand. The present investigation covers the sampling and collection of divergent germplasm, followed by planting and cultivation techniques. The ultimate aim is to analyse the genetic diversity of the species and capture promising genotypes for further genetic improvement programmes, so as to contribute to sustainable development and healthcare. Keywords: Polygonatum verticillatum Linn., phytochemicals, genetic diversity, conservation, gene bank
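One common way to quantify the genetic diversity mentioned above is Nei's gene diversity (expected heterozygosity), He = 1 − Σ pᵢ², averaged over loci; the sketch below shows the calculation on an invented allele-frequency table that does not describe the 149 accessions of this study.

```python
# Nei's gene diversity (expected heterozygosity) averaged over loci; frequencies are illustrative.
import numpy as np

# rows = loci (e.g., marker bands), columns = allele frequencies at that locus
allele_freqs = [
    [0.60, 0.40],
    [0.50, 0.30, 0.20],
    [0.90, 0.10],
]

def nei_gene_diversity(freqs_per_locus) -> float:
    he = [1.0 - np.sum(np.square(p)) for p in freqs_per_locus]   # He = 1 - sum(p_i^2) per locus
    return float(np.mean(he))

print(round(nei_gene_diversity(allele_freqs), 3))   # ~0.43 for these illustrative frequencies
```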
Procedia PDF Downloads 170