Search results for: bio analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27770


1400 Training Manual of Organic Agriculture Farming for the Farmers: A Case Study from Kunjpura and Surrounding Villages

Authors: Rishi Pal Singh

Abstract:

In the Indian scenario, organic agriculture is growing through the conscious efforts of inspired people who seek to create the most promising relationship between the earth and those who work it. The major challenges today are its entry into the policy-making framework and the global market, and weak sensitization among farmers. Over the last two decades, however, contamination of the environment and food linked with poor agricultural techniques has shifted farmers' mindsets towards organic farming. In view of the above, a small-scale project was set up to support 20 farmers from Kunjpura and the surrounding villages in adopting organic farming. The project has been running for the last three crop cycles (starting from October 2016) and has shown that organic farming can meet market demands while supporting the overall development of rural areas. Farmers in this project work on the principle that nature never demands unreasonable quantities of water or mining, nor the destruction of microbes and other organisms. According to Organic Monitor estimates, global sales of organic products now run into the billions. In this initiative, wheat and rice were taken up first, and crop production was observed to grow by almost 10-15% per year relative to the previous crop. The project is concerned not only with profit or loss but also with health, ecology, fairness, and care for soil enrichment. Several techniques were used, such as biological fertilizers instead of chemicals, multiple cropping, temperature management, rainwater harvesting, development of farmers' own seed, vermicompost, and integration of animals. In the first year, to increase the fertility of the land, legumes (moong, cow pea, and red gram) were grown in strips for 60, 90, and 120 days.
Simultaneously, a compost and vermicompost mixture in a 2:1 proportion was applied at the rate of 2.0 tons per acre, enriched with 5 kg of Azotobacter and 5 kg of Rhizobium biofertilizer. To supply the required phosphorus, 250 kg of rock phosphate was used. After one month, jivamrut can be applied with the irrigation water or during rainy days. In the next season, the compost-vermicompost mixture was applied at 2.5 tons/ha for all types of crops. After this treatment, the soil is ready for high-value ordinary/horticultural crops. The amounts of the above biofertilizers, compost-vermicompost, and rock phosphate may be increased where higher fertilization is required. The significance of the project is that the farmers now believe in cultural alternatives (use of their own disease-free seed, organic pest management), maintenance of biodiversity, crop rotation practices, and the health benefits of organic farming. Organic farming projects of this type should be implemented at the gram/block/district administration level.
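As a rough illustration, the first-year application rates quoted in the abstract can be scaled to a field of any size. The helper below is hypothetical (not part of the project), and it assumes the 250 kg rock phosphate figure is also a per-acre rate, which the abstract does not state explicitly:

```python
def organic_inputs(area_acres):
    """Scale the abstract's first-year per-acre rates to a given field size."""
    mix_t = 2.0 * area_acres  # compost:vermicompost mixture at 2.0 t/acre
    return {
        "compost_t": mix_t * 2.0 / 3.0,        # 2 parts compost
        "vermicompost_t": mix_t * 1.0 / 3.0,   # 1 part vermicompost
        "azotobacter_kg": 5.0 * area_acres,    # biofertilizer enrichment
        "rhizobium_kg": 5.0 * area_acres,
        "rock_phosphate_kg": 250.0 * area_acres,  # assumed per-acre basis
    }

print(organic_inputs(2.5))
```

The 2:1 proportion means two-thirds of the mixture mass is compost and one-third vermicompost.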

Keywords: organic farming, Kunjpura, compost, bio-fertilizers

Procedia PDF Downloads 194
1399 A Proper Continuum-Based Reformulation of Current Problems in Finite Strain Plasticity

Authors: Ladislav Écsi, Roland Jančo

Abstract:

Contemporary multiplicative plasticity models assume that the body's intermediate configuration consists of an assembly of locally unloaded neighbourhoods of material particles that cannot be reassembled together to give the overall stress-free intermediate configuration since the neighbourhoods are not necessarily compatible with each other. As a result, the plastic deformation gradient, an inelastic component in the multiplicative split of the deformation gradient, cannot be integrated, and the material particle moves from the initial configuration to the intermediate configuration without a position vector and a plastic displacement field when plastic flow occurs. Such behaviour is incompatible with the continuum theory and the continuum physics of elastoplastic deformations, and the related material models can hardly be denoted as truly continuum-based. The paper presents a proper continuum-based reformulation of current problems in finite strain plasticity. It will be shown that the incompatible neighbourhoods in real material are modelled by the product of the plastic multiplier and the yield surface normal when the plastic flow is defined in the current configuration. The incompatible plastic factor can also model the neighbourhoods as the solution of the system of differential equations whose coefficient matrix is the above product when the plastic flow is defined in the intermediate configuration. The incompatible tensors replace the compatible spatial plastic velocity gradient in the former case or the compatible plastic deformation gradient in the latter case in the definition of the plastic flow rule. They act as local imperfections but have the same position vector as the compatible plastic velocity gradient or the compatible plastic deformation gradient in the definitions of the related plastic flow rules. 
The unstressed intermediate configuration, the unloaded configuration after the plastic flow, where the residual stresses have been removed, can always be calculated by integrating either the compatible plastic velocity gradient or the compatible plastic deformation gradient. However, the corresponding plastic displacement field becomes permanent with both elastic and plastic components. The residual strains and stresses originate from the difference between the compatible plastic/permanent displacement field gradient and the prescribed incompatible second-order tensor characterizing the plastic flow in the definition of the plastic flow rule, which becomes an assignment statement rather than an equilibrium equation. The above also means that the elastic and plastic factors in the multiplicative split of the deformation gradient are, in reality, gradients and that there is no problem with the continuum physics of elastoplastic deformations. The formulation is demonstrated in a numerical example using the regularized Mooney-Rivlin material model and modified equilibrium statements where the intermediate configuration is calculated, whose analysis results are compared with the identical material model using the current equilibrium statements. The advantages and disadvantages of each formulation, including their relationship with multiplicative plasticity, are also discussed.
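The kinematics discussed above can be summarized in standard multiplicative-plasticity notation (generic symbols, not necessarily the authors' own):

```latex
\mathbf{F} = \mathbf{F}^{e}\,\mathbf{F}^{p}, \qquad
\mathbf{L}^{p} = \dot{\mathbf{F}}^{p}\,\bigl(\mathbf{F}^{p}\bigr)^{-1}, \qquad
\mathbf{D}^{p} = \dot{\lambda}\,\frac{\partial f}{\partial \boldsymbol{\sigma}},
```

where $\mathbf{F}$ is the deformation gradient with elastic and plastic factors $\mathbf{F}^{e}$ and $\mathbf{F}^{p}$, $\mathbf{L}^{p}$ is the plastic velocity gradient, and the flow rule is the product of the plastic multiplier $\dot{\lambda}$ and the yield-surface normal $\partial f / \partial \boldsymbol{\sigma}$, the quantity the abstract identifies as modelling the incompatible neighbourhoods.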

Keywords: finite strain plasticity, continuum formulation, regularized Mooney-Rivlin material model, compatibility

Procedia PDF Downloads 123
1398 Support for Refugee Entrepreneurs Through International Aid

Authors: Julien Benomar

Abstract:

The World Bank report published in April 2023, "Migrants, Refugees and Society", first distinguishes migrants in search of economic opportunities from refugees who flee a situation of danger and choose their destination based on their immediate need for safety. Within those two categories, the report distinguishes people whose professional skills are adapted to the labor market of the host country from those whose skills are not. Out of these four categories, we focus our research on refugees who do not have professional skills adapted to the host country's labor market. Given that refugees generally have no recourse to public assistance schemes and cannot count on the support of an entourage or support network, we examine the extent to which external assistance, such as international humanitarian action, can accompany refugees' transition to financial empowerment through entrepreneurship. To this end, we carried out a case study structured in three stages: (i) an exchange with a Non-Governmental Organisation (NGO) active in supporting refugee populations from Congo and Burundi in Rwanda, enabling us to (a) define together a financial empowerment income and (b) learn about the content of the support measures taken for the beneficiaries of the humanitarian project; (ii) monitoring of the population of 118 beneficiaries, including 73 refugees and 45 Rwandans (reference population); (iii) a participatory analysis to identify the level of performance of the project and areas for improvement.
The case study thus involved the staff of an international NGO active in helping refugees in Rwanda since 2015 and the staff of a Luxembourg NGO that has been funding this economic aid through an entrepreneurship project since 2021, and took place over a 48-day period between April and May 2023. The main results are of two types: (i) the need to associate indicators for monitoring the impact of the project on its indirect beneficiaries (the refugee community), and (ii) the identification of success factors that make it possible to respond concretely and relevantly to the constraints encountered. The first result made it possible to identify the following indicators: an indicator of community potential (jobs, training, or mentoring promoted by the entrepreneur's activity), an indicator of social contribution (tax paid by the entrepreneur), an indicator of resilience (savings and loan capacity generated), and finally an indicator of impact on social cohesion. The second result showed that, among the 7 success factors tested, the sector of activity chosen and the level of experience in the sector of the future activity stand out most clearly.

Keywords: entrepreneurship, refugees, financial empowerment, international aid

Procedia PDF Downloads 77
1397 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space

Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari

Abstract:

Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory in all release testing of active pharmaceutical ingredients (APIs). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine, and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol, and acetic acid) in all 7 amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention time variation, and bad peak shape were identified for the acetic acid peaks, due to the reaction of acetic acid with the stationary phase (cyanopropyl dimethyl polysiloxane) of the column and the dissociation of acetic acid in water (when used as diluent) while applying the temperature gradient. Therefore, dimethyl sulfoxide was used as the diluent to avoid these issues; most published methods for acetic acid quantification by GC-HS instead use a derivatisation technique to protect acetic acid. As per the compendia, a risk-based approach was selected as appropriate to determine the degree and extent of the validation process and to assure the fitness of the procedure. Therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit) and ±30% for the other levels, with a 95% confidence interval (5% risk profile). The method was developed using an Agilent DB-WAXetr column (530 µm internal diameter, 2.0 µm film thickness, 30 m length). A constant helium flow of 6.0 mL/min in constant make-up mode was selected as the carrier gas.
The present method is simple, rapid, and accurate, and is suitable for the rapid analysis of isopropyl alcohol, ethanol, methanol, and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol, and 100 ppm to 400 ppm for acetic acid, which covers the specification limits given in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for the testing of residual solvents in amino acid drug substances.

Keywords: amino acid, head space, gas chromatography, total error

Procedia PDF Downloads 147
1396 Linkages between Innovation Policies and SMEs' Innovation Activities: Empirical Evidence from 15 Transition Countries

Authors: Anita Richter

Abstract:

Innovation is one of the key foundations of competitive advantage, generating growth and welfare worldwide. Consequently, all firms should innovate to bring new ideas to the market. Innovation is a vital growth driver, particularly for transition countries moving towards knowledge-based, high-income economies. However, numerous barriers, such as financial, regulatory, or infrastructural constraints, prevent new and small firms in transition countries in particular from innovating. Thus, SMEs' innovation output may benefit substantially from government support. This research paper aims to assess the effect of government interventions on innovation activities in SMEs in emerging countries. Until now, academic research on innovation policies has focused either on single-country and/or high-income-country assessments and less on cross-country and/or low- and middle-income countries. The paper therefore seeks to close this research gap by providing empirical evidence from 8,500 firms in 15 transition countries (Eastern Europe, South Caucasus, South East Europe, Middle East, and North Africa). Using firm-level data from the Business Environment and Enterprise Performance Survey of the World Bank and EBRD and policy data from the SME Policy Index of the OECD, the paper investigates how government interventions affect SMEs' likelihood of investing in any technological and non-technological innovation. Using standard linear regression, the impact of government interventions on SMEs' innovation output and R&D activities is measured. The empirical analysis suggests that a firm's decision to invest in innovative activities is sensitive to government interventions. A firm's likelihood of investing in innovative activities increases by 3% to 8% if the innovation eco-system noticeably improves (measured by an increase of 1 level in the SME Policy Index). At the same time, a better eco-system encourages SMEs to invest more in R&D.
Government reforms establishing a dedicated policy framework (IP legislation), institutional infrastructure (science and technology parks, incubators), and financial support (public R&D grants, innovation vouchers) are particularly relevant for stimulating innovation performance in SMEs. Particular segments of the SME population, namely micro and manufacturing firms, are more likely to benefit from improved innovation framework conditions. The marginal effects are particularly strong on product, process, and marketing innovation, but weaker on management innovation. In conclusion, government interventions supporting innovation are likely to lead to higher innovation performance in SMEs. They increase productivity at both the firm and country level, which is a vital step in the transition towards knowledge-based market economies.
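The regression design described above can be sketched as a linear probability model on simulated data. Everything below is illustrative: the variable names, the simulated policy index scale, and the planted coefficient of 0.05 (a 5-point rise in innovation probability per policy level) are assumptions, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8500                                    # matches the paper's sample size
policy = rng.uniform(1, 5, n)               # hypothetical SME Policy Index level
firm_size = rng.integers(1, 250, n)         # hypothetical employee count

# Planted data-generating process: probability of investing in innovation
# rises linearly with the policy index (illustrative coefficients only).
p = 0.2 + 0.05 * policy + 0.0002 * firm_size
invest = (rng.random(n) < p).astype(float)  # 1 = firm invests in innovation

# Ordinary least squares on [intercept, policy, firm_size].
X = np.column_stack([np.ones(n), policy, firm_size])
beta, *_ = np.linalg.lstsq(X, invest, rcond=None)
print(round(beta[1], 3))                    # estimated effect of +1 policy level
```

With 8,500 observations, the estimated policy coefficient recovers the planted 0.05 closely, which is the sense in which a 1-level improvement in the index maps to a several-point change in innovation likelihood.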

Keywords: innovation, research and development, government interventions, economic development, small and medium-sized enterprises, transition countries

Procedia PDF Downloads 324
1395 Goal-Setting in a Peer Leader HIV Prevention Intervention to Improve Preexposure Prophylaxis Access among Black Men Who Have Sex with Men

Authors: Tim J. Walsh, Lindsay E. Young, John A. Schneider

Abstract:

Background: The disproportionate rate of HIV infection among Black men who have sex with men (BMSM) in the United States suggests the importance of Preexposure Prophylaxis (PrEP) interventions for this population. As such, there is an urgent need for innovative outreach strategies that extend beyond the traditional patient-provider relationship to reach at-risk populations. Training members of the BMSM community as peer change agents (PCAs) is one such strategy. An important piece of this training is goal-setting. Goal-setting not only encourages PCAs to define the parameters of the intervention according to their lived experience, but also helps them plan courses of action. Therefore, the aims of this mixed-methods study are to: (1) characterize the goals that BMSM set at the end of their PrEP training and (2) assess the relationship between goal types and PCA engagement. Methods: Between March 2016 and July 2016, preliminary data were collected from 68 BMSM, ages 18-33, in Chicago as part of an ongoing PrEP intervention. Once enrolled, PCAs participate in a half-day training in which they learn about PrEP, practice initiating conversations about PrEP, and identify strategies for supporting at-risk peers through the PrEP adoption process. Training culminates with a goal-setting exercise, whereby participants establish a goal related to their role as a PCA. Goals were coded for features that either emerged from the data itself or existed in the extant goal-setting literature. The main outcomes were (1) the number of PrEP conversations PCAs self-report during booster conversations two weeks following the intervention and (2) the number of peers PCAs recruit into the study who complete the PrEP workshop. Results: PCA goals (N=68) were characterized in terms of four features: specificity, target population, personalization, and purpose defined. To date, PCAs report a collective 52 PrEP conversations.
56%, 25%, and 6% of PrEP conversations occurred with friends, family, and sexual partners, respectively. PCAs with specific goals had more PrEP conversations with at-risk peers than those with vague goals (58% vs. 42%); PCAs with personalized goals had more PrEP conversations than those with de-personalized goals (60% vs. 53%); and PCAs with goals that defined a purpose had more PrEP conversations than those who did not define a purpose (75% vs. 52%). 100% of PCAs with goals that defined a purpose recruited peers into the study, compared to 45% of PCAs with goals that did not define a purpose. Conclusion: Our preliminary analysis demonstrates that BMSM are motivated to set and work toward a diverse set of goals to support peers in PrEP adoption. PCAs with goals involving a clearly defined purpose had more PrEP conversations and greater peer recruitment than those with goals lacking a defined purpose. This may indicate that PCAs who define their purpose at the outset of their participation will be more engaged in the study than those who do not. Goal-setting may be considered as a component of future HIV prevention interventions, both to advance intervention goals and as an indicator of PCAs' understanding of the intervention.

Keywords: HIV prevention, MSM, peer change agent, preexposure prophylaxis

Procedia PDF Downloads 194
1394 Flood Vulnerability Zoning for Blue Nile Basin Using Geospatial Techniques

Authors: Melese Wondatir

Abstract:

Flooding ranks among the most destructive natural disasters, impacting millions of individuals globally and resulting in substantial economic, social, and environmental repercussions. This study's objective was to create a comprehensive model that assesses the Nile River basin's susceptibility to flood damage and improves existing flood risk management strategies. Authorities responsible for enacting policies and implementing measures may benefit from this research by acquiring essential information about floods, including their scope and the areas susceptible to them. The identification of severe flood damage locations and efficient mitigation techniques was made possible by the use of geospatial data. Slope, elevation, distance from the river, drainage density, topographic wetness index, rainfall intensity, distance from roads, NDVI, soil type, and land use type were used throughout the study to determine vulnerability to flood damage. The Analytic Hierarchy Process (AHP) and geospatial approaches were used to rank the factors according to their significance in predicting flood damage risk. The analysis finds that the most important parameters determining the region's vulnerability are distance from the river, topographic wetness index, rainfall, and elevation, respectively. The consistency ratio (CR) value obtained in this case is 0.000866 (<0.1), which signifies acceptance of the derived weights. Furthermore, 10.84 m², 83,331.14 m², 476,987.15 m², 24,247.29 m², and 15.83 m² of the region show varying degrees of vulnerability to flooding: very low, low, medium, high, and very high, respectively. Due to their close proximity to the river, the north-western regions of the Nile River basin, especially those close to Sudanese cities such as Khartoum, are more vulnerable to flood damage, according to the research findings. Furthermore, the AUC of the ROC curve demonstrates that the categorized vulnerability map achieves an accuracy rate of 91.0% based on 117 sample points.
By putting into practice strategies that address the topographic wetness index, rainfall patterns, elevation fluctuations, and distance from the river, vulnerable settlements in the area can be protected, and the impact of future flood occurrences can be greatly reduced. Furthermore, the research findings highlight the urgent requirement for infrastructure development and effective flood management strategies in the northern and western regions of the Nile River basin, particularly in proximity to major towns such as Khartoum. Overall, the study recommends prioritizing high-risk locations and developing a complete flood risk management plan based on the vulnerability map.
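The AHP weighting and consistency check described in the abstract can be sketched as follows. The pairwise comparison matrix below is an illustrative four-factor example (distance from river, topographic wetness index, rainfall, elevation), not the study's actual expert judgements:

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparisons among four factors,
# ordered by assumed importance.
A = np.array([
    [1.0, 2.0, 3.0, 4.0],
    [1/2, 1.0, 2.0, 3.0],
    [1/3, 1/2, 1.0, 2.0],
    [1/4, 1/3, 1/2, 1.0],
])

# Priority weights = principal eigenvector, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lam_max = eigvals[k].real
CI = (lam_max - n) / (n - 1)
RI = 0.90                       # Saaty's random index for n = 4
CR = CI / RI
print(weights.round(3), round(CR, 4))
```

A CR below 0.1, as in the study's reported 0.000866, indicates that the pairwise judgements are acceptably consistent and the derived weights can be used for the overlay analysis.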

Keywords: analytic hierarchy process, Blue Nile Basin, geospatial techniques, flood vulnerability, multi-criteria decision making

Procedia PDF Downloads 66
1393 Bioremediation of Phenol in Wastewater Using Polymer-Supported Bacteria

Authors: Areej K. Al-Jwaid, Dmitiry Berllio, Andrew Cundy, Irina Savina, Jonathan L. Caplin

Abstract:

Phenol is a toxic compound that is widely distributed in the environment, including the atmosphere, water, and soil, due to the release of effluents from the petrochemical and pharmaceutical industries, coking plants, and oil refineries. Moreover, a range of everyday products using phenol as a raw material may find their way into the environment without prior treatment. The toxicity of phenol affects both human and environmental health, and various physico-chemical methods have been used to remediate phenol contamination. While these techniques are effective, their complexity and high cost have led to a search for alternative strategies to reduce and eliminate high concentrations of phenolic compounds in the environment. Biological treatments are preferable because they are environmentally friendly and cheaper than physico-chemical approaches. Some microorganisms, such as Pseudomonas sp., Rhodococcus sp., Acinetobacter sp., and Bacillus sp., have shown a high ability to degrade phenolic compounds, using them as a sole source of energy. Immobilisation processes utilising various materials have been used to protect and enhance the viability of cells and to provide structural support for the bacterial cells. The aim of this study is to develop a new approach to the bioremediation of phenol, based on an immobilisation strategy, that can be used in wastewater. In this study, two bacterial species known to be phenol-degrading bacteria (Pseudomonas mendocina and Rhodococcus koreensis) were purchased from the National Collection of Industrial, Food and Marine Bacteria (NCIMB).
The two species, and a mixture of them, were immobilised to produce macroporous crosslinked-cell cryogel samples using four types of cross-linker polymer solutions in a cryogelation process. The samples were used in a batch culture to degrade phenol at an initial concentration of 50 mg/L, at pH 7.5±0.3 and a temperature of 30°C. The four types of polymer solution - i. glutaraldehyde (GA), ii. polyvinyl alcohol with glutaraldehyde (PVA+GA), iii. polyvinyl alcohol-aldehyde (PVA-al), and iv. polyethyleneimine-aldehyde (PEI-al) - were used at different concentrations, ranging from 0.5 to 1.5%, to crosslink the cells. The results of SEM and rheology analysis indicated that cell-cryogel samples crosslinked with the four cross-linker polymers formed monolithic macroporous cryogels. The samples were evaluated for their ability to degrade phenol. Macroporous cell-cryogels crosslinked with GA and PVA+GA were able to degrade phenol for only one week, while the other samples, crosslinked with a combination of PVA-al + PEI-al at two different concentrations, showed higher stability and remained viable for reuse, degrading phenol at 50 mg/L for five weeks. These initial results indicate that crosslinked-cell cryogels are a promising tool for bioremediation strategies, especially for removing high concentrations of phenol from wastewater.
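Batch biodegradation of this kind is often summarized with a first-order decay model. The sketch below is purely illustrative: the abstract reports no kinetic parameters, so the rate constant here is a hypothetical placeholder:

```python
import numpy as np

def phenol_remaining(c0_mg_l, k_per_day, days):
    """First-order decay: C(t) = C0 * exp(-k * t), a common batch-kinetics model."""
    t = np.asarray(days, dtype=float)
    return c0_mg_l * np.exp(-k_per_day * t)

# Starting from the study's 50 mg/L initial concentration, with an
# assumed (hypothetical) rate constant of 0.5 per day.
print(phenol_remaining(50.0, 0.5, [0, 1, 7]).round(2))
```

Fitting such a model to measured concentrations over each reuse cycle would give a simple way to compare the stability of the differently crosslinked cryogels week by week.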

Keywords: bioremediation, crosslinked cells, immobilisation, phenol degradation

Procedia PDF Downloads 232
1392 Exploring the Gap between Coverage, Access, Utilization of Long Lasting Insecticidal Nets (LLINs) among the People of Malaria Endemic Districts in Bangladesh

Authors: Fouzia Khanam, Tridib Chowdhury, Belal Hossain, Sajedur Rahman, Mahfuzar Rahman

Abstract:

Introduction: Over the last decades, the world has achieved noticeable success in preventing malaria. Nevertheless, malaria, a vector-borne infectious disease, remains a major public health burden globally as well as in Bangladesh. To achieve the goal of eliminating malaria, BRAC, a leading organization in Bangladesh, in collaboration with the government, is distributing free LLINs in the 13 endemic districts of the country. The study was conducted with the aim of assessing the gap between coverage, access, and utilization of LLINs among the people of the 13 malaria-endemic districts of Bangladesh. Methods: This baseline study employed a community cross-sectional design, triangulated with qualitative methods, to measure households' ownership of, access to, and use of LLINs in the 13 endemic districts. Multistage cluster random sampling was employed for the quantitative part, and a purposive sampling strategy for the qualitative part. The present analysis included 2,640 households encompassing a total of 14,475 people. Data were collected using a pre-tested structured questionnaire through one-on-one face-to-face interviews with respondents. All analyses were performed using STATA (version 13.0). For the qualitative part, participant observation, in-depth interviews, focus group discussions, key informant interviews, and informal interviews were conducted to gather contextual data. Findings: According to our study, 99.8% of households possessed at least one bed net in both study areas. 77.4% of households possessed at least two LLINs, and 43.2% of households had access to LLINs for all members, so the gap between coverage and access is 34%. 91.8% of people in the 13 districts, and 95.1% in the Chittagong Hill Tracts areas, reported having slept under a bed net the night before being interviewed. And despite the relatively low access, in 77.8% of households all members had used an LLIN the previous night.
This higher utilization compared to access might be due to increased awareness regarding LLIN use among the community. However, 6% of people with sufficient access to LLINs still did not use them, which reflects a behavioral failure that needs to be addressed. The major reasons for not using LLINs, identified by both the qualitative and quantitative findings, were insufficient access, sleeping or living outside the home, migration, perceived low efficacy of LLINs, and fear of physical side effects or discomfort. Conclusion: Given that LLIN access and use fell somewhat short of the targets, this conveys important messages to the malaria control programme. Targeting specific population segments and groups is crucial for achieving the expected LLIN coverage, and addressing behavioral failure through well-designed behavioral change interventions is also essential.
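The access indicator behind figures like the 43.2% above is typically computed household by household with the rule of thumb that one net protects up to two people. The sketch below applies that rule to a few hypothetical households (the abstract does not publish household-level data):

```python
# Hypothetical (members, LLINs) tuples for four households.
households = [(3, 1), (5, 2), (4, 2), (6, 2)]

def access_fraction(members, nets):
    """Fraction of a household with LLIN access: one net covers up to two people."""
    return min(2 * nets, members) / members

pop = sum(m for m, _ in households)
with_access = sum(min(2 * n, m) for m, n in households)
print(round(with_access / pop, 3))  # population-level access proportion
```

Comparing this access proportion against ownership (any net) and reported use is exactly how the coverage-access and access-use gaps in the abstract are derived.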

Keywords: long lasting insecticidal net, malaria, malaria control programme, World Health Organisation

Procedia PDF Downloads 187
1391 Using the ISO 9705 Room Corner Test for Smoke Toxicity Quantification of Polyurethane

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

Polyurethane (PU) foam is typically sold as acoustic foam and is often used as sound insulation in settings such as night clubs and bars. As a construction product, PU is tested by being glued to the walls and ceiling of the ISO 9705 room corner test room. However, when heat is applied to PU foam, it melts and burns as a pool fire because it is a thermoplastic. The current test layout is unable to accurately measure mass loss and doesn't allow the material to burn as a pool fire without seeping out of the test room floor. The lack of mass loss measurement means that gas yields pertaining to smoke toxicity analysis can't be calculated, which makes data comparisons with any other material or test method difficult. Additionally, the heat release measurements are not representative, as much of the material seeps through the floor (when a tray to catch the melted material is not used). This research aimed to modify the ISO 9705 test to provide the ability to measure mass loss, allowing better calculation of gas yields and understanding of decomposition. It also aimed to accurately measure smoke toxicity in both the doorway and the duct and to enable dilution factors to be calculated. Finally, the study aimed to examine whether doubling the fuel loading would force under-ventilated flaming. The test layout was modified to be a combination of the SBI (single burning item) test set up inside the ISO 9705 test room. Polyurethane was tested in two different ways with the aim of altering the ventilation condition of the tests. Test one was conducted using 1 SBI test rig, aiming for well-ventilated flaming. Test two was conducted using 2 SBI rigs facing each other inside the test room (doubling the fuel loading), aiming for under-ventilated flaming.
The two configurations were successful in achieving both well-ventilated and under-ventilated flaming, as shown by the measured equivalence ratios (obtained using a phi meter designed and built for these experiments). The findings show that doubling the fuel loading successfully forces under-ventilated flaming conditions. This method can therefore be used when trying to replicate post-flashover conditions in future ISO 9705 room corner tests. The radiative heat generated by the two SBI rigs facing each other produced a much higher overall heat release, resulting in a more severe fire. The method successfully allowed accurate measurement of the smoke toxicity produced by the PU foam in terms of simple gases such as oxygen depletion, CO, and CO2. Overall, the proposed test modifications improve the ability to measure the smoke toxicity of materials under different fire conditions on a large scale.
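The ventilation conditions above are characterized by the global equivalence ratio, φ = (fuel/air)actual ÷ (fuel/air)stoichiometric, with φ > 1 indicating under-ventilated (fuel-rich) flaming. The sketch below uses hypothetical mass flows and a hypothetical stoichiometric air-to-fuel ratio, not values measured in these tests:

```python
def equivalence_ratio(m_fuel, m_air, stoich_air_fuel):
    """Global equivalence ratio phi from fuel and air mass flow rates (kg/s).

    stoich_air_fuel is the stoichiometric air-to-fuel mass ratio for the fuel.
    """
    return (m_fuel / m_air) * stoich_air_fuel

# Hypothetical flows: phi > 1 signals under-ventilated flaming.
phi = equivalence_ratio(m_fuel=0.012, m_air=0.10, stoich_air_fuel=9.5)
print(round(phi, 2))
```

A phi meter estimates the same quantity by fully oxidizing a sampled gas stream and measuring the resulting oxygen depletion, which is what allowed the two test configurations to be classified as well- or under-ventilated.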

Keywords: flammability, ISO9705, large-scale testing, polyurethane, smoke toxicity

Procedia PDF Downloads 74
1390 Analysis of Interparticle Interactions in High Waxy-Heavy Clay Fine Sands for Sand Control Optimization

Authors: Gerald Gwamba

Abstract:

Formation and oil well sand production is one of the greatest and oldest concerns of the oil and gas industry. Produced sand may range from very small, limited amounts to far higher levels that can plug the pore spaces near the perforations or block production at surface facilities. Therefore, timely and reliable investigation of the conditions leading to the onset of sanding, and quantification of sanding during production, is imperative. The challenges of sand production are even greater when producing from waxy and heavy wells with clay fine sands (WHFC). Existing research argues that waxy and heavy hydrocarbons exhibit quite different characteristics, waxy crudes being more paraffinic while heavy crude oils are more asphaltenic. Moreover, the combined effect of WHFC conditions presents more complexity in production than the individual effects considered separately. Research on a combined high WHFC system could better represent field conditions, where a one-sided view of individual effects on sanding has been argued to be misrepresentative, since in the field all factors act together. Recognizing the limited customized research on sand production under the combined effect of WHFC, our research applies the Design of Experiments (DOE) methodology, based on the latest literature, to analyze the relationship between various interparticle factors and selected sand control methods. Our research aims to develop a better understanding of how the combined effect of interparticle factors, including strength, cementation, particle size, and production rate, among others, could assist in the design of an optimal sand control system for WHFC well conditions.
In this regard, we seek to answer the following research question: How does the combined effect of interparticle factors affect the optimization of sand control systems for WHFC wells? Results from experimental data collection will inform a better-justified sand control design for WHFC. In doing so, we hope to contribute to the debate around earlier findings arguing that sand production could potentially enhance well self-permeability through new flow channels created by the loosening and detachment of sand grains. We hope that our research will contribute to future sand control designs capable of adapting to flexible production adjustments in controlled sand management. This paper presents results that are part of ongoing research towards the authors' PhD project on the optimization of sand control systems for WHFC wells.
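A DOE study over the interparticle factors named above typically starts by enumerating the experimental runs. A minimal full-factorial sketch (the two-level settings below are hypothetical placeholders, not the study's actual factor levels):

```python
from itertools import product

# Hypothetical two-level settings for four interparticle factors (illustrative only)
factors = {
    "strength":        ["low", "high"],
    "cementation":     ["weak", "strong"],
    "particle_size":   ["fine", "coarse"],
    "production_rate": ["low", "high"],
}

# Full-factorial design: every combination of factor levels becomes one run
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 2^4 = 16 experimental runs
```

With more factors or levels, a fractional-factorial subset of these runs is usually chosen instead, trading some interaction information for fewer experiments.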

Keywords: waxy-heavy oils, clay-fine sands, sand control optimization, interparticle factors, design of experiments

Procedia PDF Downloads 130
1389 Characterization of Phenolic Compounds from Carménère Wines during Aging with Oak Wood (Staves, Chips and Barrels)

Authors: E. Obreque-Slier, J. Laqui-Estaña, A. Peña-Neira, M. Medel-Marabolí

Abstract:

Wine is an important source of polyphenols. Red wines show important concentrations of nonflavonoid (gallic acid, ellagic acid, caffeic acid and coumaric acid) and flavonoid compounds [(+)-catechin, (-)-epicatechin, (+)-gallocatechin and (-)-epigallocatechin]. However, significant variability in the quantitative and qualitative distribution of chemical constituents in wine is to be expected, depending on an array of important factors such as varietal differences within Vitis vinifera and cultural practices. It has been observed that Carménère grapes present a differential composition and evolution of phenolic compounds when compared to other varieties, and specifically to Cabernet Sauvignon grapes. Likewise, among cultural practices, aging in contact with oak wood is a highly relevant factor. The extraction of polyphenolic compounds from oak wood into wine during aging produces both qualitative and quantitative changes. Recently, many new techniques have been introduced in winemaking. One of these involves putting new pieces of wood (oak chips or inner staves) into inert containers. It offers distinct and previously unavailable flavour advantages, as well as new options in wine handling. To the best of our knowledge, there is no information about the behaviour of Carménère wines (the emblematic Chilean cultivar) in contact with oak wood. In addition, the effect of aging time and wood product (barrels, chips or staves) on the phenolic composition of Carménère wines has not been studied. This study aims to characterize the condensed and hydrolyzable tannins of Carménère wines during aging with staves, chips and barrels of French oak wood. The experimental design was completely randomized with two independent factors: aging time (0-12 months) and format of wood (barrel, chips and staves).
The wines were characterized by spectrophotometric (total tannins and fractionation of proanthocyanidins into monomers, oligomers and polymers) and HPLC-DAD (ellagitannins) analyses. The wines in contact with the different oak wood products showed a similar content of total tannins during the study, while the control wine (without oak wood) presented a lower content of these compounds. In addition, the polymeric proanthocyanidin fraction was the most abundant and the monomeric fraction the least abundant in all treatments at both sample dates. However, significant differences in each fraction were observed between wines in contact with barrels, chips and staves at two sample dates. Finally, the wine from barrels presented the highest content of ellagitannins from the fourth to the last sample date. In conclusion, the use of alternative formats of oak wood affects the chemical composition of wines during aging, and these enological products are an interesting alternative for contributing tannins to wine.

Keywords: enological inputs, oak wood aging, polyphenols, red wine

Procedia PDF Downloads 155
1388 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments

Authors: Skyler Kim

Abstract:

An early diagnosis of leukemia has always been a challenge for doctors and hematologists. On a worldwide basis, approximately 350,000 new cases were reported in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these is the AI approach, which has become a major trend in recent years, and several research groups have been working on developing such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger datasets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We selected acute lymphocytic leukemia to develop our diagnostic system since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia. The results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15135 images in total, of which 8491 are images of abnormal cells and 5398 are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger datasets. The proposed diagnostic system detects and classifies leukemia. Unlike other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50.
Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, features fused from specific abstraction layers can be treated as auxiliary features and lead to further improvement of the classification accuracy. Features extracted from the lower levels are combined into higher-dimension feature maps to help improve the discriminative capability of intermediate features and also mitigate the problem of vanishing or exploding network gradients. By comparing VGG19, ResNet50 and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model's performance, with their pros and cons, will be presented at the conference.
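The fusion step can be shown schematically. In the snippet below the two arrays are stand-ins for the pooled feature vectors the two backbones would produce (512-d and 2048-d are the standard pooled output sizes of VGG19 and ResNet50); actual backbone inference and the classification head are omitted, so this is only a sketch of the concatenation idea, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for per-image pooled feature vectors from each frozen backbone
vgg_features = rng.standard_normal(512)     # VGG19 pooled features (512-d)
resnet_features = rng.standard_normal(2048) # ResNet50 pooled features (2048-d)

# Hybrid fusion: concatenate the two feature vectors into one joint representation
fused = np.concatenate([vgg_features, resnet_features])
print(fused.shape)  # (2560,)

# The fused vector would then feed a classifier head (normal vs. abnormal cell)
```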

Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning

Procedia PDF Downloads 186
1387 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical passwords have existed for decades. Their major advantage is that they are easier to remember than alphanumeric passwords. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute-force attacks. Graphical passwords are also highly susceptible to shoulder surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by another user for a fixed duration. Three different durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. There were 74, 57, 50, and 44 participants in Sessions 1 through 4, respectively. Machine learning algorithms were applied to determine whether the person entering a password is a genuine user or an imposter. Five algorithms were deployed to compare authentication performance: Decision Trees, Linear Discriminant Analysis, Naive Bayes, Support Vector Machines (SVMs) with a Gaussian radial basis kernel, and K-Nearest Neighbors. Gesture-based password features vary from one entry to the next, which makes distinguishing a creator from an intruder difficult.
For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication with a timer of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with Gaussian Radial Basis Kernel outperform other ML algorithms for gesture-based password authentication. Results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from the gesture-based passwords lead to less vulnerable user authentication.
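The abstract does not state which normalization was applied to the four features; min-max scaling to [0, 1] is a common choice, sketched below on hypothetical raw values:

```python
def min_max_normalize(values):
    """Scale a list of raw feature values linearly onto [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # constant feature: map everything to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical raw values of one feature (e.g., password speed) across entries
speeds = [1.2, 3.4, 2.0, 5.0]
print(min_max_normalize(speeds))
```

Each of the four features (score, length, speed, size) would be normalized independently this way before being fed to the classifiers.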

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 102
1386 Self-Supervised Learning for Hate-Speech Identification

Authors: Shrabani Ghosh

Abstract:

Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious, so automatic methods based on machine learning are the only practical alternative. Previous works have performed sentiment analysis over social media in supervised, semi-supervised, and unsupervised manners. Semi-supervised domain adaptation has also been explored in NLP, where the source and target domains differ. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers such as BERT and RoBERTa are further pre-trained with masked language modeling (MLM) and then fine-tuned to perform text classification. In previous work, hate speech detection has been explored on Gab.ai, a free speech platform described as hosting extremism of varying degrees. In the domain adaptation process, Twitter data is used as the source domain and Gab data as the target domain. The performance of domain adaptation also depends on the cross-domain similarity. Different distance measures, such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL, have been used to estimate domain similarity; in-domain distances are expected to be small and between-domain distances large. Previous findings show that a pretrained masked language model fine-tuned on a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, while its out-of-domain accuracy on Gab data drops to 56.53%. Recently, self-supervised learning has received a lot of attention, as it is particularly applicable when labeled data are scarce.
A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance, as it exploits extracted context words during training. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. The framework initially classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier, as well as with optimized outcomes obtained from different optimization techniques.
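Two of the domain-similarity measures named above are simple to state concretely. A minimal sketch on tiny hypothetical 2-d post embeddings (with a linear kernel, MMD reduces to the squared distance between the two domains' mean embeddings):

```python
import math

def cosine_distance(x, y):
    """1 - cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return 1.0 - dot / (nx * ny)

def linear_mmd(X, Y):
    """Linear-kernel MMD: squared distance between the domain means."""
    mx = [sum(col) / len(X) for col in zip(*X)]
    my = [sum(col) / len(Y) for col in zip(*Y)]
    return sum((a - b) ** 2 for a, b in zip(mx, my))

# Hypothetical 2-d embeddings of posts from the source (Twitter) and target (Gab) domains
twitter = [[0.9, 0.1], [0.8, 0.2]]
gab = [[0.2, 0.8], [0.1, 0.9]]
print(cosine_distance(twitter[0], gab[0]))          # between-domain distance: large
print(linear_mmd(twitter, twitter), linear_mmd(twitter, gab))  # in-domain 0, cross-domain large
```

As expected for dissimilar domains, the in-domain MMD is zero while the cross-domain MMD is large; with real embeddings the gap between these values estimates how hard the Twitter-to-Gab transfer will be.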

Keywords: attention learning, language model, offensive language detection, self-supervised learning

Procedia PDF Downloads 103
1385 Physical Function and Physical Activity Preferences of Elderly Individuals Admitted for Elective Abdominal Surgery: A Pilot Study

Authors: Rozelle Labuschagne, Ronel Roos

Abstract:

Individuals often experience a reduction in physical function, quality of life, and basic activities of daily living after surgery. This is especially true for high-risk patients, particularly elderly and frail individuals. Little is known about the physical function, physical activity preferences, and factors associated with the six-minute walk test of elderly individuals undergoing elective abdominal surgery in South Africa. Such information is important for designing effective prehabilitation physiotherapy programs prior to elective surgery. The purpose of the study was to describe the demographic profile and physical function of elderly patients scheduled for elective surgery, and to determine factors associated with their six-minute walk test distance. A cross-sectional descriptive study was conducted in which elderly patients older than 60 years of age scheduled for elective abdominal surgery were consecutively sampled at a private hospital in Pretoria, South Africa. Participants' demographics were collected, and physical function was assessed with the Functional Comorbidity Index (FCI), de Morton Mobility Index (DEMMI), Lawton-Brody Instrumental Activities of Daily Living Scale (IADL), and six-minute walk test (6MWT). Descriptive and inferential statistics were used for data analysis with IBM SPSS 25. A p-value ≤ 0.05 was deemed statistically significant. The pilot sample consisted of 12 participants (11 female, 91.7%; 1 male, 8.3%) with a mean age of 65.8 (±4.5) years and a body mass index of 28 (±4.2) kg/m²; one participant (8.3%) was a current smoker and four (33.3%) had a smoking history. Nine (75%) participants lived independently at home and three (25%) had caregivers. Participants reported walking (n=6, 50%), stretching exercises (n=1, 8.3%), household chores and gardening (n=2, 16.7%), and biking/swimming/running (n=1, 8.3%) as physical activity preferences.
Physical function findings of the sample were: mean FCI score 3 (±1.1), DEMMI score 81.1 (±14.9), IADL 95 (±17.3), and 6MWT distance 435.50 m (IQR 364.75-458.50), with 81.8% (IQR 64.4%-87.5%) of the predicted 6MWT distance achieved. A strong negative correlation was observed between 6MWT distance walked and FCI (r = -0.729, p = 0.007). The majority of study participants reported incorporating some form of physical activity into their daily lives. Most participants did not achieve their predicted 6MWT distance, indicating less than optimal physical function capacity. The number of comorbidities, as determined by the FCI, was associated with the distance participants could walk in the 6MWT. The results of this pilot study could be used to indicate which elderly individuals would benefit most from a pre-surgical rehabilitation program. The main goal of such a program would be to improve physical function capacity as measured by the 6MWT. Surgeons could refer patients based on age and number of comorbidities, as determined by the FCI, to potentially improve surgical outcomes.
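The reported r = -0.729 between 6MWT distance and FCI is a correlation coefficient; assuming Pearson's r (the abstract does not name the method), the computation is sketched below on hypothetical paired observations, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired observations: FCI comorbidity score vs. 6MWT distance (m)
fci = [1, 2, 3, 4, 5]
distance = [480, 455, 430, 400, 360]
print(round(pearson_r(fci, distance), 3))  # strongly negative, as in the study
```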

Keywords: abdominal surgery, elderly, physical function, six-minute walk test

Procedia PDF Downloads 196
1384 Parents’ Experiences in Using Mobile Tablets with Their Child with Autism to Encourage the Development of Social Communication Skills: The Development of a Parents’ Guide

Authors: Chrysoula Mangafa

Abstract:

Autism is a lifelong condition that affects how individuals interact with others and make sense of the world around them. The two core difficulties associated with autism are difficulties in social communication and interaction, and the manifestation of restricted, repetitive patterns of behaviour. However, children with autism may also have many talents and special interests, among which is their affinity with digital technologies. Despite the increasing use of mobile tablets in schools and homes, and children's motivation to use them, there is limited guidance on how to use tablets to teach specific skills to children with autism. This study aims to fill this gap by providing guidelines on the ways iPads and other tablets can be used by parents or carers and their child at home to support the development of social communication skills. Semi-structured interviews with 10 parents of primary-school-aged children with autism were conducted with the aim of exploring their experiences in using mobile devices, such as iPads and Android tablets, and social activities with their children to create opportunities for social communication development. The interview involved questions about the parents' knowledge and experience of autism, their understanding of social communication skills, the use of technology at home, and their links with the child's school. Qualitative analysis of the interviews showed that parents used a variety of strategies to boost their child's social communication skills, among them a) the use of communication symbols, b) the use of the child's special interest as a motivator to gain their attention, and c) allowing the child time to respond. It was also found that parents engaged their child in joint activities such as cooking, role play, and creating social stories together on the device.
Seven out of ten parents mentioned that the tablet is a motivating tool that can be used to teach social communication skills; nonetheless, all parents raised concerns over screen time and their child's difficulties with sharing. The need for training and advice, as well as for building stronger links with their child's school, was highlighted. In particular, parents said they would welcome recommendations on how to address their child's difficulties in initiating or sustaining a conversation, taking turns and sharing, understanding other people's feelings and facial expressions, and showing interest in other people. The findings of this study resulted in the development of a parents' guide based on evidence-based practice and the participants' experiences and concerns. The proposed guidelines aim to help parents feel more confident in using the tablet with their child in more collaborative ways. In particular, the guide offers recommendations on how to develop verbal and non-verbal communication, gives examples of tablet-based activities for interacting and creating things together, and offers suggestions on how to provide a worry-free tablet experience and how to connect with the school.

Keywords: families, perception and cognition in early development, school-age intervention, social development

Procedia PDF Downloads 160
1383 Biocompatibility of Calcium Phosphate Coatings With Different Crystallinity Deposited by Sputtering

Authors: Ekaterina S. Marchenko, Gulsharat A. Baigonakova, Kirill M. Dubovikov, Igor A. Khlusov

Abstract:

NiTi alloys combine biomechanical and biochemical properties, which makes them a strong candidate for medical applications. However, these alloys have a serious problem: the release of Ni from the matrix. Ni ions are known to be toxic to living tissues and leach from the matrix into the tissues surrounding the implant due to corrosion after prolonged use. To prevent the release of Ni ions, strong corrosion-resistant coatings are usually used. Titanium nitride-based coatings are excellent corrosion inhibitors and also have good bioactive properties. The biochemical compatibility of the surface can be improved further by depositing an additional layer consisting of elements such as calcium and phosphorus. Ca and P ions form various calcium phosphate phases, which are present in the mineral part of human bone; we therefore expect these elements to promote osteogenesis and osseointegration. In view of the above, the aim of this study is to investigate the effect of crystallinity on the biocompatibility of a two-layer coating deposited on a NiTi substrate by sputtering. The first step of the research, after polishing the NiTi, is the layer-by-layer deposition of Ti-Ni-Ti by magnetron sputtering and the subsequent synthesis of this composite in a nitrogen atmosphere at 900 °C. The total thickness of the corrosion-resistant layer is 150 nm. Plasma-assisted RF sputtering was then used to deposit a bioactive film on the titanium nitride layer, using a Ca-P powder target. We deposited three types of Ca-P layers with different crystallinity and compared them in terms of cytotoxicity. One group of samples had no Ca-P coating and was used as a control. Different crystallinity was obtained by varying sputtering parameters such as bias voltage, plasma source current, and pressure.
XRD analysis showed that all coatings are calcium phosphate, but the sample obtained at maximum bias voltage and plasma source current and minimum pressure has the most intense peaks from the coating phase. SEM and EDS showed that all three coatings have a homogeneous and dense structure without cracks and consist of calcium, phosphorus, and oxygen. Cytotoxicity tests carried out on the three types of Ca-P-coated samples and the control group showed that the control sample and the sample with the Ca-P coating obtained at maximum bias voltage and plasma source current and minimum pressure had the lowest proportion of dead cells on the surface, around 11 ± 4%. The other two types of Ca-P-coated samples had 40 ± 9% and 21 ± 7% dead cells on the surface. It can therefore be concluded that those two sputtering modes have a negative effect on the corrosion resistance of the samples as a whole, while the third sputtering mode does not affect the corrosion resistance and shows the same level of cytotoxicity as the control. The most suitable sputtering mode is therefore the third, with maximum bias voltage and plasma source current and minimum pressure.

Keywords: calcium phosphate coating, cytotoxicity, NiTi alloy, two-layer coating

Procedia PDF Downloads 65
1382 Modeling of Alpha-Particles’ Epigenetic Effects in Short-Term Test on Drosophila melanogaster

Authors: Z. M. Biyasheva, M. Zh. Tleubergenova, Y. A. Zaripova, A. L. Shakirov, V. V. Dyachkov

Abstract:

In recent years, interest in ecogenetic and biomedical problems related to the effects of radon and its daughter decay products on the population has increased significantly. Of particular interest is the assessment of the consequences of irradiation in radon-hazardous areas, which include the Almaty region because of the large number of tectonic faults that enhance radon emanation. Accordingly, the purpose of this work was to study the genetic effects of exposure to above-normal radon doses using an alpha-radiation model. Irradiation does not affect the growth of the cell, but rather its ability to differentiate. In addition, irradiation can lead to somatic mutations, morphoses, and modifications. These damages most likely result from changes in the composition of cellular substances. Such changes are epigenetic, since they affect the regulatory processes of ontogenesis. Variability in the expression of regulatory genes refers to conditional mutations that modify the formation of signs of intraspecific similarity. Characteristic features of these conditional mutations are the dominant type of their manifestation, phenotypic asymmetry, and their instability across generations. Currently, the terms "morphosis" and "modification" are used to describe epigenetic variability, which is maintained in Drosophila melanogaster cultures using linked X-chromosomes, with the mutant X-chromosome transmitted along the paternal line. In this paper, we investigated the epigenetic effects of alpha particles, whose source in nature is mainly radon and its daughter decay products. In the experiment, the isotope plutonium-238 (Pu-238), emitting alpha particles with an energy of about 5.5 MeV, was used as the alpha source. In the first generation (F1), deformities or morphoses were found, which can be called "radiation syndromes" or mutations whose manifestation is similar to the pleiotropic action of genes.
The proportion of morphoses was 1.8% in the experiment and 0.4% in the control. The morphoses in the flies of the first and second generations appeared as black spots or melanomas on different parts of the imago body; "generalized" melanomas; curled or curved wings; a shortened wing; a bubble on one wing; the absence of one wing; deformation of the thorax; interruption and disruption of tergite patterns; disruption of the distribution of eye facets and bristles; and absence of pigmentation of the second and third legs. Statistical analysis by the chi-square method showed the significance of the difference between experiment and control at P ≤ 0.01. On this basis, it can be considered that alpha particles, which in the environment are mainly generated by radon and its isotopes, have a mutagenic effect that manifests itself mainly in the formation of morphoses or deformities.
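The chi-square comparison of the two morphosis rates can be reproduced on a 2x2 table. The counts below are hypothetical, chosen only to match the reported rates (1.8% vs. 0.4%) at an assumed sample size of 1000 flies per group, which the abstract does not state:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 18/1000 morphoses in the irradiated group (1.8%),
# 4/1000 in the control group (0.4%) -- illustrative sample sizes only
chi2 = chi_square_2x2(18, 982, 4, 996)
print(round(chi2, 2))  # exceeds 6.63, the critical value at P = 0.01 for df = 1
```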

Keywords: alpha-radiation, genotoxicity, morphoses, radioecology, radon

Procedia PDF Downloads 150
1381 Processing, Nutritional Assessment and Sensory Evaluation of Bakery Products Prepared from Orange Fleshed Sweet Potatoes (OFSP) and Wheat Composite Flours

Authors: Hategekimana Jean Paul, Irakoze Josiane, Ishimweyizerwe Valentin, Iradukunda Dieudonne, Uwanyirigira Jeannette

Abstract:

Orange-fleshed sweet potatoes (OFSP) are widely grown, are plentiful in rural and urban local markets, and make a considerable contribution to the reduction of food insecurity in Rwanda. However, postharvest loss of this commodity is a critical challenge due to its high perishability. Several research activities have been conducted on how fresh food commodities can be transformed into extended-shelf-life food products to prevent postharvest losses, but such work has not yet been well studied in Rwanda. The aim of the present study was to process baked products from OFSP combined with wheat composite flour and to assess the nutritional content and consumer acceptability of the newly developed products. The perishability of OFSP, and the resulting shortage during the off season, can be mitigated by producing cake, doughnuts, and bread with OFSP puree or flour. Doughnuts and bread were made by preparing OFSP puree, mixing it with the other ingredients into a dough, and then frying or baking; for cake, OFSP was dried in a solar dryer to obtain a flour, which was mixed with wheat flour and other ingredients into a dough and baked. For each product, one control and three experimental samples were prepared (three different ratios of 30, 40, and 50% OFSP, with the remaining percentage wheat flour). All samples, including the control, were analyzed for consumer acceptability (sensory attributes). The most preferred samples (one sample per product, with its control, for each OFSP variety) were analyzed for nutritional composition along with the control sample. The cake from the Terimbere variety and the bread from Gihingumukungu supplemented with 50% OFSP flour or puree, respectively, were the most acceptable, and the doughnut from the Vita variety was also highly accepted at 50% OFSP supplementation. The moisture, ash, protein, fat, fiber, total carbohydrate, vitamin C, reducing sugar, and mineral (sodium, potassium, and phosphorus)
content differed among products. The cake was rich in fiber (14.71%), protein (6.590%), and vitamin C (19.988 mg/100 g) compared to the other samples, while the bread was richest in reducing sugar (12.71 mg/100 g) compared to the cake and doughnut, and the doughnut was richest in fat (6.89%). In the sensory analysis, the doughnut was most highly accepted at a ratio of 60:40, while the cake was least accepted at a ratio of 50:50. The proximate composition and mineral content of all the OFSP products were significantly higher than those of the control samples.

Keywords: post-harvest loss, OFSP products, wheat flour, sensory evaluation, proximate composition

Procedia PDF Downloads 60
1380 The Impact of Team Heterogeneity and Team Reflexivity on Entrepreneurial Decision-Making: An Empirical Study in China

Authors: Chang Liu, Rui Xing, Liyan Tang, Guohong Wang

Abstract:

Entrepreneurial actions are based on entrepreneurial decisions. The quality of decisions influences entrepreneurial activities and subsequent new venture performance. The uncertainty of entrepreneurial environments puts heightened demands on the team as a whole and on each team member. A diverse team composition provides rich information on which a team can draw when making complex decisions. However, team heterogeneity may cause emotional conflicts, which are adverse to team outcomes. Thus, the effects of team heterogeneity on team outcomes are complex. Although team heterogeneity is an essential factor influencing entrepreneurial decision-making, there is a lack of empirical analysis of the conditions under which team heterogeneity plays a positive role in promoting decision-making quality. Entrepreneurial teams constantly struggle with complex tasks, and how a team shapes its teamwork is key to resolving recurring issues. As a collective regulatory process, team reflexivity is characterized by continuous joint evaluation and discussion of team goals, strategies, and processes, and by adapting them to current or anticipated circumstances. It enables diversified information to be shared and overtly discussed. Instead of interpreting opposing opinions as hostile, team members take them as useful insights from different perspectives. Team reflexivity leads to better integration of expertise and avoids the interference of negative emotions and conflict. Therefore, we propose that team reflexivity is a conditional factor that influences the impact of team heterogeneity on high-quality entrepreneurial decisions. In this study, we identify team heterogeneity as a crucial determinant of entrepreneurial decision quality. Integrating the literature on decision-making and team heterogeneity, we investigate the relationship between team heterogeneity and entrepreneurial decision-making quality, treating team reflexivity as a moderator.
We tested our hypotheses using the hierarchical regression method and data gathered from 63 teams and 205 individual members from 45 new firms in China's first-tier cities such as Beijing, Shanghai, and Shenzhen. This research found that both teams' education heterogeneity and teams' functional background heterogeneity were significantly positively related to entrepreneurial decision-making quality, and the positive relationship was stronger in teams with a high level of team reflexivity. In contrast, teams' specialization-of-education heterogeneity was negatively related to decision-making quality, and this negative relationship was weaker in teams with a high level of team reflexivity. We offer two contributions to the decision-making and entrepreneurial team literature. First, our study enriches understanding of the role of entrepreneurial team heterogeneity in entrepreneurial decision-making quality. Unlike previous entrepreneurial decision-making research, which focuses more on the decision-making modes of entrepreneurs and top management teams, this study is a significant attempt to highlight that entrepreneurial team heterogeneity makes a unique contribution to generating high-quality entrepreneurial decisions. Second, this study introduced team reflexivity as the moderating variable, to explore the boundary conditions under which entrepreneurial team heterogeneity plays its role.
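The moderated hierarchical-regression logic described above can be sketched as follows. This is an illustrative reconstruction only: the variable names and the simulated data below are hypothetical stand-ins for the study's survey measures, not the authors' data or code.

```python
import numpy as np

# Simulated stand-ins for the survey measures (standardized): 'het' =
# functional background heterogeneity, 'refl' = team reflexivity,
# 'quality' = decision-making quality. Not the authors' data.
rng = np.random.default_rng(42)
n = 63                              # number of teams in the study
het = rng.normal(size=n)
refl = rng.normal(size=n)
quality = 0.4 * het + 0.3 * refl + 0.25 * het * refl \
    + rng.normal(scale=0.5, size=n)

def r_squared(X, y):
    """OLS fit with intercept; returns R^2 and the coefficient vector."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var(), beta

# Hierarchical steps: main effects first, then add the moderation term
r2_main, _ = r_squared(np.column_stack([het, refl]), quality)
r2_mod, beta = r_squared(np.column_stack([het, refl, het * refl]), quality)
print(f"R2 main={r2_main:.3f}  with interaction={r2_mod:.3f}  "
      f"interaction coef={beta[3]:+.3f}")
```

A positive interaction coefficient with an R² gain over the main-effects model is what "the positive relation was stronger under high reflexivity" corresponds to in this setup.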

Keywords: decision-making quality, entrepreneurial teams, education heterogeneity, functional background heterogeneity, specialization of education heterogeneity

Procedia PDF Downloads 118
1379 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows

Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican

Abstract:

This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory’s efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty anticipating issues in the clinical workflow until tests are running late or in error, at which point it can be difficult to pinpoint the cause and even more difficult to predict further issues that may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. printing specimen labels or activating a sufficient number of technicians. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to see clearly where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error.
Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them and their urgency level. This would allow any issues to be resolved quickly; in the case of potentially late samples, it could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. ‘Bots’ would be used to control the flow of specimens through each step of the process. Like existing software agent technologies, these bots would be configurable in order to simulate different situations that may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.
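The traffic-light colour coding described above can be sketched in a few lines. The thresholds and stage names below are hypothetical; the paper's actual simulator is a JavaScript single-page application, so this Python fragment only illustrates the classification logic.

```python
def flow_status(queued: int, slow: int, critical: int) -> str:
    """Map the number of specimens queued at a workflow stage to a
    traffic-light colour; thresholds are illustrative, not from the paper."""
    if queued >= critical:
        return "red"       # critical flow: bottleneck needs attention
    if queued >= slow:
        return "orange"    # slow flow: stage is falling behind
    return "green"         # normal flow

# Hypothetical snapshot of three workflow stages and their queue lengths
stages = {"reception": 12, "testing": 48, "validation": 90}
print({name: flow_status(q, slow=40, critical=80)
       for name, q in stages.items()})
# → {'reception': 'green', 'testing': 'orange', 'validation': 'red'}
```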

Keywords: laboratory-process, optimization, pathology, computer simulation, workflow

Procedia PDF Downloads 285
1378 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough set theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, and with this increase, computation speed becomes the critical factor. Data reduction is one of the solutions to this problem. Removing redundancy in rough sets can be achieved with a reduct. Many algorithms for generating reducts have been developed, but most of them are only software implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time for both fetching and processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct contains all the attributes of the core. In this paper, the hardware implementation of a two-stage greedy algorithm for finding one reduct is presented. The decision table is used as input. The output of the algorithm is a superreduct, i.e., a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are most frequent in the decision table.
The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct, and ii) the additional first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not a key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Counting the occurrences of each attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and in software were compared. Results show an increase in the speed of data processing.
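A software analogue of the two-stage algorithm can be sketched as follows. This is an illustrative reconstruction based on the description above, not the authors' C or FPGA implementation; the toy decision table is invented for demonstration.

```python
from collections import Counter

def discernibility_entries(table, decision):
    """For each pair of objects with different decision values, collect the
    set of condition attributes on which the two objects differ."""
    entries = []
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            if decision[i] != decision[j]:
                diff = {a for a in range(len(table[i]))
                        if table[i][a] != table[j][a]}
                if diff:
                    entries.append(diff)
    return entries

def core_and_superreduct(table, decision):
    entries = discernibility_entries(table, decision)
    # Stage 1: the core = attributes that appear as singleton entries
    # (the role of the hardware 'singleton detector').
    core = {next(iter(e)) for e in entries if len(e) == 1}
    reduct = set(core)
    # Stage 2: greedily add the most frequent attribute among entries
    # not yet covered, until every entry intersects the reduct.
    uncovered = [e for e in entries if not (e & reduct)]
    while uncovered:
        counts = Counter(a for e in uncovered for a in e)
        best = counts.most_common(1)[0][0]
        reduct.add(best)
        uncovered = [e for e in uncovered if best not in e]
    return core, reduct

# Toy decision table: rows = objects, columns = condition attributes
table = [(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 1, 1)]
decision = [0, 1, 0, 1]
core, reduct = core_and_superreduct(table, decision)
print(sorted(core), sorted(reduct))  # → [2] [2]
```

Here attribute 2 alone discerns every decision-different pair, so the core already forms a reduct; in general stage 2 may add extra (possibly removable) attributes, which is why the output is only a superreduct.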

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 218
1377 Fluorescence-Based Biosensor for Dopamine Detection Using Quantum Dots

Authors: Sylwia Krawiec, Joanna Cabaj, Karol Malecha

Abstract:

Nowadays, progress in the field of analytical methods is of great interest for reliable biological research and medical diagnostics. Classical techniques of chemical analysis, despite many advantages, do not permit immediate results or automation of measurements. Chemical sensors have displaced conventional analytical methods: sensors combine precision, sensitivity, fast response and the possibility of continuous monitoring. A biosensor is a chemical sensor that, in addition to a transducer, also possesses a biologically active material, which is the basis for the detection of specific chemicals in the sample. Each biosensor device consists mainly of two elements: a sensitive element, where receptor-analyte recognition takes place, and a transducer element, which receives the signal and converts it into a measurable one. Through these two elements, biosensors can be divided into two categories: by the recognition element (e.g. immunosensors) and by the transducer (e.g. optical sensors). The working of an optical sensor is based on measuring quantitative changes in the parameters characterizing light radiation; the most often analyzed parameters include amplitude (intensity), frequency and polarization. In a direct method, changes in the optical properties of a compound that reacts with the biological material coated on the sensor are analyzed; in an indirect method, indicators are used whose optical properties change as a result of the transformation of the tested species. The dyes most commonly used in this method are small molecules with an aromatic ring, such as rhodamine; fluorescent proteins, for example green fluorescent protein (GFP); and nanoparticles such as quantum dots (QDs). Quantum dots have, in comparison with organic dyes, much better photoluminescent properties, better bioavailability and chemical inertness. They are semiconductor nanocrystals 2-10 nm in size.
This very limited number of atoms and the ‘nano’ size give QDs their highly fluorescent properties. Rapid and sensitive detection of dopamine is extremely important in modern medicine. Dopamine is a very important neurotransmitter that occurs mainly in the brain and central nervous system of mammals. It is responsible for transmitting information related to movement through the nervous system and plays an important role in processes of learning and memory. Detection of dopamine is significant for diseases associated with the central nervous system, such as Parkinson's disease or schizophrenia. The developed optical biosensor for the detection of dopamine uses graphene quantum dots (GQDs). In such a sensor, dopamine molecules coat the GQD surface; as a result, quenching of fluorescence occurs due to Förster Resonance Energy Transfer (FRET). The changes in fluorescence correspond to specific concentrations of the neurotransmitter in the tested sample, so it is possible to accurately determine the concentration of dopamine in the sample.
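The quantitative step, relating quenched fluorescence to dopamine concentration, is often modelled with a Stern-Volmer relationship, F0/F = 1 + Ksv·C. The sketch below assumes that model and uses invented calibration numbers; the abstract does not specify the sensor's actual calibration procedure.

```python
import numpy as np

# Hypothetical calibration data: GQD fluorescence intensity vs. dopamine
# concentration. Values are invented to follow F = F0 / (1 + Ksv*C) with
# Ksv ≈ 0.1 per µM; they are NOT measurements from the paper.
conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])          # dopamine, µM (assumed)
F0 = 1000.0                                          # intensity, no quencher
F = np.array([1000.0, 909.1, 833.3, 666.7, 500.0])   # quenched intensities

# Stern-Volmer plot: (F0/F - 1) vs. C is linear with slope Ksv
ratio = F0 / F - 1.0
Ksv = np.polyfit(conc, ratio, 1)[0]                  # Stern-Volmer constant

def dopamine_conc(F_sample):
    """Invert the calibration: intensity of an unknown sample -> µM."""
    return (F0 / F_sample - 1.0) / Ksv

print(round(Ksv, 4), round(dopamine_conc(800.0), 2))  # → 0.1 2.5
```

In practice FRET-based quenching can deviate from strict Stern-Volmer linearity at high quencher concentrations, so a real calibration would be validated over the working range.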

Keywords: biosensor, dopamine, fluorescence, quantum dots

Procedia PDF Downloads 362
1376 Teachers’ Instructional Decisions When Teaching Geometric Transformations

Authors: Lisa Kasmer

Abstract:

Teachers’ instructional decisions shape the structure and content of mathematics lessons and influence the mathematics that students are given the opportunity to learn. Therefore, it is important to better understand how teachers make instructional decisions and thus find new ways to help practicing and future teachers give their students a more effective and robust learning experience. Understanding the relationship between teachers’ instructional decisions and their goals, resources, and orientations (beliefs) is important given the heightened focus on geometric transformations in the middle school mathematics curriculum. This work is significant as the development and support of current and future teachers requires more effective ways to teach geometry to their students. The following research questions frame this study: (1) As middle school mathematics teachers plan and enact instruction related to teaching transformations, what thinking processes do they engage in to make decisions about teaching transformations with or without a coordinate system? and (2) How do the goals, resources and orientations of these teachers impact their instructional decisions, and what do these decisions reveal about their understanding of teaching transformations? Teachers and students alike struggle with understanding transformations; many teachers skip or hurriedly teach transformations at the end of the school year. However, transformations are an important mathematical topic, as this topic supports students’ understanding of geometric and spatial reasoning. Geometric transformations are a foundational concept in mathematics, not only for understanding congruence and similarity but for proofs, algebraic functions, and calculus. Geometric transformations also underpin the secondary mathematics curriculum, as features of transformations transfer to other areas of mathematics.
Teachers’ instructional decisions, in terms of the goals, orientations, and resources that support them, were analyzed using open coding. Open coding is recognized as an initial first step in qualitative analysis, where comparisons are made and preliminary categories are considered. Initial codes and categories from current research on the thinking processes related to the decisions teachers make while planning and reflecting on lessons were also noted. Surfacing ideas and additional themes common across teachers were compared and analyzed while seeking patterns. Finally, attributes of teachers’ goals, orientations and resources were identified in order to begin to build a picture of the reasoning behind their instructional decisions. These categories became the basis for the organization and conceptualization of the data. Preliminary results suggest that teachers often rely on their own orientations about teaching geometric transformations. These beliefs are underpinned by the teachers’ own mathematical knowledge related to teaching transformations. When teachers do not have a robust understanding of transformations, they are limited by this lack of knowledge. These shortcomings impact students’ opportunities to learn, and thus disadvantage students’ understanding of transformations. Teachers’ goals are also limited by their paucity of knowledge regarding transformations, as these goals do not fully represent the range of comprehension a teacher needs to teach this topic well.
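As a concrete illustration of the "transformations with a coordinate system" referred to in the research questions (a sketch for context, not material from the study): a 90° counter-clockwise rotation about the origin maps each vertex (x, y) to (-y, x), which can be written as a matrix product.

```python
import numpy as np

# 90° counter-clockwise rotation about the origin: (x, y) -> (-y, x).
# Representing the rotation as a matrix makes the link between geometric
# transformations and algebraic functions explicit.
R = np.array([[0, -1],
              [1,  0]])

triangle = np.array([[1, 0], [3, 0], [1, 2]])   # vertices as (x, y) rows
rotated = triangle @ R.T                        # apply R to every vertex
print(rotated.tolist())  # → [[0, 1], [0, 3], [-2, 1]]
```

Four successive applications of R return every vertex to its starting position, a property students can verify either algebraically or by tracing the figure on the coordinate plane.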

Keywords: coordinate plane, geometric transformations, instructional decisions, middle school mathematics

Procedia PDF Downloads 87
1375 Shale Gas Accumulation of Over-Mature Cambrian Niutitang Formation Shale in Structure-Complicated Area, Southeastern Margin of Upper Yangtze, China

Authors: Chao Yang, Jinchuan Zhang, Yongqiang Xiong

Abstract:

The Lower Cambrian Niutitang Formation shale (NFS), deposited in a marine deep-shelf environment in the Southeast Upper Yangtze (SUY), possesses an excellent source-rock basis for shale gas generation; however, it is currently challenged by being over-mature and strongly tectonically deformed, leading to much uncertainty about its gas-bearing potential. With emphasis on the shale gas enrichment of the NFS, analyses were made based on the regional gas-bearing differences obtained from field gas-desorption testing of 18 geological survey wells across the study area. Results show that the NFS bears a low gas content of 0.2-2.5 m³/t, and the eastern region of the SUY is higher in gas content than the western region. Moreover, the methane fraction presents a similar regional differentiation: less than 10 vol.% in the western region and generally more than 70 vol.% in the eastern region. Through geological analysis, the following conclusions are drawn. The depositional environment determines the gas-enriching zones. In the western region, the Dengying Formation, underlying the NFS in unconformable contact, is mainly platform-facies dolomite with caves and thereby bears poor gas-sealing ability, whereas the Laobao Formation underlying the NFS in the eastern region is a set of siliceous rocks of shelf-slope facies, which can effectively prevent the shale gas from escaping from the NFS. The tectonic conditions control the gas-enriching bands in the SUY, which is located in the fold zones formed by the thrust of the South China plate towards the Sichuan Basin. Compared with the western region, located in trough-like folds, the eastern region at the fold-thrust belts was uplifted earlier and deformed more weakly, resulting in a relatively lower maturity level and relatively slight tectonic deformation of the NFS. Faults determine whether shale gas can accumulate on a large scale.
Four deep and large normal faults in the study area cut through the Niutitang Formation into the Sinian strata, directly causing a large spillover of natural gas in the adjacent areas. Among the secondary faults developed within the shale formation, reverse faults generally have a positive influence on shale gas accumulation, while normal faults have the opposite influence. Overall, the shale gas enrichment targets of the NFS are the areas with a certain thickness of siliceous rocks at the base of the Niutitang Formation, near the margin of the paleo-uplift, with less developed faults. These findings provide direction for shale gas exploration in South China and references for areas with similar geological conditions all over the world.

Keywords: over-mature marine shale, shale gas accumulation, structure-complicated area, Southeast Upper Yangtze

Procedia PDF Downloads 143
1374 Stochastic Nuisance Flood Risk for Coastal Areas

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

The U.S. Federal Emergency Management Agency (FEMA) developed flood maps based on experts’ experience and estimates of the probability of flooding. Current flood-risk models evaluate flood risk with regional and subjective measures, without accounting for the impact of torrential rain and nuisance flooding at the neighborhood level. Nuisance flooding occurs in small areas of the community, where a few streets or blocks are routinely impacted. This type of flooding event occurs when a torrential rainstorm combined with high tide and sea level rise temporarily exceeds a given threshold; in South Florida, this threshold is 1.7 ft above Mean Higher High Water (MHHW). The National Weather Service defines torrential rain as rain falling at a rate greater than 0.3 inches per hour or three inches in a single day. Data from the Florida Climate Center for 1970 to 2020 show 371 events with more than 3 inches of rain in a day over 612 months. The purpose of this research is to develop a data-driven method to determine comprehensive analytical damage-avoidance criteria that account for nuisance flood events at the single-family home level. The method developed uses the Failure Mode and Effects Analysis (FMEA) method from the American Society for Quality (ASQ) to estimate the Damage Avoidance (DA) preparation for a 1-day, 100-year storm. The Consequence of Nuisance Flooding (CoNF) is estimated from community mitigation efforts to prevent nuisance flooding damage. The Probability of Nuisance Flooding (PoNF) is derived from the frequency and duration of torrential rainfall causing delays and community disruptions to daily transportation, human illnesses, and property damage. Urbanization and population changes are related to the U.S. Census Bureau's annual population estimates.
Data collected by the United States Department of Agriculture (USDA) Natural Resources Conservation Service’s National Resources Inventory (NRI), and locally by the South Florida Water Management District (SFWMD), track development and land use/land cover changes over time. The intent is to include temporal trends in population density growth and their impact on land development. Results from this investigation provide the risk of nuisance flooding as a function of CoNF and PoNF for coastal areas of South Florida. The data-based criterion gives local municipalities awareness of their flood-risk assessment and insight into flood management actions and watershed development.
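The frequency component of PoNF can be illustrated with the record cited above (371 torrential-rain events over a 612-month record). The conversion below is a simple back-of-the-envelope sketch, not the paper's actual PoNF model:

```python
# Frequency of torrential-rain days (>3 inches/day) from the Florida
# Climate Center record cited in the abstract: 371 events in 612 months.
events = 371
months = 612
days = months * 30.44              # average days per month (approximation)

p_day = events / days              # chance a given day is a torrential-rain day
events_per_year = events / (months / 12)

print(round(p_day, 4), round(events_per_year, 2))  # → 0.0199 7.27
```

Roughly seven such events per year; a full PoNF estimate would further condition on tide stage and the 1.7 ft MHHW threshold, which this sketch does not attempt.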

Keywords: flood risk, nuisance flooding, urban flooding, FMEA

Procedia PDF Downloads 93
1373 Differences in Preschool Educators' and Parents' Interactive Behavior during a Cooperative Task with Children

Authors: Marina Fuertes

Abstract:

Introduction: In everyday life, children are asked to cooperate with others. They often perform cooperative tasks with their parents (e.g., setting the table for dinner) or at school. These tasks are very significant, since children may learn to take turns in interactions, to participate as well as to accept others' participation, to trust, to respect, to negotiate, to self-regulate their emotions, and so on. Indeed, cooperative tasks contribute to children's social, motor, cognitive and linguistic development. Therefore, it is important to study what learning, social and affective experiences are provided to children during these tasks. In this study, we included both parents and preschool educators. Parents and educators are both significant educative, interactive and affective figures, yet their behavior has rarely been compared in studies of cooperative tasks. Parents and educators have different but complementary styles of interaction and communication. Aims: This study therefore aims to compare parents' and educators' (of both genders) interactive behavior (cooperativity, empathy, ability to challenge the child, reciprocity, elaboration) during a play/individualized situation involving a cooperative task, and to compare parents' and educators' behavior with girls and boys. Method: A quasi-experimental study with 45 educator-child dyads and 45 parent-child dyads. Children between 3 and 5 years old and with age-appropriate development participated. Adults and children were videotaped using a variety of materials (e.g., pencils, wood, wool) and tools (e.g., scissors, hammer) to produce together something of their choice during 20 minutes. Each dyad (one adult and one child) was observed and videotaped independently. Adults and children agreed and consented to participate. Experimental conditions were suitable, pleasant and age-appropriate.
Results: Findings indicate that parents and educators offer different learning experiences. Educators were more likely to challenge children to explore new concepts and to accept children's ideas. In turn, parents gave more support to children's actions and were more likely to use their own example to teach. Multiple regression analysis indicates that parent versus educator status predicts interactive behavior. The gender of both children and adults affected the results: adults acted differently with girls and boys (e.g., adults worked more cooperatively with girls than with boys); male participants supported girls' participation more than boys', while female adults allowed boys to make more decisions than girls. Discussion: Taking our results together with past studies, we learn that parents and educators offer qualitatively different interactions and learning experiences, which also vary with adult and child gender. Thus, the same child needs to learn different cooperative strategies according to the interactive patterns of each partner and the specific context. Nevertheless, cooperative play and individualized activities with children generate learning opportunities and benefit children's participation and involvement.

Keywords: early childhood education, parenting, gender, cooperative tasks, adult-child interaction

Procedia PDF Downloads 323
1372 Health and Disease, Sickness and Well Being: Depictions in the Vinaya Pitaka and Jataka Narratives

Authors: Abhimanyu Kumar

Abstract:

The relationship between religion and medicine is clearly evident in the context of Buddhism. This paper is an attempt to look at the processes of social and cultural evolution of scientific creativity in the field of medicine and the institutionalization of medical practices. The objective of the paper is to understand Buddhist responses towards health as understood from the Vinaya Piṭaka and the Jātaka. This work is the result of an analysis of these two important Buddhist texts. Broadly, the Vinaya Piṭaka is concerned with the growth of Buddhist monasticism. It is considered one of the most important sacred texts of the Buddhists and contains rules for monastic life; these rules deal with such aspects as formal meetings of the saṃgha (monastery), expiation, confession, training, and legal questions. The Jātaka stories, on the other hand, are in the form of folk narratives and provide a major source of medical consultation for all classes. These texts help us to ascertain the ‘proficiency and perceptions’ of the prevailing medical traditions. The Jātakas are a collection of 547 stories about the past lives of the Buddha, who is represented in anthropomorphic and animal form. The Jātaka connects existing cognitive environments related to ethics with Buddhist didacticism. These stories also reflect the connection between the past and contemporary times (in the sense of the time of the stories' creation). This is visible through the narrative strategy of the text, where every story is subdivided into a story of the past and a story of the present, and a significant identification element or connection is established at the end of each story. The minimal presence of philosophical content and the adoption of a narrative strategy make it possible to capture more of everyday life.
This study gives me an opportunity to raise questions about how far the body and mind were closely interrelated in Buddhist perceptions, and whether society acted like a laboratory for the Buddhists to practice healing. How far did religious responses to afflictions, be they leprosy, plague or anger, influence medical care? What impact did medical practitioners, religious authorities and the regulation of medical activity and practice have on healing the body and the mind? And how has the healing environment been viewed? This paper works with the idea that medical science in early India was not only for the curative purpose of diseases but fulfilled the greater cause of promoting, maintaining and restoring human health. In this regard, studying these texts gives insight into religious responses to epidemics, from leprosy to plague, as well as to behavioral disorders such as anger. In other words, it deals with ideas about healing the body and healing the soul from a religious perspective.

Keywords: food for health, folk narratives, human body, materia medica, social sickness

Procedia PDF Downloads 276
1371 Embracing the Uniqueness and Potential of Each Child: Moving Theory to Practice

Authors: Joy Chadwick

Abstract:

This Scholarship of Teaching and Learning (SoTL) research focused on the experiences of teacher candidates involved in an inclusive education methods course within a four-year direct-entry Bachelor of Education program. The placement of this course within the final fourteen-week practicum semester is designed to facilitate deeper theory-practice connections between effective inclusive pedagogical knowledge and the real life of classroom teaching. The course focuses on supporting teacher candidates to understand that effective instruction within an inclusive classroom context must be intentional, responsive, and relational. Diversity is situated not as exceptional but rather as expected. This interpretive qualitative study involved the analysis of twenty-nine teacher candidate reflective journals and six individual semi-structured interviews with teacher candidates. The journal entries were completed at the start and at the end of the semester, with the intent of having teacher candidates reflect on their beliefs about what it means to be an effective inclusive educator and on how the course and practicum experiences impacted their understanding of and approaches to teaching in inclusive classrooms. The semi-structured interviews provided further depth and context to the journal data. The journals and interview transcripts were coded and themed using NVivo software. The findings suggest that instructional frameworks such as universal design for learning (UDL), differentiated instruction (DI), response to intervention (RTI), social emotional learning (SEL), and self-regulation supported teacher candidates' abilities to meet the needs of their students more effectively. Course content focused on specific exceptionalities also supported teacher candidates in being proactive rather than reactive when responding to student learning challenges.
Teacher candidates also articulated the importance of reframing their perspective about students in challenging moments and that seeing the individual worth of each child was integral to their approach to teaching. A persisting question for teacher educators exists as to what pedagogical knowledge and understanding is most relevant in supporting future teachers to be effective at planning for and embracing the diversity of student needs within classrooms today. This research directs us to consider the critical importance of addressing personal attributes and mindsets of teacher candidates regarding children as well as considering instructional frameworks when designing coursework. Further, the alignment of an inclusive education course during a teaching practicum allows for an iterative approach to learning. The practical application of course concepts while teaching in a practicum allows for a deeper understanding of instructional frameworks, thus enhancing the confidence of teacher candidates. Research findings have implications for teacher education programs as connected to inclusive education methods courses, practicum experiences, and overall teacher education program design.

Keywords: inclusion, inclusive education, pre-service teacher education, practicum experiences, teacher education

Procedia PDF Downloads 68