Search results for: ASIC (application specific integrated circuit)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17647

847 Characterization of the MOSkin Dosimeter for Accumulated Dose Assessment in Computed Tomography

Authors: Lenon M. Pereira, Helen J. Khoury, Marcos E. A. Andrade, Dean L. Cutajar, Vinicius S. M. Barros, Anatoly B. Rozenfeld

Abstract:

With the increase of beam widths and the advent of multiple-slice and helical scanners, concerns have arisen about the current dose measurement protocols and instrumentation in computed tomography (CT). The current methodology of dose evaluation, based on measuring the integral of a single-slice dose profile with a 100 mm long cylindrical ionization chamber (Ca,100 and CPMMA,100), has been shown to be inadequate for wide beams, as it does not collect enough of the scatter tails to make an accurate measurement. In addition, a long ionization chamber does not offer a good representation of the dose profile when tube current modulation is used. An alternative approach has been suggested: translating smaller detectors through the beam plane and assessing the accumulated dose through the integral of the dose profile, which can be done over any arbitrary length in phantoms or in air. For this purpose, a MOSFET dosimeter of small dosimetric volume was used. One of its recently designed versions, known as the MOSkin, was developed by the Centre for Medical Radiation Physics at the University of Wollongong; it measures the radiation dose at a water-equivalent depth of 0.07 mm, allowing the evaluation of skin dose when placed at the surface, or of internal point doses when placed within a phantom. Thus, the aim of this research was to characterize the response of the MOSkin dosimeter in X-ray CT beams and to evaluate its application for accumulated dose assessment. Initially, tests using an industrial X-ray unit were carried out at the Laboratory of Ionizing Radiation Metrology (LMRI) of the Federal University of Pernambuco, in order to investigate the sensitivity, energy dependence, angular dependence, and reproducibility of the dose response of the device for the standard radiation qualities RQT 8, RQT 9 and RQT 10. 
Finally, the MOSkin was used for the accumulated dose evaluation of scans on a Philips Brilliance 6 CT unit, with comparisons made against the CPMMA,100 value assessed with a pencil ionization chamber (PTW Freiburg TW 30009). Both dosimeters were placed in the center of a PMMA head phantom (16 cm diameter) and exposed in axial mode with a collimation of 9 mm, 250 mAs and 120 kV. The results showed that the MOSkin response was linear with dose over the CT range and reproducible (98.52%). The sensitivity of a single MOSkin, in mV/cGy, was 9.208, 7.691 and 6.723 for the RQT 8, RQT 9 and RQT 10 beam qualities, respectively. The energy dependence varied by up to a factor of ±1.19 among those energies, and the angular dependence was no greater than 7.78% within the angle range from 0 to 90 degrees. The accumulated dose and the CPMMA,100 value were 3.97 and 3.79 cGy, respectively, which are statistically equivalent at the 95% confidence level. The MOSkin was shown to be a good alternative for CT dose profile measurements and more than adequate for accumulated dose assessments in CT procedures.
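The accumulated-dose approach described above, integrating the dose profile traced by a translated point dosimeter and converting a MOSFET voltage shift to dose via a beam-quality sensitivity factor, can be sketched as follows. This is an illustrative sketch, not the authors' code; function names and the example profile are invented.

```python
# Illustrative sketch of the two computations described in the abstract:
# (1) accumulated dose as the integral of a measured dose profile D(z),
# (2) converting a MOSFET threshold-voltage shift (mV) to dose (cGy)
#     using a beam-quality calibration factor.

def accumulated_dose(positions_mm, profile_cgy_per_mm):
    """Trapezoidal integral of a dose profile over the scan length."""
    total = 0.0
    for i in range(1, len(positions_mm)):
        dz = positions_mm[i] - positions_mm[i - 1]
        total += 0.5 * (profile_cgy_per_mm[i] + profile_cgy_per_mm[i - 1]) * dz
    return total

def mv_to_cgy(delta_mv, sensitivity_mv_per_cgy):
    """Convert a voltage shift to dose with a sensitivity factor (mV/cGy)."""
    return delta_mv / sensitivity_mv_per_cgy
```

With the RQT 8 sensitivity reported above (9.208 mV/cGy), for example, a measured shift of 9.208 mV corresponds to 1 cGy.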

Keywords: computed tomography dosimetry, MOSFET, MOSkin, semiconductor dosimetry

Procedia PDF Downloads 308
846 Reflective Portfolio to Bridge the Gap in Clinical Training

Authors: Keenoo Bibi Sumera, Alsheikh Mona, Mubarak Jan Beebee Zeba Mahetaab

Abstract:

Background: Due to the busy schedule of practicing clinicians at the hospitals, students may not always be attended to, which is to their detriment. The clinicians at the hospitals are also not always acquainted with teaching and/or supervising students on their placements. Additionally, there is a high student-patient ratio. Since they are prospective clinical doctors under training, they need to reach competence in clinical decision-making skills to be able to serve the healthcare system of the country and to be safe doctors. Aims and Objectives: A reflective portfolio was used to provide a means for students to learn by reflecting on their experiences and obtaining continuous feedback. This practice is an attempt to compensate for the scarcity of resources, that is, clinical placement supervisors and patients. It is also anticipated that it will provide learners with a tool for continuous monitoring and learning-gap analysis of their clinical skills. Methodology: A hard-copy reflective portfolio was designed and validated. The portfolio incorporated a mini clinical evaluation exercise (mini-CEX), direct observation of procedural skills, and reflection sections. Workshops were organized separately for the stakeholders, that is, the management, faculty and students. The rationale for reflection was emphasized, and students were given samples of reflective writing. The portfolio was then implemented amongst undergraduate medical students of years four, five and six during clinical clerkship. After 16 weeks of implementation of the portfolio, a survey questionnaire was introduced to explore how undergraduate students perceive the educational value of the reflective portfolio and its impact on their deep information processing. Results: The majority of the respondents were in MD Year 5. Out of 52 respondents, 57.7% were in the internal medicine clinical placement rotation, and 42.3% were in the otorhinolaryngology rotation. 
The respondents believe that the reflective portfolio helped them identify their weaknesses, supported their professional development by helping them identify areas where their knowledge was sound, increased the learning value when used as a formative assessment, helped them relate material across different courses, and improved their professional skills. However, respondents did not agree that the portfolio improved their self-esteem or developed their critical thinking; the portfolio takes time to complete, and supervisors were not always helpful, so students had to chase them for feedback. 53.8% of the respondents followed the Gibbs reflective model to write their reflections, whilst the others did not follow any guidelines. 48.1% said that the feedback was helpful; 17.3% preferred written feedback, whilst 11.5% preferred oral feedback, and most suggested more frequent feedback. 59.6% of respondents found the current portfolio user-friendly, 28.8% thought it was too bulky, and 27.5% suggested a mobile application. Conclusion: The reflective portfolio, through reflection on their work and regular feedback from supervisors, has an overall positive impact on the learning process of undergraduate medical students during their clinical clerkship.

Keywords: portfolio, reflection, feedback, clinical placement, undergraduate medical education

Procedia PDF Downloads 83
845 Extraction of Rice Bran Protein Using Enzymes and Polysaccharide Precipitation

Authors: Sudarat Jiamyangyuen, Tipawan Thongsook, Riantong Singanusong, Chanida Saengtubtim

Abstract:

Rice is a staple food as well as an export commodity of Thailand. Rice bran, constituting 10.5% of the rice grain, is a by-product of the rice milling process. Rice bran is normally used as a raw material for rice bran oil production or sold as low-priced feed. Therefore, this study aimed to increase the value of the defatted rice bran obtained after extraction of rice bran oil. Conventionally, the protein in defatted rice bran is extracted using alkaline extraction and acid precipitation, which results in a reduction of nutritious components in the bran. Rice bran protein concentrate is suitable for those who are allergic to proteins from other sources, e.g., milk or wheat. In addition to its hypoallergenic property, rice bran protein also contains a good quantity of lysine. Thus it may serve as a suitable ingredient for infant food formulations while adding variety to the restricted diets of children with food allergies. The objectives of this study were to compare the properties of rice bran protein concentrate (RBPC) extracted from defatted rice bran using enzymes together with a precipitation step using polysaccharides (alginate and carrageenan) to those of a control sample extracted by a conventional method. The results showed that extraction of protein from rice bran using enzymes gave a higher protein recovery than alkaline extraction. Extraction using alcalase 2% (v/w) at 50 °C and pH 9.5 gave the highest protein content (2.44%) and yield (32.09%) in the extracted solution compared to the other enzymes. Rice bran protein concentrate powder prepared with a precipitation step using alginate (protein in solution:alginate = 1:0.006) exhibited the highest protein content (27.55%) and yield (6.62%); precipitation using alginate was better than acid precipitation. RBPC extracted with alkaline (ALK) or the enzyme alcalase (ALC) and then precipitated with alginate (AL) (samples RBP-ALK-AL and RBP-ALC-AL) yielded precipitation rates of 75% and 91.30%, respectively. 
Therefore, protein precipitation using alginate was selected. The amino acid profiles of the control sample and the sample precipitated with alginate, compared to casein and soy protein isolate, showed that the control sample had the highest content among all samples. The functional property study of RBPC showed that the highest nitrogen solubility occurred at pH 8-10. There was no statistically significant difference between the emulsion capacity and emulsion stability of the control and the sample precipitated with alginate. However, the control sample showed higher foaming capacity and lower foam stability than the sample precipitated with alginate. The finding was successful in terms of minimizing the chemicals used in the extraction and precipitation steps of rice bran protein concentrate preparation. This research involves the production of a value-added product in which the protein content (28%) is double that of the original rice bran (14%), which could be beneficial as an addition to food products, e.g., a healthy drink high in protein and fiber. In addition, basic knowledge of the functional properties of rice bran protein concentrate was obtained, which can be used to select appropriate applications for this value-added product from rice bran.

Keywords: alginate, carrageenan, rice bran, rice bran protein

Procedia PDF Downloads 284
844 Effect of Maturation on the Characteristics and Physicochemical Properties of Banana and Its Starch

Authors: Chien-Chun Huang, P. W. Yuan

Abstract:

Banana is one of the important fruits that constitute a valuable source of energy, vitamins and minerals, and it is an important food component throughout the world. Fruit ripening and maturity standards vary from country to country depending on the expected shelf life in the market. During ripening there are changes in the appearance, texture and chemical composition of the banana. The changes in the components of banana during ethylene-induced ripening are categorized by nutritive value and commercial utilization. The objectives of this study were to investigate the changes in chemical composition and physicochemical properties of banana during ethylene-induced ripening. Green bananas were harvested and ripened with ethylene gas at low temperature (15 °C) through seven stages. At each stage, bananas were sliced and freeze-dried for banana flour preparation. The changes in total starch, resistant starch, chemical composition, physicochemical properties, and the activities of amylase, polyphenol oxidase (PPO) and phenylalanine ammonia lyase (PAL) of banana were analyzed at each stage during ripening. The banana starch was isolated and analyzed for gelatinization properties, pasting properties and microscopic appearance at each stage of ripening. The results indicated that the total starch and resistant starch content of green banana were highest at the harvest stage, at 76.2% and 34.6%, respectively. Both total starch and resistant starch content significantly declined, to 25.3% and 8.8%, respectively, by the seventh stage. The soluble sugar content of banana increased from 1.21% at the harvest stage to 37.72% at the seventh stage during ethylene-induced ripening. The swelling power of banana flour decreased as ripening progressed, but solubility increased. These results correlated strongly with the decrease in starch content of banana flour during ethylene-induced ripening. Both the water-insoluble and alcohol-insoluble solids of banana flour decreased as ripening progressed. 
The activities of both PPO and PAL increased, but the total free phenolics content decreased, as ripening progressed. As the ripening stage extended, the gelatinization enthalpy of banana starch significantly decreased, from 15.31 J/g at the harvest stage to 10.55 J/g at the seventh stage. The peak viscosity and setback of banana starch increased as ripening progressed, and the highest final viscosity of the banana starch slurry, 5701 RVU, was found at the seventh stage. Scanning electron micrographs showed that the banana starch granules appeared round and elongated, ranging from 10-50 μm, at the harvest stage. As the banana approached ripeness, some parallel striations were observed on the surface of the starch granules, which could be caused by enzyme action during ripening. These results suggest that the high resistant starch content found in green banana could support its application in healthy foods. The changes in the chemical composition and physicochemical properties of banana could be caused by enzymatic hydrolysis during the ethylene-induced ripening treatment.

Keywords: maturation of banana, appearance, texture, soluble sugars, resistant starch, enzyme activities, physicochemical properties of banana starch

Procedia PDF Downloads 313
843 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images

Authors: Ravija Gunawardana, Banuka Athuraliya

Abstract:

Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, which is a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. 
The results showed promising accuracy rates for predicting diseases from symptoms, with ensemble learning techniques significantly improving the accuracy of disease prediction. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.
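As a rough illustration of the symptom-based classification step, the sketch below shows a minimal from-scratch k-nearest-neighbour classifier over binary symptom vectors, plus the simple majority vote that ensemble combination generalises. The symptom names, toy data and labels are invented; the study itself used Random Forest, KNN and SVM implementations, not this code.

```python
# Minimal sketch (not the study's pipeline): KNN over binary symptom
# flags using Hamming distance, and a majority-vote ensemble combiner.
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Predict the label of x by majority vote among its k nearest
    training vectors (Hamming distance on binary symptom flags)."""
    dists = sorted(
        (sum(a != b for a, b in zip(row, x)), label)
        for row, label in zip(train_X, train_y)
    )
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]

def majority_vote(predictions):
    """Combine the outputs of several classifiers by simple voting."""
    return Counter(predictions).most_common(1)[0][0]

# Toy data: [fever, cough, chest_pain, shortness_of_breath]
X = [[1, 1, 0, 1], [1, 1, 1, 1], [0, 1, 0, 0], [0, 0, 0, 0]]
y = ["pneumonia", "pneumonia", "bronchitis", "healthy"]
```

In the study's setting, `majority_vote` would combine the outputs of the three trained classifiers for a given patient rather than the toy predictions here.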

Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine

Procedia PDF Downloads 143
842 Process of Production of an Artisanal Brewery in a City in the North of the State of Mato Grosso, Brazil

Authors: Ana Paula S. Horodenski, Priscila Pelegrini, Salli Baggenstoss

Abstract:

The brewing industry built around artisanal concepts serves a specific market with diversified production, and it has been gaining ground nationally, including in the Amazon region. This growth is driven by more demanding consumers with diversified tastes who want to try new types of beer and enjoy products with new aromas and flavors, as a point of difference from what is so widely distributed by the big industrial brands. Thus, through qualitative research methods, the study aimed to investigate how the production of a craft brewery in a city in the northern part of the state of Mato Grosso (Brazil) is managed, providing knowledge of production processes and strategies in the industry. With the efficient use of resources, it is possible to obtain the necessary quality and provide better performance and differentiation for the company, in addition to analyzing the best management model. The research is descriptive with a qualitative approach, through a case study. For data collection, a semi-structured interview was elaborated, covering the areas of microbrewery characterization, the artisanal beer production process, and the company's supply chain management; production processes were also observed during technical visits. The study verified that the artisanal brewery researched develops preventive maintenance strategies for its inputs, machines, and equipment so that the quality of the product and of the production process is achieved. It was observed that the distance from supply centers means that process and supply chain management must be carried out with a longer planning horizon so that delivery of the final product remains satisfactory. The production process of the brewery comprises machines and equipment that allow control and quality of the product; the manager states that, for the productive capacity of the industry and its consumer market, the available equipment meets demand. 
This study also highlights one of the challenges for the development of small breweries facing the market giants: the legislation, which classifies microbreweries as producers of alcoholic beverages. This causes the micro and small business segment to be taxed like the majors, which have advantages in purchasing large batches of raw materials and receive tax incentives because they are large employers and tax contributors. It was also observed that the supply chain management system relies on spreadsheets and notes kept manually, which could be simplified with a computer program to streamline procedures and reduce the risks and failures of the manual process. The control of waste and effluents generated by the industry is outsourced and meets current needs. Finally, the results showed that the industry uses preventive maintenance as a production strategy, which allows better conditions for the production and quality of artisanal beer. Quality is directly related to the satisfaction of the final consumer, being prized and pursued throughout the production process through the selection of better inputs, the effectiveness of the production processes and the relationship with commercial partners.

Keywords: artisanal brewery, production management, production processes, supply chain

Procedia PDF Downloads 117
841 The Four Elements of Zoroastrianism and Sustainable Ecosystems with an Ecological Approach

Authors: Esmat Momeni, Shabnam Basari, Mohammad Beheshtinia

Abstract:

The purpose of this study is to provide a symbolic explanation of the four elements in Zoroastrianism and of sustainable ecosystems with an ecological approach. The research method is fundamental, with deductive content analysis. Data collection was done through library and documentary methods and through reading books and related articles. The population and sample of the present study are the city of Yazd and the country of Iran; after discovering the symbolic concepts derived from the theoretical foundations of Zoroastrianism in the four elements of water, air, soil and fire, and their conformity with Iranian architecture under an ecological approach in the city of Yazd, the sustainable ecosystem is explained through the system of nature. The validity and reliability of the results rest on the trustworthiness of the research literature. The research findings show that Yazd was one of the bases of Zoroastrianism in Iran. Many believe that the first person to discuss the elements of nature and respect for them among Zoroastrians was the Prophet of this religion, who taught keeping the environment clean and pure by attending to and respecting these four elements. The water element is a symbol of existence in Zoroastrianism, so the people of Yazd used the aqueduct and designed a pool in front of the building. The soil element is a symbol of the raw material of human creation in the Zoroastrian religion; as the most readily available material in the desert areas of Yazd, it is used for bricks and adobe, creating one of the most magnificent roof coverings, the dome. The wind element represents the invisible force of the soul in creation in Zoroastrianism; the most important application of wind is the windcatcher, a highly efficient cooling system. The element of fire, always a symbol of purity in Zoroastrianism, is housed in a special place in Yazd's Ataskadeh (fire temple), where the most important religious prayers are held before the fire. 
Consequently, indigenous knowledge and attention to indigenous architecture are part of the national capital of each nation, encompassing its beliefs, values, methods, and knowledge. According to studies on the four elements of Zoroastrianism, the link among them is that fire, being hot and dry, begins the movement within the stillness of the earth; from the heat of fire and the decrease of its vigor, cold (wind) emerges, and from cold come humidity and wetness. By examining books and resources on Yazd's architectural design, with its ecological approach inspired by the values of the four elements of Zoroastrianism, it can be concluded that in order to have environmentally friendly architecture, it is essential to use sustainable architectural principles and to link religious and sacramental culture with ecology through architecture.

Keywords: ecology, architecture, quadruple elements of air, soil, water, fire, Zoroastrian religion, sustainable ecosystem, Iran, Yazd city

Procedia PDF Downloads 110
840 A Comparative Life Cycle Assessment: The Design of a High Performance Building Envelope and the Impact on Operational and Embodied Energy

Authors: Stephanie Wall, Guido Wimmers

Abstract:

The construction and operation of buildings contribute greatly to environmental degradation through resource and energy consumption and greenhouse gas emissions. The design of the envelope system affects the environmental impact of a building in two major ways: 1) high thermal performance and airtightness can significantly reduce the operational energy of the building, and 2) the material selection for the envelope largely determines the embodied energy of the building. Life cycle assessment (LCA) is a scientific methodology used to systematically analyze the environmental load of processes or products, such as buildings, over their life. The paper will discuss the results of a comparative LCA of different envelope designs and the long-term monitoring of the Wood Innovation Research Lab (WIRL), a Passive House (PH) industrial building under construction in Prince George, Canada. The WIRL has a footprint of 30 m x 30 m on a concrete raft slab foundation and consists of shop space as well as a portion of the building comprising a two-story office/classroom space. The lab building goes beyond what was previously thought possible with regard to the energy efficiency of industrial buildings in cold climates, given their large volume-to-surface ratio, small floor area, and high air change rate, and it will be the first PH-certified industrial building in Canada. These challenges were mitigated through the envelope design, which utilizes solar gains while minimizing overheating and reduces thermal bridges with thick (570 mm) prefabricated truss walls filled with blown-in mineral wool insulation, along with a concrete slab and roof insulated with EPS rigid insulation. The envelope design results in lower operational and embodied energy when compared to buildings built to local codes or with steel. 
The LCA, conducted using the Athena Impact Estimator for Buildings, identifies project-specific hot spots and illustrates that, for high-efficiency buildings where the operational energy is relatively low, the embodied energy of the material selection becomes a significant design decision, as it greatly impacts the overall environmental footprint of the building. The results of the LCA will be reinforced by long-term monitoring of the building's envelope performance through temperature and humidity sensors installed throughout the floor slab, wall and roof panels, and through detailed metering of the energy consumption. The data collected from the sensors will also be used to reinforce the results of hygrothermal analysis using WUFI®, a program used to verify the durability of the wall and roof panels. The WIRL provides an opportunity to showcase the use of wood in the high-performance envelope of an industrial building and to emphasize the importance of considering the embodied energy of a material in the early stages of design. The results of the LCA will be of interest to leading researchers and scientists committed to finding sustainable solutions for new construction and high-performance buildings.
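The operational/embodied trade-off that the LCA quantifies can be illustrated with back-of-envelope arithmetic: a high-performance envelope may carry extra embodied energy, repaid over time by lower operational energy. The sketch below is illustrative only; all quantities are invented placeholders, not WIRL results.

```python
# Illustrative trade-off arithmetic (invented numbers, not WIRL data):
# total life-cycle energy and the "payback" period over which operational
# savings repay any extra embodied energy in a higher-performance envelope.

def lifetime_energy(embodied_gj, annual_operational_gj, years):
    """Total primary energy over the service life, in GJ."""
    return embodied_gj + annual_operational_gj * years

def payback_years(extra_embodied_gj, annual_operational_saving_gj):
    """Years for operational savings to repay the extra embodied energy."""
    return extra_embodied_gj / annual_operational_saving_gj
```

For example, if a hypothetical envelope upgrade added 500 GJ of embodied energy but saved 50 GJ of operational energy per year, it would repay itself in 10 years; over a longer service life, the lower-operational-energy design wins, which is the regime the abstract describes.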

Keywords: high performance envelope, life cycle assessment, long term monitoring, passive house, prefabricated panels

Procedia PDF Downloads 159
839 Higher Education Benefits and Undocumented Students: An Explanatory Model of Policy Adoption

Authors: Jeremy Ritchey

Abstract:

Undocumented immigrants in the U.S. face many challenges when seeking to progress in society, especially when pursuing post-secondary education. The majority of research on state-level policy adoption pertaining to undocumented students' higher-education pursuits, specifically in-state resident tuition and financial aid eligibility policies, has framed the discussion around the potential and actual impacts that implementation can have and has had. What is missing is a model for viewing the social, political and demographic landscapes upon which such policies (in their various forms) find a route to legislative enactment. This research addresses this gap in the field by investigating the correlations and significant state-level variables that can be operationalized to construct a framework for the adoption of these specific policies. In the process, the analysis will show that past unexamined conceptualizations of how such policies come to fruition may be limited or contradictory when compared with the available data. Drawing on the principles of policy innovation and policy diffusion theory, this study uses variables collected via Michigan State University's Correlates of State Policy Project, a collaboratively compiled and ongoing database of annual state-level variables (1900-2016) from all 50 states relevant to policy research. Using established variable groupings (demographic, political, social capital measurements, and educational system measurements) over the period from 2000 to 2014 (2001 being when such policies began), one can see how these data correlate with the adoption of policies related to undocumented students and in-state college tuition. After regression analysis, the results will illuminate which variables appear significant and to what effect, helping to formulate a model that explains when adoption occurs and when it does not. 
Early results have shown that traditionally held conceptions of conservative and liberal state identities, as they relate to the likelihood of such policies being adopted, did not fall in line with the collected data: Democratic and liberally identified states were, overall, less likely to adopt pro-undocumented higher-education policies than Republican and conservatively identified states, and vice versa. While further analysis is needed to improve the model's explanatory power, preliminary findings show promise in widening our understanding of the factors determining policy adoption in this realm, compared with the gap in such knowledge in the field's current publications. The model is also intended to serve as a tool for policymakers, framing such potential policies in a way that is congruent with the relevant state-level determining factors while remaining sensitive to the most apparent sources of friction. While additional variable groups and individual variables will ultimately need to be added and controlled for, this research already demonstrates that shallow or unexamined reasoning behind policy adoption in this area needs to be addressed, lest erroneous conceptions leak into the foundation of this growing and ever more important field.
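The regression step described above, relating state-level predictor variables to a binary adopted/not-adopted outcome, is commonly modelled with logistic regression. The sketch below is a minimal from-scratch version fitted by stochastic gradient ascent on the log-likelihood; the predictor values in the usage example are invented and do not come from the Correlates of State Policy Project.

```python
# Illustrative sketch (not the study's model): logistic regression relating
# state-level predictors to a binary "policy adopted" outcome, fitted by
# stochastic gradient ascent. Data and variable choices are invented.
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Return weights [intercept, w1, ...] fitted by gradient ascent."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for row, target in zip(X, y):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], row))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted adoption probability
            err = target - p                 # gradient of the log-likelihood
            w[0] += lr * err
            for j, xi in enumerate(row):
                w[j + 1] += lr * err * xi
    return w

def predict(w, row):
    """Predicted adoption (1) or non-adoption (0) for one state-year."""
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], row))
    return 1 if z > 0 else 0
```

In practice one would report coefficient signs and significance rather than raw predictions, which is closer to the "which variables appear significant and to what effect" question the abstract poses.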

Keywords: policy adoption, in-state tuition, higher education, undocumented immigrants

Procedia PDF Downloads 109
838 Dry Reforming of Methane Using Metal Supported and Core Shell Based Catalyst

Authors: Vinu Viswanath, Lawrence Dsouza, Ugo Ravon

Abstract:

Syngas, typically an intermediary gas product, has a wide range of applications in producing various chemicals, such as mixed alcohols, hydrogen, ammonia, and Fischer-Tropsch products (methanol, ethanol, aldehydes, other alcohols, etc.). Several technologies are available for syngas production. As an alternative to the conventional processes, an attractive route has been developed that utilizes carbon dioxide and methane in an equimolar ratio to generate syngas with a ratio close to one; it is termed Dry Reforming of Methane (DRM). It also offers the opportunity to utilize the greenhouse gases CO2 and CH4. The dry reforming process is highly endothermic; indeed, ΔG becomes negative only if the temperature is higher than about 900 K, and in practice the reaction is run at 1000-1100 K. At these temperatures, sintering of the metal particles occurs, which deactivates the catalyst. Moreover, under this strategy the methane is only partially oxidized, and some coke deposition occurs, also causing catalyst deactivation. The current research work was focused on mitigating the main challenges of the dry reforming process, namely coke deposition and metal sintering at high temperature. To achieve these objectives, we employed three different catalyst development strategies: 1) use of bulk catalysts such as olivine and pyrochlore type materials; 2) use of metal-doped support materials, like spinel and clay type materials; 3) use of a core-shell model catalyst. In the core-shell approach, a thin layer (shell) of a redox metal oxide is deposited over a MgAl2O4/Al2O3 based support material (core), and an active metal is deposited on the surface of the shell. The shell structure formed is a doped metal oxide that can undergo reduction and oxidation (redox) reactions, while the core is an alkaline earth aluminate with a high affinity for carbon dioxide. 
In the case of the metal-doped support catalysts, the enhanced redox properties of the doped CeO2 oxide and the CO2 affinity of the alkaline earth aluminates together help to suppress coke formation. For all three strategies, a systematic screening of the metals was carried out to optimize the efficiency of the catalyst. To evaluate their performance, activity and stability tests were carried out at temperatures ranging from 650 to 850 °C and operating pressures ranging from 1 to 20 bar. The results indicate that the core-shell model catalyst showed the highest activity and the best stability for dry reforming under both atmospheric and high-pressure conditions. In this presentation, we will show the results related to each strategy.
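The temperature threshold quoted above (ΔG turning negative near 900 K) can be illustrated with a back-of-the-envelope Gibbs energy estimate for CH4 + CO2 → 2CO + 2H2; the standard enthalpy and entropy values below are textbook-level approximations assumed for illustration, not values taken from this work:

```python
# Rough Gibbs free energy estimate for dry reforming of methane:
#   CH4 + CO2 -> 2 CO + 2 H2
# Approximate standard-state values (assumed, not from the abstract):
DH = 247e3   # reaction enthalpy, J/mol (strongly endothermic)
DS = 257.0   # reaction entropy, J/(mol*K)

def delta_g(T):
    """Gibbs free energy of reaction at temperature T (K), ignoring
    the mild temperature dependence of DH and DS."""
    return DH - T * DS

# The sign flips near T = DH/DS (about 960 K), consistent with the
# statement that the reaction becomes favorable above roughly 900 K.
for T in (800, 1000, 1100):
    print(T, round(delta_g(T) / 1000, 1), "kJ/mol")
```

Under this approximation the crossover temperature is simply DH/DS, which is why practical operation at 1000-1100 K sits safely in the favorable regime.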

Keywords: carbon dioxide, dry reforming, supports, core shell catalyst

Procedia PDF Downloads 170
837 Reliability and Validity of a Portable Inertial Sensor and Pressure Mat System for Measuring Dynamic Balance Parameters during Stepping

Authors: Emily Rowe

Abstract:

Introduction: Balance assessments can be used to help evaluate a person’s risk of falls, determine causes of balance deficits, and inform intervention decisions. It is widely accepted that instrumented quantitative analysis can be more reliable and specific than semi-qualitative ordinal scales or itemised scoring methods. However, the uptake of quantitative methods is hindered by expense, lack of portability, and set-up requirements. During stepping, foot placement is actively coordinated with the body centre of mass (COM) kinematics during pre-initiation. Based on this, the potential to use COM velocity just prior to foot off and foot placement error as outcome measures of dynamic balance is currently being explored using complex 3D motion capture. Inertial sensors and pressure mats might be more practical technologies for measuring these parameters in clinical settings. Objective: The aim of this study was to test the criterion validity and test-retest reliability of a synchronised inertial sensor and pressure mat-based approach to measuring foot placement error and COM velocity while stepping. Methods: Trials were held with 15 healthy participants, each of whom attended two sessions. The trial task was to step onto one of 4 targets (2 for each foot) multiple times in a random, unpredictable order. The stepping target was cued using an auditory prompt and electroluminescent panel illumination. Data were collected using 3D motion capture and a combined inertial sensor-pressure mat system simultaneously in both sessions. To assess the reliability of each system, ICC estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 2), absolute-agreement, 2-way mixed-effects model. To test the criterion validity of the combined inertial sensor-pressure mat system against the motion capture system, multi-factorial two-way repeated measures ANOVAs were carried out.
Results: Foot placement error was not reliably measured between sessions by either system (ICC 95% CIs; motion capture: 0 to >0.87 and pressure mat: <0.53 to >0.90). This could be due to genuine within-subject variability given the nature of the stepping task and brings into question the suitability of average foot placement error as an outcome measure. Additionally, the results suggest the pressure mat is not a valid measure of this parameter, since it was statistically significantly different from, and much less precise than, the motion capture system (p=0.003). The inertial sensor was found to be a moderately reliable (ICC 95% CIs >0.46 to >0.95) but not valid measure of anteroposterior and mediolateral COM velocities (AP velocity: p<0.001; ML velocity, targets 1 to 4: p=0.734, 0.001, <0.001 and 0.376). However, it is thought that with further development the validity of the COM velocity measure could be improved. Options which could be investigated include whether there is an effect of inertial sensor placement with respect to pelvic marker placement, or implementing more complex methods of data processing to manage inherent accelerometer and gyroscope limitations. Conclusion: The pressure mat is not a suitable alternative for measuring foot placement errors. The inertial sensors have the potential for measuring COM velocity; however, further development work is needed.
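The reliability statistic used above (mean-rating k = 2, absolute-agreement, two-way mixed-effects ICC) can be computed directly from the session-by-session mean squares. The sketch below is a generic implementation of that ICC form applied to made-up two-session data, not the study's own analysis or dataset:

```python
import numpy as np

def icc_a_k(scores):
    """Absolute-agreement, average-measures ICC for a two-way model.

    scores: (n_subjects, k_sessions) array of measurements.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)     # per-subject means
    col_means = scores.mean(axis=0)     # per-session means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)             # between-subjects mean square
    msc = ss_cols / (k - 1)             # between-sessions mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (msc - mse) / n)

# Made-up two-session measurements for five participants.
data = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 5.0], [4.0, 4.0], [5.0, 6.0]])
print(round(icc_a_k(data), 3))
```

Perfect between-session agreement yields an ICC of 1.0; systematic offsets between sessions lower the absolute-agreement ICC even when rank order is preserved, which is why this form suits test-retest designs.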

Keywords: dynamic balance, inertial sensors, portable, pressure mat, reliability, stepping, validity, wearables

Procedia PDF Downloads 145
836 Child Sexual Abuse Prevention: Evaluation of the Program “Sharing Mouth to Mouth: My Body, Nobody Can Touch It”

Authors: Faride Peña, Teresita Castillo, Concepción Campo

Abstract:

Sexual violence, and particularly child sexual abuse, is a serious problem all over the world, México included. Given its importance, several preventive and care programs are run by the government and civil society across the country, but most of them are developed in urban areas even though these problems are especially serious in rural areas. Yucatán, a state in southern México, ranks among the highest in child sexual abuse. Considering the above, the University Unit of Clinical Research and Victimological Attention (UNIVICT) of the Autonomous University of Yucatan designed, implemented, and is currently evaluating the program named “Sharing Mouth to Mouth: My Body, Nobody Can Touch It”, a program to prevent child sexual abuse in rural communities of Yucatán, México. Its aim was to develop skills for the detection of risk situations, providing protection strategies and mechanisms for prevention through culturally relevant psycho-educative strategies to increase personal resources in children, in collaboration with parents, teachers, police, and municipal authorities. The diagnosis identified children between 4 and 10 years as a particularly vulnerable population. The program ran during 2015 in primary schools in the municipality, whose inhabitants are mostly Mayan. The aim of this paper is to present its evaluation in terms of effectiveness and efficiency. This evaluation included documental analysis of the work done in the field, psycho-educational and recreational activities with children, evaluation of knowledge by participating children, and interviews with parents and teachers. The results show high effectiveness in fulfilling the tasks and achieving the primary objectives. Efficiency also shows satisfactory results, but with areas of opportunity that can be resolved with minor adjustments to the program.
The results also show the importance of including culturally relevant strategies and activities; otherwise, possible achievements are minimized. Another highlight is the importance of participatory action research (PAR) in preventive approaches to child sexual abuse, since people participate more actively once they become aware of the importance of the subject; it also allows the design of culturally appropriate strategies and measures so that the proposal is not distant from the people. The discussion emphasizes the methodological implications of prevention programs: the convenience of using PAR, the importance of monitoring and mediation during implementation, and the development of detection-skill tools in creative ways using psycho-educational interactive techniques, with working assessment issued by the participants themselves. It is also important to consider the holistic character this type of program should have, in terms of incorporating socially and culturally relevant characteristics according to each community's individuality and uniqueness, and considering the type of communication to be used and children's language skills, since there should be variations strongly linked to the specific cultural context.

Keywords: child sexual abuse, evaluation, PAR, prevention

Procedia PDF Downloads 292
835 Study of White Salted Noodles Air Dehydration Assisted by Microwave as Compared to Conventional Air Dried Process

Authors: Chiun-C. R. Wang, I-Yu Chiu

Abstract:

Drying is the most difficult and critical step to control in dried salted noodle production. Microwave drying has the specific advantage of rapid and uniform heating due to the penetration of microwaves into the body of the product. A microwave-assisted facility offers a quick and energy-saving method of food dehydration compared to the conventional air-drying method for noodle preparation. Recently, numerous studies of the rheological characteristics of pasta or spaghetti dried with microwave-assisted and conventional air driers have been carried out, and many agricultural products have been dried successfully. However, there is very little research on the physicochemical characteristics and cooking quality of microwave-assisted air-dried salted noodles. The purpose of this study was to compare the effects of conventional air and microwave-assisted air drying on the physicochemical properties and eating quality of rice bran noodles. Three microwave power levels (0.5 kW, 0.75 kW, and 1.0 kW) combined with 50 °C hot air were applied for the dehydration of the noodles. Rice bran was incorporated into the salted noodles at proportions ranging from 0 to 20%. The appearance, optimum cooking time, cooking yield and losses, texture profile analysis, and sensory evaluation of the rice bran noodles were measured. The results indicated that the high-power (1.0 kW) microwave treatment caused partial burning and porosity on the surface of the noodles, whereas no significant surface difference appeared between the low-power (0.5 kW) microwave-assisted noodles and the control. The optimum cooking time decreased as higher microwave power was applied or a higher proportion of rice bran was incorporated.
Noodles with the higher proportion of rice bran (20%) or dried with higher microwave power showed higher color intensity and higher cooking losses than conventionally air-dried noodles. Meanwhile, the higher-power microwave-assisted air-dried noodles showed larger air cells inside the noodles and slight burnt stripes on the surface. The firmness of the cooked rice bran noodles decreased slightly when dried by the high-power microwave-assisted method. The shearing force, tensile strength, elasticity, and texture profiles of the cooked noodles decreased as the proportion of rice bran increased. Sensory evaluation indicated that conventionally dried noodles obtained higher springiness, cohesiveness, and overall acceptability than high-power (1.0 kW) microwave-assisted dried noodles; however, low-power (0.5 kW) microwave-assisted dried noodles showed sensory attributes and acceptability comparable to conventionally dried noodles. Moreover, the sensory attributes of firmness, springiness, and cohesiveness decreased, while stickiness increased, as the rice bran proportion increased. These results suggest that incorporating a lower proportion of rice bran together with low-power microwave-assisted drying can produce a faster cooking time and more acceptable cooked-noodle quality compared with conventional drying.

Keywords: white salted noodles, microwave-assisted air drying processing, cooking yield, appearance, texture profiles, scanning electrical microscopy, sensory evaluation

Procedia PDF Downloads 491
834 Cultural Knowledge Transfer of the Inherited Karen Backstrap Weaving for the 4th Generation of a Pwo Karen Community

Authors: Suphitcha Charoen-Amornkitt, Chokeanand Bussracumpakorn

Abstract:

The succession of Karen backstrap weaving has gradually declined due to the difficulty of the weaving techniques and the relocation of the young generation. The Yang Nam Klat Nuea community, Nong Ya Plong District, Phetchaburi, is a Pwo Karen community seriously confronted with the loss of this cultural heritage. A group of weavers was therefore formed to revive the knowledge of weaving. However, they have gradually faced assimilation into mainstream culture, driven by the desire for market acceptance, and the threatened extinction of the culture through the disappearance of weaving details and techniques. Although there are practical solutions for inheriting the Karen weaving culture, i.e., product development, community improvement, knowledge improvement, and knowledge transfer, people in the community cannot fulfill their deep intention regarding the weaving inheritance, as most solutions have focused on developing commercial products and generating income rather than passing on their knowledge. This research employed qualitative user research with in-depth interviews to study communal knowledge transfer succession based on the internal parties involved: four expert weavers, three young weavers, and three 4th-generation villagers. The purpose was to explore the correlation and mindset of villagers towards the culture with respect to specific issues, including the psychology of culture, core knowledge and learning methods, cultural inheritance, and cultural engagement. The findings show that existing models of knowledge management mostly focused on tangible strategies whose progress can be noticed in the short term, such as direct teaching and consistent practicing, while the motivation and passion of inheritors were neglected; in contrast, the research found that young people who are profoundly connected with the textile culture have a stronger intention to continue it.
Therefore, this research suggests both internal and external solutions for the community. Regarding the internal solutions, the family, the weaving group, and the school all have important roles to play with young villagers by encouraging activities that support the cultivation of Karen history, an understanding of their identities, and the adoption of the culture as part of daily life. At the same time, collecting all of the knowledge in archives, e.g., recorded video, instructions, and books, can crucially prevent the culture from extinction. Regarding the external solutions, this study suggests that working with social media will enhance intimacy with the textile culture, while the community should reduce its role in marketing competition and start to build cultural experiences to create a new market position. In conclusion, this research explores the causes and motivations that support the transfer of the culture to the 4th-generation villagers and aims to raise awareness of the diversity of culture in society. With these suggestions and the desire to improve pride and confidence in the culture, the community agrees that strengthening the relationships between the young villagers and the weaving culture can bring attention and interest back to it.

Keywords: Pwo Karen textile culture, backstrap weaving succession, cultural inheritance, knowledge transfer, knowledge management

Procedia PDF Downloads 90
833 X-Ray Detector Technology Optimization in CT Imaging

Authors: Aziz Ikhlef

Abstract:

Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of traces and connections required by front-illuminated diodes. In back-illuminated diodes, the electronic noise is improved because of the reduction in load capacitance due to the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging across a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples.
In addition, this paper will present an overview of detector technologies and image chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties, such as light output, afterglow, primary speed, and crosstalk, to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), optimized for crosstalk, noise, and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 262
832 Estimation of Effective Mechanical Properties of Linear Elastic Materials with Voids Due to Volume and Surface Defects

Authors: Sergey A. Lurie, Yury O. Solyaev, Dmitry B. Volkov-Bogorodsky, Alexander V. Volkov

Abstract:

A medium with voids is considered, and a method for the analytical estimation of the effective mechanical properties in the theory of elastic materials with voids is proposed. The variational model of the porous medium is discussed, which is based on the model of media with fields of conserved dislocations. It is shown that this model is fully consistent with the known model of linear elastic materials with voids. In the present work, a generalized model of the porous medium is proposed in which specific surface properties are associated with the field of defects-pores in the volume of the deformed body. Unlike the typical surface elasticity model, the strain energy density of the considered model includes a special part of the surface energy with a quadratic form of the free distortion tensor. As a result, the non-classical boundary conditions take the modified form of balance equations of volume and surface stresses. The analytical approach proposed in the present work yields simple engineering estimates for the effective characteristics of media with free dilatation. In particular, the effective flexural modulus and Poisson's ratio are determined for the problem of pure bending of a beam. Here, the known voids-elasticity solution was extended to the generalized model with surface effects. The results obtained allow us to compare the deformed state of the porous beam with an equivalent classical beam and to introduce an effective bending rigidity. The analytical expressions obtained for the effective properties depend on the thickness of the beam as a parameter. It is shown that the flexural modulus of the porous beam decreases with increasing thickness, and the effective Poisson's ratio of the porous beam can take negative values for certain values of the model parameters. On the other hand, the effective shear modulus is constant under variation of all values of the non-classical model parameters.
The solutions obtained for pure bending of a beam and for hydrostatic loading of the porous medium are compared. It is shown that the analytical estimate for the bulk modulus of the porous material under hydrostatic compression gives the asymptotic value of the effective bulk modulus of the porous beam as the beam thickness increases. Additionally, it is shown that scale effects appear due to the surface properties of the porous medium. The obtained results allow us to propose a procedure for the experimental identification of the non-classical parameters in the theory of linear elastic materials with voids, based on bending tests of samples with different thicknesses. Finally, the problem of the validity of the Saint-Venant hypothesis for the transverse stresses in the porous beam is discussed. These stresses are different from zero in the solution of the voids elasticity theory but satisfy the integral equilibrium equations. In this work, the exact value of the introduced surface parameter was found, which ensures the vanishing of the transverse stresses on the free surfaces of a beam.

Keywords: effective properties, scale effects, surface defects, voids elasticity

Procedia PDF Downloads 412
831 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical passwords have existed for decades. Their major advantage is that they are easier to remember than alphanumeric passwords. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute-force attacks. Graphical passwords are also highly susceptible to shoulder-surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability. The results of the study are significant. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by another user for a fixed duration of time. Three different durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. A total of 74, 57, 50, and 44 users participated in Sessions 1, 2, 3, and 4, respectively. In this study, machine learning algorithms were applied to determine whether the person entering a password is a genuine user or an imposter. Five different machine learning algorithms were deployed to compare user authentication performance: Decision Trees, Linear Discriminant Analysis, Naive Bayes, Support Vector Machines (SVMs) with a Gaussian radial basis kernel, and K-Nearest Neighbors. Gesture-based password features vary from one entry to the next, which makes it difficult to distinguish between a creator and an intruder for authentication.
For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication with a timer of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with Gaussian Radial Basis Kernel outperform other ML algorithms for gesture-based password authentication. Results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from the gesture-based passwords lead to less vulnerable user authentication.
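The classification step described above can be sketched with one of the five listed algorithms, K-Nearest Neighbors, applied to the four normalized features (score, length, speed, size). The feature values below are synthetic stand-ins and the thresholds are illustrative; this is not the study's data or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the four extracted features per password entry:
# [score, length, speed, size]. Genuine entries cluster tightly; imposter
# (shoulder-surfed) replications are noisier and shifted.
genuine = rng.normal(loc=[0.9, 12.0, 1.0, 0.5], scale=0.05, size=(40, 4))
imposter = rng.normal(loc=[0.6, 10.0, 1.6, 0.8], scale=0.15, size=(40, 4))

X = np.vstack([genuine, imposter])
y = np.array([1] * 40 + [0] * 40)          # 1 = genuine, 0 = imposter

# Min-max normalize each feature before classification, as in the study.
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

def knn_predict(X_train, y_train, x, k=5):
    """Classify one entry by majority vote of its k nearest neighbours."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return int(nearest.sum() > k / 2)

# Leave-one-out evaluation on the synthetic data.
correct = sum(
    knn_predict(np.delete(X, i, 0), np.delete(y, i), X[i]) == y[i]
    for i in range(len(y))
)
accuracy = correct / len(y)
print(f"leave-one-out accuracy: {accuracy:.2f}")
```

On real gesture data the classes overlap far more than in this toy setup, which is why the reported accuracies sit in the 65-75% range rather than near 100%.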

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 101
830 Students’ Speech Anxiety in Blended Learning

Authors: Mary Jane B. Suarez

Abstract:

Public speaking anxiety (PSA), also known as speech anxiety, is persistently common in traditional communication classes, especially for students who learn English as a second language. This anxiety intensified when communication skills assessments moved to online or remote modes of learning due to the perils of the COVID-19 virus. Both teachers and students experienced vast ambiguity about how to find a still-effective way to teach and learn speaking skills amidst the pandemic. Communication skills assessments like public speaking, oral presentations, and student reporting have taken on new meaning through Google Meet, Zoom, and other online platforms. Though such technologies have paved the way for more creative means for students to acquire and develop communication skills, the effectiveness of these assessment tools stands in question. This mixed-methods study aimed to determine the factors that affected the public speaking skills of students in a communication class; to probe the gaps in assessing the speaking skills of students attending online classes vis-à-vis the implementation of remote and blended modalities of learning; and to recommend ways to address the public speaking anxieties of students performing a speaking task online and to bridge the assessment gaps based on the outcome of the study, in order to achieve a smooth segue from online to on-ground instruction towards a much better post-pandemic academic milieu. Using a convergent parallel design, both quantitative and qualitative data were reconciled by probing the public speaking anxiety of students and the potential assessment gaps encountered in an online English communication class under remote and blended learning. There were four phases in applying the convergent parallel design.
The first phase was data collection, where both quantitative and qualitative data were collected using document reviews and focus group discussions. The second phase was data analysis: quantitative data were treated using statistical testing, particularly frequency, percentage, and mean, using Microsoft Excel and IBM SPSS (Statistical Package for the Social Sciences) version 19, while qualitative data were examined using thematic analysis. The third phase merged the data analysis results to compare the desired learning competencies against the actual learning competencies of the students. Finally, the fourth phase was the interpretation of the merged data, which led to the finding that a significantly high percentage of students experienced public speaking anxiety whenever they delivered speaking tasks online. Assessment gaps were also identified by comparing the desired learning competencies of the formative and alternative assessments implemented against the actual speaking performances of the students, showing evidence that the public speaking anxiety of students was not properly identified and addressed.

Keywords: blended learning, communication skills assessment, public speaking anxiety, speech anxiety

Procedia PDF Downloads 99
829 Investigating the Algorithm to Maintain a Constant Speed in the Wankel Engine

Authors: Adam Majczak, Michał Bialy, Zbigniew Czyż, Zdzislaw Kaminski

Abstract:

Increasingly stringent emission standards for passenger cars require us to find alternative drives. The share of electric vehicles in new car sales increases every year. However, their performance and, above all, their range cannot today be successfully compared to those of cars with a traditional internal combustion engine. Battery recharging lasts hours, which can hardly be accepted given the short time needed to refill a fuel tank. Therefore, ways to reduce the adverse features of cars equipped only with electric motors are being sought. One of the methods is a combination of an electric motor as the main source of power and a small internal combustion engine as an electricity generator. This type of drive enables an electric vehicle to achieve a radically increased range and low emissions of toxic substances. For several years, leading automotive manufacturers like Mazda and Audi, together with the best companies in the automotive industry, e.g., AVL, have developed electric drive systems capable of recharging themselves while driving, known as range extenders. The electricity generator is powered by a Wankel engine, which had seemed to pass into history. This lightweight, small engine with a rotating piston and a very low vibration level has turned out to be an excellent source in such applications. Its operation as an energy source for a generator almost entirely eliminates its disadvantages, such as high fuel consumption, high emission of toxic substances, and the short lifetime typical of its traditional application. Operating the engine at a constant rotational speed enables a significant increase in its lifetime, and its small external dimensions make it possible to build compact modules to drive even small urban cars like the Audi A1 or the Mazda 2. The algorithm to maintain a constant speed was investigated on an engine dynamometer with an eddy current brake and the necessary measuring apparatus.
The research object was the Aixro XR50 rotary engine with an electronic power supply developed at the Lublin University of Technology. The load torque of the engine was altered during the research by means of the eddy current brake, which is capable of applying any number of load cycles. The parameters recorded included speed and torque as well as the throttle position in the inlet system. Increasing and decreasing the load did not significantly change engine speed, which means that the control algorithm parameters are correctly selected. This work has been financed by the Polish Ministry of Science and Higher Education.
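A constant-speed governor of the kind investigated can be sketched as a PI loop acting on the throttle of a simple first-order engine model, rejecting load-torque steps like those applied by the eddy current brake. All plant parameters and gains below are illustrative assumptions, not values from the test stand:

```python
# PI speed governor on a toy engine model (illustrative values only).
J = 0.05            # rotor inertia, kg*m^2 (assumed)
K_T = 20.0          # engine torque per unit throttle, N*m (assumed)
KP, KI = 0.05, 0.5  # PI gains (assumed)
OMEGA_REF = 314.0   # target speed, rad/s (~3000 rpm)

def simulate(t_end=5.0, dt=1e-3):
    omega, integ = 0.0, 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        load = 5.0 if t < 3.0 else 8.0         # load torque step, N*m
        err = OMEGA_REF - omega
        u_raw = KP * err + KI * integ
        u = min(max(u_raw, 0.0), 1.0)          # throttle limited to [0, 1]
        # Anti-windup: integrate only while the throttle is not saturated
        # in the direction the error is pushing.
        if (u_raw < 1.0 or err < 0.0) and (u_raw > 0.0 or err > 0.0):
            integ += err * dt
        omega += (K_T * u - load) / J * dt     # J * domega/dt = net torque
    return omega

final_speed = simulate()
print(f"speed after load step: {final_speed:.1f} rad/s")
```

The integral term absorbs the steady-state throttle offset that a changed load demands, which is why speed returns to the setpoint after the step rather than settling with a droop.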

Keywords: electric vehicle, power generator, range extender, Wankel engine

Procedia PDF Downloads 153
828 Crop Breeding for Low Input Farming Systems and Appropriate Breeding Strategies

Authors: Baye Berihun Getahun, Mulugeta Atnaf Tiruneh, Richard G. F. Visser

Abstract:

Resource-poor farmers practice low-input farming systems, and yet most breeding programs give little attention to this huge farming system, which serves as a source of food and income for many people in developing countries. The high-input conventional breeding system appears to have failed to adequately meet the needs and requirements of 'difficult' environments operating under this system. Moreover, the availability of resources for crop production is declining, the environment is maltreated by excessive use of agrochemicals, crop productivity has reached a plateau, particularly in developed nations, the world population is increasing, and food shortages persist for poor societies. In various parts of the world, genetic gain at the farmers' level remains low, which could be associated with the low adoption of crop varieties developed under high-input systems. Farmers usually use their local varieties and apply minimum inputs as a risk-avoiding and cost-minimizing strategy. This evidence indicates that conventional high-input plant breeding has failed to feed the world population, and the world is moving further away from the United Nations' goals of ending hunger, food insecurity, and malnutrition. In this review, we discuss the rationale for breeding programs focused on low-input farming systems, the technical aspects of crop breeding that accommodate future food needs, and their significance for developing countries under the scenario of decreasing resources for crop production. To this end, the application of exotic introgression techniques such as polyploidization, pan-genomics, comparative genomics, and de novo domestication as pre-breeding techniques is discussed, with the aim of exploiting the untapped genetic diversity of crop wild relatives (CWRs).
Desired recombinants developed at the pre-breeding stage are exploited through appropriate breeding approaches such as evolutionary plant breeding (EPB), breeding for rhizosphere-related traits, and participatory plant breeding. We review populations advanced through evolutionary breeding, such as composite cross populations (CCPs), and the rhizosphere-associated trait breeding approach, which offers opportunities to improve tolerance to abiotic and biotic soil stress, nutrient acquisition capacity, and crop-microbe interactions in improved varieties. Overall, we conclude that low-input farming is a vast farming system that requires distinctive breeding approaches, and that exotic pre-breeding introgression techniques, combined with breeding approaches that deploy the skills and knowledge of both breeders and farmers, are vital to developing heterogeneous landrace populations that serve farmers practicing low-input farming across the world.

Keywords: low input farming, evolutionary plant breeding, composite cross population, participatory plant breeding

Procedia PDF Downloads 42
827 Investigation of Cavitation in a Centrifugal Pump Using Synchronized Pump Head Measurements, Vibration Measurements and High-Speed Image Recording

Authors: Simon Caba, Raja Abou Ackl, Svend Rasmussen, Nicholas E. Pedersen

Abstract:

It is a challenge to directly monitor cavitation in a pump application during operation because of a lack of visual access to validate the presence of cavitation and its form of appearance. In this work, experimental investigations are carried out in an inline single-stage centrifugal pump with optical access. Hence, it gives the opportunity to enhance the value of CFD tools and standard cavitation measurements. Experiments are conducted using two impellers running in the same volute at 3000 rpm and the same flow rate. One of the impellers used is optimized for lower NPSH₃% by its blade design, whereas the other one is manufactured using a standard casting method. The cavitation is detected by pump performance measurements, vibration measurements and high-speed image recordings. The head drop and the pump casing vibration caused by cavitation are correlated with the visual appearance of the cavitation. The vibration data is recorded in an axial direction of the impeller using accelerometers recording at a sample rate of 131 kHz. The vibration frequency domain data (up to 20 kHz) and the time domain data are analyzed as well as the root mean square values. The high-speed recordings, focusing on the impeller suction side, are taken at 10,240 fps to provide insight into the flow patterns and the cavitation behavior in the rotating impeller. The videos are synchronized with the vibration time signals by a trigger signal. A clear correlation between cloud collapses and abrupt peaks in the vibration signal can be observed. The vibration peaks clearly indicate cavitation, especially at higher NPSHA values where the hydraulic performance is not affected. It is also observed that below a certain NPSHA value, the cavitation started in the inlet bend of the pump. Above this value, cavitation occurs exclusively on the impeller blades. 
The impeller optimized for NPSH₃% does show a lower NPSH₃% than the standard impeller, but the head drop starts at a higher NPSHA value and is more gradual. Instabilities in the head drop curve of the optimized impeller were observed in addition to a higher vibration level. Furthermore, the cavitation clouds on the suction side appear more unsteady when using the optimized impeller. The shape and location of the cavitation are compared to 3D fluid flow simulations. The simulation results are in good agreement with the experimental investigations. In conclusion, these investigations attempt to give a more holistic view on the appearance of cavitation by comparing the head drop, vibration spectral data, vibration time signals, image recordings and simulation results. Data indicates that a criterion for cavitation detection could be derived from the vibration time-domain measurements, which requires further investigation. Usually, spectral data is used to analyze cavitation, but these investigations indicate that the time domain could be more appropriate for some applications.
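
The suggestion that a cavitation-detection criterion could be derived from the vibration time signal can be illustrated with a minimal sketch: flag samples whose magnitude exceeds a multiple of the local RMS level, as the observed collapse peaks do. The threshold factor, window length, and synthetic signal below are hypothetical illustrations, not values from the study.

```python
import numpy as np

def find_collapse_peaks(signal, fs, window_s=0.001, k=5.0):
    """Flag samples whose magnitude exceeds k times the typical local RMS.

    signal: 1-D vibration time signal (hypothetical accelerometer data)
    fs: sample rate in Hz (the study records at 131 kHz)
    window_s: length of the RMS averaging window in seconds
    k: hypothetical threshold factor over the noise floor
    """
    win = max(1, int(window_s * fs))
    # running mean of the squared signal gives the local power
    power = np.convolve(signal ** 2, np.ones(win) / win, mode="same")
    rms = np.sqrt(power)
    baseline = np.median(rms)  # robust estimate of the background level
    return np.flatnonzero(np.abs(signal) > k * baseline)
```

Applied to a baseline vibration signal with a single sharp transient, the function returns only the index of the transient, mimicking how abrupt cloud collapses stand out against the background vibration.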

Keywords: cavitation, centrifugal pump, head drop, high-speed image recordings, pump vibration

Procedia PDF Downloads 177
826 Effects of a Cluster Grouping of Gifted and Twice Exceptional Students on Academic Motivation, Socio-emotional Adjustment, and Life Satisfaction

Authors: Line Massé, Claire Baudry, Claudia Verret, Marie-France Nadeau, Anne Brault-Labbé

Abstract:

Little research has been conducted on educational services adapted for twice-exceptional students. As part of an action research project, a cluster grouping was set up in an elementary school in Quebec, bringing together gifted or twice-exceptional (2E) students (n = 11) and students not identified as gifted (n = 8) within a multilevel class (3rd and 4th grades). The 2E students had either attention deficit hyperactivity disorder (n = 8, including 3 with a specific learning disability) or autism spectrum disorder (n = 2). Differentiated instruction strategies were implemented, including the possibility of progressing at one's own pace of learning, independent study or research projects, flexible accommodations, tutoring with older students, and the development of socio-emotional learning. A specialized educator also supported the teacher in the class on behavioural and socio-affective aspects. Objectives: The study aimed to assess the impacts of the grouping on all students, their academic motivation, and their socio-emotional adjustment. Method: A mixed-methods design was used, combining qualitative and quantitative approaches. Semi-structured interviews were conducted with students (N = 18, 4 girls and 14 boys aged 8 to 9) and one of their parents (N = 18) at the end of the school year. Parents and students completed two questionnaires at the beginning and end of the school year: the Behavior Assessment System for Children-3, child or parent versions (BASC-3, Reynolds and Kamphaus, 2015), and the Academic Motivation Scale (Vallerand et al., 1993). Parents also completed the Multidimensional Student Life Satisfaction Scale (Huebner, 1994, adapted by Fenouillet et al., 2014), comprising three domains (school, friendships, and motivation). Mixed thematic analyses were carried out on the interview data using NVivo software. Related-samples Wilcoxon signed-rank tests were conducted on the questionnaire data.
Results: Several themes emerged from the students' comments, including a positive impact on school motivation and attitude toward school, improved school results, reduction of behavioural difficulties, and improvement of social relations. These remarks were more frequent among 2E students. Most 2E students also noted an improvement in their academic performance. Most parents reported improvements in attitudes toward school and reductions in disruptive classroom behaviours. Some parents also observed changes in behaviour at home or in the socio-emotional well-being of their children, here again particularly parents of 2E children. Analysis of the questionnaires revealed significant differences at the end of the school year, specifically for identified extrinsic motivation, conduct problems, attention, emotional self-control, executive functioning, negative emotions, functional impairments, and satisfaction with friendships. These results indicate that this approach could benefit not only gifted and twice-exceptional students but also students not identified as gifted.

Keywords: cluster grouping, elementary school, giftedness, mixed methods, twice exceptional students

Procedia PDF Downloads 70
825 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows

Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican

Abstract:

This paper outlines the design of a simulator that allows clinical workflows through a pathology laboratory to be optimised and improves the laboratory’s efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty pinpointing and anticipating issues in the clinical workflow until tests are running late or in error; it can be difficult to pinpoint the cause and even more difficult to predict issues before they arise. For example, they often have no indication of how many samples will be delivered to the laboratory on a given day or at a given hour. If scenarios could be modelled using past information and known variables, pathology laboratories could initiate resource preparations, e.g. printing specimen labels or activating a sufficient number of technicians. This would expedite the clinical workload and processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, the tests currently being performed, results being validated, and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic-light colour-coding system is used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow), allowing pathologists to see clearly where there are issues and bottlenecks in the process. Graphs also indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late, and in error.
Clicking on potentially late samples displays more detailed information about those samples, the tests still to be performed on them, and their urgency level, allowing issues to be resolved quickly. In the case of potentially late samples, this could help ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages, and generate the status graphs in real time, with the live information extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. ‘Bots’ would control the flow of specimens through each step of the process. Like existing software-agent technologies, these bots would be configurable in order to simulate different situations that may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.
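
The traffic-light colour coding described above reduces to simple threshold logic on each stage's throughput. The abstract implements the simulator in JavaScript; the sketch below uses Python for illustration, and the throughput ratios and thresholds are hypothetical values a laboratory would tune, not figures from the paper.

```python
def flow_status(arrived, processed, slow_ratio=0.75, critical_ratio=0.5):
    """Traffic-light status for one workflow stage.

    arrived: specimens that entered the stage in the current interval
    processed: specimens the stage completed in the same interval
    slow_ratio / critical_ratio: hypothetical tuning thresholds
    """
    if arrived == 0:
        return "green"  # nothing queued, nothing to flag
    throughput = processed / arrived
    if throughput >= slow_ratio:
        return "green"   # normal flow
    if throughput >= critical_ratio:
        return "orange"  # slow flow
    return "red"         # critical flow
```

For example, a stage that completes 60 of 100 arriving specimens in an interval would be flagged orange, prompting the pathologist to inspect that stage of the flow diagram.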

Keywords: laboratory process, optimization, pathology, computer simulation, workflow

Procedia PDF Downloads 283
824 Overcoming Reading Barriers in an Inclusive Mathematics Classroom with Linguistic and Visual Support

Authors: A. Noll, J. Roth, M. Scholz

Abstract:

The importance of written language in a democratic society is uncontroversial. Students with physical, learning, cognitive, or developmental disabilities often have difficulties understanding information that is presented in written language only, and they face obstacles in diverse domains as a result. In order to reduce such barriers in educational as well as out-of-school settings, access to written information must be facilitated. Readability can be enhanced by linguistic simplifications such as easy-to-read language, which is intended to help people with disabilities participate socially and politically in society. Its guidelines state, for example, that only short, simple words should be used and that complex sentences should be avoided. So far, these guidelines have not been empirically validated. Another way to reduce reading barriers is the use of visual support, for example symbols. A symbol conveys, in contrast to a photo, a single idea or concept. Little empirical data exist on the use of symbols to foster the readability of texts. Nevertheless, a positive influence can be assumed, e.g., because of the multimedia principle, which states that people learn better from words and pictures than from words alone. A qualitative interview and eye-tracking study conducted by the authors suggests that, besides the illustration of single words, the visualization of complete sentences may be helpful. Thus, the effect of photos illustrating the content of complete sentences is also investigated in this study. This leads to the main research question: Does the use of easy-to-read language and/or enriching text with symbols or photos facilitate pupils’ comprehension of learning tasks? The sample consisted of students with learning difficulties (N = 144) and students without special educational needs (SEN) (N = 159).
The students worked individually on the tasks, which dealt with the introduction of fractions. While experimental group 1 received a linguistically simplified version of the tasks, experimental group 2 worked with a version that was linguistically simplified and in which the keywords of the tasks were additionally visualized by symbols. Experimental group 3 worked on exercises that were simplified in easy-to-read language and in which the content of whole sentences was illustrated by photos. Experimental group 4 received a non-simplified version. The participants’ reading ability and IQ were assessed beforehand to build four comparable groups. There is a significant effect of the experimental setting on the students’ results, F(3,140) = 2.932, p = .036. A post-hoc analysis with multiple comparisons shows that this significance results from the difference between experimental groups 3 and 4: the students in the easy-to-read-plus-photos group worked on the exercises significantly more successfully than the students in the group with no simplifications. Further results, which refer, among others, to the influence of the students’ reading ability, will be presented at ICERI 2018.
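
The reported statistic for this four-group comparison comes from a one-way ANOVA: with k = 4 groups and n = 144 participants, the degrees of freedom are k - 1 = 3 and n - k = 140. A minimal sketch with hypothetical toy data (not the study's) shows how the F value and its degrees of freedom arise:

```python
import numpy as np

def one_way_f(groups):
    """One-way ANOVA F statistic for a list of 1-D samples.

    Returns (F, df_between, df_within). Data passed in is assumed
    to come from independent groups.
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = np.concatenate(groups).mean()
    # between-group sum of squares: group means vs the grand mean
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    # within-group sum of squares: observations vs their group mean
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w
```

With four groups of 36 participants each, the function would report df_b = 3 and df_w = 140, matching the F(3,140) form above; the p value would then be read from the F distribution.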

Keywords: inclusive education, mathematics education, easy-to-read language, photos, symbols, special educational needs

Procedia PDF Downloads 150
823 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes: it identifies individual appliances by analysing whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The time-domain features extracted are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on its power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of (1/60) Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of the occupants of a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling.
The identification process likewise relies on unsupervised techniques such as DTW; to the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used in such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical, and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real Reference Energy Disaggregation Dataset (REDD). For this, we compute confusion-matrix-based performance metrics: accuracy, precision, recall, and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as those based on statistical variations and abrupt changes (variance sliding window and cumulative sum).
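
A minimal sketch of the Dynamic Time Warping distance used above, for comparing two power profiles. Real NILM pipelines typically add window constraints and normalization for speed and robustness; those are omitted here.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D power profiles.

    A plain O(len(a) * len(b)) dynamic program: cost[i, j] is the best
    cumulative alignment cost of a[:i] against b[:j].
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # step from match, insertion, or deletion
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]
```

Because the warping path may stretch one profile against the other, two power signatures that differ only in the duration of a state (e.g. a heater staying on one extra sample) still score a distance of zero, which is the property that makes DTW attractive for matching appliance signatures.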

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 73
822 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. First, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration extends to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. Employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated.
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
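
The deconvolution idea underlying the DDM can be illustrated in one dimension: given a filtered field, an approximate inverse of the filter is built iteratively. The sketch below uses van Cittert iteration with a periodic Gaussian filter on a unit domain; the paper's models operate on 3-D fields with many filter families, so this is only a schematic toy example.

```python
import numpy as np

def gaussian_filter(u, sigma):
    """Apply a periodic Gaussian filter in spectral space (domain length 1)."""
    n = len(u)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers * 2*pi
    g_hat = np.exp(-0.5 * (k * sigma) ** 2)          # Gaussian transfer function
    return np.real(np.fft.ifft(np.fft.fft(u) * g_hat))

def van_cittert(u_bar, sigma, iterations=10):
    """Approximate deconvolution of a filtered field u_bar.

    Iterates u^(m+1) = u^(m) + (u_bar - G * u^(m)), starting from
    u^(0) = u_bar, so the error at each retained mode shrinks by a
    factor (1 - G_hat) per step whenever 0 < G_hat <= 1.
    """
    u = u_bar.copy()
    for _ in range(iterations):
        u = u + (u_bar - gaussian_filter(u, sigma))
    return u
```

The recovered field can then be used to estimate the SFS stress, e.g. tau = filter(u*u) - filter(u)*filter(u) with u replaced by its deconvolved approximation; the filter-to-grid ratio discussed above controls how well the filtered field resolves the modes the iteration tries to recover.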

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 72
821 Autobiographical Memory Functions and Perceived Control in Depressive Symptoms among Young Adults

Authors: Meenu S. Babu, K. Jayasankara Reddy

Abstract:

Depression is a serious mental health concern that leads to significant distress and dysfunction in an individual. Due to the high physical, psychological, social, and economic burden it causes, it is important to study the various bio-psycho-social factors that influence the onset, course, duration, and intensity of depressive symptoms. The study aims to explore the relationship between autobiographical memory (AM) functions, perceived control over stressful events, and depressive symptoms. AM functions and perceived control have both been found to be protective factors against depression, and both are modifiable to predict better behavioral and affective outcomes. An extensive review of the literature, with a systematic search of the Google Scholar, JSTOR, Science Direct, and Springer Journals databases, was conducted for the purpose of this review paper; the same search terms were used for all the aforementioned databases. The time frame used for the search was 2010-2021, and an additional search with no time restriction was conducted to map the development of the theoretical concepts. Relevant studies with quantitative, qualitative, experimental, and quasi-experimental research designs were included in the review. Studies whose samples included a DSM-5 or ICD-10 diagnosis of a depressive disorder were excluded in order to focus on behavioral patterns in a non-clinical population. The synthesis of the findings obtained from the review indicates a significant relationship between the cognitive variables of AM functions and perceived control and depressive symptoms. AM functions were found to have significant effects on one's sense of self, interpersonal relationships, decision making, and self-continuity, and were related to better emotion regulation and lower depressive symptoms. Not all components of AM function were equally significant in their relationships with the various depressive symptoms.
While the self and directive functions were more related to emotion regulation, anhedonia, motivation, and hence mood and affect, the social function was related to perceived social support and social engagement. Perceived control was found to be another protective cognitive factor that provides individuals a sense of agency over their life outcomes, and it was found to be low in individuals with depression. It was also associated with locus of control, competency beliefs, contingency beliefs, and subjective well-being, which acted as protective factors against depressive symptoms. AM and perceived control over stressful events serve adaptive functions; hence it is imperative to study these variables more extensively. They can inform the planning and implementation of therapeutic interventions that foster these cognitive protective factors to mitigate or alleviate depressive symptoms. Exploring AM as a determining factor in depressive symptoms, along with perceived control over stress, creates a bridge between the biological and cognitive factors underlying depression and increases the scope for developing a more eclectic and effective treatment plan for individuals. As culture plays a crucial role in AM functions as well as in certain aspects of control, such as locus of control, it is necessary to study these variables with the cultural context in mind in order to tailor culture- and community-specific interventions for depression.

Keywords: autobiographical memories, autobiographical memory functions, perceived control, depressive symptoms, depression, young adults

Procedia PDF Downloads 97
820 Inclusion Body Refolding at High Concentration for Large-Scale Applications

Authors: J. Gabrielczyk, J. Kluitmann, T. Dammeyer, H. J. Jördening

Abstract:

High-level expression of proteins in bacteria often causes the production of insoluble protein aggregates, called inclusion bodies (IB). They contain mainly one type of protein and offer an easy and efficient route to purified protein. On the other hand, proteins in IBs are normally devoid of function and therefore need special treatment to become active. Most refolding techniques aim at diluting the solubilizing chaotropic agents. Unfortunately, optimal refolding conditions have to be found empirically for every protein, and for large-scale applications a simple refolding process with high yields and high final enzyme concentrations is still missing. The constructed plasmid pASK-IBA63b containing the sequence of fructosyltransferase (FTF, EC 2.4.1.162) from Bacillus subtilis NCIMB 11871 was transformed into E. coli BL21 (DE3) Rosetta. The bacterium was cultivated in a fed-batch bioreactor, and the FTF produced was obtained mainly as IBs. For refolding experiments, five different amounts of IBs were solubilized in urea buffer at protein concentrations of 0.2-8.5 g/L. Solubilizates were refolded by batch or continuous dialysis, and the refolding yield was determined by measuring the protein concentration of the clear supernatant before and after dialysis. Particle size was measured by dynamic light scattering. We tested the solubilization properties of the fructosyltransferase IBs: the particle size measurements revealed that solubilization of the aggregates is achieved at urea concentrations of 5 M or higher, which was confirmed by absorption spectroscopy. All results confirm previous findings that refolding yields depend on the initial protein concentration. In batch dialysis, the yields dropped from 67% to 12%, and in continuous dialysis from 72% to 19%, as initial concentrations rose from 0.2 to 8.5 g/L. Frequently used additives such as sucrose and glycerol had no effect on refolding yields.
Buffer screening indicated a significant increase in the activity, and also the temperature stability, of FTF in citrate/phosphate buffer. By adding citrate to the dialysis buffer, we were able to increase the refolding yields to 82-47% in the batch process and 90-74% in the continuous process. Further experiments showed that, in general, a higher ionic strength of the buffers had a major impact on refolding yields; doubling the buffer concentration increased the yields up to threefold. Finally, we achieved comparably high refolding yields while reducing the chamber volume, and hence the amount of buffer needed, by 75%. The refolded enzyme had an optimal activity of 12.5 ± 0.3 × 10⁴ units/g. However, detailed experiments with native FTF revealed a reaggregation of the molecules and a loss in specific activity depending on the enzyme concentration and particle size. For that reason, we are currently focusing on developing a process of simultaneous enzyme refolding and immobilization. The results of this study show a new approach to finding optimal refolding conditions for inclusion bodies at high concentrations. Straightforward buffer screening and an increase in ionic strength can improve the refolding yield of the target protein by 400%. Gentle removal of the chaotrope by continuous dialysis increases the yields by an additional 65%, independent of the refolding buffer applied. In general, time is the crucial parameter for the successful refolding of solubilized proteins.

Keywords: dialysis, inclusion body, refolding, solubilization

Procedia PDF Downloads 289
819 Beyond Personal Evidence: Using Learning Analytics and Student Feedback to Improve Learning Experiences

Authors: Shawndra Bowers, Allie Brandriet, Betsy Gilbertson

Abstract:

This paper will highlight how Auburn Online’s instructional designers leveraged student and faculty data to update and improve online course design and instructional materials. When designing and revising online courses, it can be difficult for faculty to know which strategies are most likely to engage learners and improve educational outcomes in a specific discipline. It can also be difficult to identify which metrics are most useful for understanding and improving teaching, learning, and course design. At Auburn Online, the instructional designers use a suite of data based on students’ performance, participation, satisfaction, and engagement, as well as faculty perceptions, to inform the sound learning and design principles that guide growth-mindset consultations with faculty. The consultations allow the instructional designer, together with the faculty member, to co-create an actionable course improvement plan. Auburn Online gathers learning analytics from a variety of sources that any instructor or instructional design team may have access to at their own institution. Participation and performance data, such as page views, assignment submissions, and aggregate grade distributions, are collected from the learning management system. Engagement data is pulled from the video hosting platform and includes unique viewers, views and downloads, minutes delivered, and the average duration for which each video is viewed. Student satisfaction is obtained through a short survey embedded at the end of each instructional module; this survey is included in each course every time it is taught. The survey data is then analyzed by an instructional designer for trends and pain points in order to identify areas, such as course content and instructional strategies, that can be modified to better support student learning.
This analysis, along with the instructional designer’s recommendations, is presented in a comprehensive report to instructors in an hour-long consultation in which instructional designers collaborate with the faculty member on how and when to implement improvements. Auburn Online has developed a triage strategy in which changes are classified as priority 1 or priority 2 and implemented in future course iterations. This data-informed decision-making process helps instructors focus on what will work best in their teaching environment while addressing the areas that need additional attention. As a student-centered process, it has created improved learning environments for students and has been well received by faculty. It has also proven effective in addressing the need for improvement while removing any feeling that the faculty member’s teaching is being personally attacked. The process that Auburn Online uses is laid out, along with the three-tier maintenance and revision guide that will be used over a three-year implementation plan. This information can help others determine which components of the maintenance and revision plan they want to utilize and guide them in creating a similar approach. The data will be used to analyze, revise, and improve courses by providing recommendations and models of good practice, determining and disseminating best practices that demonstrate an impact on student success.

Keywords: data-driven, improvement, online courses, faculty development, analytics, course design

Procedia PDF Downloads 54
818 Physical Contact Modulation of Macrophage-Mediated Anti-Inflammatory Response in Osteoimmune Microenvironment by Pollen-Like Nanoparticles

Authors: Qing Zhang, Janak L. Pathak, Macro N. Helder, Richard T. Jaspers, Yin Xiao

Abstract:

Introduction: Nanomaterial-based bone regeneration is greatly influenced by the immune microenvironment. Tissue-engineered nanomaterials mediate the inflammatory response of macrophages to regulate bone regeneration. Silica nanoparticles have been widely used in tissue engineering-related preclinical studies. However, the effect of topological features on the surface of silica nanoparticles on the immune response of macrophages remains unknown. Purpose: The aims of this research were to compare the influences of normal and pollen-like silica nano-surface topographies on macrophage immune responses and to gain insight into their potential regulatory mechanisms. Methods: Macrophages (RAW 264.7 cells) were exposed to mesoporous silica nanoparticles with normal morphology (MSNs) and pollen-like morphology (PMSNs). RNA-seq, RT-qPCR, and LSCM were used to assess changes in the expression levels of immune response-related genes and proteins. SEM and TEM were performed to evaluate the contact and adherence of the silica nanoparticles to macrophages. To assess immunomodulation-mediated osteogenic potential, BMSCs were cultured with conditioned medium (CM) from LPS pre-stimulated macrophage cultures treated with MSNs or PMSNs. The osteoimmunomodulatory potential of MSNs and PMSNs was tested in vivo in a mouse cranial bone osteolysis model. Results: The RNA-seq, RT-qPCR, and LSCM assays showed that PMSNs inhibited the expression of pro-inflammatory genes and proteins in macrophages. SEM images showed distinct binding patterns of MSNs and PMSNs on the macrophage cell membrane: MSNs were more evenly dispersed across the membrane, while PMSNs were aggregated. The PMSN-induced macrophage anti-inflammatory response was associated with upregulation of the cell surface receptor CD28 and inhibition of ERK phosphorylation.
TEM images showed that both MSNs and PMSNs could be phagocytosed by macrophages, and inhibiting nanoparticle phagocytosis did not affect the expression of anti-inflammatory genes and proteins. Moreover, conditioned medium from PMSN-treated macrophages enhanced BMP-2 expression and osteogenic differentiation of mBMSCs. Similarly, PMSNs prevented LPS-induced bone resorption via downregulation of the inflammatory reaction. Conclusions: PMSNs can promote bone regeneration by modulating osteoimmunological processes through surface topography. The study offers insight into how physical surface contact cues can modulate osteoimmunological regulation and provides a basis for applying nanoparticles with pollen-like morphology for immunomodulation in bone tissue engineering and regeneration.

Keywords: physical contact, osteoimmunology, macrophages, silica nanoparticles, surface morphology, membrane receptor, osteogenesis, inflammation

Procedia PDF Downloads 56