Search results for: experimental technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13166

386 Dynamic Thermomechanical Behavior of Adhesively Bonded Composite Joints

Authors: Sonia Sassi, Mostapha Tarfaoui, Hamza Benyahia

Abstract:

Composite materials are increasingly being used as a substitute for metallic materials in many technological applications such as aeronautics, aerospace, marine and civil engineering. For composite materials, the thermomechanical response evolves with the strain rate. The energy balance equation for anisotropic, elastic materials includes heat source terms that govern the conversion of part of the kinetic work into heat; the remainder contributes to the stored energy driving the damage process in the composite material. In this paper, we investigate the bulk thermomechanical behavior of adhesively-bonded composite assemblies to quantitatively assess the temperature rise which accompanies adiabatic deformations. In particular, adhesively bonded joints in glass/vinylester composite material are subjected to in-plane dynamic loads under a range of strain rates. The dynamic thermomechanical behavior of this material is investigated using compression Split Hopkinson Pressure Bars (SHPB) coupled with a high-speed infrared camera and a high-speed camera to measure in real time the dynamic behavior, the damage kinetics and the temperature variation in the material. The benefit of the high-speed IR camera is that it allows the evolution of heat dissipation in the material to be viewed in real time as damage occurs. However, this technique cannot produce thermal values correlated with the stress-strain curves of the composite because its response time is long compared with the duration of the dynamic test. For this reason, the authors revisit the use of small thermocouples placed on the surface of the material to obtain reliable thermal measurements under dynamic loading. Experiments with dynamically loaded material show that the thermocouples record temperature values with a short typical rise time as a result of the conversion of kinetic work into heat during the compression test. These results show that small thermocouples can provide an important complement to non-contact techniques such as the high-speed infrared camera. A significant temperature rise was observed in in-plane compression tests, especially under high strain rates. During the tests, it was noticed that a sudden temperature rise occurs when macroscopic damage occurs. This rise in temperature is linked to the rate of damage: the more severe the damage, the higher the localized temperature detected. This shows the strong relationship between the occurrence of damage and the induced heat dissipation. For the in-plane tests, the damage takes place more abruptly as the strain rate is increased. The differences observed in the thermomechanical response in in-plane compression are explained solely by the differences in the damage processes active during the compression tests. In this study, we highlighted the dependence of the thermomechanical response on the strain rate of bonded specimens. The heat dissipation of this material therefore cannot be ignored and should be taken into account when defining damage models for impact loading.
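
As an illustration of the kind of energy balance referred to above, a minimal adiabatic-heating relation often used in such analyses is sketched below; the Taylor-Quinney form and the symbols are assumptions chosen for illustration, not quoted from the paper.

\[
\rho c_p \,\dot{T} = \beta \,\boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}}
\quad\Rightarrow\quad
\Delta T \approx \frac{\beta}{\rho c_p} \int_0^{\varepsilon_f} \sigma \,\mathrm{d}\varepsilon ,
\]

where \beta is the fraction of mechanical work converted to heat (Taylor-Quinney coefficient), \rho the density and c_p the specific heat; the remaining fraction (1 - \beta) is stored in the material and feeds the damage process.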

Keywords: adhesively-bonded composite joints, damage, dynamic compression tests, energy balance, heat dissipation, SHPB, thermomechanical behavior

Procedia PDF Downloads 213
385 The Monitor for Neutron Dose in Hadrontherapy Project: Secondary Neutron Measurement in Particle Therapy

Authors: V. Giacometti, R. Mirabelli, V. Patera, D. Pinci, A. Sarti, A. Sciubba, G. Traini, M. Marafini

Abstract:

Particle therapy (PT) is a modern technique of non-invasive radiotherapy mainly devoted to the treatment of tumours that cannot be treated with surgery or conventional radiotherapy because they are localised close to organs at risk (OaR). Nowadays, PT is available in about 55 centres in the world, and only about 20% of them are able to treat with carbon ion beams. However, the efficiency of ion-beam treatments is so impressive that many new centres are under construction. The interest in this powerful technology lies in the main characteristic of PT: the high irradiation precision and the conformity of the dose released to the tumour, with simultaneous preservation of the adjacent healthy tissue. However, the beam interactions with the patient produce a large component of secondary particles whose additional dose has to be taken into account when defining the treatment planning. Although the largest fraction of the dose is released in the tumour volume, a non-negligible amount is deposited in other body regions, mainly due to the scattering and nuclear interactions of neutrons within the patient body. One of the main concerns in PT treatments is the possible occurrence of secondary malignant neoplasms (SMN). While SMNs can develop up to decades after the treatment, their incidence directly impacts the quality of life of cancer survivors, in particular paediatric patients. Dedicated Treatment Planning Systems (TPS) are used to predict the normal tissue toxicity, including the risk of late complications induced by the additional dose released by secondary neutrons. However, no precise measurement of the secondary neutron flux, or of its energy and angular distributions, is available: an accurate characterization is needed in order to improve TPS and reduce safety margins. The MONDO project (MOnitor for Neutron Dose in hadrOntherapy) is devoted to the construction of a secondary neutron tracker tailored to the characterization of that secondary neutron component. The detector, based on the tracking of the recoil protons produced in double elastic scattering interactions, is a matrix of thin scintillating fibres arranged in alternating x-y oriented layers. The final size of the detector is 10 x 10 x 20 cm3 (square 250 µm scintillating fibres, double cladding). The readout of the fibres is carried out with a dedicated SPAD Array Sensor (SBAM) realised in CMOS technology by FBK (Fondazione Bruno Kessler). The detector, as well as the SBAM sensor, is under development and is expected to be fully constructed by the end of the year. MONDO will carry out data-taking campaigns at the TIFPA Proton Therapy Center in Trento, at CNAO (Pavia) and at HIT (Heidelberg) with carbon ions, in order to characterize the neutron component, predict the additional dose delivered to the patients with much more precision and drastically reduce the current safety margins. Preliminary measurements with charged particle beams and Monte Carlo FLUKA simulations will be presented.
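
For context, the double elastic scattering reconstruction mentioned above typically relies on the standard non-relativistic kinematics of neutron-proton elastic scattering; the relation below is a textbook sketch included for illustration and is not quoted from the abstract.

\[
E_{p} = E_{n}\cos^{2}\theta_{p}, \qquad E_{n}' = E_{n}\sin^{2}\theta_{p},
\]

where E_n is the incoming neutron energy, E_p the energy of the recoil proton emitted at angle \theta_p with respect to the neutron direction, and E_n' the energy of the scattered neutron; measuring two successive recoil proton tracks therefore allows the original neutron energy and direction to be reconstructed.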

Keywords: secondary neutrons, particle therapy, tracking detector, elastic scattering

Procedia PDF Downloads 224
384 Comparative Studies on the Needs and Development of Autotronic Maintenance Training Modules for the Training of Automobile Independent Workshop Service Technicians in North – Western Region, Nigeria

Authors: Muhammad Shuaibu Birniwa

Abstract:

Automobile Independent Workshop Service Technicians (popularly called roadside mechanics) are the technical personnel who repair most of the automobile vehicles in Nigeria. The majority of these mechanics acquired their skills through apprenticeship training. Modern vehicles imported into the country pose greater challenges to the present automobile technicians, particularly in carrying out maintenance and repairs of these latest (autotronic) vehicles, owing to their lack of autotronic skills and competencies. To find a solution to the above-mentioned problems, research was therefore carried out in the North-Western region of Nigeria to produce suitable maintenance training modules that can be used to train the technicians to upgrade or acquire the competencies needed for successful maintenance and repair of the autotronic vehicles running every day on the nation's roads. A cluster sampling technique was used to obtain a sample from the population. The population of the study comprises all autotronics-inclined lecturers, instructors and independent workshop service technicians within the North-Western region of Nigeria. There are seven states (Jigawa, Kaduna, Kano, Katsina, Kebbi, Sokoto and Zamfara) in the study area; these serve as clusters in the population. Five (5) states were randomly selected to serve as the sample: Jigawa, Kano, Katsina, Kebbi and Zamfara. The entire population of the five clusters is 183: lecturers (44), instructors (49) and autotronic independent workshop service technicians (90); all of them were used in the study because of their manageable size. 183 copies of the autotronic maintenance training module questionnaire (AMTMQ), with 174 and 149 question items respectively, were administered and collected by the researcher with the help of assistants; they were administered to 44 polytechnic lecturers in departments of mechanical engineering, 49 instructors in skills acquisition centres/polytechnics and 90 master craftsmen of autotronics-inclined independent workshops. Data collected for answering research questions 1, 3, 4 and 5 were analysed using SPSS software version 22; the grand mean and standard deviation were used to answer the research questions. Analysis of Variance (ANOVA) was used to test null hypotheses one (1) to three (3), and the t-test was used to analyse hypotheses four (4) and five (5), all at the 0.05 level of significance. The research revealed that all the objectives, contents/tasks, facilities, delivery systems and evaluation techniques contained in the questionnaire were required for the development of the autotronic maintenance training modules for independent workshop service technicians in the North-Western zone of Nigeria. The skills upgrade training conducted by the federal government in collaboration with SURE-P, NAC and SMEDAN was not successful because the educational status of the target population was not considered in drafting the needed training modules. The mode of training used also did not take cognizance of the theoretical background of the trainees, especially basic science, which rendered the programme ineffective and insufficient for the tasks on the ground.
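
As a rough illustration of the statistical treatment described (the study itself used SPSS version 22), the Python sketch below runs a one-way ANOVA across the three respondent groups and an independent-samples t-test at the 0.05 level; the group sizes match the abstract, but the scores are invented for illustration only.

# Hypothetical sketch of the tests described in the abstract; the group scores are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lecturers   = rng.normal(3.6, 0.5, 44)   # mean item ratings, 44 lecturers (assumed 1-5 scale)
instructors = rng.normal(3.5, 0.5, 49)   # 49 instructors
technicians = rng.normal(3.4, 0.6, 90)   # 90 workshop technicians

# Grand mean and standard deviation used to answer the research questions
pooled = np.concatenate([lecturers, instructors, technicians])
print(f"grand mean = {pooled.mean():.2f}, SD = {pooled.std(ddof=1):.2f}")

# One-way ANOVA for hypotheses comparing the three respondent groups (alpha = 0.05)
f_stat, p_anova = stats.f_oneway(lecturers, instructors, technicians)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Independent-samples t-test for a two-group hypothesis (e.g. lecturers vs instructors)
t_stat, p_ttest = stats.ttest_ind(lecturers, instructors, equal_var=False)
print(f"t-test: t = {t_stat:.2f}, p = {p_ttest:.3f}")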

Keywords: autotronics, roadside, mechanics, technicians, independent

Procedia PDF Downloads 73
383 A Short Dermatoscopy Training Increases Diagnostic Performance in Medical Students

Authors: Magdalena Chrabąszcz, Teresa Wolniewicz, Cezary Maciejewski, Joanna Czuwara

Abstract:

BACKGROUND: Dermoscopy is a clinical tool known to improve the early detection of melanoma and other malignancies of the skin. Over the past few years, melanoma has grown into a disease of socio-economic importance due to its increasing incidence and persistently high mortality rates. Early diagnosis remains the best method to reduce melanoma and non-melanoma skin cancer-related mortality and morbidity. Dermoscopy is a noninvasive technique that consists of viewing pigmented skin lesions through a hand-held lens. This simple procedure increases melanoma diagnostic accuracy by up to 35%. Dermoscopy is currently the standard for the clinical differential diagnosis of cutaneous melanoma and for qualifying lesions for excision biopsy. Like any clinical tool, it requires training for effective use. The introduction of small and handy dermoscopes has contributed significantly to establishing dermatoscopy as a useful first-level tool. Non-dermatologist physicians are well positioned for opportunistic melanoma detection; however, education in the skin cancer examination is limited during medical school and traditionally lecture-based. AIM: The aim of this randomized study was to determine whether adding dermoscopy to the standard fourth-year medical curriculum improves the ability of medical students to distinguish between benign and malignant lesions, and to assess acceptability and satisfaction with the intervention. METHODS: We performed a prospective study in two cohorts of fourth-year medical students at the Medical University of Warsaw. Groups taking the dermatology course were randomly assigned to cohort A, with limited access to dermatoscopy through their teacher only (one dermatoscope for 15 people), or cohort B, with full access to dermatoscopy during their clinical classes (one dermatoscope for 4 people, available constantly) plus a 15-minute dermoscopy tutorial. Students in both study arms completed an image-based test of 10 lesions to assess their ability to differentiate benign from malignant lesions, and a post-intervention survey collecting minimal background information, attitudes about the skin cancer examination and course satisfaction. RESULTS: Cohort B had higher scores than cohort A in the recognition of nonmelanocytic (P < 0.05) and melanocytic (P < 0.05) lesions. Medical students who had the opportunity to use a dermatoscope themselves also reported higher satisfaction after the dermatology course than the group with limited access to this diagnostic tool. Moreover, according to our results, they were more motivated to learn dermatoscopy and to use it in their future everyday clinical practice. LIMITATIONS: The number of participants was limited. Further study of the application in clinical practice is still needed. CONCLUSION: Although the use of the dermatoscope in dermatology as a specialty is widely accepted, sufficiently validated clinical tools for the examination of potentially malignant skin lesions are lacking in general practice. Introducing medical students to dermoscopy in the fourth-year curriculum of medical school may improve their ability to differentiate benign from malignant lesions. It can also encourage students to use dermatoscopy in their future practice, which can significantly improve early recognition of malignant lesions and thus decrease melanoma mortality.

Keywords: dermatoscopy, early detection of melanoma, medical education, skin cancer

Procedia PDF Downloads 115
382 Variability and Stability of Bread and Durum Wheat for Phytic Acid Content

Authors: Gordana Branković, Vesna Dragičević, Dejan Dodig, Desimir Knežević, Srbislav Denčić, Gordana Šurlan-Momirović

Abstract:

Phytic acid is a major pool in the flux of phosphorus through agroecosystems and represents a sum equivalent to > 50% of all phosphorus fertilizer used annually. Nutrition rich in phytic acid can substantially decrease the absorption of micronutrients such as calcium, zinc, iron, manganese and copper, because phytate salts are excreted by humans and non-ruminant animals such as poultry, swine and fish, which have very low phytase activity and consequently a limited ability to digest and utilize phytic acid; the phytic acid-derived phosphorus in animal waste thus contributes to water pollution. The tested accessions consisted of 15 genotypes of bread wheat (Triticum aestivum L. ssp. vulgare) and 15 genotypes of durum wheat (Triticum durum Desf.). The trials were sown at three test sites in Serbia: Rimski Šančevi (RS) (45º19´51´´N; 19º50´59´´E), Zemun Polje (ZP) (44º52´N; 20º19´E) and Padinska Skela (PS) (44º57´N; 20º26´E) during the two growing seasons 2010-2011 and 2011-2012. The experimental design was a randomized complete block design with four replications. The elementary plot consisted of 3 internal rows of 0.6 m2 area (3 × 0.2 m × 1 m). Grains were ground with a Laboratory Mill 120 Perten (“Perten”, Sweden) (particle size < 500 μm) and the flour was used for the analysis. The phytic acid content of the grain was determined spectrophotometrically with a Shimadzu UV-1601 spectrophotometer (Shimadzu Corporation, Japan). The objectives of this study were to determine: i) the variability and stability of the phytic acid content among selected genotypes of bread and durum wheat, ii) the predominant source of variation regarding genotype (G), environment (E) and genotype × environment interaction (GEI) in the multi-environment trial, and iii) the influence of climatic variables on the GEI for phytic acid content. Based on the analysis of variance, it was determined that the variation of phytic acid content was predominantly influenced by the environment in durum wheat, while the GEI prevailed for the variation of phytic acid content in bread wheat. The phytic acid content expressed on a dry mass basis was in the range 14.21-17.86 mg g-1 with an average of 16.05 mg g-1 for bread wheat, and 14.63-16.78 mg g-1 with an average of 15.91 mg g-1 for durum wheat. The average-environment coordination view of the genotype main effect plus genotype × environment interaction (GGE) biplot was used for the selection of the most desirable genotypes for breeding for low phytic acid content, in the sense of good stability combined with a lower phytic acid level. The most desirable genotypes of bread and durum wheat for breeding for phytic acid were Apache and 37EDUYT /07 No. 7849. Models of climatic factors explained the highest percentage (> 91%) of the GEI for phytic acid content and included relative humidity in June, sunshine hours in April, mean temperature in April and winter moisture reserves for the genotypes of bread wheat, as well as precipitation in June and April, maximum temperature in April and mean temperature in June for the genotypes of durum wheat.
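
As an aside, the average-environment coordination (GGE) view referred to above is usually built from a singular value decomposition of the environment-centred genotype-by-environment table; the short Python sketch below illustrates the idea on an invented matrix, not the study's data.

# Minimal GGE biplot sketch; the genotype-by-environment values below are invented.
import numpy as np

# rows = genotypes, columns = environments (phytic acid, mg/g), hypothetical values
Y = np.array([
    [15.8, 16.2, 16.5],
    [14.9, 15.4, 15.1],
    [17.0, 16.6, 17.3],
    [15.2, 15.9, 15.6],
])

G = Y - Y.mean(axis=0)                      # environment-centred data: genotype + G×E effects
U, s, Vt = np.linalg.svd(G, full_matrices=False)

# symmetric scaling of scores for the first two principal components of the biplot
geno_scores = U[:, :2] * np.sqrt(s[:2])
env_scores  = Vt[:2].T * np.sqrt(s[:2])

# the average-environment axis points towards the mean of the environment scores;
# projections onto it rank mean performance, perpendicular distance reflects instability
aea = env_scores.mean(axis=0)
aea /= np.linalg.norm(aea)
mean_performance = geno_scores @ aea
instability = np.abs(geno_scores @ np.array([-aea[1], aea[0]]))
print(mean_performance, instability)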

Keywords: genotype × environment interaction, phytic acid, stability, variability

Procedia PDF Downloads 395
381 A Novel Nanocomposite Membrane Designed for the Treatment of Oil/Gas Produced Water

Authors: Zhaoyang Liu, Detao Qin, Darren Delai Sun

Abstract:

The onshore production of oil and gas (for example, shale gas) generates large quantities of wastewater, referred to as ‘produced water’, which contains high contents of oils and salts. The direct discharge of produced water, if not appropriately treated, can be toxic to the environment and to human health. Membrane filtration has been deemed an environmentally friendly and cost-effective technology for treating oily wastewater. However, conventional polymeric membranes have the drawbacks of either a low salt rejection rate or a high membrane fouling tendency when treating oily wastewater. In recent years, forward osmosis (FO) membrane filtration has emerged as a promising technology with the unique advantages of low operating pressure and a lower membrane fouling tendency. However, to date there has been no report of FO membranes specially designed and fabricated for treating oily and salty produced water. In this study, a novel nanocomposite FO membrane was developed specifically for treating oil- and salt-polluted produced water. Leveraging recent advances in nanomaterials and nanotechnology, this nanocomposite FO membrane was designed with two layers: an underwater oleophobic selective layer on top of a nanomaterial-infused polymeric support layer. Graphene oxide (GO) nanosheets were added to the polymeric support layer because they can optimize the pore structure of the support layer, potentially leading to high water flux for FO membranes. In addition, polyvinyl alcohol (PVA) hydrogel was selected as the selective layer because hydrated and chemically crosslinked PVA hydrogel is capable of simultaneously rejecting oil and salt. After the nanocomposite FO membranes were fabricated, the membrane structures were systematically characterized by TEM, FESEM, XRD, ATR-FTIR, surface zeta potential and contact angle (CA) measurements. The membrane performance for treating produced waters was tested by TOC, COD and ion chromatography measurements, and the working mechanism of the new membrane was analyzed. Very promising experimental results have been obtained. The incorporation of GO nanosheets reduces the internal concentration polarization (ICP) effect in the polymeric support layer. The structural parameter (S value) of the new FO membrane is reduced by 23%, from 265 ± 31 μm to 205 ± 23 μm. The membrane tortuosity (τ value) is decreased by 20%, from 2.55 ± 0.19 to 2.02 ± 0.13, which contributes to the decrease of the S value. Moreover, the highly hydrophilic and chemically cross-linked hydrogel selective layer presents high antifouling properties against saline oil/water emulsions. Compared with a commercial FO membrane, this new FO membrane possesses three times higher water flux, higher removal efficiencies for oil (>99.9%) and salts (>99.7% for multivalent ions), and a significantly lower membrane fouling tendency (<10%). To our knowledge, this is the first report of a nanocomposite FO membrane with the combined merits of high salt rejection, high oil repellency and high water flux for treating onshore oil/gas produced waters. Owing to its outstanding performance and ease of fabrication, this novel nanocomposite FO membrane has great application potential in the wastewater treatment industry.
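
For readers unfamiliar with the structural parameter quoted above, it is commonly defined in the FO literature from the support-layer thickness, tortuosity and porosity; the relation below is given for illustration and is an assumption rather than a quotation from the abstract.

\[
S = \frac{t_s \,\tau}{\varepsilon},
\]

where t_s is the support layer thickness, \tau its tortuosity and \varepsilon its porosity. A lower S value (here reduced from 265 μm to 205 μm, consistent with the drop in \tau) indicates a shorter effective diffusion path through the support and hence a weaker internal concentration polarization effect.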

Keywords: nanocomposite, membrane, polymer, graphene oxide

Procedia PDF Downloads 250
380 Resolving Urban Mobility Issues through Network Restructuring of Urban Mass Transport

Authors: Aditya Purohit, Neha Bansal

Abstract:

Unplanned urbanization and the multidirectional sprawl of cities have resulted in increased motorization and deteriorating transport conditions such as traffic congestion, longer commuting, pollution, an increased carbon footprint and, above all, increased fatalities. In order to overcome these problems, various practices have been adopted, including promoting and implementing mass transport, traffic junction channelization and smart transport. However, these methods primarily focus on vehicular mobility rather than people's accessibility. Addressing this research gap, this paper attempts to resolve the mobility issues of Ahmedabad city in India, which, being the economic capital of Gujarat state, has a huge commuter and visitor inflow. This research aims to resolve the traffic congestion and urban mobility issues focusing on the Gujarat State Road Transport Corporation (GSRTC) for the city of Ahmedabad by analyzing the existing operations and network structure of GSRTC, followed by finding possibilities of integrating it with other modes of urban transport. The network restructuring (NR) methodology is used with appropriate variations, based on commuter demand and the growth pattern of the city. To do this, 'scenarios' based on priority issues (using 12 parameters) and their best possible solutions are established after a route network analysis for a 2,700-person sample at 20 traffic junctions/nodes across the city. Approximately a 5% sample (of passenger inflow) at each node is considered using a stratified random sampling technique. The two scenarios are as follows. Scenario 1: resolving mobility issues by the use of a Special Purpose Vehicle (SPV), as a joint venture between GSRTC and private operators, to establish a feeder service that provides transfers for passengers moving from the inner city area to identified peripheral terminals. Scenario 2: augmenting existing mass transport services such as BRTS and AMTS and using them as feeder services to the identified peripheral terminals. Each of these has been analyzed for its suitability/feasibility in network restructuring. A desire-line diagram constructed from this analysis indicates that, on average, 62% of designated GSRTC routes overlap with the mass transportation service routes of BRTS and AMTS in the city. This has resulted in a duplication of bus services, causing traffic congestion, especially at the Central Bus Station (CBS). Terminating GSRTC services on the periphery of the city is found to be the best network restructuring proposal. This limits the GSRTC buses to the city fringe area and prevents them from entering the city core. These GSRTC end-terminals are integrated with BRTS and AMTS services, which helps in segregating intra-state and inter-state bus services. The research concludes that the absence of an integrated multimodal transport network results in complex transport access for commuters. As a further scope of research, a comparison and understanding of the value of access time within total travel time, its implication for the generalized cost of a trip, and how it varies from city to city may be taken up.
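
Purely as an illustration of the overlap measure behind the 62% figure, the hypothetical Python sketch below treats each route as a set of network links and flags GSRTC routes whose links largely duplicate BRTS/AMTS corridors; the route data and the 50% threshold are invented, not taken from the study.

# Hypothetical route-overlap sketch; all route and corridor data below are invented.
gsrtc_routes = {
    "R1": {"A-B", "B-C", "C-CBS"},
    "R2": {"D-E", "E-CBS"},
    "R3": {"F-G", "G-H"},
}
mass_transit_links = {"B-C", "C-CBS", "E-CBS"}   # assumed BRTS + AMTS corridor links

def overlap_share(route_links, corridor_links):
    """Fraction of a route's links that duplicate existing mass-transit corridors."""
    return len(route_links & corridor_links) / len(route_links)

overlapping = [r for r, links in gsrtc_routes.items()
               if overlap_share(links, mass_transit_links) > 0.5]
print(f"{100 * len(overlapping) / len(gsrtc_routes):.0f}% of GSRTC routes overlap")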

Keywords: mass transportation, multi-modal integration, network restructuring, travel behavior, urban transport

Procedia PDF Downloads 198
379 Formulation of Lipid-Based Tableted Spray-Congealed Microparticles for Zero Order Release of Vildagliptin

Authors: Hend Ben Tkhayat, Khaled Al Zahabi, Husam Younes

Abstract:

Introduction: Vildagliptin (VG), a dipeptidyl peptidase-4 (DPP-4) inhibitor, has been proven to be an active agent for the treatment of type 2 diabetes. VG works by enhancing and prolonging the activity of incretins, which improves insulin secretion and decreases glucagon release, thereby lowering the blood glucose level. It is usually used with various other classes, such as insulin sensitizers or metformin. VG is currently marketed only as an immediate-release tablet that is administered twice daily. In this project, we aim to formulate extended-release tableted lipid microparticles of VG with a zero-order release profile that could be administered once daily, ensuring the patient's convenience. Method: The spray-congealing technique was used to prepare the VG microparticles. Compritol® was heated to 10 °C above its melting point and VG was dispersed in the molten carrier using a homogenizer (IKA T25, USA) set at 13000 rpm. The VG dispersed in the molten Compritol® was added dropwise to the molten Gelucire® 50/13 and PEG® (400, 6000, and 35000) in different ratios under manual stirring. The molten mixture was homogenized and the Carbomer® amount was added. The melt was pumped through the two-fluid nozzle of a Buchi® Spray-Congealer (Buchi B-290, Switzerland) using a pump drive (Masterflex, USA) connected to silicone tubing wrapped with silicone heating tape heated to the same temperature as the pumped mix. The physicochemical properties of the produced VG-loaded microparticles were characterized using a Mastersizer, scanning electron microscopy (SEM), differential scanning calorimetry (DSC) and X-ray diffraction (XRD). The VG microparticles were then pressed into tablets using a single-punch tablet machine (YDP-12, Minhua Pharmaceutical Co., China) and an in vitro dissolution study was carried out using an Agilent dissolution tester (Agilent, USA). The dissolution test was carried out at 37 ± 0.5 °C for 24 hours in three different dissolution media and time phases. The quantitative analysis of VG in the samples was performed using a validated high-performance liquid chromatography (HPLC-UV) method. Results: The microparticles were spherical in shape with a narrow size distribution and a smooth surface. DSC and XRD analyses confirmed the crystallinity of VG, which was lost after incorporation into the amorphous carriers. The total yields of the different formulas were between 70% and 80%. The VG content in the microparticles was found to be between 99% and 106%. The in vitro dissolution study showed that VG was released from the tableted particles in a controlled fashion. Adjusting the hydrophilic/hydrophobic ratio of the excipients, their concentrations and the molecular weight of the carriers used resulted in tablets with zero-order kinetics. Gelucire® 50/13, a hydrophilic carrier, was characterized by a time-dependent profile with a pronounced burst effect, which was reduced by adding Compritol® as a lipophilic carrier to retard the release of VG, which is highly soluble in water. PEG® (400, 6000 and 35000) was used for its gelling effect, which led to constant-rate delivery and the achievement of a zero-order profile. Conclusion: Tableted spray-congealed lipid microparticles for the extended release of VG were successfully prepared and a zero-order profile was achieved.
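
For reference, the zero-order profile targeted above means that the cumulative amount of drug released grows linearly with time; the standard model equation is sketched below as an illustration and is not quoted from the abstract.

\[
Q_t = Q_0 + K_0\, t,
\]

where Q_t is the cumulative amount of VG released at time t, Q_0 the amount dissolved at t = 0 (ideally zero) and K_0 the zero-order release rate constant. In practice, dissolution data are fitted to this linear form and the quality of the fit is compared against first-order or Higuchi models to confirm zero-order behaviour.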

Keywords: vildagliptin, spray congealing, microparticles, controlled release

Procedia PDF Downloads 122
378 From Indigeneity to Urbanity: A Performative Study of Indian Saang (Folk Play) Tradition

Authors: Shiv Kumar

Abstract:

In the shifting scenario of the postmodern age, which foregrounds the multiplicity of meanings and discourses, the present research article seeks to investigate various paradigm shifts in contemporary performances of Haryanvi Saangs, so-called folk plays, which are performed widely in the regional territory of Haryana, a northern state of India. Folk arts cannot be studied adequately using the tools of literary criticism because they differ from literature in many respects. One of the most essential differences is that literary works invariably have an author, whereas folk works never do. The situation is quite clear: either we acknowledge the presence of folk art as a phenomenon in the social and cultural history of a people, or we do not acknowledge it and argue that it is merely poetry or an art of fiction. This paper is an effort to understand the performative tradition of Saang, traditionally known as Saang, Swang or Svang, which became a popular source of instruction and entertainment in the region and neighbouring states. Scholars and critics have long debated the origin of the word swang/svang/saang and its relationship to the Sanskrit word Sangit, which means singing and music. In the cultural context of Haryana, however, the word Saang means ‘to impersonate’, ‘to imitate’ or ‘to copy someone or something’. The stories these plays portray are derived for the most part from the same myths, tales and epics, and from the lives of Indian religious and folk heroes. The use of poetic sense, prose style and elaborate figurative technique all contribute to the richness of a performance. All use music and song as an integral part of the performance, so that it is also appropriate to call them folk opera. These folk plays are performed strictly by indigenous people in the state. These people, sometimes referred to as Saangi, possess a culture distinct from the rest of Indian folk performance. The form is also known by various other names, such as Manch, Khayal, Opera and Nautanki. Such folk plays can be seen as a dynamic activity, performed in the open space of the theatre. Nowadays, producers have contributed greatly to creating a rapidly growing musical outlet for a budding new style of folk presentation, giving rise to an electronically focused genre that draws on the many musicians and performers who have become bearers of the folk tradition in the region. Moreover, the paper proposes to examine the available sources relevant to this article, and it is expected to draw some distinct conclusions; being a spectator of ongoing performances, for instance, provides enough guidance to move forward along this route. In this connection, the paper focuses critically upon the major performative aspects of Haryanvi Saang in relation to several inquiries, such as the study of these plays in the context of the Indian literary scenario, gender visualization and its dramatic representation, the song-music tradition in folk creativity, and the development of Haryanvi dramatic art against the contemporary socio-political background.

Keywords: folk play, indigenous, performance, Saang, tradition

Procedia PDF Downloads 162
377 Identifying Confirmed Resemblances in Problem-Solving Engineering, Both in the Past and Present

Authors: Colin Schmidt, Adrien Lecossier, Pascal Crubleau, Philippe Blanchard, Simon Richir

Abstract:

Introduction: The widespread availability of artificial intelligence, exemplified by Generative Pre-trained Transformers (GPT) relying on large language models (LLMs), has caused a seismic shift in the realm of knowledge. Everyone now has the capacity to swiftly learn how these models can either serve them well or not. Today, conversational AI like ChatGPT is grounded in neural transformer models, a significant advance in natural language processing facilitated by the emergence of renowned LLMs constructed using the neural transformer architecture. Inventiveness of an LLM: OpenAI's GPT-3 stands as a premier LLM, capable of handling a broad spectrum of natural language processing tasks without requiring fine-tuning, reliably producing text that reads as if authored by humans. However, even with an understanding of how LLMs respond to the questions asked, there may be lurking behind OpenAI's seemingly endless responses an inventive model yet to be uncovered. Some unforeseen reasoning may emerge from the interconnection of the neural networks involved. Just as a Soviet researcher in the 1940s questioned the existence of common factors in inventions, enabling an understanding of how and according to what principles humans create them, it is equally legitimate today to explore whether the solutions provided by LLMs to complex problems also share common denominators. Theory of Inventive Problem Solving (TRIZ): We will revisit some fundamentals of TRIZ and how Genrich Altshuller was inspired by the idea that inventions and innovations are essential means of solving societal problems. It is crucial to note that traditional problem-solving methods often fall short of discovering innovative solutions. The design team is frequently hampered by psychological barriers stemming from confinement within a highly specialized knowledge domain that is difficult to question. We presume that ChatGPT utilizes TRIZ 40. Hence, the objective of this research is to decipher the inventive model of LLMs, particularly that of ChatGPT, through a comparative study. This will enhance the efficiency of sustainable innovation processes and shed light on how the construction of a solution to a complex problem is devised. Description of the Experimental Protocol: To confirm or reject our main hypothesis, that is, to determine whether ChatGPT uses TRIZ, we will follow a stringent protocol, which we will detail, drawing on insights from a panel of two TRIZ experts. Conclusion and Future Directions: In this endeavour, we sought to comprehend how an LLM like GPT addresses complex challenges. Our goal was to analyze the inventive model of the responses provided by an LLM, specifically ChatGPT, by comparing it to an existing standard model: TRIZ 40. Of course, problem solving is the main focus of our endeavours.

Keywords: artificial intelligence, TRIZ, ChatGPT, inventiveness, problem-solving

Procedia PDF Downloads 75
376 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete

Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml

Abstract:

Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints affecting reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence, it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in the current design codes, for example DIN EN 1992-1-1, are all based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs or beams show a crack spacing that is oriented to the transverse reinforcement bars or to the stirrups. In most finite element analysis studies, the smeared crack approach is used for crack prediction. The disadvantage of this model is that the typical strain localization of a crack cannot be seen at the element level. Crack propagation in concrete is a discontinuous process characterized by different factors, such as the initially random distribution of defects or the scatter of material properties. Such behavior presupposes the elaboration of adequate models and simulation methods, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with modelling the initiation and propagation of cracks in reinforced concrete structures, considering the influence of transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending. A parameter study was therefore carried out to investigate: (I) the influence of the transverse reinforcement on the stress distribution in concrete in bending and (II) the crack initiation as a function of the diameter of the transverse reinforcement and its spacing. The numerical investigations of crack initiation and propagation were carried out on a 2D reinforced concrete structure subjected to quasi-static loading and given boundary conditions. To model the uncertainty in the tensile strength of the concrete in the finite element analysis, correlated normally and lognormally distributed random fields with different correlation lengths were generated. The paper also presents and discusses different methods of generating random fields, e.g. the covariance matrix decomposition method. For all computations, a plastic constitutive law with softening was used to model the crack initiation and the damage of the concrete in tension. It was found that the distributions of crack spacing and crack widths are highly dependent on the random field used. These distributions were validated against experimental studies on R/C panels carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. A recommendation for the parameters of the random field for realistically modelling the uncertainty of the tensile strength is also given. The aim of this research was to show a method by which the localization of strains and cracks, as well as the influence of transverse reinforcement on crack initiation and propagation, can be captured in finite element analysis.
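
To make the covariance matrix decomposition method mentioned above concrete, the Python sketch below generates a correlated lognormal tensile-strength field along a one-dimensional mesh via a Cholesky factorization of an exponential covariance matrix; the mean, coefficient of variation and correlation length are assumed values for illustration, not those used in the study.

# Minimal covariance-matrix-decomposition sketch; all parameter values are assumed.
import numpy as np

n, dx, l_c = 100, 0.05, 0.5          # points along the member, spacing [m], correlation length [m]
x = np.arange(n) * dx
mean_ft, cov_ft = 3.0, 0.15          # tensile strength mean [MPa] and coefficient of variation

# exponential covariance of the underlying Gaussian field
C = np.exp(-np.abs(x[:, None] - x[None, :]) / l_c)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))        # decomposition C = L L^T

z = L @ np.random.default_rng(1).standard_normal(n)  # correlated standard-normal field

# transform to a lognormal field with the target mean and COV
sigma_ln = np.sqrt(np.log(1.0 + cov_ft**2))
mu_ln = np.log(mean_ft) - 0.5 * sigma_ln**2
f_t = np.exp(mu_ln + sigma_ln * z)   # element-wise tensile strengths assigned to the FE mesh
print(f_t[:5])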

Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic

Procedia PDF Downloads 158
375 Laboratory Assessment of Electrical Vertical Drains in Composite Soils Using Kaolin and Bentonite Clays

Authors: Maher Z. Mohammed, Barry G. Clarke

Abstract:

As an alternative to stone columns in fine-grained soils, it is possible to create stiffened columns of soil using electroosmosis (electroosmotic piles). The aim of this research programme is to establish the effectiveness and efficiency of the process in different soils, and the aim of this study is to assess the capability of electroosmotic treatment in a range of composite soils. The combined electroosmotic and preloading equipment developed by Nizar and Clarke (2013) was used, with an octagonal array of anodes surrounding a single cathode in a nominally 250 mm diameter, 300 mm deep cylinder of soil and an 80 mm anode-to-cathode distance. Copper coiled springs were used as electrodes to allow the soil to consolidate due either to an external vertically applied load or to electroosmosis. The equipment was modified to allow the temperature to be monitored during the test. Electroosmotic tests were performed on China Clay Grade E kaolin and calcium bentonite (Bentonex CB) mixed with sand fraction C (BS 1881 part 131) at different ratios by weight (0, 23, 33, 50 and 67%), subjected to applied voltages of 5, 10, 15 and 20 V. The soil slurry was prepared by mixing the dry soil with water to 1.5 times the liquid limit of the soil mixture. The mineralogical and geotechnical properties of the tested soils were measured before the electroosmotic treatment began. In the electroosmotic cell tests, the settlement, the expelled water, the variation of electrical current and applied voltage, and the generated heat were monitored over the duration of 24 electroosmotic tests. The water content was measured at the end of each test. The electroosmotic tests were divided into three phases. In Phase 1, 15 kPa was applied to simulate a working platform and produce a uniform soil from the material deposited as a slurry. 50 kPa was used in Phase 3 to simulate a surcharge load. The electroosmotic treatment was performed only during Phase 2, where a constant voltage was applied through the electrodes in addition to the 15 kPa pressure. This phase was stopped when no further water was expelled from the cell, indicating that the electroosmotic process had ceased either because of degradation of the anode or because the flow due to the hydraulic gradient exactly balanced the electroosmotic flow, resulting in no net flow. Control tests for each soil mixture were carried out to assess the behaviour of soil samples subjected only to an increase in vertical pressure: 15 kPa in Phase 1 and 50 kPa in Phase 3. Analysis of the experimental results from this study showed a significant dewatering effect on the soil slurries. The water discharged by the electroosmotic treatment decreased as the sand content increased. The soil temperature increased significantly when electrical power was applied and dropped when the applied DC power was turned off or when the electrode degraded. The highest increase in temperature was found in the pure clays at the higher applied voltages after about 8 hours of electroosmotic testing.
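
As background to the flow balance described above, the rate of electroosmotic flow in soils is commonly estimated with the classical one-dimensional relation below; this is a textbook expression included for illustration, not a formula quoted from the abstract.

\[
q_e = k_e \, i_e \, A = k_e \,\frac{\Delta V}{L}\, A,
\]

where q_e is the electroosmotic flow rate, k_e the coefficient of electroosmotic permeability, i_e = \Delta V / L the electrical potential gradient between anode and cathode, and A the cross-sectional area of flow. Treatment effectively stops when this flow is balanced by the opposing hydraulically driven flow q_h = k_h \, i_h \, A, which is consistent with the termination criterion used in Phase 2.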

Keywords: electrokinetic treatment, electrical conductivity, electroosmotic consolidation, electroosmosis permeability ratio

Procedia PDF Downloads 167
374 Synaesthetic Metaphors in Persian: a Cognitive Corpus Based and Comparative Perspective

Authors: A. Afrashi

Abstract:

Introduction: Synaesthesia is a term denoting the perception, or the description of the perception, of one sense modality in terms of another. In literature, synaesthesia refers to a technique adopted by writers to present ideas, characters or places in such a manner that they appeal to more than one sense, such as hearing, sight or smell, at a given time. In everyday language, too, we find many examples of synaesthesia: we commonly hear phrases like ‘loud colors’, ‘frozen silence’, ‘warm colors’ and ‘bitter cold’. Empirical cognitive studies have shown that synaesthetic representations, both in literature and in everyday language, are constrained, i.e. they do not map randomly among sensory domains. Since the beginning of the 20th century, synaesthesia has been a research domain both in literature and in structural linguistics. However, the exploration of the cognitive mechanisms motivating synaesthesia has made it an important topic in 21st-century cognitive linguistics and literary studies. Synaesthetic metaphors are linguistic representations of those mental mechanisms, the study of which reveals invaluable facts about perception, cognition and conceptualization. The main objective of the present research is to answer the following questions: What types of sense transfers are accessible in Persian synaesthetic metaphors? How are these types of sense transfers cognitively explained? What are the results of a cross-linguistic comparative study of synaesthetic metaphors based on the existing observations? Methodology: The present research employs a cognitive, corpus-based method, and the theoretical framework adopted to analyze linguistic synaesthesia is the contemporary theory of metaphor, in which conceptual metaphor is the result of systematic mappings across cognitive domains. The Persian Language Database (PLDB) in the Institute for Humanities and Cultural Studies, which consists mainly of modern Persian prose, was searched for synaesthetic metaphors. For each metaphorical structure, the source and target domains were determined, the sense transfers identified and the types of synaesthetic metaphor recognized. Findings: Persian synaesthetic metaphors conform to the hierarchical distribution principle, according to which transfers tend to go from touch to taste to smell to sound and to sight, not vice versa. In other words, mapping from more accessible or basic concepts onto less accessible or less basic ones seems more natural. Furthermore, the most frequent target domain in Persian synaesthetic metaphors is sound. Certain characteristics of Persian synaesthetic metaphors are comparable with existing research on English, French, Hungarian and Chinese synaesthetic metaphors. Conclusion: Cognitive, corpus-based approaches to linguistic synaesthesia are applicable to stylistics and literary criticism, and this recent research domain offers an efficient way to study cross-linguistic variation and to find out which of the five senses is dominant, cross-linguistically and cross-culturally, as the target domain in metaphorical mappings and hence in conceptualization.

Keywords: cognitive semantics, conceptual metaphor, synaesthesia, corpus based approach

Procedia PDF Downloads 563
373 A Comparison of Two and Three Dimensional Motion Capture Methodologies in the Analysis of Underwater Fly Kicking Kinematics

Authors: Isobel M. Thompson, Dorian Audot, Dominic Hudson, Martin Warner, Joseph Banks

Abstract:

The underwater fly kick is an essential skill in swimming that can have a considerable impact on overall race performance in competition, especially in sprint events. The reduced wave drag acting on the body under the surface means that the underwater fly kick is potentially the fastest the swimmer travels throughout the race. It is therefore critical to understand fly kicking techniques and to determine the biomechanical factors involved in performance. Most previous studies assessing fly kick kinematics have focused on two-dimensional analysis; therefore, the three-dimensional elements of underwater fly kick techniques are not well understood. Those studies that have investigated fly kicking techniques using three-dimensional methodologies have not reported full three-dimensional kinematics for the techniques observed, choosing instead to focus on one or two joints. No direct comparison has been made of the results obtained using two-dimensional and three-dimensional analyses, or of how these different approaches might affect the interpretation of subsequent results. The aim of this research is to quantify the differences in kinematics observed in underwater fly kicks obtained from both two- and three-dimensional analyses of the same test conditions. In order to achieve this, a six-camera underwater Qualisys system was used to develop an experimental methodology suitable for assessing the kinematics of swimmers' starts and turns. The cameras, capturing at a frequency of 100 Hz, were arranged along the side of the pool, spaced equally over 20 m, creating a capture volume of 7 m x 2 m x 1.5 m. Within the measurement volume, error levels were estimated at 0.8%. Prior to the pool trials, participants completed a landside calibration in order to define joint centre locations, as certain markers became occluded once the swimmer assumed the underwater fly kick position in the pool. Thirty-four reflective markers were placed on key anatomical landmarks, 9 of which were then removed for the pool-based trials. The fly kick swimming conditions included in the analysis were as follows: maximum effort prone, 100 m pace prone, 200 m pace prone, 400 m pace prone, and maximum pace supine. All trials were completed from a push start to 15 m to ensure consistent kick cycles were captured. Both two-dimensional and three-dimensional kinematics are calculated from the joint locations, and the results are compared. Key variables reported include kick frequency and kick amplitude, as well as the full angular kinematics of the lower body. Key differences between these variables obtained from two-dimensional and three-dimensional analyses are identified. Internal rotation (up to 15º) and external rotation (up to -28º) were observed using the three-dimensional method, as were abduction (5º) and adduction (15º); these motions are not observed in the two-dimensional analysis. The results also give an indication of the different techniques adopted by swimmers at various paces and orientations. The results of this research provide evidence of the strengths of both two-dimensional and three-dimensional motion capture methods in underwater fly kick, highlighting limitations that could affect the interpretation of results from both methods.
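
To illustrate one source of the 2D/3D differences reported above, the Python sketch below computes the same joint angle from three markers, once from the full 3D coordinates and once from their sagittal-plane projection; the marker coordinates are invented for illustration and are not the study's data.

# Hedged sketch of how 2D and 3D angle estimates can differ for the same markers.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by markers a-b-c."""
    u, v = a - b, c - b
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

hip   = np.array([0.00, 0.05, 1.00])
knee  = np.array([0.05, 0.12, 0.55])   # small medio-lateral (y) offset relative to hip/ankle
ankle = np.array([0.02, 0.02, 0.10])

angle_3d = joint_angle(hip, knee, ankle)                          # full 3D knee angle
angle_2d = joint_angle(hip[[0, 2]], knee[[0, 2]], ankle[[0, 2]])  # sagittal-plane projection
print(f"3D: {angle_3d:.1f} deg, 2D: {angle_2d:.1f} deg")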

Keywords: swimming, underwater fly kick, performance, motion capture

Procedia PDF Downloads 136
372 Ultrasound Disintegration as a Potential Method for the Pre-Treatment of Virginia Fanpetals (Sida hermaphrodita) Biomass before Methane Fermentation Process

Authors: Marcin Dębowski, Marcin Zieliński, Mirosław Krzemieniewski

Abstract:

As methane fermentation is a complex series of successive biochemical transformations, its subsequent stages are determined, to a varying extent, by physical and chemical factors. A specific state of equilibrium becomes established in the functioning fermentation system between the environmental conditions and the rate of the biochemical reactions and the products of the successive transformations. Among the physical factors that influence the effectiveness of the methane fermentation transformations, key significance is ascribed to temperature and the intensity of biomass agitation. Among the chemical factors, the pH value, the type and availability of the culture medium (to put it simply: the C/N ratio) and the presence of toxic substances are significant. One of the important elements that influence the effectiveness of methane fermentation is the pre-treatment of the organic substrates and the mode in which the organic matter is made available to the anaerobes. Of all the known and described methods for organic substrate pre-treatment before the methane fermentation process, ultrasound disintegration is one of the most interesting technologies. Investigations of the ultrasound field and the use of installations operating within existing systems result principally from the very wide and universal technological possibilities offered by the sonication process. This physical factor may induce deep physicochemical changes in ultrasonicated substrates that are highly beneficial from the viewpoint of methane fermentation processes. In this case, a special role is ascribed to the disintegration of the biomass that is subsequently subjected to methane fermentation. Once the cell walls are damaged, the cytoplasm and cellular enzymes are released. The released substances, either in dissolved or colloidal form, are immediately available to the anaerobic bacteria for biodegradation. To ensure the maximal release of organic matter from dead biomass cells, disintegration processes aim to achieve a particle size below 50 μm. It has been demonstrated in many research works and in systems operating at technical scale that, immediately after substrate sonication, the content of organic matter (characterized by the COD, BOD5 and TOC indices) increases in the dissolved phase of the sedimentation water. This phenomenon points to the immediate sonolysis of the solid substances contained in the biomass and to the release of cell material, and consequently to the intensification of the hydrolytic phase of fermentation. It results in a significant reduction of the fermentation time and an increased effectiveness of production of the gaseous metabolites of the anaerobic bacteria. Because the ultrasound disintegration of Virginia fanpetals biomass applied in order to intensify its conversion is a novel technique, it is often underestimated by operators of agricultural biogas plants. It has, however, many advantages that have a direct impact on its technological and economic superiority over the methods of biomass conversion applied thus far. For now, ultrasound disintegrators for biomass conversion are not produced on a mass scale, but by specialized groups in scientific or R&D centers. Therefore, their quality and effectiveness are to a large extent determined by their manufacturers' knowledge and skills in the fields of acoustics and electronic engineering.
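
As a point of reference for comparing pre-treatment intensities, the sonication dose is often expressed as a specific energy input normalised to the amount of solids treated; the expression below is a commonly used form given purely for illustration and is not taken from the abstract.

\[
E_s = \frac{P \, t}{V \cdot TS},
\]

where E_s is the specific energy (e.g. kJ per kg of total solids), P the ultrasonic power, t the sonication time, V the sample volume and TS the total solids concentration. Reporting E_s alongside the COD, BOD5 and TOC increases in the dissolved phase makes results from different disintegrators and substrates comparable.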

Keywords: ultrasound disintegration, biomass, methane fermentation, biogas, Virginia fanpetals

Procedia PDF Downloads 369
371 An Adaptable Semi-Numerical Anisotropic Hyperelastic Model for the Simulation of High Pressure Forming

Authors: Daniel Tscharnuter, Eliza Truszkiewicz, Gerald Pinter

Abstract:

High-quality surfaces of plastic parts can be achieved in a very cost-effective manner using in-mold processes, where e.g. scratch-resistant or high-gloss polymer films are pre-formed and subsequently receive their support structure by injection molding. The pre-forming may be done by high-pressure forming. In this process, a polymer sheet is heated and subsequently formed into the mold by pressurized air. Due to the heat transfer to the cooled mold, the polymer temperature drops below its glass transition temperature. This ensures that the deformed microstructure is retained after depressurizing, giving the sheet its final formed shape. The development of a forming process relies heavily on the experience of engineers and trial-and-error procedures. Repeated mold design and testing cycles are, however, both time- and cost-intensive. It is, therefore, desirable to study the process using reliable computer simulations. Through simulations, the construction of the mold and the effect of various process parameters, e.g. temperature levels, non-uniform heating or the timing and magnitude of pressure, on the deformation of the polymer sheet can be analyzed. Detailed knowledge of the deformation is particularly important in the forming of polymer films with integrated electro-optical functions. Care must be taken in the placement of devices, sensors and electrical and optical paths, which are far more sensitive to deformation than the polymers. Reliable numerical prediction of the deformation of the polymer sheets requires sophisticated material models. Polymer films are often either transversely isotropic or orthotropic due to molecular orientations induced during manufacturing. The anisotropic behavior affects the resulting strain field in the deformed film. For example, parts of the same shape but with different strain fields may be created by varying the orientation of the film with respect to the mold. The numerical simulation of the high-pressure forming of such films thus requires material models that can capture the nonlinear anisotropic mechanical behavior. There are numerous commercial polymer grades for engineers to choose from when developing a new part. The effort required for comprehensive material characterization may be prohibitive, especially when several materials are candidates for a specific application. We, therefore, propose a class of models for compressible hyperelasticity which may be determined from basic experimental data and which can capture key features of the mechanical response. Invariant-based hyperelastic models with a reduced number of invariants are formulated in a semi-numerical way, such that the models are determined from a single uniaxial tensile test for isotropic materials, or from two tensile tests in the principal directions for transversely isotropic or orthotropic materials. The simulation of the high-pressure forming of an orthotropic polymer film is finally carried out using an orthotropic formulation of the hyperelastic model.
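
To indicate the general structure of the class of models described above, one common invariant-based form for a compressible, transversely isotropic material is sketched below; the decomposition and symbols are a generic textbook formulation chosen for illustration, not the authors' exact model.

\[
W = W_{\mathrm{iso}}(\bar{I}_1) + W_{\mathrm{fib}}(\bar{I}_4) + U(J), \qquad
\bar{I}_1 = \operatorname{tr}\bar{\mathbf{C}}, \quad
\bar{I}_4 = \mathbf{a}_0 \cdot \bar{\mathbf{C}}\,\mathbf{a}_0, \quad J = \det\mathbf{F},
\]

where \bar{\mathbf{C}} = J^{-2/3}\mathbf{F}^{\mathsf{T}}\mathbf{F} is the isochoric right Cauchy-Green tensor, \mathbf{a}_0 the preferred (orientation) direction and U(J) the volumetric part. In a semi-numerical formulation, the scalar functions W_iso and W_fib are not prescribed in closed form but are represented, for instance, by interpolation of the stress-stretch data from the one or two uniaxial tensile tests mentioned above.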

Keywords: hyperelastic, anisotropic, polymer film, thermoforming

Procedia PDF Downloads 618
370 Soybean Lecithin Based Reverse Micellar Extraction of Pectinase from Synthetic Solution

Authors: Sivananth Murugesan, I. Regupathi, B. Vishwas Prabhu, Ankit Devatwal, Vishnu Sivan Pillai

Abstract:

Pectinase is an important enzyme with a wide range of applications, including textile processing and the bioscouring of cotton fibers, coffee and tea fermentation, the purification of plant viruses, oil extraction, etc. The selective separation and purification of pectinase from the fermentation broth, and the recovery of the enzyme from the process stream for reuse, are costly steps in most enzyme-based industries. It is difficult to identify a suitable medium that enhances enzyme activity and retains the enzyme's characteristics during such processes. The cost-effective, selective separation of enzymes through modified liquid-liquid extraction is of current research interest worldwide. Reverse micellar extraction, a globally acclaimed liquid-liquid extraction technique, is well known for the separation and purification of solutes from the feed, offering high solute specificity and partitioning, ease of operation and recycling of the extractants used. Surfactant added to an apolar solvent at concentrations above the critical micelle concentration forms micelles, and the addition of water to the micellar phase in turn forms reverse micelles, or water-in-oil emulsions. Since electrostatic interactions play a major role in the separation/purification of solutes using reverse micelles, these interaction parameters can be altered by changing the pH and by adding cosolvents, surfactants, electrolytes and non-electrolytes. Even though many chemical-based commercial surfactants have been utilized for this purpose, biosurfactants are more suitable for the purification of enzymes used in food applications. The present work focused on the partitioning of pectinase from a synthetic aqueous solution into the reverse micelle phase formed by a biosurfactant, soybean lecithin, dissolved in chloroform. The critical micelle concentration of the soybean lecithin/chloroform solution was identified through refractive index and density measurements. Surfactant concentrations above and below the critical micelle concentration were considered to study their effect on enzyme activity and on enzyme partitioning within the reverse micelle phase. The effect of pH and electrolyte salts on the partitioning behavior was studied by varying the system pH and the concentration of different salts during the forward and back extraction steps. It was observed that lower concentrations of soybean lecithin enhanced the enzyme activity within the water core of the reverse micelle while maximizing the extraction efficiency. The maximum yield of pectinase, 85% with a partition coefficient of 5.7, was achieved at pH 4.8 during forward extraction, and a yield of 88% with a partition coefficient of 7.1 was observed during back extraction at pH 5.0. However, the addition of salt decreased the enzyme activity, and at higher salt concentrations in particular the enzyme activity declined drastically during both the forward and back extraction steps. The results show that reverse micelles formed by soybean lecithin in chloroform may be used for the extraction of pectinase from aqueous solution. Further, the reverse micelles can be considered as nanoreactors that enhance enzyme activity and maximize substrate utilization under optimized conditions, paving the way to process intensification and scale-down.
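
For reference, the partition coefficient and extraction yield quoted above are usually related as sketched below; this is a standard definition assumed for illustration (the abstract does not state the formula), and with equal phase volumes it is consistent with the reported numbers (K = 5.7 gives a yield of about 85%, K = 7.1 about 88%).

\[
K = \frac{C_{\mathrm{rm}}}{C_{\mathrm{aq}}}, \qquad
Y(\%) = \frac{K}{K + V_{\mathrm{aq}}/V_{\mathrm{rm}}} \times 100,
\]

where C_rm and C_aq are the pectinase concentrations in the reverse micellar and aqueous phases, and V_rm and V_aq the corresponding phase volumes.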

Keywords: pectinase, reverse micelles, soybean lecithin, selective partitioning

Procedia PDF Downloads 374
369 Implementation of Real-World Learning Experiences in Teaching Courses of Medical Microbiology and Dietetics for Health Science Students

Authors: Miriam I. Jimenez-Perez, Mariana C. Orellana-Haro, Carolina Guzman-Brambila

Abstract:

As part of the microbiology and dietetics courses, students of medicine and nutrition analyze the main pathogenic microorganisms and perform dietary analyses. The microbiology course describes, in a general way, the main pathogens, including bacteria, viruses, fungi, and parasites, as well as their interaction with the human species. We hypothesize that the lack of practical application in the course causes students not to see its value and clinical application, when in reality it is of great importance for healthcare in our country. The medical microbiology and dietetics courses are mostly theoretical, with only a few hours of laboratory practice. Therefore, it is necessary to incorporate innovative techniques that involve more practice, community fieldwork, real-case analysis, and real-life situations. The purpose of this intervention was to incorporate real-world learning experiences into the instruction of the medical microbiology and dietetics courses, in order to improve the learning process, understanding, and application in the field. During a period of 6 months, medicine and nutrition students worked in a community of urban poverty. We worked with 90 children between 4 and 6 years of age from low-income families with no access to medical services, in order to provide an infectious diagnosis related to the nutritional status of these children. We expected this intervention to give a different kind of context to medical microbiology and dietetics students, improving their learning process and applying their knowledge and laboratory practice to help a community in need. First, students learned basic microbiology diagnostic skills during laboratory sessions. Once students had acquired the ability to perform biochemical tests and handle biological samples, they went to the community and took stool samples from the children (with the corresponding informed consent). Students processed the samples in the laboratory, searching for enteropathogenic microorganisms with the RapID™ ONE system (Thermo Scientific™) and for parasites using the modified Willis and Malloy technique. Finally, they compared the results with the nutritional status of the children, previously measured by anthropometric indicators. The anthropometric results were interpreted with the WHO Anthro software (WHO, 2011). The microbiological results were interpreted with the ERIC® Electronic RapID™ Code Compendium software and validated by a physician. The results consisted of analyses of infectious outcomes and nutritional status. Regarding the community fieldwork learning experiences, our students improved their knowledge of microbiology and were capable of applying it in a real-life situation. They found this kind of learning useful when translating theory into practice. For most of our students, this was their first contact as health caregivers with a real population, and this contact is very important to help them understand the reality of many people in Mexico. In conclusion, real-world or fieldwork learning experiences empower our students to gain a real and better understanding of how they can apply their knowledge of microbiology and dietetics and help a much-needed population; this is the reality that many people live in our country.

Keywords: real-world learning experiences, medical microbiology, dietetics, nutritional status, infectious status

Procedia PDF Downloads 134
368 Detection of Triclosan in Water Based on Nanostructured Thin Films

Authors: G. Magalhães-Mota, C. Magro, S. Sério, E. Mateus, P. A. Ribeiro, A. B. Ribeiro, M. Raposo

Abstract:

Triclosan [5-chloro-2-(2,4-dichlorophenoxy)phenol], belonging to the class of Pharmaceuticals and Personal Care Products (PPCPs), is a broad-spectrum antimicrobial agent and bactericide. Because of its antimicrobial efficacy, it is widely used in personal health and skin care products such as soaps, detergents, hand cleansers, cosmetics, and toothpastes. However, it has been considered to disrupt the endocrine system, for instance thyroid hormone homeostasis and possibly the reproductive system. Considering the widespread use of triclosan, it is expected that environmental and food safety problems regarding triclosan will increase dramatically. Triclosan has been found in river water samples in both North America and Europe and is likely widely distributed wherever triclosan-containing products are used. Although significant amounts are removed in sewage plants, considerable quantities remain in the sewage effluent, initiating widespread environmental contamination. Triclosan undergoes bioconversion to methyl-triclosan, which has been demonstrated to bioaccumulate in fish. In addition, triclosan has been found in human urine samples from persons with no known industrial exposure and in significant amounts in samples of mother's milk, demonstrating its presence in humans. The action of sunlight in river water is known to turn triclosan into dioxin derivatives, raising the possibility of pharmacological dangers not envisioned when the compound was originally utilized. The aim of this work is to detect low concentrations of triclosan in a complex aqueous matrix through the use of a sensor array system, following the electronic tongue concept based on impedance spectroscopy. To achieve this goal, we selected molecules for the sensor with a high affinity for triclosan and a sensitivity that ensures the detection of concentrations at least in the nanomolar range. Thin films of organic molecules and oxides were produced by the layer-by-layer (LbL) technique and by sputtering onto glass solid supports already covered with gold interdigitated electrodes. By submerging the films in complex aqueous solutions with different concentrations of triclosan, resistance and capacitance values were obtained at different frequencies. The preliminary results showed that an array of interdigitated electrode sensors, coated or uncoated with different LbL and sputtered films, can be used to detect TCS traces in aqueous solutions over a wide concentration range, from 10⁻¹² to 10⁻⁶ M. The PCA method was applied to the measured data in order to differentiate the solutions with different concentrations of TCS. Moreover, it was also possible to plot the logarithm of resistance versus the logarithm of concentration, which allowed us to fit the data points with a decreasing straight line with a slope of 0.022 ± 0.006, corresponding to the best sensitivity of our sensor. To find the sensor resolution near the smallest concentration used (Cs = 1 pM), the minimum value that can be measured with resolution is 0.006, so ΔlogC = 0.006/0.022 ≈ 0.27 and, therefore, C − Cs ≈ 0.9 pM. This leads to a sensor resolution of about 0.9 pM near the smallest concentration used, 1 pM. This attained detection limit is lower than the values reported in the literature.
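A hedged sketch of the slope and resolution arithmetic described above. Only the slope magnitude (0.022), its uncertainty (0.006) and Cs = 1 pM come from the abstract; the resistance values are synthetic placeholders generated to follow a weak decreasing power law.

```python
import numpy as np

# Illustrative log(resistance) vs log(concentration) fit for one sensor film.
# Concentrations span the reported range 1e-12 to 1e-6 M; resistances are
# placeholder data constructed so the log-log slope is about -0.022.
conc = np.logspace(-12, -6, 7)        # mol/L
resistance = 1e5 * conc**(-0.022)     # ohm, synthetic placeholder values

slope, intercept = np.polyfit(np.log10(conc), np.log10(resistance), 1)
print(f"slope = {slope:.3f}")         # ~ -0.022 (magnitude = sensitivity)

# Resolution near the smallest concentration Cs, following the abstract's
# reasoning: smallest resolvable change in log10(R) is 0.006, so
# dlogC = 0.006 / |slope| ~ 0.27 decades.
Cs = 1e-12                            # 1 pM
dlogC = 0.006 / abs(slope)
C = Cs * 10**dlogC
print(f"resolution ~ {(C - Cs) * 1e12:.1f} pM")   # ~ 0.9 pM
```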

Keywords: triclosan, layer-by-layer, impedance spectroscopy, electronic tongue

Procedia PDF Downloads 253
367 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization

Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon

Abstract:

The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from Tikhonov regularization (TR) and compressive sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on the linear transformation of a training set of signal waveforms using Principal Component Analysis (PCA) decomposition. Besides the advantage of including additional information from training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayesian theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built out of a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveform, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from the four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from the four voltage levels to the recovery of the signal waveform, the spatial resolution improves to 0.94 cm. Moreover, this result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is equal to 0.93 cm. This is very important information, since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction of the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest can be utilized.
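A minimal numerical sketch of the Tikhonov-regularized recovery idea described above: a waveform is sampled at only a handful of points and then recovered in a PCA basis learned from training waveforms, using the closed-form regularized solution. The Gaussian-pulse training set, the sampling pattern, the number of components and the prior weights are assumptions for illustration only, not the J-PET front-end or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Training set: smooth synthetic "waveforms" standing in for scintillator signals
t = np.linspace(0, 1, 100)
train = np.array([np.exp(-((t - rng.uniform(0.3, 0.5)) / rng.uniform(0.05, 0.1))**2)
                  for _ in range(500)])

# PCA basis of the training waveforms (principal components used as prior information)
mean = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
k = 8                               # keep 8 components, echoing the 8 samples per signal
B = Vt[:k].T                        # (100, k) basis of principal components
prior_var = (s[:k]**2) / len(train) # variance of each component over the training set

# --- Measurement: the signal is only sampled at a few assumed time points
sample_idx = np.linspace(5, 95, 8, dtype=int)
A = np.zeros((len(sample_idx), len(t)))
A[np.arange(len(sample_idx)), sample_idx] = 1.0
x_true = np.exp(-((t - 0.42) / 0.07)**2)
y = A @ x_true + 0.01 * rng.standard_normal(len(sample_idx))

# --- Tikhonov-regularized recovery in the PCA basis (closed-form normal equations):
# minimize ||A (B c + mean) - y||^2 + sum_i c_i^2 / prior_var_i over coefficients c
M = A @ B
lhs = M.T @ M + np.diag(1.0 / prior_var)
c = np.linalg.solve(lhs, M.T @ (y - A @ mean))
x_rec = B @ c + mean

print("relative recovery error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```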

Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization

Procedia PDF Downloads 447
366 Evaluation in Vitro and in Silico of Pleurotus ostreatus Capacity to Decrease the Amount of Low-Density Polyethylene Microplastics Present in Water Sample from the Middle Basin of the Magdalena River, Colombia

Authors: Loren S. Bernal, Catalina Castillo, Carel E. Carvajal, José F. Ibla

Abstract:

Plastic pollution, specifically microplastics, has become a significant issue in aquatic ecosystems worldwide. The large amount of plastic waste carried by water tributaries has resulted in the accumulation of microplastics in water bodies. The polymer aging process caused by environmental influences, such as photodegradation and chemical degradation of additives, leads to polymer embrittlement and property changes that call for degradation or reduction procedures in rivers. However, there is a lack of such procedures for freshwater bodies, where these processes develop over extended periods. The aim of this study is to evaluate the potential of the fungus Pleurotus ostreatus to reduce low-density polyethylene (LDPE) microplastics present in freshwater samples collected from the middle basin of the Magdalena River in Colombia. The study evaluates this process both in vitro and in silico by identifying the growth capacity of Pleurotus ostreatus in the presence of microplastics and by identifying the most likely interactions of Pleurotus ostreatus enzymes and their affinity energies. The study follows an engineering development methodology applied on an experimental basis. The in vitro evaluation protocol focused on the growth capacity of Pleurotus ostreatus on microplastics using enzymatic inducers. For the in silico evaluation, molecular docking simulations were conducted using the AutoDock 1.5.7 program to calculate interaction energies, and molecular dynamics were evaluated using the myPresto Portal and the GROMACS program to calculate radii of gyration and energies. The results showed that Pleurotus ostreatus has the potential to degrade low-density polyethylene microplastics. The in vitro evaluation revealed the adherence of Pleurotus ostreatus to LDPE, observed by scanning electron microscopy. The best results were obtained with enzymatic inducers such as MnSO4, which activate the laccase or manganese peroxidase enzymes involved in the degradation process. The in silico modelling demonstrated that Pleurotus ostreatus is able to interact with the microplastics present in LDPE, showing favorable affinity energies in molecular docking, while molecular dynamics yielded a minimum energy and a representative radius of gyration for each enzyme and its substrate. The study contributes to the development of bioremediation processes for the removal of microplastics from freshwater sources using the fungus Pleurotus ostreatus. The in silico study provides insights into the affinity energies of the microplastic-degrading enzymes of Pleurotus ostreatus and their interaction with low-density polyethylene. The study demonstrated that Pleurotus ostreatus can interact with LDPE microplastics, making it a good candidate for the development of bioremediation processes that aid in the recovery of freshwater sources. The results suggest that bioremediation could be a promising approach to reducing microplastics in freshwater systems.
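The abstract reports radii of gyration computed from the molecular dynamics trajectories. As an illustration of what that quantity is (not the authors' GROMACS workflow), a mass-weighted radius of gyration can be computed directly from atomic coordinates; the coordinates and masses below are made up.

```python
import numpy as np

def radius_of_gyration(coords, masses):
    """Mass-weighted radius of gyration:
    Rg = sqrt( sum_i m_i |r_i - r_com|^2 / sum_i m_i ).

    coords: (N, 3) array of atomic positions (nm); masses: (N,) array (amu).
    """
    coords = np.asarray(coords, dtype=float)
    masses = np.asarray(masses, dtype=float)
    com = np.average(coords, axis=0, weights=masses)   # center of mass
    sq_dist = np.sum((coords - com)**2, axis=1)
    return np.sqrt(np.sum(masses * sq_dist) / np.sum(masses))

# Toy example with placeholder coordinates (not from the study's trajectories)
coords = np.array([[0.0, 0.0, 0.0],
                   [0.3, 0.1, 0.0],
                   [0.5, 0.4, 0.2]])
masses = np.array([12.0, 14.0, 16.0])
print(f"Rg = {radius_of_gyration(coords, masses):.3f} nm")
```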

Keywords: bioremediation, in silico modelling, microplastics, Pleurotus ostreatus

Procedia PDF Downloads 115
365 The Roles of Mandarin and Local Dialect in the Acquisition of L2 English Consonants Among Chinese Learners of English: Evidence From Suzhou Dialect Areas

Authors: Weijing Zhou, Yuting Lei, Francis Nolan

Abstract:

In the domain of second language acquisition, whenever pronunciation errors or acquisition difficulties are found, researchers habitually attribute them to the negative transfer of the native language or local dialect. To what extent do Mandarin and local dialects affect English phonological acquisition for Chinese learners of English as a foreign language (EFL)? Little evidence, however, has been found through empirical research in China. To address this core issue, the present study conducted phonetic experiments to explore the roles of the local dialect and Mandarin in Chinese EFL learners' acquisition of L2 English consonants. Besides Mandarin, the sole national language in China, Suzhou Dialect was selected as the target local dialect because of its distinct phonology from Mandarin. The experimental group consisted of 30 junior English majors at Yangzhou University, who were born and raised in Suzhou, had acquired Suzhou Dialect since early childhood, and were able to communicate freely and fluently with each other in Suzhou Dialect, Mandarin, and English. The consonantal target segments were all the consonants of English, Mandarin, and Suzhou Dialect in typical carrier words embedded in the carrier sentence Say again. The control group consisted of two Suzhou Dialect experts, two Mandarin radio broadcasters, and two British RP phoneticians, who served as the standard speakers of the three languages. The reading corpus was recorded and sampled in the phonetics laboratories at Yangzhou University, Soochow University, and Cambridge University, respectively, then transcribed, segmented, and analyzed acoustically via the Praat software, and finally analyzed statistically via Excel and SPSS. The main findings are as follows. First, in terms of correct acquisition rates (CARs) of all the consonants, Mandarin ranked top (92.83%), English second (74.81%), and Suzhou Dialect last (70.35%); significant differences were found only between the CARs of Mandarin and English and between the CARs of Mandarin and Suzhou Dialect, demonstrating that Mandarin was overwhelmingly more robust than English or Suzhou Dialect in the subjects' multilingual phonological ecology. Second, in terms of typical acoustic features, the average duration of all the consonants, plus the voice onset time (VOT) of plosives, fricatives, and affricates in the three languages, was much longer than that of the standard speakers; the intensities of English fricatives and affricates were higher than those of the RP speakers but lower than those of the Mandarin and Suzhou Dialect standard speakers; and the formants of English nasals and approximants were significantly different from those of Mandarin and Suzhou Dialect, illustrating the inconsistent acoustic variations among the three languages. Third, in terms of typical pronunciation variations or errors, there were significant interlingual interactions between the three consonant systems, in which Mandarin consonants were absolutely dominant, accounting for the strong transfer from L1 Mandarin to L2 English rather than from the earlier-acquired L1 local dialect to L2 English. This is largely because the subjects had been knowingly exposed to Mandarin since nursery school and were strictly required to speak Mandarin throughout all formal education from primary school to university.
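As a hedged illustration of how correct acquisition rates (CARs) like those above can be computed and compared pairwise across the three languages. The token counts are placeholders chosen only to roughly mirror the reported rates, and the chi-square procedure is an assumption for illustration, not the study's SPSS analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Placeholder counts of correctly vs incorrectly produced consonant tokens per
# language (not the study's raw data); CARs roughly mirror the reported values.
tokens = {
    "Mandarin":       (464, 36),
    "English":        (374, 126),
    "Suzhou Dialect": (352, 148),
}

for lang, (correct, wrong) in tokens.items():
    print(f"{lang}: CAR = {100 * correct / (correct + wrong):.2f}%")

# Pairwise chi-square tests on 2x2 contingency tables of correct/incorrect counts
pairs = [("Mandarin", "English"), ("Mandarin", "Suzhou Dialect"),
         ("English", "Suzhou Dialect")]
for a, b in pairs:
    table = np.array([tokens[a], tokens[b]])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{a} vs {b}: chi2 = {chi2:.1f}, p = {p:.3g}")
```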

Keywords: acquisition of L2 English consonants, role of Mandarin, role of local dialect, Chinese EFL learners from Suzhou Dialect areas

Procedia PDF Downloads 100
364 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format

Authors: Maryam Fallahpoor, Biswajeet Pradhan

Abstract:

Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretation and thereby reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: Neuroimaging Informatics Technology Initiative (NIfTI) and Digital Imaging and Communications in Medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans, while investigating the impact of the image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was labeled as 1, while normal images were assigned a value of 0. Subsequently, a U-Net-based lung segmentation module was applied to obtain 3D segmented lung regions. The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of the original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest area under the curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
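A simplified sketch of the volume pre-processing pipeline described above (isotropic resampling to 1 mm, HU clipping to (−1000, 400), resizing to 128 × 128 × 60, normalization and zero-centering). It assumes the DICOM series has already been loaded into a NumPy array with known voxel spacing; it is not the authors' exact code, and the toy volume at the end is random data.

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume, spacing, target_shape=(60, 128, 128),
                  hu_range=(-1000, 400)):
    """Resample a CT volume to 1 mm isotropic voxels, clip HU values,
    resize to a fixed shape, then normalize and zero-center.

    volume:  (slices, rows, cols) array of HU values
    spacing: (z, y, x) voxel spacing in mm
    """
    # 1. Resample to 1 mm x 1 mm x 1 mm isotropic resolution
    iso = zoom(volume.astype(np.float32), zoom=spacing, order=1)

    # 2. Clip intensities to the lung-relevant HU window
    iso = np.clip(iso, hu_range[0], hu_range[1])

    # 3. Resize to the fixed network input shape (here 60 x 128 x 128)
    factors = [t / s for t, s in zip(target_shape, iso.shape)]
    resized = zoom(iso, zoom=factors, order=1)

    # 4. Normalize to [0, 1] and zero-center
    norm = (resized - hu_range[0]) / (hu_range[1] - hu_range[0])
    return norm - norm.mean()

# Toy usage with a random volume standing in for a loaded DICOM series
vol = np.random.randint(-1000, 400, size=(80, 512, 512)).astype(np.int16)
x = preprocess_ct(vol, spacing=(2.5, 0.7, 0.7))
print(x.shape, x.dtype)
```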

Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format

Procedia PDF Downloads 89
363 Growth Mechanism and Sensing Behaviour of Sn Doped ZnO Nanoprisms Prepared by Thermal Evaporation Technique

Authors: Sudip Kumar Sinha, Saptarshi Ghosh

Abstract:

While there is a perpetual buzz around zinc oxide (ZnO) superstructures for their unique optical features, this versatile material has been constantly utilized to produce tailored electronic properties through the rendition of distinct morphologies. And yet, the unorthodox approach of implementing novel 1D nanostructures of ZnO (pristine or doped) for volatile sensing applications has ample scope to accommodate new, unconventional morphologies. In the last two decades, solid-state sensors have attracted much interest for their relevance in identifying pollutant, toxic, and other industrial gases. In particular, gas sensors based on metal oxide semiconducting (wide-Eg) nanomaterials have recently attracted intensive attention owing to their high sensitivity and fast response and recovery times. When these materials are exposed to air, atmospheric O2 dissociates and adsorbs on the surface of the sensors, trapping outer-shell electrons. As a result, a depleted zone is formed on the surface of the sensors, which enhances the potential barrier height at the grain boundaries. Once the sensor is exposed to a target gas, the chemical interaction between the chemisorbed oxygen and the specific gas liberates the trapped electrons. Therefore, altering the amount of adsorbate is a considerable approach to improving the sensitivity to any target gas or vapour molecule. Accordingly, this study presents the spontaneous, self-catalytic creation of Sn-doped ZnO hexagonal nanoprisms on Si (100) substrates through the thermal evaporation-condensation method, and their subsequent deployment for volatile sensing. In particular, the sensors were utilized to detect molecules of ethanol, acetone, and ammonia below their permissible exposure limits, which returned sensitivities of around 85%, 80%, and 50%, respectively. The influence of the Sn concentration on the growth, microstructural, and optical properties of the nanoprisms, along with its role in augmenting the sensing parameters, has been detailed. The single-crystalline nanostructures have typical diameters ranging from 300 to 500 nm and lengths that extend up to a few micrometers. HRTEM images confirmed the hexagonal crystallography of the nanoprisms, while the SAED pattern confirmed their single-crystalline nature. The growth habit is along the low-index <0001> directions. It has been seen that the growth mechanism of the as-deposited nanostructures is directly influenced by the supersaturation ratio, fairly high substrate temperatures, and specific surface defects in certain crystallographic planes, all acting cooperatively to decide the final product morphology. Room-temperature photoluminescence (PL) spectra of these rod-like structures exhibit a weak ultraviolet (UV) emission peak at around 380 nm and a broad green emission peak in the 505 nm regime. An estimate of the sensing parameters against the dispensed target molecules highlighted the potential of the nanoprisms as an effective volatile sensing material. The Sn-doped ZnO nanostructures with their unique prismatic morphology may find important applications in various chemical sensors as well as other potential nanodevices.
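The sensitivities quoted above are commonly defined from the sensor resistance in air (Ra) and in the target gas (Rg); the abstract does not state the exact formula or resistance values, so the definition and numbers below are a generic, hedged illustration rather than data from the study.

```python
def sensor_response(r_air, r_gas):
    """Response (%) of an n-type metal-oxide sensor to a reducing gas,
    commonly defined as (Ra - Rg) / Ra * 100. This definition is assumed,
    not taken from the paper."""
    return 100.0 * (r_air - r_gas) / r_air

# Placeholder resistances (MOhm) in air and on exposure to each analyte
readings = {"ethanol": (10.0, 1.5), "acetone": (10.0, 2.0), "ammonia": (10.0, 5.0)}
for gas, (ra, rg) in readings.items():
    print(f"{gas}: response = {sensor_response(ra, rg):.0f}%")
```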

Keywords: gas sensor, HRTEM, photoluminescence, ultraviolet, zinc oxide

Procedia PDF Downloads 240
362 Enhancing Engineering Students Educational Experience: Studying Hydrostatic Pumps Association System in Fluid Mechanics Laboratories

Authors: Alexandre Daliberto Frugoli, Pedro Jose Gabriel Ferreira, Pedro Americo Frugoli, Lucio Leonardo, Thais Cavalheri Santos

Abstract:

Laboratory classes in engineering courses are essential for students to be able to integrate theory with practical reality by handling equipment and observing experiments. Through the investigation of physical phenomena, students can learn about the complexities of science. Over the past years, universities in developing countries have been reducing the course load of engineering courses in accordance with cost-cutting agendas. Quality education is the object of study for researchers and requires educators and educational administrators who are able to demonstrate that their institutions can provide great learning opportunities at reasonable cost. Didactic test benches are indispensable equipment in educational activities related to the study of turbo hydraulic pumps and pumping facilities, but they have a high cost and require long class time due to measurements and equipment adjustment. In order to overcome the aforementioned obstacles, and aligned with the professional objectives of an engineer, GruPEFE - UNIP (Research Group in Physics Education for Engineering - Universidade Paulista) has developed a multi-purpose stand for the discipline of fluid mechanics which allows the study of velocity and flow meters, head losses, and pump association. In this work, results obtained by the association of hydraulic pumps in series and in parallel are presented and discussed, mainly analyzing the repeatability of the experimental procedures and their agreement with theory. For the series association, two identical pumps were used, connecting the discharge of one pump to the suction of the next, allowing the fluid to receive the power of all the machines in the association. The characteristic curve of the set is obtained from the curves of each of the pumps by adding the heads corresponding to the same flow rates. The same pumps were then associated in parallel. In this association, the discharge piping is common to the two machines. The characteristic curve of the set was obtained by adding, for each value of H (head), the flow rates of each pump. For the tests, the inlet and outlet pressures of each pump were measured. For each configuration there were three sets of measurements, varying the flow rate in the range from 6.0 to 8.5 m³/h. For both associations, the results showed excellent repeatability, with variations of less than 10% between sets of measurements, and also good agreement with theory; this variation is consistent with the instrumental uncertainty. Thus, the results validate the use of the fluids bench designed for didactic purposes. As future work, a digital acquisition system is being developed, using differential pressure sensors for extremely low pressures (approximately 2 to 2000 Pa) connected to an Arduino microcontroller.
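A small numerical sketch of how the combined characteristic curves described above are built from the individual pump curves: in series, heads are added at equal flow; in parallel, flows are added at equal head. The quadratic single-pump curve and its coefficients are placeholders, not the bench data.

```python
import numpy as np

# Placeholder characteristic curve of one pump: H(Q) = H0 - k * Q^2
# (H in m of water column, Q in m^3/h); both pumps are assumed identical.
H0, k = 25.0, 0.25
Q = np.linspace(0.0, 9.0, 50)
H_single = H0 - k * Q**2

# Series association: the same flow passes through both pumps, heads add up
H_series = 2 * H_single          # H_series(Q) = 2 * H(Q)

# Parallel association: the same head acts across both pumps, flows add up
Q_parallel = 2 * Q               # Q_parallel(H) = 2 * Q(H)
H_parallel = H_single

# Example: compare heads at Q = 7 m^3/h (inside the 6.0-8.5 m^3/h test range)
q_test = 7.0
print("single pump head:", np.interp(q_test, Q, H_single))
print("series head:     ", np.interp(q_test, Q, H_series))
print("parallel head:   ", np.interp(q_test, Q_parallel, H_parallel))
```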

Keywords: engineering education, fluid mechanics, hydrostatic pumps association, multi-purpose stand

Procedia PDF Downloads 222
361 Structural Characterization and Hot Deformation Behaviour of Al3Ni2/Al3Ni In-Situ Core-Shell Intermetallic in Al-4Cu-Ni Composite

Authors: Ganesh V., Asit Kumar Khanra

Abstract:

An in-situ powder metallurgy technique was employed to create Ni-Al3Ni/Al3Ni2 core-shell-shaped aluminum-based intermetallic-reinforced composites. The impact of Ni addition on the phase composition, microstructure, and mechanical characteristics of Al-4Cu-xNi (x = 0, 2, 4, 6, 8, 10 wt.%) in relation to various sintering temperatures was investigated. The microstructure evolution was extensively examined using X-ray diffraction (XRD), scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDX), and transmission electron microscopy (TEM). Under the initial sintering conditions, the formation of "single core-shell" structures was observed, consisting of a Ni core with Al3Ni2 intermetallic, whereas samples sintered at 620°C exhibited both "single core-shell" and "double core-shell" structures containing Al3Ni2 and Al3Ni intermetallics formed between the Al matrix and the Ni reinforcements. The composite achieved a high compressive yield strength of 198.13 MPa and an ultimate strength of 410.68 MPa, with 24% total elongation, for the sample containing 10 wt.% Ni. Additionally, there was a substantial increase in hardness, reaching 124.21 HV, which is 2.4 times higher than that of the base aluminum. Nanoindentation studies showed hardness values of 1.54, 4.65, 21.01, 13.16, 5.52, 6.27, and 8.39 GPa corresponding to the α-Al matrix, Ni, Al3Ni2, the Ni/Al3Ni2 interface, Al3Ni, and their respective interfaces. Even at 200°C, the composite retained 54% of its room-temperature strength (90.51 MPa). To investigate the deformation behavior of the composite, experiments were conducted at deformation temperatures ranging from 300°C to 500°C, with strain rates varying from 0.0001 s⁻¹ to 0.1 s⁻¹. A sine-hyperbolic constitutive equation was developed to characterize the flow stress of the composite, which exhibited a hot deformation activation energy of 231.44 kJ/mol, significantly higher than that for self-diffusion in pure aluminum. The formation of Al2Cu intermetallics at the grain boundaries and of Al3Ni2/Al3Ni within the matrix hindered dislocation movement, leading to an increase in activation energy, which might adversely affect high-temperature applications. Two models, a strain-compensated Arrhenius model and an artificial neural network (ANN) model, were developed to predict the composite's flow behavior. The ANN model outperformed the strain-compensated Arrhenius model with a lower average absolute relative error of 2.266%, a smaller root mean square error of 1.2488 MPa, and a higher correlation coefficient of 0.9997. Processing maps revealed that the optimal hot working conditions for the composite lie in the temperature range of 420-500°C and at strain rates between 0.0001 s⁻¹ and 0.001 s⁻¹. The changes in the composite microstructure were successfully correlated with the theory of processing maps, considering the temperature and strain rate conditions. The uneven distribution in the shape and size of the core-shell/Al3Ni intermetallic compounds influenced the flow stress curves, leading to dynamic recrystallization (DRX), followed by partial dynamic recovery (DRV), and ultimately strain hardening. This composite material shows promise for applications in the automobile and aerospace industries.
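A sketch of the sine-hyperbolic (Arrhenius-type) constitutive form mentioned above, inverted through the Zener-Hollomon parameter to give flow stress. It uses the reported activation energy of 231.44 kJ/mol, but the material constants A, α and n are placeholders, since the fitted values are not given in the abstract.

```python
import numpy as np

R = 8.314      # J/(mol K), universal gas constant
Q = 231.44e3   # J/mol, hot deformation activation energy reported in the abstract

# Placeholder material constants of the sine-hyperbolic law (not the study's
# fitted values): strain_rate = A * sinh(alpha * sigma)^n * exp(-Q / (R*T))
A = 1.0e16     # 1/s
alpha = 0.012  # 1/MPa
n = 5.0

def flow_stress(strain_rate, temp_C):
    """Flow stress (MPa) from the Zener-Hollomon parameter Z = strain_rate * exp(Q/RT):
    sigma = (1/alpha) * arcsinh((Z/A)^(1/n))."""
    T = temp_C + 273.15
    Z = strain_rate * np.exp(Q / (R * T))
    return (1.0 / alpha) * np.arcsinh((Z / A) ** (1.0 / n))

# Evaluate over the tested window: 300-500 C, 1e-4 to 1e-1 1/s
for T in (300, 400, 500):
    for rate in (1e-4, 1e-2, 1e-1):
        print(f"T = {T} C, rate = {rate:.0e} 1/s -> sigma ~ {flow_stress(rate, T):.1f} MPa")
```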

Keywords: core-shell structure, hot deformation, intermetallic compounds, powder metallurgy

Procedia PDF Downloads 22
360 Effects of Tramadol Administration on the Ovary of Adult Rats and the Possible Recovery after Tramadol Withdrawal: A Light and Electron Microscopic Study

Authors: Heba Kamal Mohamed

Abstract:

Introduction: Tramadol is a weak μ-opioid receptor agonist with an additional analgesic effect arising from the inhibition of norepinephrine and serotonin reuptake. Nowadays, tramadol hydrochloride is frequently used as a pain reliever. Tramadol is recommended for the management of acute and chronic pain of moderate to severe intensity associated with a variety of diseases or problems, including osteoarthritis, diabetic neuropathy, neuropathic pain, and even perioperative pain in human patients. In obstetrics and gynecology, tramadol is used extensively to treat postoperative pain. Aim of the study: This study was undertaken to investigate the histological (light and electron microscopic) and immunohistochemical effects of long-term tramadol treatment on the ovary of adult rats and the possible recovery after tramadol withdrawal. Design: Experimental study. Materials and methods: Thirty adult female albino rats were used in this study. They were classified into three main groups (10 rats each). Group I served as the control group. Group II rats were subcutaneously injected with tramadol 40 mg/kg three times per week for 8 weeks. Group III rats were subcutaneously injected with tramadol 40 mg/kg three times per week for 8 weeks and were then kept for another 8 weeks without treatment for recovery. At the end of the experiment, the rats were sacrificed and bilateral oophorectomy was carried out; the ovaries were processed for histological study (light and electron microscopic) and for immunohistochemical reaction for caspase-3 (an apoptotic protein). Results: Examination of the ovary of tramadol-treated rats (group II) revealed many atretic ovarian follicles; some follicles showed detachment of the oocyte from the surrounding granulosa cells and others showed loss of the oocyte. Many follicles revealed degenerated, vacuolated oocytes and vacuolated theca folliculi cells. Granulosa cells appeared shrunken, disrupted, and loosely attached, with vacuolated cytoplasm and pyknotic nuclei. Some follicles showed separation of the granulosa cells from the theca folliculi layer. The ultrastructural study revealed granulosa cells with electron-dense indented nuclei, damaged mitochondria, and granular vacuolated cytoplasm. Other cells showed accumulation of a large amount of lipid droplets in their cytoplasm. Some follicles revealed rarefaction of the oocyte cytoplasm and an absent zona pellucida. Moreover, apoptotic changes were detected by immunohistochemical staining in the form of increased staining intensity for caspase-3. With Masson's trichrome stain, there was increased collagen fibre deposition in the ovarian cortical stroma, and the walls of the blood vessels appeared thickened. In the withdrawal group (group III), there was only a slight improvement in the histological and immunohistochemical changes. Conclusion: Tramadol had serious deleterious effects on ovarian structure. Thus, it should be used with caution, especially when long-term treatment is indicated. Withdrawal of tramadol led to only a slight improvement in the structural impairment of the ovary.

Keywords: tramadol, ovary, withdrawal, rats

Procedia PDF Downloads 293
359 An Aptasensor Based on Magnetic Relaxation Switch and Controlled Magnetic Separation for the Sensitive Detection of Pseudomonas aeruginosa

Authors: Fei Jia, Xingjian Bai, Xiaowei Zhang, Wenjie Yan, Ruitong Dai, Xingmin Li, Jozef Kokini

Abstract:

Pseudomonas aeruginosa is a Gram-negative, aerobic, opportunistic human pathogen that is present in soil, water, and food. This microbe has been recognized as a representative food-borne spoilage bacterium that can lead to many types of infections. Considering the casualties and property loss caused by P. aeruginosa, the development of a rapid and reliable technique for its detection is crucial. The whole-cell aptasensor, an emerging biosensor that uses an aptamer as a capture probe to bind the whole cell, has attracted much attention for food-borne pathogen detection due to its convenience and high sensitivity. Here, a low-field magnetic resonance imaging (LF-MRI) aptasensor for the rapid detection of P. aeruginosa was developed. The basic detection principle of the magnetic relaxation switch (MRSw) nanosensor lies in the 'T₂-shortening' effect of magnetic nanoparticles in NMR measurements. Briefly, the transverse relaxation time (T₂) of neighboring water protons is shortened when magnetic nanoparticles cluster due to cross-linking upon the recognition and binding of biological targets, or simply when the concentration of the magnetic nanoparticles increases. Such shortening is related to both the state change (aggregation or dissociation) and the concentration change of the magnetic nanoparticles and can be detected using NMR relaxometry or MRI scanners. In this work, magnetic nanoparticles of two different sizes, 10 nm (MN₁₀) and 400 nm (MN₄₀₀) in diameter, were separately functionalized with an anti-P. aeruginosa aptamer through 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC)/N-hydroxysuccinimide (NHS) chemistry, to capture and enrich P. aeruginosa cells. When incubated with the target, a 'sandwich' (MN₁₀-bacteria-MN₄₀₀) complex is formed, driven by the binding of MN₄₀₀ to P. aeruginosa through aptamer recognition as well as the aggregation of MN₁₀ conjugates on the surface of P. aeruginosa. Due to the different magnetic behavior of MN₁₀ and MN₄₀₀ in a magnetic field, caused by their different saturation magnetizations, the MN₁₀-bacteria-MN₄₀₀ complex, as well as the unreacted MN₄₀₀ in the solution, can be quickly removed by magnetic separation; as a result, only unreacted MN₁₀ remain in the solution. The remaining MN₁₀, which are superparamagnetic and stable in a low-field magnetic field, serve as the signal readout for the T₂ measurement. Under optimum conditions, the LF-MRI platform provides both image analysis and quantitative detection of P. aeruginosa, with a detection limit as low as 100 cfu/mL. The feasibility and specificity of the aptasensor were demonstrated by detecting real food samples and validated using plate counting methods. With only two steps and less than 2 hours needed for the detection procedure, this robust aptasensor can detect P. aeruginosa over a wide linear range from 3.1 × 10² cfu/mL to 3.1 × 10⁷ cfu/mL, which is superior to the conventional plate counting method and other molecular biology assays. Moreover, the aptasensor has the potential to detect other bacteria or toxins by switching to suitable aptamers. Considering its excellent accuracy, feasibility, and practicality, the whole-cell aptasensor provides a promising platform for the quick, direct, and accurate determination of food-borne pathogens at the cell level.
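A hedged sketch of how a T₂-based calibration over the reported linear range (3.1 × 10² to 3.1 × 10⁷ cfu/mL) might be fitted and inverted, with an assumed 3-sigma criterion for the detection limit. The ΔT₂ values and blank statistics are synthetic placeholders, not measurements from the study.

```python
import numpy as np

# Synthetic calibration of the T2-based readout: change in transverse relaxation
# time (dT2, ms) vs bacterial concentration over the reported linear range.
conc = np.array([3.1e2, 3.1e3, 3.1e4, 3.1e5, 3.1e6, 3.1e7])   # cfu/mL
dT2  = np.array([4.0, 9.5, 15.2, 20.8, 26.1, 31.9])           # ms, placeholder data

slope, intercept = np.polyfit(np.log10(conc), dT2, 1)          # linear in log(conc)

def concentration_from_dT2(dT2_measured):
    """Invert the log-linear calibration to estimate cfu/mL from a dT2 reading."""
    return 10 ** ((dT2_measured - intercept) / slope)

print(f"dT2 = 18 ms -> ~{concentration_from_dT2(18.0):.1e} cfu/mL")

# Assumed 3-sigma detection-limit criterion: the concentration whose predicted
# signal equals the blank mean plus three blank standard deviations.
blank_mean, blank_sd = 1.0, 0.6                                # ms, placeholders
lod = concentration_from_dT2(blank_mean + 3 * blank_sd)
print(f"estimated LOD ~ {lod:.0f} cfu/mL")
```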

Keywords: magnetic resonance imaging, meat spoilage, P. aeruginosa, transverse relaxation time

Procedia PDF Downloads 153
358 Nature as a Human Health Asset: An Extensive Review

Authors: C. Sancho Salvatierra, J. M. Martinez Nieto, R. García Gonzalez-Gordon, M. I. Martinez Bellido

Abstract:

Introduction: Nature can act as an asset for human health, protecting against possible diseases and promoting both physical and mental health. Goals: This paper aims to determine which natural elements show evidence of a positive influence on human health, on which particular aspects, and how. It also aims to determine the best biomarkers to measure such influence. Method: A systematic literature review was carried out. First, a general free-text search was performed in databases such as Scopus, PubMed, and PsycINFO. Secondly, a specific search was performed combining keywords in order of increasing complexity. The snowballing technique was also used, and the databases of the CSIC (the Spanish National Research Council) were consulted. Of the 130 articles obtained and reviewed, 80 referred to natural elements that influenced health. These 80 articles were classified and tabulated according to the natural elements found, the health aspects studied, the health measurement parameters used, and the measurement techniques employed. In this classification, the results of the studies were coded according to whether they were positive, negative, or neutral, both for the elements of nature and for the aspects of health studied. Finally, the results of the 80 selected studies were summarized and categorized according to the elements of nature that showed the greatest positive influence on health and the biomarkers that had shown greater reliability in measuring said influence. Results: Of the 80 articles studied, 24 (30.0%) were reviews and 56 (70.0%) were original research articles. Among the 24 reviews, 18 (75%) found positive effects of natural elements on health, and 6 (25%) found both positive and negative effects. Of the 56 original articles, 47 (83.9%) showed positive results, 3 (5.4%) both positive and negative, 4 (7.1%) negative effects, and 2 (3.6%) found no effects. The results reflect positive effects of different elements of nature on the following pathologies: diabetes, high blood pressure, stress, attention deficit hyperactivity disorder, and psychotic, anxiety, and affective disorders. They also show positive effects on the following areas: the immune system, social interaction, recovery after illness, mood, decreased aggressiveness, focused attention, cognitive performance, restful sleep, vitality, and sense of well-being. Among the elements of nature studied, those that show the greatest positive influence on health are forest immersion, natural views, daylight, outdoor physical activity, active transport, vegetation biodiversity, natural sounds, and green residences. The biomarkers that showed greater reliability for measuring the effects of natural elements were the levels of cortisol (both in blood and in saliva), vitamin D, serotonin, and melatonin, as well as blood pressure, heart rate, muscle tension, and skin conductance. Conclusions: Nature is an asset for health, well-being, and quality of life. Awareness, education, and health promotion programmes are needed, based on the elements that nature provides, which in turn generate proactive attitudes in the population towards the protection and conservation of nature. Studies related to this subject in Spain are very scarce. Acknowledgements: This study has been promoted and partially financed by the Environmental Foundation Jaime González-Gordon.

Keywords: health, green areas, nature, well-being

Procedia PDF Downloads 279
357 Satisfaction Among Preclinical Medical Students with Low-Fidelity Simulation-Based Learning

Authors: Shilpa Murthy, Hazlina Binti Abu Bakar, Juliet Mathew, Chandrashekhar Thummala Hlly Sreerama Reddy, Pathiyil Ravi Shankar

Abstract:

Simulation is defined as a technique that replaces or expands real experiences with guided experiences that interactively imitate real-world processes or systems. Simulation enables learners to train in a safe and non-threatening environment. For decades, simulation has been considered an integral part of clinical teaching and learning strategy in medical education. The several types of simulation used in medical education and the clinical environment can be applied through several models, including full-body mannequins, task trainers, standardized simulated patients, virtual or computer-generated simulation, or hybrid simulation, all of which can be used to facilitate learning. Simulation allows healthcare practitioners to acquire skills and experience while safeguarding patient safety. The recent COVID pandemic also led to an increase in simulation use, as there were limitations on medical student placements in hospitals and clinics. The learning is tailored to the educational needs of the students to make the learning experience more valuable. Simulation in the pre-clinical years faces challenges with resource constraints, effective curricular integration, student engagement and motivation, and evidence of educational impact, to mention a few. As instructors, we may rely more on simulation for pre-clinical students, while the students' confidence levels and perceived competence still need to be evaluated. Our research question was whether the implementation of simulation-based learning positively influences preclinical medical students' confidence levels and perceived competence. This study was done to align the teaching activities with the students' learning experience, to introduce more low-fidelity simulation-based teaching sessions in the pre-clinical years, and to obtain students' input into curriculum development as part of inclusivity. The study was carried out at the International Medical University, involving pre-clinical year (medical) students who started with low-fidelity simulation-based medical education in their first semester and were gradually introduced to medium fidelity as well. The Student Satisfaction and Self-Confidence in Learning Scale questionnaire from the National League for Nursing was employed to collect the responses. The internal consistency reliability of the survey items was tested with Cronbach's alpha using Excel. IBM SPSS for Windows version 28.0 was used to analyze the data. Spearman's rank correlation was used to analyze the correlation between students' satisfaction and self-confidence in learning. The significance level was set at a p value of less than 0.05. The results from this study have prompted the researchers to undertake a larger-scale evaluation, which is currently underway. The current results show that 70% of students agreed that the teaching methods used in the simulation were helpful and effective. The sessions depend on the learning materials provided and on how the facilitators engage the students and make the session more enjoyable. The feedback provided input on the following areas to focus on while designing simulations for pre-clinical students: quality learning materials, an interactive environment, motivating content, the skills and knowledge of the facilitator, and effective feedback.
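The reliability and correlation analyses described above were run in Excel and SPSS; as an illustrative equivalent, the sketch below computes Cronbach's alpha and a Spearman rank correlation on made-up Likert responses. The 13-item layout and the split into satisfaction and self-confidence subscales are assumptions for illustration, not the study's data or scoring scheme.

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of Likert scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
# Placeholder 1-5 Likert responses for 30 students on 13 survey items
responses = rng.integers(3, 6, size=(30, 13))

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.2f}")

# Spearman correlation between an assumed satisfaction subscale and
# an assumed self-confidence subscale
satisfaction = responses[:, :5].mean(axis=1)
confidence = responses[:, 5:].mean(axis=1)
rho, p = spearmanr(satisfaction, confidence)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```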

Keywords: low-fidelity simulation, pre-clinical simulation, student satisfaction, self-confidence

Procedia PDF Downloads 78