Search results for: X-ray Image detection
371 Biosensor for Determination of Immunoglobulin A, E, G and M
Authors: Umut Kokbas, Mustafa Nisari
Abstract:
Immunoglobulins, also known as antibodies, are glycoprotein molecules produced by plasma cells, which develop from activated B cells. Antibodies are critical molecules of the immune response, helping the immune system specifically recognize and destroy antigens such as bacteria, viruses, and toxins. Immunoglobulin classes differ in their biological properties, structures, targets, functions, and distributions. Five major classes of antibodies have been identified in mammals: IgA, IgD, IgE, IgG, and IgM. Evaluation of the immunoglobulin isotype can provide useful insight into the complex humoral immune response. Knowledge of immunoglobulin structure and classes is also important for the selection and preparation of antibodies for immunoassays and other detection applications. The immunoglobulin test measures the level of certain immunoglobulins in the blood. IgA, IgG, and IgM are usually measured together and can provide doctors with important information, especially regarding immune deficiency diseases. Hypogammaglobulinemia (HGG) is one of the main groups of primary immunodeficiency disorders. HGG is caused by various defects in B cell lineage or function that result in low levels of immunoglobulins in the bloodstream. This impairs the body's immune response, causing a wide range of clinical features, from asymptomatic disease to severe and recurrent infections, chronic inflammation, and autoimmunity. Transient hypogammaglobulinemia of infancy (THGI), IgM deficiency (IgMD), Bruton agammaglobulinemia, and selective IgA deficiency (SIgAD) are a few examples of HGG. Most patients can continue their normal lives by taking prophylactic antibiotics; however, patients with severe infections require intravenous immune serum globulin (IVIG) therapy. The IgE level may rise in response to parasitic infections and may also signal that the body is overreacting to allergens. Also, since the immune response can vary with different antigens, measuring specific antibody levels aids in the interpretation of the immune response after immunization or vaccination. Immune deficiencies usually appear in childhood. In Immunology and Allergy clinics, a method that is fast, reliable, and convenient and uncomplicated for sampling from children would therefore be more useful than the classical methods for the diagnosis and follow-up of diseases, especially childhood hypogammaglobulinemia. In this study, the antibodies were attached to the electrode surface via a poly(hydroxyethyl methacrylamide)-cysteine nanopolymer, and the anodic peak responses obtained in the electrochemical study were evaluated. According to the data obtained, immunoglobulin determination can be made with a biosensor. However, in further studies, it will be useful to develop a medical diagnostic kit with biomedical engineering and to increase its sensitivity. Keywords: biosensor, immunosensor, immunoglobulin, infection
Procedia PDF Downloads 104
370 Characterization of Double Shockley Stacking Fault in 4H-SiC Epilayer
Authors: Zhe Li, Tao Ju, Liguo Zhang, Zehong Zhang, Baoshun Zhang
Abstract:
In-grown stacking faults (IGSFs) in 4H-SiC epilayers can cause increased leakage current and reduce the blocking voltage of 4H-SiC power devices. The double Shockley stacking fault (2SSF) is a common type of IGSF with double slips on the basal planes. In this study, a 2SSF in a 4H-SiC epilayer grown by chemical vapor deposition (CVD) is characterized. The nucleation site of the 2SSF is discussed, and a model for the 2SSF nucleation is proposed. Homo-epitaxial 4H-SiC is grown on a commercial 4-degree off-cut substrate by a home-built hot-wall CVD. Defect-selected-etching (DSE) is conducted with molten KOH at 500 degrees Celsius for 1-2 min. Room-temperature cathodoluminescence (CL) is conducted at a 20 kV acceleration voltage. Low-temperature photoluminescence (LTPL) is conducted at 3.6 K with the 325 nm He-Cd laser line. In the CL image, a triangular area with bright contrast is observed. Two partial dislocations (PDs) with a 20-degree angle between them show linear dark contrast at the edges of the IGSF. CL and LTPL spectra are acquired to verify the IGSF's type. The CL spectrum shows the maximum photoemission at 2.431 eV and negligible bandgap emission. In the LTPL spectrum, four phonon replicas are found at 2.468 eV, 2.438 eV, 2.420 eV and 2.410 eV, respectively. The Egx is estimated to be 2.512 eV. A shoulder red-shifted from the main peak in CL, and a slight protrusion at the same wavelength in LTPL, are identified as the so-called Egx- lines. Based on the CL and LTPL results, the IGSF is identified as a 2SSF. Back etching by neutral loop discharge and DSE are conducted to track the origin of the 2SSF, and the nucleation site is found to be a threading screw dislocation (TSD) in this sample. A nucleation mechanism model is proposed for the formation of the 2SSF. Steps introduced on the surface by the off-cut and by the TSD are both suggested to be two C-Si bilayers in height. The intersections of these two types of steps lie along the [11-20] direction from the TSD, with a four-bilayer step at each intersection. The nucleation of the 2SSF during growth is proposed as follows. First, the upper two bilayers of the four-bilayer step grow down and block the lower two at one intersection, and an IGSF is generated. Second, the step-flow grows over the IGSF successively and forms an AC/ABCABC/BA/BC stacking sequence. A 2SSF is thus formed and extends by step-flow growth. In conclusion, a triangular IGSF is characterized by the CL approach. Based on the CL and LTPL spectra, the estimated Egx is 2.512 eV and the IGSF is identified as a 2SSF. By back etching, the 2SSF nucleation site is found to be a TSD. A model for 2SSF nucleation from an intersection of off-cut- and TSD-introduced steps is proposed. Keywords: cathodoluminescence, defect-selected-etching, double Shockley stacking fault, low-temperature photoluminescence, nucleation model, silicon carbide
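The Egx value quoted above can be cross-checked by simple arithmetic, since each phonon replica should lie below Egx by one phonon energy. A minimal sketch follows; the 4H-SiC phonon energies used (46, 77, 95, and 104 meV) are commonly cited literature values and are an assumption here, not data from the abstract.

```python
# Hedged sketch: back-of-the-envelope check of the Egx estimate from phonon replicas.
# Assumption: the four replicas correspond to phonon-assisted recombination with
# phonon energies of roughly 46, 77, 95 and 104 meV (literature values for 4H-SiC).

replicas_eV = [2.468, 2.438, 2.420, 2.410]   # phonon replica peak positions from the LTPL spectrum
phonons_eV = [0.046, 0.077, 0.095, 0.104]    # assumed phonon energies (meV converted to eV)

estimates = [r + p for r, p in zip(replicas_eV, phonons_eV)]
egx = sum(estimates) / len(estimates)

print("per-replica Egx estimates:", [round(e, 3) for e in estimates])
print(f"mean Egx estimate: {egx:.3f} eV")   # about 2.51 eV, consistent with the reported 2.512 eV
```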
Procedia PDF Downloads 316
369 Disaggregating Communities and the Making of Factional States: Evidence from Joint Forest Management in Sundarban, India
Authors: Amrita Sen
Abstract:
In the face of a growing insurgent movement and the perceived failure of the state and the market towards sustainable resource management, a range of decentralized forest management policies was formulated in the last two decades, which recognized the need for community representations within the statutory methods of forest management. The recognition conceded on the virtues of ecological sustainability and traditional environmental knowledge, which were considered to be the principal repositories of the forest dependent communities. The present study, in the light of empirical insights, reflects on the contemporary disjunctions between the preconceived communitarian ethic in environmentalism and the lived reality of forest based life-worlds. Many of the popular as well as dominant ideologies, which have historically shaped the conceptual and theoretical understanding of sociology, needs further perusal in the context of the emerging contours of empirical knowledge, which lends opportunities for substantive reworking and analysis. The image of the community appears to be one of those concepts, an identity which has for long defined perspectives and processes associated with people living together harmoniously in small physical spaces. Through an ethnographic account of the implementation of Joint Forest Management (JFM) in a forest fringe village in Sundarban, the study explores the ways in which the idea of ‘community’ gets transformed through the process of state-making, rendering the necessity of its departure from the standard, conventional definition of homogeneity and internal equity. The study necessitates an attention towards the anthropology of micro-politics, disaggregating an essentially constructivist anthropology of ‘collective identities’, which can render the visibility of political mobilizations plausible within the seemingly culturalist production of communities. The two critical questions that the paper seeks to ask in this context are: how the ‘local’ is constituted within community based conservation practices? Within the efforts of collaborative forest management, how accurately does the depiction of ‘indigenous environmental knowledge’, subscribe to its role of sustainable conservation practices? Reflecting on the execution of JFM in Sundarban, the study critically explores the ways in which the state ceases to be ‘trans-national’ and interacts with the rural life-worlds through its local factions. Simultaneously, the study attempts to articulate the scope of constructing a competing representation of community, shaped by increasing political negotiations and bureaucratic alignments which strains against the usual preoccupations with tradition primordiality and non material culture as well as the amorous construction of indigeneity.Keywords: community, environmentalism, JFM, state-making, identities, indigenous
Procedia PDF Downloads 198
368 Call-Back Laterality and Bilaterality: Possible Screening Mammography Quality Metrics
Authors: Samson Munn, Virginia H. Kim, Huija Chen, Sean Maldonado, Michelle Kim, Paul Koscheski, Babak N. Kalantari, Gregory Eckel, Albert Lee
Abstract:
In terms of screening mammography quality, it is not known what proportion of reports advising call-back imaging should recommend bilateral versus unilateral imaging, nor how far unilateral call-backs may appropriately diverge from a 50–50 left-versus-right split. Many factors may affect detection laterality: display arrangement, reflections preferentially striking one display location, hanging protocols, seating positions with respect to others and displays, visual field cuts, health, etc. The call-back bilateral fraction may reflect radiologist experience (not in our data) or confidence level. Thus, the laterality and bilaterality of call-backs advised in screening mammography reports could be worthy quality metrics. Here, laterality data did not reveal a concern until drilling down to individuals. Bilateral screening mammogram report recommendations by five breast imaging attending radiologists at Harbor-UCLA Medical Center (Torrance, California) from 9/1/15 to 8/31/16 and 9/1/16 to 8/31/17 were retrospectively reviewed. Recommended call-backs for bilateral versus unilateral, and for left versus right, findings were counted. The chi-square (χ²) statistic was applied. Year 1: of 2,665 bilateral screening mammograms, reports of 556 (20.9%) recommended call-back, of which 99 (17.8% of the 556) were for bilateral findings. Of the 457 unilateral recommendations, 222 (48.6%) regarded the left breast. Year 2: of 2,106 bilateral screening mammograms, reports of 439 (20.8%) recommended call-back, of which 65 (14.8% of the 439) were for bilateral findings. Of the 374 unilateral recommendations, 182 (48.7%) regarded the left breast. Individual ranges of call-backs that were bilateral were 13.2–23.3%, 10.2–22.5%, and 13.6–17.9% for year(s) 1, 2, and 1+2, respectively; these ranges were unrelated to experience level, and the two-year mean was 15.8% (SD=1.9%). The lowest χ² p value of the group's sidedness disparities for years 1, 2, and 1+2 was > 0.4. For four individual radiologists, the lowest p value was 0.42. However, the fifth radiologist disfavored the left, with p values of 0.21, 0.19, and 0.07, respectively; that radiologist had the greatest number of years of experience. There was a concerning 93% likelihood that the bias against left breast findings evidenced by one of our radiologists was not random. Notably, very soon after the period under review, he retired, presented with leukemia, and died. We call for research to be done, particularly by large departments with many radiologists, on two possible new quality metrics in screening mammography: laterality and bilaterality. (Images, patient outcomes, report validity, and radiologist psychological confidence levels were not assessed. No intervention nor subsequent data collection was conducted. This uncomplicated collection of data and simple appraisal were not designed, nor had there been any intention to develop or contribute, to generalizable knowledge (per U.S. DHHS 45 CFR, part 46)). Keywords: mammography, screening mammography, quality, quality metrics, laterality
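The sidedness comparison described above is a standard chi-square goodness-of-fit test against a 50–50 left/right expectation. The sketch below re-computes it for the unilateral call-back counts reported in the abstract (222 of 457 in year 1, 182 of 374 in year 2); it is an illustrative recalculation, not the authors' code.

```python
# Hedged sketch: chi-square goodness-of-fit of unilateral call-back laterality
# against an expected 50-50 left/right split, using counts quoted in the abstract.
from scipy.stats import chisquare

years = {
    "Year 1": (222, 457 - 222),   # (left, right) unilateral call-backs
    "Year 2": (182, 374 - 182),
    "Years 1+2": (222 + 182, (457 - 222) + (374 - 182)),
}

for label, (left, right) in years.items():
    total = left + right
    stat, p = chisquare([left, right], f_exp=[total / 2, total / 2])
    print(f"{label}: left={left}, right={right}, chi2={stat:.3f}, p={p:.3f}")
```

For the fifth radiologist, the reported p value of 0.07 from the same kind of test corresponds to the quoted 93% likelihood (1 - p) that the left-right disparity was not random.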
Procedia PDF Downloads 162
367 Facial Recognition of University Entrance Exam Candidates using FaceMatch Software in Iran
Authors: Mahshid Arabi
Abstract:
In recent years, remarkable advancements in the fields of artificial intelligence and machine learning have led to the development of facial recognition technologies. These technologies are now employed in a wide range of applications, including security, surveillance, healthcare, and education. In the field of education, the identification of university entrance exam candidates has been one of the fundamental challenges. Traditional methods such as using ID cards and handwritten signatures are not only inefficient and prone to fraud but also susceptible to errors. In this context, utilizing advanced technologies like facial recognition can be an effective and efficient solution to increase the accuracy and reliability of identity verification in entrance exams. This article examines the use of FaceMatch software for recognizing the faces of university entrance exam candidates in Iran. The main objective of this research is to evaluate the efficiency and accuracy of FaceMatch software in identifying university entrance exam candidates to prevent fraud and ensure the authenticity of individuals' identities. Additionally, this research investigates the advantages and challenges of using this technology in Iran's educational systems. This research was conducted using an experimental method and random sampling. In this study, 1000 university entrance exam candidates in Iran were selected as samples. The facial images of these candidates were processed and analyzed using FaceMatch software. The software's accuracy and efficiency were evaluated using various metrics, including accuracy rate, error rate, and processing time. The research results indicated that FaceMatch software could accurately identify candidates with a precision of 98.5%. The software's error rate was less than 1.5%, demonstrating its high efficiency in facial recognition. Additionally, the average processing time for each candidate's image was less than 2 seconds, indicating the software's high efficiency. Statistical evaluation of the results using precise statistical tests, including analysis of variance (ANOVA) and t-test, showed that the observed differences were significant, and the software's accuracy in identity verification is high. The findings of this research suggest that FaceMatch software can be effectively used as a tool for identifying university entrance exam candidates in Iran. This technology not only enhances security and prevents fraud but also simplifies and streamlines the exam administration process. However, challenges such as preserving candidates' privacy and the costs of implementation must also be considered. The use of facial recognition technology with FaceMatch software in Iran's educational systems can be an effective solution for preventing fraud and ensuring the authenticity of university entrance exam candidates' identities. Given the promising results of this research, it is recommended that this technology be more widely implemented and utilized in the country's educational systems.Keywords: facial recognition, FaceMatch software, Iran, university entrance exam
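For readers who want to reproduce the headline metrics, the accuracy rate, error rate, and mean processing time described above reduce to simple counts and averages over match decisions. The sketch below shows one way to compute them; the variable names and example values are hypothetical placeholders, not data from the study.

```python
# Hedged sketch: computing accuracy rate, error rate and mean processing time
# for a face-verification run. The arrays below are made-up placeholders.
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])        # 1 = same person, 0 = different person
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 0])        # decisions returned by the matcher
proc_times_s = np.array([1.2, 0.9, 1.4, 1.1, 1.3, 1.0, 1.6, 1.2])

accuracy = float(np.mean(y_true == y_pred))         # accuracy rate
error_rate = 1.0 - accuracy                         # error rate
mean_time = float(np.mean(proc_times_s))            # average processing time per image

print(f"accuracy = {accuracy:.1%}, error rate = {error_rate:.1%}, "
      f"mean processing time = {mean_time:.2f} s")
```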
Procedia PDF Downloads 46
366 Landscape Pattern Evolution and Optimization Strategy in Wuhan Urban Development Zone, China
Abstract:
With the rapid development of the urbanization process in China, environmental protection is under severe pressure. Analyzing and optimizing the landscape pattern is therefore an important measure to ease the pressure on the ecological environment. This paper takes the Wuhan Urban Development Zone as the research object and studies its landscape pattern evolution and a quantitative optimization strategy. First, remote sensing image data from 1990 to 2015 were interpreted using Erdas software. Next, landscape pattern indices at the landscape, class, and patch levels were studied based on Fragstats. Then, five ecological environment indicators based on the National Environmental Protection Standard of China were selected to evaluate the impact of landscape pattern evolution on the ecological environment. In addition, the cost distance analysis of ArcGIS was applied to simulate wildlife migration, thus indirectly measuring the improvement of ecological environment quality. The results show that the area of construction land increased by 491%, while bare land, sparse grassland, forest, farmland, and water decreased by 82%, 47%, 36%, 25%, and 11%, respectively; they were mainly converted into construction land. At the landscape level, the landscape indices all showed a downward trend. The number of patches (NP), landscape shape index (LSI), connection index (CONNECT), Shannon's diversity index (SHDI), and aggregation index (AI) decreased by 2778, 25.7, 0.042, 0.6, and 29.2%, respectively, indicating that the NP, the degree of aggregation, and the landscape connectivity declined. At the class level, for construction land and forest, CPLAND, TCA, AI, and LSI increased, but the distribution statistics of core area (CORE_AM) decreased. For farmland, water, sparse grassland, and bare land, CPLAND, TCA, DIVISION, patch density (PD), and LSI declined, yet patch fragmentation and CORE_AM increased. At the patch level, the patch area, patch perimeter, and shape index of water, farmland, and bare land continued to decline; the three indices of forest patches increased overall, those of sparse grassland decreased as a whole, and those of construction land increased. It is obvious that urbanization greatly influenced the landscape evolution. Ecological diversity and landscape heterogeneity of ecological patches clearly dropped, and the Habitat Quality Index declined continuously, by 14%. Therefore, an optimization strategy based on greenway network planning is proposed for discussion. This paper contributes to the study of landscape pattern evolution in planning and design and to research on the spatial layout of urbanization. Keywords: landscape pattern, optimization strategy, ArcGIS, Erdas, landscape metrics, landscape architecture
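Several of the landscape-level indices mentioned above have closed-form definitions; Shannon's diversity index (SHDI), for example, is defined as the negative sum of p_i * ln(p_i), where p_i is the proportion of the landscape occupied by class i. The sketch below computes SHDI from class areas; the area values are hypothetical placeholders, not the Wuhan data.

```python
# Hedged sketch: Shannon's diversity index (SHDI) from land-cover class areas,
# SHDI = -sum(p_i * ln(p_i)). The areas below are illustrative placeholders only.
import math

class_areas_ha = {
    "construction": 5200.0,
    "farmland": 3100.0,
    "forest": 1800.0,
    "water": 2300.0,
    "sparse_grassland": 400.0,
    "bare_land": 200.0,
}

total = sum(class_areas_ha.values())
shdi = -sum((a / total) * math.log(a / total) for a in class_areas_ha.values() if a > 0)
print(f"SHDI = {shdi:.3f}")  # falls as one class (e.g. construction land) comes to dominate
```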
Procedia PDF Downloads 164
365 Suspended Sediment Concentration and Water Quality Monitoring Along Aswan High Dam Reservoir Using Remote Sensing
Authors: M. Aboalazayem, Essam A. Gouda, Ahmed M. Moussa, Amr E. Flifl
Abstract:
Field data collection is considered one of the most difficult tasks due to the difficulty of accessing large zones such as large lakes. It is also well known that obtaining field data is very expensive. Remote monitoring of lake water quality (WQ) provides an economically feasible approach compared to field data collection. Researchers have shown that lake WQ can be properly monitored via remote sensing (RS) analyses. Using satellite images as a method of WQ detection provides a realistic technique to measure quality parameters across huge areas. Landsat (LS) data provide free access to frequently repeated satellite images. This enables researchers to undertake large-scale temporal comparisons of parameters related to lake WQ. Satellite measurements have been extensively utilized to develop algorithms for predicting critical water quality parameters (WQPs). The goal of this paper is to use RS to derive WQ indicators in the Aswan High Dam Reservoir (AHDR), which is considered Egypt's primary and strategic reservoir of freshwater. This study focuses on using Landsat 8 (L-8) band surface reflectance (SR) observations to predict water-quality characteristics, limited here to turbidity (TUR), total suspended solids (TSS), and chlorophyll-a (Chl-a). ArcGIS Pro is used to retrieve L-8 SR data for the study region. Multiple linear regression analysis was used to derive new correlations between optical water-quality indicators observed in April and atmospherically corrected L-8 SR values of various bands, band ratios, and/or combinations. Field measurements taken in May were used to validate the WQPs obtained from SR data of the L-8 Operational Land Imager (OLI) satellite. The findings demonstrate a strong correlation between WQ indicators and L-8 SR. For TUR, the best validation correlation was obtained with the blue, green, and red OLI SR bands, with a coefficient of correlation (R2) of 0.96 and a root mean square error (RMSE) of 3.1 NTU. For TSS, two equations were strongly correlated and verified with band ratios and combinations; the logarithm of the ratio of blue to green SR was determined to give the best-performing model, with R2 and RMSE equal to 0.9861 and 1.84 mg/l, respectively. For Chl-a, eight methods were presented for calculating its value within the study area; a mix of blue, red, shortwave infrared 1 (SWIR1), and panchromatic SR yielded the best validation results, with R2 and RMSE equal to 0.98 and 1.4 mg/l, respectively. Keywords: remote sensing, landsat 8, nasser lake, water quality
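The TSS model described above regresses field-measured concentration on the logarithm of the blue/green surface-reflectance ratio. A minimal sketch of that kind of fit is shown below, with R2 and RMSE computed as in the abstract; the reflectance and concentration arrays are invented placeholders, not the Lake Nasser measurements.

```python
# Hedged sketch: fitting TSS (mg/l) against log(blue/green) Landsat-8 surface reflectance,
# then reporting R^2 and RMSE. All numbers below are made-up placeholders.
import numpy as np

blue = np.array([0.031, 0.042, 0.055, 0.048, 0.036, 0.060, 0.044, 0.052])
green = np.array([0.045, 0.050, 0.049, 0.051, 0.047, 0.046, 0.050, 0.048])
tss = np.array([4.1, 7.8, 14.2, 10.5, 5.6, 17.3, 8.9, 12.0])   # field-measured TSS, mg/l

x = np.log(blue / green)                      # predictor: log of the band ratio
slope, intercept = np.polyfit(x, tss, 1)      # ordinary least squares, one predictor
pred = slope * x + intercept

ss_res = np.sum((tss - pred) ** 2)
ss_tot = np.sum((tss - tss.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
rmse = float(np.sqrt(np.mean((tss - pred) ** 2)))

print(f"TSS = {slope:.2f} * ln(blue/green) + {intercept:.2f}")
print(f"R^2 = {r2:.3f}, RMSE = {rmse:.2f} mg/l")
```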
Procedia PDF Downloads 92
364 The Influence of Gender on Itraconazole Pharmacokinetic Parameters in Healthy Adults
Authors: Milijana N. Miljkovic, Viktorija M. Dragojevic-Simic, Nemanja K. Rancic, Vesna M. Jacevic, Snezana B. Djordjevic, Momir M. Mikov, Aleksandra M. Kovacevic
Abstract:
Itraconazole (ITZ) is a weak base and an extremely lipophilic compound, with water solubility as a rate-limiting step in its absorption from the gastrointestinal tract. Its absolute bioavailability, about 55%, is maximal when its oral formulation, capsules, is taken immediately after a full meal. Peak plasma concentrations (Cmax) are reached within 2 to 5 hrs after administration. ITZ undergoes extensive hepatic metabolism by the human CYP3A4 isoenzyme, and more than 30 different metabolites have been identified. One of the main ones is hydroxyitraconazole (HITZ), whose plasma concentrations are almost twice as high as those of ITZ. Gender differences in drug pharmacokinetics (PK) have already been recognized, and variations in metabolism are believed to be their major cause. The aim of the study was to investigate the influence of gender on ITZ PK parameters after administration of the oral capsule formulation, following 100 mg single dosing in healthy adult volunteers under fed conditions. A single-center, open-label PK study was performed. PK analyses included PK parameters obtained after a single 100 mg dose administration of itraconazole capsules to 48 females and 66 males. Blood samples were collected at pre-dose and up to 72.0 h after administration (1.0, 2.0, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 7.0, 9.0, 12.0, 24.0, 36.0 and 72.0 hrs). The calculated pharmacokinetic parameters, based on the plasma concentrations of itraconazole and hydroxyitraconazole, were Cmax, AUClast, and AUCtot. Plasma concentrations of ITZ and HITZ were determined using a validated liquid chromatographic method with mass spectrometric detection, while pharmacokinetic parameters were estimated using non-compartmental methods. The pharmacokinetic analyses were performed using Kinetica software version 5.0. The mean ITZ Cmax was 74.79 ng/ml in men and 51.291 ng/ml in women (independent samples test; p = 0.005). Hydroxyitraconazole had a mean Cmax of 106.37 ng/ml in men and 70.05 ng/ml in women. Women had, on average, lower AUClast and Cmax than men. AUClast for ITZ was 736.02 ng/mL*h in men and 566.62 ng/mL*h in women, while AUClast for HITZ was 1154.80 ng/mL*h in men and 708.12 ng/mL*h in women (independent samples test; p = 0.033). The mean ITZ AUCtot was 884.73 ng/mL*h in men and 685.10 ng/mL*h in women. AUCtot for HITZ was 1290.41 ng/mL*h in men and 788.60 ng/mL*h in women (p < 0.001). The results could point to lower oral bioavailability of ITZ in women, since the Cmax, AUClast, and AUCtot values of both ITZ and HITZ were significantly lower in women than in men. The reason may be higher expression and activity of CYP3A4 in women than in men, but there may also be differences in other PK parameters. The high variability of both ITZ and HITZ concentrations in both genders confirmed that ITZ is a highly variable drug. Further examination of its PK is needed to justify strategies for therapeutic drug monitoring in patients treated with this antifungal agent. Keywords: itraconazole, gender, hydroxyitraconazole, pharmacokinetics
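The non-compartmental parameters compared above (Cmax, AUClast, AUCtot) come directly from the concentration-time profile: Cmax is the highest observed concentration, AUClast is the trapezoidal area to the last measured point, and AUCtot extrapolates beyond it using the terminal elimination rate constant. A minimal sketch follows; the concentration values are fabricated for illustration and do not correspond to the study data or to Kinetica's implementation.

```python
# Hedged sketch: non-compartmental Cmax, AUClast (linear trapezoidal rule) and
# AUCtot = AUClast + Clast/lambda_z. Concentrations below are illustrative only.
import numpy as np

t = np.array([1, 2, 3, 3.5, 4, 4.5, 5, 6, 7, 9, 12, 24, 36, 72], dtype=float)  # hours
c = np.array([5, 20, 55, 70, 74, 68, 60, 45, 34, 20, 11, 2.5, 0.9, 0.1])       # ng/ml

cmax = c.max()
tmax = t[c.argmax()]
auc_last = float(np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2))   # linear trapezoidal AUC

# terminal slope (lambda_z) from a log-linear fit of the last few points
term = slice(-4, None)
lambda_z = -np.polyfit(t[term], np.log(c[term]), 1)[0]
auc_tot = auc_last + c[-1] / lambda_z                          # extrapolated AUC(0-inf)

print(f"Cmax = {cmax:.1f} ng/ml at Tmax = {tmax:.1f} h")
print(f"AUClast = {auc_last:.1f} ng/ml*h, AUCtot = {auc_tot:.1f} ng/ml*h")
```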
Procedia PDF Downloads 137
363 A Qualitative Exploration of the Beliefs and Experiences of HIV-Related Self-Stigma Amongst Young Adults Living with HIV in Zimbabwe
Authors: Camille Rich, Nadine Ferris France, Ann Nolan, Webster Mavhu, Vongai Munatsi
Abstract:
Background and Aim: Zimbabwe has one of the highest HIV rates in the world, with a 12.7% adult prevalence rate. Young adults are a key group affected by HIV, and one-third of all new infections in Zimbabwe are amongst people ages 18-24 years. Stigma remains one of the main barriers to managing and reducing the HIV crisis, especially for young adults. There are several types of stigma, including enacted stigma, the outward discrimination towards someone and self-stigma, the negative self-judgments one has towards themselves. Self-stigma can have severe consequences, including feelings of worthlessness, shame, suicidal thoughts, and avoidance of medical help. This can have detrimental effects on those living with HIV. However, the unique beliefs and impacts of self-stigma amongst key groups living with HIV have not yet been explored. Therefore, the focus of this study is on the beliefs and experiences of HIV-related self-stigma, as experienced by young adults living in Harare, Zimbabwe. Research Methods: A qualitative approach was taken for this study, using sixteen semi-structured interviews with young adults (18-24 years) who are living with HIV in Harare. Participants were conveniently and purposefully sampled as members of Africa, an organization dedicated to young people living with HIV. Interviews were conducted over Zoom due to the COVID-19 pandemic, recorded and then coded using the software NVivo. The data was analyzed using both inductive and deductive Thematic Analysis to find common themes. Results: All of the participants experienced HIV-related self-stigma, and both beliefs and experiences were explored. These negative self-perceptions included beliefs of worthlessness, hopelessness, and negative body image. The young adults described believing they were not good enough to be around HIV negative people or that they could never be loved due to their HIV status. Developing self-stigmatizing thoughts came from internalizing negative cultural values, stereotypes about people living with HIV, and adverse experiences. Three main themes of self-stigmatizing experiences emerged: disclosure difficulties, relationship complications, and being isolated. Fear of telling someone their status, rejection in a relationship, and being excluded by others due to their HIV status contributed to their self-stigma. These experiences caused feelings of loneliness, sadness, shame, fear, and low self-worth. Conclusions: This study explored the beliefs and experiences of HIV-related self-stigma of these young adults. The emergence of negative self-perceptions demonstrated deep-rooted beliefs of HIV-related self-stigma that adversely impact the participants. The negative self-perceptions and self-stigmatizing experiences caused the participants to feel worthless, hopeless, shameful, and alone-negatively impacting their physical and mental health, personal relationships, and sense of self-identity. These results can now be used to pursue interventions to target the specific beliefs and experiences of young adults living with HIV and reduce the adverse consequences of self-stigma.Keywords: beliefs, HIV, self-stigma, stigma, Zimbabwe
Procedia PDF Downloads 115
362 Learning to Translate by Learning to Communicate to an Entailment Classifier
Authors: Szymon Rutkowski, Tomasz Korbak
Abstract:
We present a reinforcement-learning-based method of training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, its learning procedure lacks psychological plausibility: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language’s 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and premise, which are then passed to the classifier. The translator is rewarded for the classifier’s performance in determining entailment between the sentences translated by the translator into the classifier’s native language. The translator’s performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, we introduce a number of improvements. While prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality. It has been shown that models trained to recognize textual entailment produce high-quality, general-purpose sentence embeddings transferable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) to solve the problem of the translator’s policy optimization and found that our attempts yield promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation. Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning
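At its core, the training signal described above is a REINFORCE-style policy gradient: the translator samples a translation, a frozen entailment classifier scores the translated premise-hypothesis pair, and that score weights the log-probability of the sampled tokens. The sketch below illustrates only this reward-weighting step with toy stand-in modules; the authors' actual architectures, datasets (SNLI/XNLI/CDSCorpus), and algorithms are not reproduced here.

```python
# Hedged sketch: REINFORCE-style update where the reward comes from a frozen
# entailment classifier. Both networks here are toy stand-ins, not the paper's models.
import torch
import torch.nn as nn

VOCAB, HID, MAX_LEN = 1000, 64, 12

class ToyTranslator(nn.Module):
    """Samples a target-language token sequence and returns its total log-probability."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HID)
        self.rnn = nn.GRU(HID, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, src_tokens):
        h = self.rnn(self.embed(src_tokens))[1]                   # encode the source sentence
        tok = torch.zeros(src_tokens.size(0), dtype=torch.long)   # start-of-sequence id 0
        log_probs, sampled = [], []
        for _ in range(MAX_LEN):                                   # autoregressive sampling
            step, h = self.rnn(self.embed(tok).unsqueeze(1), h)
            dist = torch.distributions.Categorical(logits=self.out(step.squeeze(1)))
            tok = dist.sample()
            log_probs.append(dist.log_prob(tok))
            sampled.append(tok)
        return torch.stack(sampled, 1), torch.stack(log_probs, 1).sum(1)

@torch.no_grad()
def entailment_reward(premise_ids, hypothesis_ids):
    """Stand-in for a frozen NLI classifier: returns a per-pair reward in [0, 1]."""
    return torch.rand(premise_ids.size(0))

translator = ToyTranslator()
optim = torch.optim.Adam(translator.parameters(), lr=1e-3)

src_premise = torch.randint(0, VOCAB, (8, 10))            # batch of source-language premises
src_hypothesis = torch.randint(0, VOCAB, (8, 10))

trans_premise, logp_p = translator(src_premise)
trans_hypothesis, logp_h = translator(src_hypothesis)
reward = entailment_reward(trans_premise, trans_hypothesis)

loss = -(reward * (logp_p + logp_h)).mean()                # REINFORCE objective
optim.zero_grad()
loss.backward()
optim.step()
print(f"mean reward {reward.mean():.3f}, loss {loss.item():.3f}")
```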
Procedia PDF Downloads 128
361 Multi-Labeled Aromatic Medicinal Plant Image Classification Using Deep Learning
Authors: Tsega Asresa, Getahun Tigistu, Melaku Bayih
Abstract:
Computer vision is a subfield of artificial intelligence that allows computers and systems to extract meaning from digital images and video. It is used in a wide range of fields of study, including self-driving cars, video surveillance, medical diagnosis, manufacturing, law, agriculture, quality control, health care, facial recognition, and military applications. Aromatic medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, essential oils, decoration, cleaning, and other natural health products for therapeutic and aromatic culinary purposes. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but are also exported, earning valuable foreign currency. In Ethiopia, there is a lack of technologies for the classification and identification of aromatic medicinal plant parts and the disease types cured by them. Farmers, industry personnel, academicians, and pharmacists find it difficult to identify plant parts and the disease types cured by plants before ingredient extraction in the laboratory. Manual plant identification is a time-consuming, labor-intensive, and lengthy process. Only a few studies have been conducted in the area to address these issues. One way to overcome these problems is to develop a deep learning model for efficient identification of aromatic medicinal plant parts with their corresponding disease type. The objective of the proposed study is to identify aromatic medicinal plant parts and classify their corresponding disease types using computer vision technology. Therefore, this research initiated a model for the classification of aromatic medicinal plant parts and their disease type by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants. Leaves are the most widely used parts of plants besides roots, flowers, fruits, and latex. For this study, the researcher used RGB leaf images with a size of 128x128x3. In this study, the researchers trained five cutting-edge models: convolutional neural network, Inception V3, Residual Neural Network, Mobile Network, and Visual Geometry Group. Those models were chosen after a comprehensive review of the best-performing models. An 80/20 percentage split is used to evaluate the models, and classification metrics are used to compare them. The pre-trained Inception V3 model performs best, with training and validation accuracy of 99.8% and 98.7%, respectively. Keywords: aromatic medicinal plant, computer vision, convolutional neural network, deep learning, plant classification, residual neural network
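A transfer-learning pipeline of the kind described above typically freezes an ImageNet-pretrained Inception V3 backbone, adds a small classification head for the plant-part/disease labels, and trains on an 80/20 split. The Keras sketch below illustrates that setup for 128x128x3 RGB leaf images; the directory path, class count, and hyperparameters are assumptions for illustration, not the study's actual configuration.

```python
# Hedged sketch: Inception V3 transfer learning for 128x128x3 leaf images with an
# 80/20 train/validation split. Path, class count and hyperparameters are assumed.
import tensorflow as tf

IMG_SIZE, NUM_CLASSES, DATA_DIR = (128, 128), 12, "data/leaf_images"  # hypothetical

train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(128, 128, 3))
base.trainable = False                                    # freeze the pretrained backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),    # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```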
Procedia PDF Downloads 186
360 Identification of Viruses Infecting Garlic Plants in Colombia
Authors: Diana M. Torres, Anngie K. Hernandez, Andrea Villareal, Magda R. Gomez, Sadao Kobayashi
Abstract:
Colombian Garlic crops exhibited mild mosaic, yellow stripes, and deformation. This group of symptoms suggested a viral infection. Several viruses belonging to the genera Potyvirus, Carlavirus and Allexivirus are known to infect garlic and lower their yield worldwide, but in Colombia, there are no studies of viral infections in this crop, only leek yellow stripe virus (LYSV) has been reported to our best knowledge. In Colombia, there are no management strategies for viral diseases in garlic because of the lack of information about viral infections on this crop, which is reflected in (i) high prevalence of viral related symptoms in garlic fields and (ii) high dispersal rate. For these reasons, the purpose of the present study was to evaluate the viral status of garlic in Colombia, which can represent a major threat on garlic yield and quality for this country 55 symptomatic leaf samples were collected for virus detection by RT-PCR and mechanical inoculation. Total RNA isolated from infected samples were subjected to RT-PCR with primers 1-OYDV-G/2-OYDV-G for Onion yellow dwarf virus (OYDV) (expected size 774pb), 1LYSV/2LYSV for LYSV (expected size 1000pb), SLV 7044/SLV 8004 for Shallot latent virus (SLV) (expected size 960pb), GCL-N30/GCL-C40 for Garlic common latent virus (GCLV) (expected size 481pb) and EF1F/EF1R for internal control (expected size 358pb). GCLV, SLV, and LYSV were detected in infected samples; in 95.6% of the analyzed samples was detected at least one of the viruses. GCLV and SLV were detected in single infection with low prevalence (9.3% and 7.4%, respectively). Garlic generally becomes coinfected with several types of viruses. Four viral complexes were identified: three double infection (64% of analyzed samples) and one triple infection (15%). The most frequent viral complex was SLV + GCLV infecting 48.1% of the samples. The other double complexes identified had a prevalence of 7% (GCLV + LYSV and SLV + LYSV) and 5.6% of the samples were free from these viruses. Mechanical transmission experiments were set up using leaf tissues of collected samples from infected fields, different test plants were assessed to know the host range, but it was restricted to C. quinoa, confirming the presence of detected viruses which have limited host range and were detected in C. quinoa by RT-PCR. The results of molecular and biological tests confirm the presence of SLV, LYSV, and GCLV; this is the first report of SLV and LYSV in garlic plants in Colombia, which can represent a serious threat for this crop in this country.Keywords: SLV, GCLV, LYSV, leek yellow stripe virus, Allium sativum
Procedia PDF Downloads 148
359 Health-Related Problems of International Migrant Groups in Eskisehir, Turkey
Authors: Temmuz Gönç Şavran
Abstract:
Migration is a multidimensional and health-related concept that has important consequences for both migrants and the host society. Due to past conflicts and poor living conditions that lead to migration, the dangerous and difficult journey, and the problems they face upon arrival in the destination country, migrants are at higher risk for poor health. Health is a human right, and all societies and communities, including migrant groups, must receive adequate health care. In addition, the health of migrants must be improved to protect the health of the host society and ensure social integration. The main determinants of health are employment, income, education, good housing, and adequate nutrition. It can be said that migrants are among the most vulnerable groups in society in these respects, and migrant health is negatively affected by this situation. Rigid immigration policies or financial constraints in destination countries, the complexity and bureaucracy of health systems, the low health literacy of migrant groups, and the inadequate provision of translation services in health facilities are among the other main factors affecting migrant health. Migrants are also at risk of stigma, exclusion, detection, and deportation when seeking medical care. Based on data from a qualitative study with a descriptive case study design, this paper aims to highlight and sociologically assess the health-related problems of international migrants in Eskisehir, Turkey. The sample consists of 30 international migrants living in Eskisehir, two-thirds of whom are from Syria, Iraq, Afghanistan, and Pakistan. Those who are citizens of the Republic of Turkey are excluded from the study; otherwise, the legal status of the participants is not considered in the selection of the sample. This makes it possible to distinguish the different needs and problems of subgroups and to consider migrant health as a comprehensive concept. The research is supported by Anadolu University in Eskisehir, and data will be collected through semi-structured interviews between November 2022 and February 2023. With holistic sociology of health approach, this study considers migrant health as a comprehensive sociological concept. It aims to reveal the health-related resources and needs of the international migrant groups living in the center of Eskisehir, the problems they encounter in meeting these needs, and the strategies they use to solve these problems. The results are expected to show that the health of migrants is not only influenced by legislation but is shaped by many processes, from housing conditions to cultural habits. It is expected that the results will also raise awareness of discrimination, exclusion, marginalization, and hate speech in migrants’ access to health services.Keywords: migrant health, sociology of health, sociology of migration, Turkey, refugees
Procedia PDF Downloads 79
358 The Effects of Myelin Basic Protein Charge Isomers on the Methyl Cycle Metabolites in Glial Cells
Authors: Elene Zhuravliova, Tamar Barbakadze, Irina Kalandadze, Elnari Zaalishvili, Lali Shanshiashvili, David Mikeladze
Abstract:
Background: Multiple sclerosis (MS) is an inflammatory, neurodegenerative disease, which is accompanied by demyelination and autoimmune response to myelin proteins. Among post-translational modifications, which mediate the modulation of inflammatory pathways during MS, methylation is the main one. The methylation of DNA, also amino acids lysine and arginine, occurs in the cell. It was found that decreased trans-methylation is associated with neuroinflammatory diseases. Therefore, abnormal regulation of the methyl cycle could induce demyelination through the action on PAD (peptidyl-arginine-deiminase) gene promoter. PAD takes part in protein citrullination and targets myelin basic protein (MBP), which is affected during demyelination. To determine whether MBP charge isomers are changing the methyl cycle, we have estimated the concentrations of methyl cycle metabolites in MBP-activated primary astrocytes and oligodendrocytes. For this purpose, the action of the citrullinated MBP- C8 and the most cationic MBP-C1 isomers on the primary cells were investigated. Methods: Primary oligodendrocyte and astrocyte cell cultures were prepared from whole brains of 2-day-old Wistar rats. The methyl cycle metabolites, including homocysteine, S-adenosylmethionine (SAM), and S-adenosylhomocysteine (SAH), were estimated by HPLC analysis using fluorescence detection and prior derivatization. Results: We found that the action of MBP-C8 and MBP-C1 induces a decrease in the concentration of both methyl cycle metabolites, S-adenosylmethionine (SAM) and S-adenosylhomocysteine (SAH), in astrocytes compared to the control cells. As for oligodendrocytes, the concentration of SAM was increased by the addition of MBP-C1, while MBP-C8 has no significant effect. As for SAH, its concentration was increased compared to the control cells by the action of both MBP-C1 and MBP-C8. A significant increase in homocysteine concentration was observed by the action of the MBP-C8 isomer in both oligodendrocytes and astrocytes. Conclusion: These data suggest that MBP charge isomers change the concentration of methyl cycle metabolites. MBP-C8 citrullinated isomer causes elevation of homocysteine in astrocytes and oligodendrocytes, which may be the reason for decreased astrocyte proliferation and increased oligodendrocyte cell death which takes place in neurodegenerative processes. Elevated homocysteine levels and subsequent abnormal regulation of methyl cycles in oligodendrocytes possibly change the methylation of DNA that activates PAD gene promoter and induces the synthesis of PAD, which in turn provokes the process of citrullination, which is the accompanying process of demyelination. Acknowledgment: This research was supported by the SRNSF Georgia RF17_534 grant.Keywords: myelin basic protein, astrocytes, methyl cycle metabolites, homocysteine, oligodendrocytes
Procedia PDF Downloads 156
357 Mobile Learning and Student Engagement in English Language Teaching: The Case of First-Year Undergraduate Students at Ecole Normal Superieur, Algeria
Authors: I. Tiahi
Abstract:
The aim of the current paper is to explore educational practices in contemporary Algeria. Research indicates that such practices follow a traditional approach and overlook modern teaching methods such as mobile learning. The research output on student engagement with respect to mobile learning was therefore obtained from the following objectives: (1) to evaluate the current practice of English language teaching within Algerian higher education institutions, (2) to explore how social constructivism theory and m-learning help students’ engagement in the classroom, and (3) to explore the feasibility and acceptability of m-learning amongst institutional leaders. The methodology combines a case study and action research. For the case study, the researcher engaged with 6 teachers, 4 institutional leaders, and 30 students through semi-structured interviews and classroom observations to explore the current teaching methods for English as a foreign language. For the action research, the researcher applied an intervention course to investigate the possibility of, and implications for, future implementation of mobile learning in higher education institutions. The results were analyzed using thematic analysis. The research outcome showed that the disengagement of students in English language learning has many aspects. From the interviews with the teachers, the researcher found that they do not have enough resources, apart from some teachers using PowerPoint. According to them, the teaching method they use is mostly a communicative and competency-based approach. Teachers reported that students are disengaged because they have psychological barriers. In the classroom setting, students are conscious of social approval from their peers, and thus fear of committing mistakes acts as a preventive mechanism against negative reinforcement that would damage their image. This was also clearly reflected in the findings. Many other arguments can be given for this claim; however, in the Algerian setting, it is usual practice that teachers do not provide the positive reinforcement that would open students up to learning. Thus, in order to overcome such psychological barriers, proper measures can be taken. In conclusion, it is evident that teachers, students, and institutional leaders provided positive feedback on using mobile learning. It is not only motivating but also engaging in the learning process. Apps such as Kahoot, Padlet, and Slido were well received and can thus be examined further for their wider impact in the Algerian context. In the future, it will be important to implement m-learning effectively in higher education to transform current traditional practices into modern, innovative, and active learning. Persuading stakeholders of this change may be challenging; however, its long-term benefits are reflected in the current research. Keywords: Algerian context, mobile learning, social constructivism, student engagement
Procedia PDF Downloads 137
356 Enhancing Athlete Training using Real Time Pose Estimation with Neural Networks
Authors: Jeh Patel, Chandrahas Paidi, Ahmed Hambaba
Abstract:
Traditional methods for analyzing athlete movement often lack the detail and immediacy required for optimal training. This project aims to address this limitation by developing a real-time human pose estimation system specifically designed to enhance athlete training across various sports. The system leverages the power of convolutional neural networks (CNNs) to provide a comprehensive and immediate analysis of an athlete’s movement patterns during training sessions. The core architecture utilizes dilated convolutions to capture crucial long-range dependencies within video frames, combined with a robust encoder-decoder architecture to further refine pose estimation accuracy. This capability is essential for precise joint localization across the diverse range of athletic poses encountered in different sports. Furthermore, by quantifying movement efficiency, power output, and range of motion, the system provides data-driven insights that can be used to optimize training programs. Pose estimation data analysis can also be used to develop personalized training plans that target specific weaknesses identified in an athlete’s movement patterns. To overcome the limitations posed by outdoor environments, the project employs strategies such as multi-camera configurations or depth sensing techniques, which can enhance pose estimation accuracy in challenging lighting and occlusion scenarios. A dataset was collected from the Martin Luther King labs at San Jose State University. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing different poses, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system’s ability to operate in varied environmental conditions and further expanding the training dataset were identified and discussed. Future work will refine the model’s adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced pose detection model and lays the groundwork for future innovations in assistive enhancement technologies. Keywords: computer vision, deep learning, human pose estimation, U-NET, CNN
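The architecture sketched above pairs dilated convolutions (for long-range context) with an encoder-decoder that outputs one heatmap per joint. The PyTorch module below is a minimal illustration of that idea, not the project's actual network; the layer sizes, keypoint count, and input resolution are assumptions.

```python
# Hedged sketch: a small encoder-decoder with dilated convolutions that predicts
# one heatmap per body joint. Sizes and keypoint count are illustrative assumptions.
import torch
import torch.nn as nn

class TinyPoseNet(nn.Module):
    def __init__(self, num_joints: int = 17):
        super().__init__()
        self.encoder = nn.Sequential(                        # downsample and extract features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.context = nn.Sequential(                         # dilated convs widen the receptive field
            nn.Conv2d(64, 64, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(                         # upsample back to heatmap resolution
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, num_joints, 1),                      # one heatmap per joint
        )

    def forward(self, x):
        return self.decoder(self.context(self.encoder(x)))

net = TinyPoseNet()
frame = torch.randn(1, 3, 256, 256)                           # one RGB video frame
heatmaps = net(frame)
print(heatmaps.shape)                                         # torch.Size([1, 17, 256, 256])

# joint locations as (row, col) = argmax of each heatmap
coords = [divmod(int(h.flatten().argmax()), heatmaps.shape[-1]) for h in heatmaps[0]]
```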
Procedia PDF Downloads 54
355 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model
Authors: T. Thein, S. Kalyar Myo
Abstract:
Human uses visual information for understanding the speech contents in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by the acoustic noise and cross talk among speakers. Using visual information from the lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge with most automatic lip reading system is to find a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera setting and the inherent low luminance and chrominance contrast between lip and non-lip region. Several researchers have been developing methods to overcome these problems; the one is lip reading. Moreover, it is well known that visual information about speech through lip reading is very useful for human speech recognition system. Lip reading is the technique of a comprehensive understanding of underlying speech by processing on the movement of lips. Therefore, lip reading system is one of the different supportive technologies for hearing impaired or elderly people, and it is an active research area. The need for lip reading system is ever increasing for every language. This research aims to develop a visual teaching method system for the hearing impaired persons in Myanmar, how to pronounce words precisely by identifying the features of lip movement. The proposed research will work a lip reading system for Myanmar Consonants, one syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah) ) and two syllable consonants ( က(Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe) ၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi) ). In the proposed system, there are three subsystems, the first one is the lip localization system, which localizes the lips in the digital inputs. The next one is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition. And the final one is the classification system. In the proposed research, Two Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with Active Contour Model (ACM) will be used for lip movement features extraction. Support Vector Machine (SVM) classifier is used for finding class parameter and class number in training set and testing set. Then, experiments will be carried out for the recognition accuracy of Myanmar consonants using the only visual information on lip movements which are useful for visual speech of Myanmar languages. The result will show the effectiveness of the lip movement recognition for Myanmar Consonants. This system will help the hearing impaired persons to use as the language learning application. This system can also be useful for normal hearing persons in noisy environments or conditions where they can find out what was said by other people without hearing voice.Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)
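The feature pipeline described above (2D-DCT of the lip region, LDA for dimensionality reduction, then an SVM classifier) can be prototyped in a few lines. The sketch below uses randomly generated lip-ROI arrays as stand-ins for real data to show how the pieces fit together; it is not the study's implementation, and the ROI size, coefficient count, and class count are assumptions.

```python
# Hedged sketch: 2D-DCT features of a lip region -> LDA -> SVM classifier.
# Random arrays stand in for real lip-ROI images; sizes and counts are assumed.
import numpy as np
from scipy.fftpack import dct
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def dct2_features(roi, keep=8):
    """2D-DCT of a grayscale lip ROI; keep the top-left (low-frequency) block."""
    coeffs = dct(dct(roi, axis=0, norm="ortho"), axis=1, norm="ortho")
    return coeffs[:keep, :keep].ravel()

rng = np.random.default_rng(0)
n_classes, n_per_class, roi_shape = 8, 30, (32, 32)   # e.g. 8 one-syllable consonants
X = np.array([dct2_features(rng.random(roi_shape)) for _ in range(n_classes * n_per_class)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=n_classes - 1), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")   # near chance on random data
```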
Procedia PDF Downloads 286
354 Kitchen Bureaucracy: The Preparation of Banquets for Medieval Japanese Royalty
Authors: Emily Warren
Abstract:
Despite the growing body of research on Japanese food history, little has been written about the attitudes and perspectives premodern Japanese people held about their food, even on special celebratory days. In fact, the overall image that arises from the literature is one of ambivalence: that the medieval nobility of the Heian and Kamakura periods (795-1333) did not much care about what they ate and for that reason, food seems relatively scarce in certain historical records. This study challenges this perspective by analyzing the manuals written to guide palace management and feast preparation for royals, introducing two of the sources into English for the first time. This research is primarily based on three manuals that address different aspects of royal food culture and preparation. The Chujiruiki, or Record of the Palace Kitchens (1295), is a fragmentary manual written by a bureaucrat in charge of the main palace kitchen office. This document collection details the utensils, furnishing, and courses that officials organized for the royals’ two daily meals in the morning (asagarei gozen) and in the afternoon (hiru gozen) when they enjoyed seven courses, each one carefully cooked and plated. The orchestration of daily meals and frequent banquets would have been complicated affairs for those preparing the tableware and food, thus requiring texts like the Chûjiruiki, as well as another manual, the Nicchûgyôji (11th c.), or The Daily Functions. Because of the complex coordination between various kitchen-related bureaucratic offices, kitchen officials endeavored to standardize the menus and place settings depending on the time of year, religious abstinence days, and available ingredients flowing into the capital as taxes. For the most important annual banquets and rites celebrating deities and the royal family, kitchen officials would likely refer to the Engi Shiki (927), or Protocols of the Engi Era, for details on offerings, servant payments, and menus. This study proposes that many of the great feast events, and indeed even daily meals at the palace, were so standardized and carefully planned for repetition that there would have been little need for the contents of such feasts to be detailed in diaries or novels—places where historians have noted a lack of the mention of food descriptions. These descriptions were not included for lack of interest on the part of the nobility, but rather because knowledge of what would be served at banquets and feasts would be considered a matter-of-course in the same way that a modern American would likely not need to state the menu of a traditional Thanksgiving meal to an American audience. Where food was concerned, novelty more so than tradition prompted a response in personal records, like diaries.Keywords: banquets, bureaucracy, Engi shiki, Japanese food
Procedia PDF Downloads 111
353 Characterization of Mycoplasma Pneumoniae Causing Exacerbation of Asthma: A Prototypical Finding from Sri Lanka
Authors: Lakmini Wijesooriya, Vicki Chalker, Jessica Day, Priyantha Perera, N. P. Sunil-Chandra
Abstract:
M. pneumoniae has been identified as an etiology for exacerbation of asthma (EQA), although viruses play a major role in EOA. M. pneumoniae infection is treated empirically with macrolides, and its antibiotic sensitivity is not detected routinely. Characterization of the organism by genotyping and determination of macrolide resistance is important epidemiologically as it guides the empiric antibiotic treatment. To date, there is no such characterization of M. pneumoniae performed in Sri Lanka. The present study describes the characterization of M. pneumoniae detected from a child with EOA following a screening of 100 children with EOA. Of the hundred children with EOA, M. pneumoniae was identified only in one child by Real-Time polymerase chain reaction (PCR) test for identifying the community-acquired respiratory distress syndrome (CARDS) toxin nucleotide sequences. The M. pneumoniae identified from this patient underwent detection of macrolide resistance via conventional PCR, amplifying and sequencing the region of the 23S rDNA gene that contains single nucleotide polymorphisms that confer resistance. Genotyping of the isolate was performed via nested Multilocus Sequence Typing (MLST) in which eight (8) housekeeping genes (ppa, pgm, gyrB, gmk, glyA, atpA, arcC, and adk) were amplified via nested PCR followed by gene sequencing and analysis. As per MLST analysis, the M. pneumoniae was identified as sequence type 14 (ST14), and no mutations that confer resistance were detected. Resistance to macrolides in M. pneumoniae is an increasing problem globally. Establishing surveillance systems is the key to informing local prescriptions. In the absence of local surveillance data, antibiotics are started empirically. If the relevant microbiological samples are not obtained before antibiotic therapy, as in most occasions in children, the course of antibiotic is completed without a microbiological diagnosis. This happens more frequently in therapy for M. pneumoniae which is treated with a macrolide in most patients. Hence, it is important to understand the macrolide sensitivity of M. pneumoniae in the setting. The M. pneumoniae detected in the present study was macrolide sensitive. Further studies are needed to examine a larger dataset in Sri Lanka to determine macrolide resistance levels to inform the use of macrolides in children with EOA. The MLST type varies in different geographical settings, and it also provides a clue to the existence of macrolide resistance. The present study enhances the database of the global distribution of different genotypes of M. pneumoniae as this is the first such characterization performed with the increased number of samples to determine macrolide resistance level in Sri Lanka. M. pneumoniae detected from a child with exacerbation of asthma in Sri Lanka was characterized as ST14 by MLST and no mutations that confer resistance were detected.Keywords: mycoplasma pneumoniae, Sri Lanka, characterization, macrolide resistance
Procedia PDF Downloads 186352 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture
Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko
Abstract:
Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracolumbar spine from 15 healthy patients and 15 patients with confirmed osteoporosis were used for the analysis. As a result, 120 samples with dimensions of 50x50 pixels were obtained. The set of features was obtained based on the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet. As a result of the image analysis, 290 descriptors of textural features were obtained. The dimensionality of the feature space was reduced by the use of three selection methods: the Fisher coefficient (FC), mutual information (MI), and the combination of minimization of classification error probability (POE) and average correlation coefficient between the chosen features (ACC). Each method returned the ten features occupying the top places in the ranking devised according to its own coefficient. The Fisher coefficient and mutual information selections returned the same features arranged in a different order. In both rankings, the 50th percentile (Perc.50%) was found in first place. The next selected features came from the co-occurrence matrix. The sets of features selected in the selection process were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT), and reduced error pruning tree (REPT). In order to assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV), and negative predictive value (NPV). Taking the classification results into account, the best results were obtained for the Hoeffding tree and logistic model trees classifiers using the set of features selected by the POE + ACC method. In the case of the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3%, and PPV = 93.3%. Additionally, the values of the other two parameters, i.e., TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. In the case of the logistic model trees classifier, the same accuracy was obtained (ACC = 90%), together with the highest values of TNR = 88.3% and NPV = 88.3%. The values of the other two parameters remained close to the highest values (TPR = 91.7% and PPV = 91.6%). The results obtained in the experiment show that the use of classification trees is an effective method for classifying texture features, allowing the condition of spongy tissue to be identified for healthy cases and those with osteoporosis.Keywords: classification, feature selection, texture analysis, tree algorithms
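As a rough illustration of the workflow described above (rank the 290 texture descriptors, keep the top ten, train a tree-based classifier, and report ACC, TPR, TNR, PPV, and NPV), the following Python sketch uses scikit-learn. It is not the authors' implementation: the study used Weka-style classifiers such as the Hoeffding tree and logistic model trees, for which a random forest is used here only as a stand-in, and the feature matrix and labels are random placeholders.

# Illustrative sketch (not the authors' code): mutual-information feature
# selection followed by tree-based classification, with the accuracy metrics
# used in the abstract (ACC, TPR, TNR, PPV, NPV). The feature matrix X
# (samples x 290 texture descriptors) and binary labels y (0 = healthy,
# 1 = osteoporosis) are random placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 290))          # placeholder for the 290 texture features
y = np.repeat([0, 1], 60)                # placeholder for healthy / osteoporotic labels

# Keep the ten highest-ranked features (here ranked by mutual information).
selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_sel = selector.fit_transform(X, y)

# Random forest as a stand-in for the Weka tree classifiers used in the study.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
y_pred = cross_val_predict(clf, X_sel, y, cv=10)

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
acc = (tp + tn) / (tp + tn + fp + fn)    # overall accuracy
tpr = tp / (tp + fn)                     # sensitivity
tnr = tn / (tn + fp)                     # specificity
ppv = tp / (tp + fp)                     # positive predictive value
npv = tn / (tn + fn)                     # negative predictive value
print(f"ACC={acc:.3f} TPR={tpr:.3f} TNR={tnr:.3f} PPV={ppv:.3f} NPV={npv:.3f}")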
Procedia PDF Downloads 177351 Annexing the Strength of Information and Communication Technology (ICT) for Real-time TB Reporting Using TB Situation Room (TSR) in Nigeria: Kano State Experience
Authors: Ibrahim Umar, Ashiru Rajab, Sumayya Chindo, Emmanuel Olashore
Abstract:
INTRODUCTION: Kano is the most populous state in Nigeria and one of the two states with the highest TB burden in the country. The state notifies an average of 8,000+ TB cases quarterly and had the highest yearly notification of all states in Nigeria from 2020 to 2022. The contribution of the state TB program to the national TB notification varied from 9% to 10% quarterly between the first quarter of 2022 and the second quarter of 2023. The Kano State TB Situation Room is an innovative platform for timely data collection, collation, and analysis for informed decision-making in the health system. During the second National TB Testing Week (NTBTW) of 2023, the Kano TB program aimed at early TB detection, prevention, and treatment. The state TB Situation Room provided an avenue for coordination and surveillance through real-time data reporting, review, analysis, and use during the NTBTW. OBJECTIVES: To assess the role of an innovative information and communication technology platform for real-time TB reporting during the second National TB Testing Week in Nigeria in 2023. To showcase the NTBTW data cascade analysis using the TSR as an innovative ICT platform. METHODOLOGY: The state TB program deployed a real-time virtual dashboard for NTBTW reporting, analysis, and feedback. A data room team was set up that received real-time data through a Google link. The data received were analyzed using the Power BI analytic tool with a statistical significance level (alpha) of <0.05. RESULTS: At the end of the week-long activity, using the real-time dashboard with onsite mentorship of the field workers, the state TB program screened a total of 52,054 people for TB out of 72,112 individuals eligible for screening (72% screening rate). A total of 9,910 presumptive TB clients were identified and evaluated, leading to the diagnosis of 445 patients with TB (5% yield from presumptives) and the placement of 435 TB patients on treatment (98% enrolment). CONCLUSION: The TB Situation Room (TSR) has been a great asset to the Kano State TB Control Program in meeting the growing demand for timely data reporting in TB and other global health responses. The use of real-time surveillance data during the 2023 NTBTW has in no small measure improved the TB response and feedback in Kano State. Scaling up this intervention to other disease areas, states, and nations is a positive step towards global TB eradication.Keywords: tuberculosis (tb), national tb testing week (ntbtw), tb situation room (tsr), information communication technology (ict)
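For clarity, the screening cascade reported in the results can be reproduced with simple arithmetic; the short sketch below recomputes the quoted rates from the figures in the abstract (the rounding conventions are assumptions).

# Illustrative cascade arithmetic for the NTBTW figures quoted above
# (not part of the original analysis; numbers copied from the abstract).
eligible, screened, presumptive, diagnosed, enrolled = 72_112, 52_054, 9_910, 445, 435

screening_rate = screened / eligible            # ~0.72, the 72% screening rate
presumptive_yield = diagnosed / presumptive     # ~0.045, reported as a 5% yield
enrolment_rate = enrolled / diagnosed           # ~0.98, the 98% enrolment

print(f"screening rate: {screening_rate:.1%}")
print(f"yield from presumptives: {presumptive_yield:.1%}")
print(f"enrolment: {enrolment_rate:.1%}")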
Procedia PDF Downloads 71350 Radar Cross Section Modelling of Lossy Dielectrics
Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit
Abstract:
The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, as well as coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, as this is more cost-effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study will extend previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets against measured data. The paper will provide measured RCS data for a number of canonical dielectric targets exhibiting different material properties. As stated previously, these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique and normal incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics exhibiting different material properties were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep. This study also investigated the effect of slight variations in the material properties on the calculated RCS results by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency and angle dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets will be presented, and the validation thereof will be discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results will be shown. Thus, the importance of accurate dielectric material properties for validation purposes will be discussed.Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation
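As a rough, back-of-envelope illustration of why the material properties matter, the sketch below estimates the normal-incidence RCS of an electrically large, thick lossy dielectric plate by scaling the physical-optics flat-plate result by the power reflection coefficient of the dielectric half-space. This simplification is not one of the CEM solvers validated in the paper, and the frequency, permittivity, and plate size are assumed values for illustration only.

# Rough, illustrative estimate (not the CEM solvers used in the study):
# normal-incidence RCS of an electrically large, thick lossy dielectric plate,
# approximated as the PEC flat-plate physical-optics result scaled by the
# power reflection coefficient of the dielectric half-space.
import numpy as np

c = 299_792_458.0          # speed of light, m/s
freq = 10e9                # 10 GHz, within the 2-18 GHz measurement band
lam = c / freq

eps_r = 4.0 - 0.2j         # assumed complex relative permittivity (lossy dielectric)
side = 0.15                # assumed 15 cm square plate
area = side * side

gamma = (1 - np.sqrt(eps_r)) / (1 + np.sqrt(eps_r))    # normal-incidence Fresnel coefficient
rcs_pec = 4 * np.pi * area**2 / lam**2                  # PEC flat-plate PO result
rcs_diel = np.abs(gamma)**2 * rcs_pec                   # scaled by |Gamma|^2

print(f"RCS (PEC plate): {10*np.log10(rcs_pec):.1f} dBsm")
print(f"RCS (lossy dielectric plate, approx.): {10*np.log10(rcs_diel):.1f} dBsm")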
Procedia PDF Downloads 240349 Household Climate-Resilience Index Development for the Health Sector in Tanzania: Use of Demographic and Health Surveys Data Linked with Remote Sensing
Authors: Heribert R. Kaijage, Samuel N. A. Codjoe, Simon H. D. Mamuya, Mangi J. Ezekiel
Abstract:
There is strong evidence that the climate has changed significantly, affecting various sectors, including public health. The recommended feasible solution is adopting development trajectories which combine both mitigation and adaptation measures for improving resilience pathways. This approach demands consideration of the complex interactions between climate and social-ecological systems. While other sectors, such as agriculture and water, have developed climate resilience indices, the public health sector in Tanzania is still lagging behind. The aim of this study was to find out how Demographic and Health Surveys (DHS) linked with remote sensing (RS) technology and meteorological information can be used as tools to inform climate change resilient development and evaluation for the health sector. A methodological review was conducted whereby a number of studies were content-analyzed to find appropriate indicators and indices for household climate resilience and their integration approach. These indicators were critically reviewed, listed, filtered, and their sources determined. Preliminary identification and ranking of indicators were conducted using a participatory approach of pairwise weighting by selected national stakeholders from meetings/conferences on human health and climate change sciences in Tanzania. DHS datasets were retrieved from the Measure Evaluation project, processed, and critically analyzed for possible climate change indicators. Other sources of indicators of climate change exposure were also identified. For the purpose of preliminary reporting, the operationalization of selected indicators was discussed to produce a methodological approach to be used in a comparative resilience analysis study. It was found that a household climate resilience index depends on the combination of three indices, namely Household Adaptive and Mitigation Capacity (HC), Household Health Sensitivity (HHS), and Household Exposure Status (HES). It was also found that DHS alone cannot support resilience evaluation unless integrated with other data sources, notably flooding data as a measure of vulnerability, remote sensing imagery of the Normalized Difference Vegetation Index (NDVI), and meteorological data (deviation from the rainfall pattern). It can be concluded that if these indices retrieved from DHS datasets are computed and scientifically integrated, they can produce a single climate resilience index, and resilience maps could be generated at different spatial and time scales to enhance targeted interventions for climate-resilient development and evaluation. However, further studies are needed to test the sensitivity of the index in comparative resilience analysis among selected regions.Keywords: climate change, resilience, remote sensing, demographic and health surveys
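The abstract does not give the formula used to combine HC, HHS, and HES into a single index, so the sketch below shows one plausible aggregation: min-max normalize each sub-index and take a weighted sum in which adaptive capacity raises resilience while sensitivity and exposure lower it. The weights, the normalization, and the sign convention are assumptions for illustration, not the authors' formulation.

# Illustrative sketch of aggregating the three sub-indices named in the abstract
# (HC, HHS, HES) into a single household climate resilience index. The weights,
# the min-max normalization, and the sign convention are assumptions.
import numpy as np

def min_max(x):
    """Scale a sub-index to the range [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def resilience_index(hc, hhs, hes, weights=(1/3, 1/3, 1/3)):
    w_hc, w_hhs, w_hes = weights
    hc_n, hhs_n, hes_n = min_max(hc), min_max(hhs), min_max(hes)
    # Higher adaptive/mitigation capacity increases resilience;
    # higher health sensitivity and exposure decrease it.
    return w_hc * hc_n + w_hhs * (1 - hhs_n) + w_hes * (1 - hes_n)

# Hypothetical values for five households
hc  = [0.2, 0.5, 0.8, 0.4, 0.9]   # adaptive and mitigation capacity
hhs = [0.7, 0.3, 0.2, 0.6, 0.1]   # health sensitivity
hes = [0.9, 0.4, 0.3, 0.5, 0.2]   # exposure status (e.g., flooding, NDVI, rainfall deviation)
print(resilience_index(hc, hhs, hes))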
Procedia PDF Downloads 165348 Motivation of Doctors and its Impact on the Quality of Working Life
Authors: E. V. Fakhrutdinova, K. R. Maksimova, P. B. Chursin
Abstract:
At the present stage of society's progress, health care is an integral part of both the economic and the social system, and in the latter case medicine is a major component of a number of basic and necessary social programs. Since the foundation of the health system is highly qualified health professionals, it is a logical proposition that increasing doctors' professionalism improves the effectiveness of the system as a whole. The professionalism of a doctor is a collection of many components, in which an essential role is played by such personal-psychological factors as honesty, willingness and desire to help people, and motivation. A number of researchers consider motivation as an expression of basic human needs that have passed through the “filter” of a worldview and values learned by the individual in the process of socialization, leading to certain actions designed to achieve the expected result. From this point of view, a number of researchers propose the following classification of a highly skilled employee's needs: 1. the need for confirmation of competence (setting goals that match one's professionalism and receiving positive emotions from achieving them), 2. the need for independence (the ability to make one's own choices in contentious situations arising in the course of carrying out specialist functions), 3. the need for ownership (in the case of health care workers, belonging to the profession and, accordingly, to the high status of the doctor in the eyes of the public). Nevertheless, it is important to understand that in a market economy a significant motivator for physicians (both legal entities and natural persons) is to maximize their own profit. In the case of health professionals, this dual motivational structure creates an additional contrast, as in the public mind the image of the ideal physician is usually that of an altruistically minded person who thinks not primarily about their own benefit but about assisting others. In this context, the question of the real motivation of health workers deserves special attention. A survey conducted by the American researcher Harrison Terni for the magazine "Med Tech" in 2010 collected the opinions of more than 200 medical students starting their courses: the primary motivation in choosing the profession was the “desire to help people”, and only 15% said that they wanted to become a doctor "to earn a lot". From the point of view of most of the classical theories of motivation, this trend can be called positive, as intangible incentives are more effective. However, it is likely that over time the opinion of the respondents may change in the direction of mercantile motives. Thus, it is logical to assume that a well-designed system for motivating doctors' labor should be based on motivational foundations laid during training in higher education.Keywords: motivation, quality of working life, health system, personal-psychological factors, motivational structure
Procedia PDF Downloads 356347 An As-Is Analysis and Approach for Updating Building Information Models and Laser Scans
Authors: Rene Hellmuth
Abstract:
Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring of the factory building is becoming more important in order to maintain the competitiveness of a factory. Restrictions on new areas, shorter life cycles of products and production technology, as well as a VUCA world (volatility, uncertainty, complexity, and ambiguity), lead to more frequent restructuring measures within a factory. A building information model (BIM) is the planning basis for rebuilding measures and becomes an indispensable data repository for being able to react quickly to changes. Use as a planning basis for restructuring measures in factories only succeeds if the BIM model has adequate data quality. Under this aspect and the industrial requirements, three data quality factors are particularly important for this paper regarding the BIM model: up-to-dateness, completeness, and correctness. The research question is: how can a BIM model be kept up to date with the required data quality, and which visualization techniques can be applied in a short period of time on the construction site during conversion measures? An as-is analysis is made of how BIM models and digital factory models (including laser scans) are currently being kept up to date. Industrial companies are interviewed, and expert interviews are conducted. Subsequently, the results are evaluated, and a procedure is conceived for how cost-effective and time-saving updating processes can be carried out. The availability of low-cost hardware and the simplicity of the process are important to enable service personnel from facility management to keep digital factory models (BIM models and laser scans) up to date. The approach includes the detection of changes to the building, the recording of the changed area, and its insertion into the overall digital twin. Finally, an overview of the possibilities for visualizations suitable for construction sites is compiled. An augmented reality application is created based on an updated BIM model of a factory and installed on a tablet. Conversion scenarios with costs and time expenditure are displayed. A user interface is designed in such a way that all relevant conversion information is available at a glance for the respective conversion scenario. A total of three essential research results are achieved: an as-is analysis of current update processes for BIM models and laser scans, the development of a time-saving and cost-effective update process, and the conception and implementation of an augmented reality solution for BIM models suitable for construction sites.Keywords: building information modeling, digital factory model, factory planning, restructuring
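One ingredient of the update process described above is detecting where the building has changed between an existing laser scan and a new one. The sketch below shows a simple, generic way to flag such changes by nearest-neighbour distance thresholding between two point clouds; it is only an illustration, not the procedure developed in the paper, and the point clouds and the 5 cm threshold are made-up examples.

# Simple illustrative sketch (not the procedure developed in the paper):
# flagging changed regions between a baseline laser scan and a new scan by
# nearest-neighbour distance thresholding. Point clouds are plain Nx3 arrays;
# the synthetic data and the 5 cm threshold are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def changed_points(baseline_xyz, new_xyz, threshold_m=0.05):
    """Return points of the new scan farther than threshold_m from any baseline point."""
    tree = cKDTree(baseline_xyz)
    dist, _ = tree.query(new_xyz, k=1)
    return new_xyz[dist > threshold_m]

# Hypothetical data: a flat floor patch, with a new 'wall' appearing in the second scan
rng = np.random.default_rng(1)
baseline = np.column_stack([rng.uniform(0, 10, 5000), rng.uniform(0, 10, 5000), np.zeros(5000)])
wall = np.column_stack([np.full(500, 5.0), rng.uniform(0, 10, 500), rng.uniform(0, 3, 500)])
new_scan = np.vstack([baseline + rng.normal(0, 0.005, baseline.shape), wall])

changes = changed_points(baseline, new_scan)
print(f"{len(changes)} points flagged as changed")  # roughly the 500 'wall' points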
Procedia PDF Downloads 114346 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator
Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur
Abstract:
Almost all domestic refrigerators operate on the principle of the vapor compression refrigeration cycle, and removal of heat from the refrigerator cabinet is done via one of two methods: natural convection or forced convection. In this study, airflow and temperature distributions inside a 375 L no-frost-type larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity, and temperature distribution in the cooling chamber are known to be some of the most important factors that affect the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet and to investigate better temperature distribution solutions throughout the refrigerator domain via system optimizations that could provide a uniform temperature distribution. The flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV). In addition, airflow and temperature distributions are investigated numerically with Ansys Fluent. In order to study the heat transfer inside the aforementioned refrigerator, forced convection theory is applied to the following case: a closed rectangular cavity representing heat transfer inside the refrigerating compartment. The cavity volume is represented with finite volume elements and solved computationally with the appropriate momentum and energy equations (Navier-Stokes equations). The 3D model is analyzed as transient, with the k-ε turbulence model and SIMPLE pressure-velocity coupling for the turbulent flow. The results obtained with the 3D numerical simulations are in quite good agreement with the experimental airflow measurements using the SPIV technique. After the Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters (compressor capacity, fan rotational speed, and type of shelf, glass or wire) on the energy consumption, pull-down time, and temperature distributions in the cabinet are studied. For each case, energy consumption based on experimental results is calculated. After the analysis, the main parameters affecting the temperature distribution inside the cabinet and the energy consumption are determined based on the CFD simulations, and the simulation results are supplied to a Design of Experiments (DOE) study as input data for optimization. The best configuration, with minimum energy consumption and minimum temperature difference between the shelves inside the cabinet, is determined.Keywords: air distribution, CFD, DOE, energy consumption, experimental, larder cabinet, refrigeration, uniform temperature
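For reference, the governing equations typically solved in such a forced-convection cabinet simulation are the incompressible Reynolds-averaged continuity, momentum, and energy equations closed with the standard k-ε eddy-viscosity model. The sketch below uses the textbook form of these equations; the exact formulation and model constants used in the study's Ansys Fluent setup may differ.

\begin{aligned}
&\frac{\partial \bar{u}_i}{\partial x_i} = 0, \\
&\frac{\partial \bar{u}_i}{\partial t} + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
  = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
  + \frac{\partial}{\partial x_j}\left[\left(\nu + \nu_t\right)\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right)\right], \\
&\frac{\partial \bar{T}}{\partial t} + \bar{u}_j \frac{\partial \bar{T}}{\partial x_j}
  = \frac{\partial}{\partial x_j}\left[\left(\alpha + \frac{\nu_t}{\mathrm{Pr}_t}\right)\frac{\partial \bar{T}}{\partial x_j}\right],
\qquad \nu_t = C_\mu \frac{k^2}{\varepsilon}, \quad C_\mu \approx 0.09,
\end{aligned}

where the overbars denote Reynolds-averaged quantities, ν_t is the turbulent (eddy) viscosity obtained from the transported k and ε fields, and Pr_t is the turbulent Prandtl number. The SIMPLE algorithm mentioned in the abstract handles the pressure-velocity coupling of these equations on the finite-volume mesh.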
Procedia PDF Downloads 109345 Risk Assessment and Haloacetic Acids Exposure in Drinking Water in Tunja, Colombia
Authors: Bibiana Matilde Bernal Gómez, Manuel Salvador Rodríguez Susa, Mildred Fernanda Lemus Perez
Abstract:
In chlorinated drinking water, haloacetic acids have been identified and are classified as disinfection byproducts, originating from the reaction of the disinfectant with natural organic matter and/or bromide ions in water sources. These byproducts can be generated through a variety of chemical and pharmaceutical processes. The term ‘Total Haloacetic Acids’ (THAAs) is used to describe the cumulative concentration of dichloroacetic acid, trichloroacetic acid, monochloroacetic acid, monobromoacetic acid, and dibromoacetic acid in water samples, which are usually measured to evaluate water quality. The chronic presence of these acids in drinking water poses a risk of cancer in humans. The detection of THAAs for the first time in 15 municipalities of Boyacá was accomplished in 2023. The aim is to describe the correlation between the levels of THAAs and digestive cancer in Tunja, a city in Colombia with high rates of digestive cancer, and to compare the risk across 15 towns, taking into account factors such as water quality. A research project was conducted with the aim of comparing water sources based on the geographical features of each town, describing the disinfection process in the 15 municipalities, and exploring physical properties such as water temperature and pH level. The project also involved a study of contact time based on habits documented through a survey, and a comparison of socioeconomic factors and lifestyle, in order to assess the personal risk of exposure. Data on the levels of THAAs were obtained after characterizing the water quality in urban sectors over eight months of 2022, based on the protocol described in the Stage 2 DBP Rule of the United States Environmental Protection Agency (USEPA) from 2006, which takes into account the size of the population being supplied. A cancer risk assessment was conducted to evaluate the likelihood of an individual developing cancer due to exposure to THAAs. The assessment considered exposure routes such as oral ingestion, skin absorption, and inhalation. The chronic daily intake (CDI) for these exposure routes was calculated using specific equations. The lifetime cancer risk (LCR) was then determined by adding the cancer risks from the three exposure routes for each HAA. The risk assessment process involved four phases: exposure assessment, toxicity evaluation, data gathering and analysis, and risk definition and management. The results indicate that there is a cumulatively higher risk of digestive cancer due to THAA exposure in drinking water.Keywords: haloacetic acids, drinking water, water quality, cancer risk assessment
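The abstract refers to the CDI and LCR being computed with specific equations but does not reproduce them; the expressions below show the generic USEPA-style form commonly used for the ingestion and dermal routes, given here only as an illustrative sketch (the symbols and any default values are assumptions, not the study's parameters).

\begin{aligned}
\mathrm{CDI_{ingestion}} &= \frac{C_w \cdot IR \cdot EF \cdot ED}{BW \cdot AT}, \\
\mathrm{CDI_{dermal}} &= \frac{C_w \cdot SA \cdot K_p \cdot ET \cdot EF \cdot ED \cdot CF}{BW \cdot AT}, \\
\mathrm{LCR} &= \sum_{\text{routes}} \mathrm{CDI_{route}} \times SF_{\text{route}},
\end{aligned}

where C_w is the THAA concentration in water, IR the ingestion rate, EF the exposure frequency, ED the exposure duration, BW the body weight, AT the averaging time, SA the exposed skin area, K_p the dermal permeability coefficient, ET the exposure time, CF a unit conversion factor, and SF the route-specific cancer slope factor.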
Procedia PDF Downloads 57344 Medial Temporal Tau Predicts Memory Decline in Cognitively Unimpaired Elderly
Authors: Angela T. H. Kwan, Saman Arfaie, Joseph Therriault, Zahra Azizi, Firoza Z. Lussier, Cecile Tissot, Mira Chamoun, Gleb Bezgin, Stijn Servaes, Jenna Stevenon, Nesrine Rahmouni, Vanessa Pallen, Serge Gauthier, Pedro Rosa-Neto
Abstract:
Alzheimer’s disease (AD) can be detected in living people using in vivo biomarkers of amyloid-β (Aβ) and tau, even in the absence of cognitive impairment during the preclinical phase. [¹⁸F]-MK-6240 is a high-affinity positron emission tomography (PET) tracer that quantifies tau neurofibrillary tangles, but its ability to predict cognitive changes associated with early AD symptoms, such as memory decline, is unclear. Here, we assess the prognostic accuracy of baseline [¹⁸F]-MK-6240 tau PET for predicting longitudinal memory decline in asymptomatic elderly individuals. In a longitudinal observational study, we evaluated a cohort of cognitively normal elderly participants (n = 111) from the Translational Biomarkers in Aging and Dementia (TRIAD) study (data collected between October 2017 and July 2020, with a follow-up period of 12 months). All participants underwent tau PET with [¹⁸F]-MK-6240 and Aβ PET with [¹⁸F]-AZD-4694. The exclusion criteria included the presence of head trauma, stroke, or other neurological disorders. The 111 eligible participants were chosen based on the availability of Aβ PET, tau PET, magnetic resonance imaging (MRI), and APOEε4 genotyping. Among these participants, the mean (SD) age was 70.1 (8.6) years; 20 (18%) were tau PET positive, and 71 of 111 (63.9%) were women. A significant association between baseline Braak I-II [¹⁸F]-MK-6240 SUVR positivity and change in composite memory score was observed at the 12-month follow-up, after correcting for age, sex, and years of education (Logical Memory and RAVLT; standardized beta = -0.52 (-0.82 to -0.21), p < 0.001, for dichotomized tau PET, and -1.22 (-1.84 to -0.61), p < 0.0001, for continuous tau PET). Moderate cognitive decline was observed for A+T+ over the follow-up period, whereas no significant change was observed for A-T+, A+T-, and A-T-, though it should be noted that the A-T+ group was small. Our results indicate that baseline tau neurofibrillary tangle pathology is associated with longitudinal changes in memory function, supporting the use of [¹⁸F]-MK-6240 PET to predict the likelihood of asymptomatic elderly individuals experiencing future memory decline. Overall, [¹⁸F]-MK-6240 PET is a promising tool for predicting memory decline in older adults without cognitive impairment at baseline. This is of critical relevance as the field is shifting towards a biological model of AD defined by the aggregation of pathologic tau. Therefore, early detection of tau pathology using [¹⁸F]-MK-6240 PET offers hope that patients with AD may be diagnosed during the preclinical phase, before it is too late.Keywords: alzheimer’s disease, braak I-II, in vivo biomarkers, memory, PET, tau
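A minimal sketch of the kind of covariate-adjusted model described above (memory change regressed on baseline tau, adjusting for age, sex, and education) is given below. This is not the authors' analysis code: the data frame, column names, and the tau positivity cut-off are hypothetical placeholders, and the random data only demonstrate that the code runs.

# Illustrative sketch (not the authors' analysis): covariate-adjusted
# association between baseline tau PET and 12-month memory change.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 111
df = pd.DataFrame({
    "memory_change": rng.normal(0, 1, n),        # 12-month change in composite memory score
    "tau_suvr":      rng.normal(1.0, 0.15, n),   # Braak I-II SUVR (continuous)
    "age":           rng.normal(70, 8.6, n),
    "sex":           rng.integers(0, 2, n),      # 0 = male, 1 = female
    "education":     rng.normal(15, 3, n),       # years of education
})
df["tau_positive"] = (df["tau_suvr"] > 1.24).astype(int)   # hypothetical positivity cut-off

# Standardize continuous variables so the tau coefficient is a standardized beta.
for col in ["memory_change", "tau_suvr", "age", "education"]:
    df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()

model = smf.ols("memory_change_z ~ tau_suvr_z + age_z + C(sex) + education_z", data=df).fit()
print(model.summary().tables[1])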
Procedia PDF Downloads 76343 Measurement of Fatty Acid Changes in Post-Mortem Belowground Carcass (Sus-scrofa) Decomposition: A Semi-Quantitative Methodology for Determining the Post-Mortem Interval
Authors: Nada R. Abuknesha, John P. Morgan, Andrew J. Searle
Abstract:
Information regarding the post-mortem interval (PMI) in criminal investigations is vital to establish a time frame when reconstructing events. PMI is defined as the time period that has elapsed between the occurrence of death and the discovery of the corpse. Adipocere, commonly referred to as ‘grave-wax’, is formed when post-mortem adipose tissue is converted into a solid material that is largely composed of fatty acids. Adipocere is of interest to forensic anthropologists, as its formation is able to slow down the decomposition process. Therefore, analysing the changes in the patterns of fatty acids during the early decomposition process may make it possible to estimate the period of burial, and hence the PMI. The current study concerned the investigation of fatty acid composition and patterns in buried pig fat tissue, in an attempt to determine whether particular patterns of fatty acid composition are associated with the duration of burial and hence may be used to estimate the PMI. Adipose tissue from the abdominal region of domestic pigs (Sus-scrofa) was used to model the human decomposition process. A 17 x 20 cm piece of pork belly was buried in a shallow artificial grave, and weekly samples (n=3) from the buried pig fat tissue were collected over an 11-week period. The marker fatty acids palmitic (C16:0), oleic (C18:1n-9), and linoleic (C18:2n-6) acid were extracted from the buried pig fat tissue and analysed as fatty acid methyl esters using a gas chromatography system. Levels of the marker fatty acids were quantified from their respective standards. The concentrations of C16:0 (69.2 mg/mL) and C18:1n-9 (44.3 mg/mL) at time zero exhibited significant fluctuations during the burial period. Levels rose (to 116 and 60.2 mg/mL, respectively) and then fell from the second week to reach 19.3 and 18.3 mg/mL, respectively, at week 6. Levels showed another increase at week 9 (66.3 and 44.1 mg/mL, respectively), followed by a gradual decrease at week 10 (20.4 and 18.5 mg/mL, respectively). A sharp increase was observed in the final week (131.2 and 61.1 mg/mL, respectively). Conversely, the levels of C18:2n-6 remained more or less constant throughout the study. In addition to fluctuations in the concentrations, several new fatty acids appeared in the later weeks. Other fatty acids, which were detectable in the time-zero sample, were lost in the later weeks. There are several probable opportunities to utilise fatty acid analysis as a basic technique for approximating the PMI: the quantification of marker fatty acids and the detection of selected fatty acids that either disappear or appear during the burial period. This pilot study indicates that this may be a potential semi-quantitative methodology for determining the PMI. Ideally, the analysis of particular fatty acid patterns in the early stages of decomposition could be an additional tool alongside the already available techniques for improving the overall process of estimating the PMI of a corpse.Keywords: adipocere, fatty acids, gas chromatography, post-mortem interval
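The abstract states that marker fatty acid levels were quantified against their respective standards. The sketch below shows one common way of doing this, an external calibration curve fitted to standard peak areas; the peak areas, concentrations, and sample values are hypothetical and are not taken from the study.

# Illustrative sketch of quantifying a marker fatty acid from GC peak areas
# using an external-standard calibration curve. All values are hypothetical.
import numpy as np

# Calibration standards: known concentrations (mg/mL) and measured peak areas
std_conc = np.array([10.0, 25.0, 50.0, 100.0, 150.0])
std_area = np.array([1.1e5, 2.7e5, 5.4e5, 1.08e6, 1.63e6])

# Least-squares straight line: area = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_area, 1)

def area_to_conc(peak_area):
    """Convert a sample peak area to concentration (mg/mL) via the calibration line."""
    return (peak_area - intercept) / slope

sample_area = 7.5e5          # hypothetical C16:0 peak area from a week-2 sample
print(f"C16:0 ~ {area_to_conc(sample_area):.1f} mg/mL")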
Procedia PDF Downloads 131342 Low Frequency Ultrasonic Degassing to Reduce Void Formation in Epoxy Resin and Its Effect on the Thermo-Mechanical Properties of the Cured Polymer
Authors: A. J. Cobley, L. Krishnan
Abstract:
The demand for multi-functional lightweight materials in sectors such as automotive, aerospace, and electronics is growing, and for this reason fibre-reinforced epoxy polymer composites are being widely utilized. The fibre reinforcing material is mainly responsible for the strength and stiffness of the composite, whilst the main role of the epoxy polymer matrix is to enhance the distribution of the load applied to the fibres as well as to protect the fibres from the effect of harmful environmental conditions. The superior properties of fibre-reinforced composites are achieved by combining the best properties of both constituents. Although factors such as the chemical nature of the epoxy and how it is cured will have a strong influence on the properties of the epoxy matrix, the method of mixing and degassing the resin can also have a significant impact. The production of a fibre-reinforced epoxy polymer composite will usually begin with the mixing of the epoxy pre-polymer with a hardener and accelerator. Mechanical methods of mixing are often employed for this stage, but such processes naturally introduce air into the mixture, which, if it becomes entrapped, will lead to voids in the subsequently cured polymer. Therefore, degassing is normally utilised after mixing, and this is often achieved by placing the epoxy resin mixture in a vacuum chamber. Although this is reasonably effective, it is another process stage, and if a method of mixing could be found that, at the same time, degassed the resin mixture, this would lead to shorter production times, more effective degassing, and fewer voids in the final polymer. In this study, the effect of four different methods of mixing and degassing the pre-polymer with hardener and accelerator was investigated. The first two methods were manual stirring and magnetic stirring, which were both followed by vacuum degassing. The other two techniques were ultrasonic mixing/degassing using a 40 kHz ultrasonic bath and a 20 kHz ultrasonic probe. The cured cast resin samples were examined under a scanning electron microscope (SEM) and an optical microscope, and with ImageJ analysis software, to study morphological changes, void content, and void distribution. Three-point bending tests and differential scanning calorimetry (DSC) were also performed to determine the thermal and mechanical properties of the cured resin. It was found that the use of the 20 kHz ultrasonic probe for mixing/degassing gave the lowest percentage of voids of all the mixing methods in the study. In addition, the percentage of voids found when employing the 40 kHz ultrasonic bath to mix/degas the epoxy polymer mixture was only slightly higher than when magnetic stirring followed by vacuum degassing was utilized. The effect of ultrasonic mixing/degassing on the thermal and mechanical properties of the cured resin will also be reported. The results suggest that low-frequency ultrasound is an effective means of mixing/degassing a pre-polymer mixture and could enable a significant reduction in production times.Keywords: degassing, low frequency ultrasound, polymer composites, voids
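The void content reported above was measured from micrographs with ImageJ. As a rough illustration of how such an area-fraction measurement works, the sketch below thresholds a grayscale image and takes the fraction of dark pixels as the void content; the synthetic image, the threshold value, and the void sizes are all assumptions, not the study's data.

# Illustrative sketch of estimating void content from a grayscale micrograph by
# simple thresholding, similar in spirit to the ImageJ analysis mentioned above
# (not the authors' actual workflow). A synthetic image stands in for a real
# optical/SEM micrograph.
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(180, 10, size=(512, 512))        # bright epoxy background
yy, xx = np.mgrid[0:512, 0:512]
for cx, cy, r in [(100, 120, 15), (300, 400, 25), (420, 80, 10)]:   # three dark voids
    img[(xx - cx) ** 2 + (yy - cy) ** 2 < r ** 2] = rng.normal(40, 5)
img = np.clip(img, 0, 255)

threshold = 100                                    # voids are darker than the matrix
void_mask = img < threshold
void_fraction = void_mask.mean()                   # area fraction of void pixels
print(f"void content: {void_fraction:.2%}")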
Procedia PDF Downloads 296